| repo_name | path | license | cells | types |
|---|---|---|---|---|
pdhimal1/AI-Project
|
Yahoo Finace Notebook.ipynb
|
mit
|
[
"Yahoo Finance\nTutorial on how to get current/historical prices from yahoo finance using yahoo_finance\nImport yahoo finance and other necessary packages",
"from yahoo_finance import Share\nimport numpy as np\n",
"First we need to create an instance of Share. Using that instance we will get prices, volumes, ratios and all other company information",
"#for this Example I will use google's finances\n\n#create an instance of Share\ngoogle = Share('GOOG')",
"Price Information",
"#now that an instance of Share is created (google), we will call its functions to get the prices\n\n#date and time of the trade\ndate = google.get_trade_datetime()\n\n#opening price\nopening_price = google.get_open()\n\n#Price right now (Yahoo finance is delayed by 15 mins)\ncurrent_price = google.get_price()\n\n#Day's high and low prices \nday_high = google.get_days_high()\nday_low = google.get_days_low()\n\n#price changes from opening price\nprice_change = google.get_change()\n\nprint \"trading date: \", date\nprint \"current price: \", current_price\nprint \"opening price: $\" , opening_price\nprint \"day high: $\", day_high\nprint \"day low: $\", day_low\nprint \"price change: $\", price_change\n#Refresh to get a new price\n# Note that after the market closes @ 4PM EST, the price will stay the same\ngoogle.refresh()\n\ndate = google.get_trade_datetime()\ncurrent_price = google.get_price()\nprice_change = google.get_change()\n\nprint \"\\n########## After refreshing ####################\"\nprint \"trading date: \", date\nprint \"current price: \", current_price\nprint \"opening price: $\" , opening_price\nprint \"price change: $\", price_change",
"Moving averages. Get a peek at what prices have been like in the past.",
"#If current prices are higher than 50 or 200 days moving average, that means prices are going up\n\n#200 days moving average\nth_moving_avg = google.get_200day_moving_avg()\n\n#50 days moving average\nfifty_moving_avg = google.get_50day_moving_avg()\n\nprint \"200 days moving average: $\", th_moving_avg\nprint \"50 days moving average: $\", fifty_moving_avg",
"Volume Information",
"#Volume speaks (If more people are trading, there's gotta be something good or bad happening)\n\nvolume = google.get_volume()\n#compare this days volume with average volume\naverage_daily_volume = google.get_avg_daily_volume()\n\nprint \"Today's volume: \", volume\nprint \"Average volume: \", average_daily_volume",
"Ratios are important for technical analysis. The price-to-earnings ratio is the most important of them all. Value investors like Warren Buffett use it in their analysis.",
"#PE ratio ---> price per share divided by earnings per share\n#Lower PE the better \nPE = google.get_price_earnings_ratio()\n\n#PEG ratio ---> pe ratio divided by 1-reinvestment (growth)\nPEG = google.get_price_earnings_growth_ratio()\n\nprint \"Price to earning (PE) ratio : \", PE\nprint \"Price earning to growth (PEG) ratio: \", PEG",
"Book Value",
"#book value -> what the numbers say this company is worth\nprint \"book value\", google.get_book_value()",
"Dividends: how is the company paying its investors",
"div_per_share = google.get_dividend_share()\ndiv_yield = google.get_dividend_yield()\n\n#for some reason Google's dividend information was not available\nprint \"dividend per share: $\", div_per_share\nprint \"dividend yield: \", div_yield",
"Historical Prices\n*Data not available for Saturday/Sunday",
"historical = google.get_historical('2015-07-28', '2015-09-08')\nprint len(historical)\n\n#To get the closing price for first day\nprint historical[0]['Close']\n\n#opening price for first day\nprint historical[0]['Open']\n",
"More on historical prices coming soon",
"#to get all opening prices together\n\nopening = [] #is a dynamic array (list) for python\n\nfor i in range(len(historical)):\n x = historical[i]['Open']\n opening.append(x)\n\nclosing = [] #is a dynamic array (list) for python\n\nfor i in range(len(historical)):\n x = historical[i]['Close']\n closing.append(x)\n\nx_axis = np.arange(0+1, len(historical)+1)\n\n\n#print opening\n#print closing\n#print x_axis\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(x_axis,opening, 'b', x_axis, closing, 'r')\nplt.xlabel('Day')\nplt.ylabel('Price ($)')\nplt.show()\n\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
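The 50/200-day moving-average comparison in the notebook above comes straight from the API, but the same quantity can be sketched offline with numpy. The price list below is invented for illustration, not real GOOG data:

```python
import numpy as np

def trailing_moving_average(closes, window):
    """Average of the last `window` closing prices."""
    closes = np.asarray(closes, dtype=float)
    if closes.size < window:
        raise ValueError("not enough data for this window")
    return float(closes[-window:].mean())

# hypothetical closing prices, oldest to newest
closes = [100.0, 102.0, 101.0, 103.0, 105.0]
print(trailing_moving_average(closes, 3))  # 103.0
```

Comparing the current price against this value reproduces the "prices above the moving average means they're going up" heuristic used in the cell above.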
qaisermazhar/qaisermazhar.github.io
|
markdown_generator/publications.ipynb
|
mit
|
[
"Publications markdown generator for academicpages\nTakes a TSV of publications with metadata and converts them for use with academicpages.github.io. This is an interactive Jupyter notebook (see more info here). The core python code is also in publications.py. Run either from the markdown_generator folder after replacing publications.tsv with one containing your data.\nTODO: Make this work with BibTex and other databases of citations, rather than Stuart's non-standard TSV format and citation style.\nData format\nThe TSV needs to have the following columns: pub_date, title, venue, excerpt, citation, site_url, and paper_url, with a header at the top. \n\nexcerpt and paper_url can be blank, but the others must have values. \npub_date must be formatted as YYYY-MM-DD.\nurl_slug will be the descriptive part of the .md file and the permalink URL for the page about the paper. The .md file will be YYYY-MM-DD-[url_slug].md and the permalink will be https://[yourdomain]/publications/YYYY-MM-DD-[url_slug]\n\nThis is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).",
"!cat publications.tsv",
"Import pandas\nWe are using the very handy pandas library for dataframes.",
"import pandas as pd",
"Import TSV\nPandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or \\t.\nI found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.",
"publications = pd.read_csv(\"publications.tsv\", sep=\"\\t\", header=0)\npublications\n",
"Escape special characters\nYAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.",
"html_escape_table = {\n \"&\": \"&amp;\",\n '\"': \"&quot;\",\n \"'\": \"&#39;\"\n }\n\ndef html_escape(text):\n \"\"\"Produce entities within text.\"\"\"\n return \"\".join(html_escape_table.get(c,c) for c in text)",
"Creating the markdown files\nThis is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (md) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.",
"import os\nfor row, item in publications.iterrows():\n \n md_filename = str(item.pub_date) + \"-\" + item.url_slug + \".md\"\n html_filename = str(item.pub_date) + \"-\" + item.url_slug\n year = item.pub_date[:4]\n \n ## YAML variables\n \n md = \"---\\ntitle: \\\"\" + item.title + '\"\\n'\n \n md += \"\"\"collection: publications\"\"\"\n \n md += \"\"\"\\npermalink: /publication/\"\"\" + html_filename\n \n if len(str(item.excerpt)) > 5:\n md += \"\\nexcerpt: '\" + html_escape(item.excerpt) + \"'\"\n \n md += \"\\ndate: \" + str(item.pub_date) \n \n md += \"\\nvenue: '\" + html_escape(item.venue) + \"'\"\n \n if len(str(item.paper_url)) > 5:\n md += \"\\npaperurl: '\" + item.paper_url + \"'\"\n \n md += \"\\ncitation: '\" + html_escape(item.citation) + \"'\"\n \n md += \"\\n---\"\n \n ## Markdown description for individual page\n \n if len(str(item.excerpt)) > 5:\n md += \"\\n\" + html_escape(item.excerpt) + \"\\n\"\n \n if len(str(item.paper_url)) > 5:\n md += \"\\n[Download paper here](\" + item.paper_url + \")\\n\" \n \n md += \"\\nRecommended citation: \" + item.citation\n \n md_filename = os.path.basename(md_filename)\n \n with open(\"../_publications/\" + md_filename, 'w') as f:\n f.write(md)",
"These files are in the publications directory, one directory below where we're working from.",
"!ls ../_publications/\n\n!cat ../_publications/2009-10-01-paper-title-number-1.md"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
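The `html_escape` helper in the notebook above is small enough to check in isolation. A self-contained sketch of the same entity mapping (ampersand plus double and single quotes):

```python
html_escape_table = {
    "&": "&amp;",
    '"': "&quot;",
    "'": "&#39;",
}

def html_escape(text):
    """Replace YAML-hostile characters with their HTML entities."""
    return "".join(html_escape_table.get(c, c) for c in text)

print(html_escape('Stuart & "co"'))  # Stuart &amp; &quot;co&quot;
```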
patemotter/trilinos-prediction
|
ml_files/pandas-profiling.ipynb
|
mit
|
[
"This script generates profiles of pandas dataframes using the Pandas-Profiling library",
"import pandas_profiling\nimport pandas as pd",
"Generate profile just for the properties",
"props = pd.read_csv('../data/processed_properties.csv', \n header=0, index_col=0)\nprops = props.drop_duplicates()\nprops = props.dropna()\nprops.info()\nprops_profile = pandas_profiling.ProfileReport(props)\nprops_profile.to_file('props_profile.html')",
"Generate profiles for the individual systems+properties",
"comet = pd.read_csv('../data/comet/comet_unprocessed_timings.csv', \n header=0, index_col=0)\ncomet = comet.drop_duplicates()\ncomet = comet.dropna()\ncomet.info()\ncomet_merged = pd.merge(comet, props, on='matrix')\ncomet_profile = pandas_profiling.ProfileReport(comet_merged)\ncomet_profile.to_file('comet_unprocessed_timings_profile.html')\n\njanus = pd.read_csv('../data/janus/janus_unprocessed_timings.csv', \n header=0, index_col=0)\njanus = janus.drop_duplicates()\njanus = janus.dropna()\njanus.info()\njanus_merged = pd.merge(janus, props, on='matrix')\njanus_profile = pandas_profiling.ProfileReport(janus_merged)\njanus_profile.to_file('janus_unprocessed_timings_profile.html')\n\nbridges = pd.read_csv('../data/bridges/bridges_unprocessed_timings.csv', \n header=0, index_col=0)\nbridges = bridges.drop_duplicates()\nbridges = bridges.dropna()\nbridges.info()\nbridges_merged = pd.merge(bridges, props, on='matrix')\nbridges_profile = pandas_profiling.ProfileReport(bridges_merged)\nbridges_profile.to_file('bridges_unprocessed_timings_profile.html')",
"Generate profiles for the combined times+properties",
"all_times = pd.concat([comet, bridges, janus], ignore_index=True)\nall_times.info()\n\ncombined = pd.merge(props, all_times, on=['matrix','matrix_id'])\ncombined.info()\ncombined = combined.drop_duplicates()\ncombined = combined.dropna()\ncombined_profile = pandas_profiling.ProfileReport(combined)\ncombined_profile.to_file('unprocessed_combined_profile.html')\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier\n\ncombined_new = combined.drop(['matrix', 'solver', 'prec', \n 'status', 'system'], axis=1)\ncombined_new = combined_new.dropna()\n\nX = combined_new.iloc[:,:-2]\ny = combined_new.iloc[:, -1]\n\nclf = RandomForestClassifier()\nclf.fit(X, y)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
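The merges in the notebook above join each system's timing table to a shared properties table on the `matrix` column. A toy sketch of that drop-duplicates-then-merge pattern; the column names and values here are invented stand-ins, not the project's real data:

```python
import pandas as pd

# invented stand-ins for the properties and timing tables
props = pd.DataFrame({'matrix': ['A', 'B', 'C'], 'nnz': [10, 20, 30]})
timings = pd.DataFrame({'matrix': ['A', 'A', 'B'], 'time': [1.0, 1.0, 2.0]})

timings = timings.drop_duplicates()             # remove exact duplicate rows
merged = pd.merge(timings, props, on='matrix')  # inner join on the shared key
print(len(merged))  # 2: the duplicate 'A' row is dropped, 'C' has no timings
```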
CalPolyPat/phys202-2015-work
|
assignments/assignment04/MatplotlibExercises.ipynb
|
mit
|
[
"Visualization 1: Matplotlib Basics Exercises",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom __future__ import print_function\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.html import widgets",
"Scatter plots\nLearn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.\n\nGenerate random data using np.random.randn.\nStyle the markers (color, size, shape, alpha) appropriately.\nInclude an x and y label and title.",
"randx = np.random.randn(500)\nrandy = np.random.randn(500)\nplt.scatter(randx, randy, color = \"g\", marker = \"x\")\nplt.xlabel(\"Random X\")\nplt.ylabel(\"Random Y\")\nplt.title(\"Random Data!!!!!\")\nplt.box(False)\nplt.grid(True)",
"Histogram\nLearn how to use Matplotlib's plt.hist function to make a 1d histogram.\n\nGenerate random data using np.random.randn.\nFigure out how to set the number of histogram bins and other style options.\nInclude an x and y label and title.",
"data = np.random.randn(500000)\ndef plothist(bins, numdata):\n plt.hist(np.random.randn(numdata), bins=bins, color = \"k\", ec = \"w\")\ninteract(plothist, bins=widgets.IntSlider(min=1,max=100,step=1,value=10), numdata=\\\n widgets.IntSlider(min=10,max=10000,step=10,value=10));\nplt.xlabel(\"Random Variable X\")\nplt.ylabel(\"Counts\")\nplt.title(\"Distribution of a random variable in adjustable bins.\")",
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
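Under the hood, the adjustable-bin histogram in the exercise above is just a bin count; `np.histogram` exposes the same computation without plotting. Toy data for illustration:

```python
import numpy as np

data = np.array([0.1, 0.2, 0.4, 0.6, 0.9])
counts, edges = np.histogram(data, bins=2, range=(0.0, 1.0))
print(counts)  # [3 2] -- three values in [0, 0.5), two in [0.5, 1.0]
print(edges)   # [0.  0.5 1. ]
```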
rsterbentz/phys202-2015-work
|
days/day20/Cython.ipynb
|
mit
|
[
"Short Tour of Cython\nIn general, Python is slower than other programming languages such as C, C++ and Java. Much of this comes from the fact that Python is dynamically typed. This means that when Python is compiling code to be run by the interpreter, it doesn't make any assumptions about what type of object each variable contains. In Python code, this shows up in your ability to assign, and even change, any type to a variable:\npython\na = 10 # int\n...\na = 1.0 # float\nA statically typed language, such as C, C++ and Java, forces you to declare the type of each variable ahead of time. This allows the compilers for these languages to perform significant performance optimizations. Thus, a variable declaration and assignment in C looks like this:\nC\nint a = 10;\nCython is a Python package that allows you to provide static typing for Python. The Cython compiler can then generate and compile optimized C code that can still be called from Python. The result is that with a little bit of work on your part, Cython can speed up your Python code significantly.\nThe documentation for Cython can be found here.",
"import numpy as np\n\n%load_ext Cython",
"Primes",
"def primes(kmax):\n p = []\n k = 0\n n = 2\n while k < kmax:\n i = 0\n while i < k and n % p[i] != 0:\n i = i + 1\n if i == k:\n p.append(n)\n k = k + 1\n n = n + 1\n return p\n\n%timeit primes(100)\n\n%%cython -a\n\ndef primes_cython(int kmax):\n cdef int n, k, i\n cdef int p[1000]\n result = []\n if kmax > 1000:\n kmax = 1000\n k = 0\n n = 2\n while k < kmax:\n i = 0\n while i < k and n % p[i] != 0:\n i = i + 1\n if i == k:\n p[k] = n\n k = k + 1\n result.append(n)\n n = n + 1\n return result\n\n%timeit primes_cython(100)",
"Pairwise distances\nThis example is taken from this blog post of Jake VanderPlas.",
"X = np.random.random((1000, 3))\n\ndef pairwise_python(X):\n M = X.shape[0]\n N = X.shape[1]\n D = np.empty((M, M), dtype=np.float64)\n for i in range(M):\n for j in range(M):\n d = 0.0\n for k in range(N):\n tmp = X[i, k] - X[j, k]\n d += tmp * tmp\n D[i, j] = np.sqrt(d)\n return D\n%timeit pairwise_python(X)\n\n%%cython -a\ncimport cython\ncimport numpy\nfrom libc.math cimport sqrt\n\n@cython.boundscheck(False)\n@cython.wraparound(False)\ndef pairwise_cython(double[:, ::1] X):\n cdef int M = X.shape[0]\n cdef int N = X.shape[1]\n cdef double tmp, d\n cdef double[:, ::1] D = np.empty((M, M), dtype=np.float64)\n for i in range(M):\n for j in range(M):\n d = 0.0\n for k in range(N):\n tmp = X[i, k] - X[j, k]\n d += tmp * tmp\n D[i, j] = sqrt(d)\n return np.asarray(D)\n\n%timeit pairwise_cython(X)",
"Lorentz derivs",
"def lorentz_derivs(yvec, t, sigma, rho, beta):\n \"\"\"Compute the derivatives for the Lorentz system at yvec(t).\"\"\"\n x = yvec[0]\n y = yvec[1]\n z = yvec[2]\n return [sigma*(y-x), x*(rho-z)-y, x*y-beta*z]\n\nyvec = np.array([1.0,1.0,1.0])\nt = 1.0\nsigma = 1.0\nrho = 1.0\nbeta = 1.0\n\n%timeit lorentz_derivs(yvec, t, sigma, rho, beta)\n\n%%cython -a\n\ncimport cython\ncimport numpy\nimport numpy as np\n\ndef lorentz_derivs_cython(numpy.ndarray[double, ndim=1] yvec, double t, \n double sigma, double rho, double beta):\n \"\"\"Compute the derivatives for the Lorentz system at yvec(t).\"\"\"\n cdef double x = yvec[0]\n cdef double y = yvec[1]\n cdef double z = yvec[2]\n return [sigma*(y-x), x*(rho-z)-y, x*y-beta*z]\n\n%timeit lorentz_derivs_cython(yvec, t, sigma, rho, beta)",
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
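The pure-Python `primes` in the notebook above can be restated more compactly (and checked) before worrying about the Cython speedup; this sketch keeps the same trial-division idea:

```python
def primes(kmax):
    """First `kmax` primes by trial division against the primes found so far."""
    p = []
    n = 2
    while len(p) < kmax:
        # n is prime iff no smaller prime divides it
        if all(n % q != 0 for q in p):
            p.append(n)
        n += 1
    return p

print(primes(5))  # [2, 3, 5, 7, 11]
```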
m7thon/tom
|
doc/example.ipynb
|
mit
|
[
"Usage example\nTo showcase the use of this toolkit, we first create a simple learning task, and then learn an OOM model using spectral learning.\nWe start by importing the toolkit and initializing a random generator.",
"import tom\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nrand = tom.Random(1234567)",
"1. The learning task\nFirst, we randomly create a sparse 7-dimensional OOM with an alphabet size of $|\\Sigma| = 5$. This describes a stationary and ergodic symbol process. We sample a training sequence of length $10^6$ and five test sequences each of length $10^4$.\nWe will use initial subsequences of the training sequence of increasing lengths $\\{10^2, 10^{2.5}, 10^3, 10^{3.5}, 10^4, 10^{4.5}, 10^{5}, 10^{5.5}, 10^6\\}$ as data for the OOM estimation, and test the performance of the learnt models on the test sequences by computing the time-averaged negative $\\log_2$-likelihood.",
"oom = tom.Oom(7, 5, 0, 20, 1e-7, rand)\ntrain_sequence = oom.sample(10**6, rand)\ntest_sequences = []\nfor i in range(5):\n oom.reset()\n test_sequences.append(oom.sample(10**4, rand))\ntrain_lengths = [int(10**(k/2)) for k in range(4,13)]",
"2. Performing spectral learning\nSpectral learning requires the following steps. For details consult the publication: Michael Thon and Herbert Jaeger. Links between multiplicity automata, observable operator models and predictive state representations -- a unified learning framework. Journal of Machine Learning Research, 16:103–147, 2015.\n\n\nFor words $\\bar{x}\\in\\Sigma^*$, estimate from the available data the values $\\hat{f}(\\bar{x})$, where $f(\\bar{x}) = P(\\bar{x})$ is the stationary probability of observing $\\bar{x}$. This is accomplished by a tom.Estimator object, which uses a suffix tree representation of the data in the form of a tom.STree to compute these estimates efficiently.\n\n\nSelect sets $X, Y \\subseteq \\Sigma^*$ of \"indicative\" and \"characteristic\" words that determine which of the above estimates will be used for the spectral learning. Here, we will use the at most 1000 words occurring most often in the training sequence. This is computed efficiently by the function tom.getWordsFromData from a suffix tree representation of the training data.\n\n\nEstimate an appropriate target dimension $d$ by the numerical rank of the matrix $\\hat{F}^{Y,X} = [\\hat{f}(\\bar{x}\\bar{y})]_{\\bar{y}\\in Y, \\bar{x}\\in X}$.\n\n\nPerform the actual spectral learning using the function tom.learn.spectral. 
This consists of the following steps:\n\nFind the best rank-$d$ approximation $BA \\approx \\hat{F}^{Y,X}$ to the matrix $\\hat{F}^{Y,X}$.\nProject the columns of $\\hat{F}^{Y,X}$ and $\\hat{F}_z^{Y,X} = [\\hat{f}(\\bar{x} z \\bar{y})]_{\\bar{y}\\in Y, \\bar{x}\\in X}$, as well as the vector $\\hat{F}^{X, \\varepsilon} = [\\hat{f}(\\bar{x})]_{\\bar{x}\\in X}$ to the principal subspace spanned by $B$, giving the coordinate representations $A$, $A_z$ and $\\hat{\\omega}_\\varepsilon$, respectively.\nSolve $\\hat{\\tau_z} A = A_z$ in the least-squares sense for each symbol $z\\in \\Sigma$, as well as $\\hat{\\sigma} A = \\hat{F}^{\\varepsilon, Y} = [\\hat{f}(\\bar{y})]^\\top_{\\bar{y}\\in Y}$.\n\n\n\nThe estimated model should be \"stabilized\" to ensure that it cannot produce negative probability estimates.\n\n\nThis is performed once for each training sequence length.",
"# Initialize a tom.Data object that computes the desired estimates from the training\n# data (using a suffix tree representation internally) and provides the required\n# data matrices including variance estimates.\ndata = tom.Data()\n\n# For every training sequence length, learn a model via spectral learning\nlearnt_ooms = []\nfor train_length in train_lengths:\n # 1. Use the current training sequence to obtain estimates\n data.sequence = train_sequence.sub(train_length)\n \n # 2. Select sets of indicative and characteristic words:\n data.X = data.Y = tom.wordsFromData(data.stree, maxWords = 1000)\n \n # 3. Estimate an appropriate target dimension (using no weights here):\n d = tom.learn.dimension_estimate(data, v=(1,1))\n \n # 4. Perform spectral learning to estimate an OOM:\n learnt_oom = tom.learn.model_estimate(data, d)\n \n # 5. Set default stabilization parameters for the learnt model:\n learnt_oom.stabilization(preset='default')\n\n learnt_ooms.append(learnt_oom)\n\n # Print a very simple progress indicator:\n print('.', end='', flush=True)\nprint('done!')",
"3. Evaluate the learnt models and plot the results\nWe first print the estimated model dimension to see if the dimension estimation has produced reasonable values.\nNext we evaluate the learnt models by computing the time-averaged negative $\\log_2$-likelihood (cross-entropy) on the test sequences by the member function Oom.l2l(test_sequence). Note that a value of $\\log_2(|\\Sigma|) \\approx 2.32$ corresponds to pure chance level (i.e., a model guessing the next symbol uniformly randomly). Furthermore, we can estimate the best possible value by computing the time-averaged negative $\\log_2$-\"likelihood\" of the true model on the test sequences, which samples the entropy of the stochastic process.\nWe then plot the performance of the estimated models (y-axis), where we scale the plot such that the minimum corresponds to the best possible model, and the maximum corresponds to pure chance.",
"# Let's examine the estimated model dimensions:\nprint('Estimated model dimensions: ', [learnt_oom.dimension() for learnt_oom in learnt_ooms])\n\n# The time-averaged negative log2-likelihood is computed by the function `oom.l2l(test_sequence)`.\nresults = [np.average([ learnt_oom.l2l(test_sequence) for test_sequence in test_sequences ])\n for learnt_oom in learnt_ooms]\n\n# Compute an approximation to the optimum value:\nl2l_opt = np.average([oom.l2l(test_sequence) for test_sequence in test_sequences])\n\n# Plot the performance of the estimated models:\nplt.semilogx(train_lengths, results);\nplt.xlim((train_lengths[0], train_lengths[-1]));\nplt.ylim((l2l_opt, np.log2(5)));\nplt.title('Performance of the estimated models');\nplt.ylabel('cross-entropy');\nplt.xlabel('Length of training data');"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
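Step 4 of the spectral learning recipe above hinges on a best rank-$d$ approximation $BA \approx \hat{F}^{Y,X}$. A minimal numpy sketch of just that step via truncated SVD; the matrix here is synthetic, not an actual estimate matrix from the toolkit:

```python
import numpy as np

def best_rank_d(F, d):
    """Factor the best rank-d approximation of F (Eckart-Young) as B @ A."""
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    B = U[:, :d] * s[:d]   # fold the leading singular values into the left factor
    A = Vt[:d, :]
    return B, A

rng = np.random.default_rng(0)
F = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 8))  # exactly rank 2
B, A = best_rank_d(F, 2)
print(np.allclose(B @ A, F))  # True: a rank-2 matrix is recovered exactly
```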
sjev/talks
|
pythonMeetupDec16/slides.ipynb
|
mit
|
[
"# import libraries \n%pylab inline\nimport tradingWithPython as twp\nimport pandas as pd\ntwp.extra.setNotebookStyle()\nfigsize(10,5)",
"<img src=\"files/img/cover_sheet.svg\">\nOutline\n\n\nIntro\n\n\nScientific Python tools\n\n\nGetting data\n\n\nExample strategy\n\n\nAbout\n\nProgramming since 1992 \nBackground in Applied Physics (TU Delft)\n\nWorking at Oce since 2005\n\nalgorithm development\nmachine vision\nimage processing\n\n\n\nTrading stocks as a hobby since 2009\n\nsee my adventures at tradingWithPython blog\n\n\n\nWhy Python\n\nPerfect all-round tool\n(web) application development\nscientific calculations\nmassive community\n\n\n\nScientific python - dev tools\n(see http://github.com/jrjohansson/scientific-python-lectures for more)\n\nIPython - interactive python - hacking\nJupyter Notebook - code & document - fantastic research tool\nSpyder - IDE - good development tool\nEclipse & others - engineering tools \n\nLibraries - general\n\nnumpy - matlab-like matrix calculations\nscipy - scientific libraries (interpolation, transformations etc)\nscikit-learn - machine learning \nkeras - deep learning\n\nLibraries - Finance\n\npandas - data analysis library\ntrading-with-python - my toolbox\nzipline - backtesting (I don't use it)\nibpy - interfacing with InteractiveBrokers API\n\nJupyter notebook\n( previously called IPython notebook )\nProject found on jupyter.org\n\nCombine code, equations, visualisations, html etc\nExplore & document\nShare with others\n\n<img src=\"img/jupyterpreview.png\" width=\"800\">\nSpyder\nSpyder is a MATLAB-like IDE for scientific computing with python. 
It has many of the advantages of a traditional IDE environment, for example that everything from code editing, execution and debugging is carried out in a single environment, and work on different calculations can be organized as projects in the IDE environment.\n<!-- <img src=\"files/images/spyder-screenshot.jpg\" width=\"800\"> -->\n<img src=\"img/spyder-screenshot.jpg\" width=\"800\">\nSome advantages of Spyder:\n\nPowerful code editor, with syntax highlighting, dynamic code introspection and integration with the python debugger.\nVariable explorer, IPython command prompt.\nIntegrated documentation and help.\n\nPlotting\n\nmatplotlib - Matlab plotting clone\nbokeh - interactive javascript plots (see tutorial)",
"# matplotlib example \n# plot 5-sec data\nprice = pd.DataFrame.from_csv('data/SPY_20160411205955.csv')\nprice.close.plot()\n\n# bokeh example\nfrom bokeh.io import output_notebook, show\nfrom bokeh.plotting import figure\nfrom bokeh.charts import Line\noutput_notebook()\n\n\nline = Line(price.close, plot_width=800, plot_height=400)\nshow(line)",
"Getting the data\nYahoo Finance\n\nfree daily OHLC data\n\nInteractive Brokers\n\nfree (for clients) intraday data \n\nCBOE - Chicago Board Options Exchange\n\ndaily volatility data\n\nQuandl\n\nsubscription packages, easy interface",
"# get data from yahoo finance\nprice = twp.yahooFinance.getHistoricData('SPY')\n\nprice['adj_close'].plot()",
"Interactive brokers\n\nhas a decent API for historic & realtime data and order submission\nprovides data down to 1 s resolution\nhistoric data - see downloader code\nrealtime quotes - see tick logger \n\nSimple volatility strategy\n\ntrade VXX\nuse VIX-VXV as indicator \nvery simple approximation\nno transaction cost\nsimple summation of percent returns\n\n\n\n... this one actually makes money. \nDisclaimer : you will lose money. Don't blame me for anything.",
"# get data \nimport tradingWithPython.lib.cboe as cboe # cboe data\n\nsymbols = ['SPY','VXX','VXZ']\npriceData = twp.yahooFinance.getHistoricData(symbols).minor_xs('adj_close')\nvolData = cboe.getHistoricData(['VIX','VXV']).dropna()\n\nvolData.plot();\n\n\ndelta = volData.VIX - volData.VXV\ndelta.plot()\ntitle('delta')\n\n# prepare data\n\ndf = pd.DataFrame({'VXX':priceData.VXX, 'delta':delta}).dropna()\ndf.plot(subplots=True);\n\n# strategy simulation function\n\ndef backtest(thresh):\n \"\"\" backtest strategy with a threshold value\"\"\"\n\n df['dir'] = 0 # init with zeros\n df['ret'] = df.VXX.pct_change()\n\n long = df.delta > thresh\n short = df.delta < thresh\n #df.loc[long,'dir'] = 1 # set long positions\n df.loc[short,'dir'] = -1 # set short positions\n\n df['dir'] = df['dir'].shift(1) # don't forget to shift one day forward!\n\n df['pnl'] = df['dir'] * df['ret']\n return df\n\ndf = backtest(0)\ndf\n\n# check relationship delta-returns\ndf.plot(kind='scatter',x='delta',y='ret')",
"Do a parameter scan\n... simulate for different values of thresh variable",
"T = np.linspace(-3,0,10)\nh = ['%.2f' % t for t in T] # make table header\npnl ={} # pnl dict\n\nPNL = pd.DataFrame(index=df.index, columns=h)\n\nfor i, t in enumerate(T):\n PNL[h[i]] = backtest(thresh=t)['pnl']\n\n\nPNL.cumsum().plot()\n\n# evaluate performance\ntwp.sharpe(PNL).plot(kind='bar')\nxlabel('threshold')\nylabel('sharpe')\ntitle('strategy performance')\n\n# plot best strategy\nPNL['-2.00'].cumsum().plot()",
"Closing remarks...\n\n\na lot of info is available on the web, go google! \n\nmy blog\nquantocracy\nquantStart\n\n\n\nTrading stocks is hard, REALLY hard\n\nexpect to spend 1000+ hours before becoming profitable\nmargin call is your worst enemy\nyou will panic\nyou will lose money"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
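The one detail worth isolating from the backtest above is the `shift(1)`: positions must come from yesterday's signal, or the simulation peeks at the future. A toy pandas sketch with invented prices and an invented signal:

```python
import pandas as pd

px = pd.Series([10.0, 11.0, 10.0, 12.0])   # invented daily prices
ret = px.pct_change()                      # daily percent returns

signal = pd.Series([-1, -1, 0, -1])        # hypothetical short signal
position = signal.shift(1)                 # trade the day *after* the signal
pnl = position * ret                       # percent return of the strategy
print(pnl.sum())
```

Dropping the `shift(1)` would credit each day's return to a signal computed from that same day's close, a classic look-ahead bug.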
adityaka/misc_scripts
|
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/04_03/Begin/.ipynb_checkpoints/Indexing-checkpoint.ipynb
|
bsd-3-clause
|
[
"Indexing and Selection\n| Operation | Syntax | Result |\n|-------------------------------|----------------|-----------|\n| Select column | df[col] | Series |\n| Select row by label | df.loc[label] | Series |\n| Select row by integer | df.iloc[loc] | Series |\n| Select rows | df[start:stop] | DataFrame |\n| Select rows with boolean mask | df[mask] | DataFrame |\ndocumentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html",
"import pandas as pd\nimport numpy as np\n\nproduce_dict = {'veggies': ['potatoes', 'onions', 'peppers', 'carrots'],'fruits': ['apples', 'bananas', 'pineapple', 'berries']}\nproduce_df = pd.DataFrame(produce_dict)\nproduce_df",
"selection using dictionary-like string\nlist of strings as index (note: double square brackets)\nselect row using integer index\nselect rows using integer slice\n+ is overloaded as concatenation operator\nData alignment and arithmetic\nData alignment between DataFrame objects automatically aligns on both the columns and the index (row labels).\nNote locations for 'NaN'",
"df = pd.DataFrame(np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])\ndf2 = pd.DataFrame(np.random.randn(7, 3), columns=['A', 'B', 'C'])\nsum_df = df + df2\nsum_df",
"Boolean indexing\nfirst select rows in column B whose values are less than zero\nthen, include information for all columns in that row in the resulting data set\nisin function\nwhere function"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
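The selection table in the notebook above maps directly to code; a small sketch exercising each row of it (toy frame with invented labels):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [-1.0, 0.5, -2.0]},
                  index=['x', 'y', 'z'])

col = df['A']             # select column           -> Series
row = df.loc['y']         # select row by label     -> Series
row_i = df.iloc[0]        # select row by integer   -> Series
rows = df['x':'y']        # select rows by slice    -> DataFrame (label slices are inclusive)
masked = df[df['B'] < 0]  # select rows with a mask -> DataFrame
print(list(masked.index))  # ['x', 'z']
```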
jcmgray/quijy
|
docs/examples/ex_tn_train_circuit.ipynb
|
mit
|
[
"import quimb as qu\nimport quimb.tensor as qtn\n\ndef single_qubit_layer(circ, gate_round=None):\n \"\"\"Apply a parametrizable layer of single qubit ``U3`` gates.\n \"\"\"\n for i in range(circ.N):\n # initialize with random parameters\n params = qu.randn(3, dist='uniform')\n circ.apply_gate(\n 'U3', *params, i, \n gate_round=gate_round, parametrize=True)\n \ndef two_qubit_layer(circ, gate2='CZ', reverse=False, gate_round=None):\n \"\"\"Apply a layer of constant entangling gates.\n \"\"\"\n regs = range(0, circ.N - 1)\n if reverse:\n regs = reversed(regs)\n \n for i in regs:\n circ.apply_gate(\n gate2, i, i + 1, gate_round=gate_round)\n\ndef ansatz_circuit(n, depth, gate2='CZ', **kwargs):\n \"\"\"Construct a circuit of single qubit and entangling layers.\n \"\"\"\n circ = qtn.Circuit(n, **kwargs)\n \n for r in range(depth):\n # single qubit gate layer\n single_qubit_layer(circ, gate_round=r)\n \n # alternate between forward and backward CZ layers\n two_qubit_layer(\n circ, gate2=gate2, gate_round=r, reverse=r % 2 == 0)\n \n # add a final single qubit layer\n single_qubit_layer(circ, gate_round=r + 1)\n \n return circ\n\nn = 6\ndepth = 9\ngate2 = 'CZ'\n\ncirc = ansatz_circuit(n, depth, gate2=gate2)\ncirc",
"We can extract just the unitary part of the circuit as a tensor network like so:",
"V = circ.uni",
"You can see it already has various tags identifying its structure (indeed enough to uniquely identify each gate):",
"V.graph(color=['U3', gate2], show_inds=True)\n\nV.graph(color=[f'ROUND_{i}' for i in range(depth)], show_inds=True)\n\nV.graph(color=[f'I{i}' for i in range(n)], show_inds=True)\n\n# the hamiltonian\nH = qu.ham_ising(n, jz=1.0, bx=0.7, cyclic=False)\n\n# the propagator for the hamiltonian\nt = 2\nU_dense = qu.expm(-1j * t * H)\n\n# 'tensorized' version of the unitary propagator\nU = qtn.Tensor(\n data=U_dense.reshape([2] * (2 * n)),\n inds=[f'k{i}' for i in range(n)] + [f'b{i}' for i in range(n)],\n tags={'U_TARGET'}\n)\nU.graph(color=['U3', gate2, 'U_TARGET'])",
"The core object describing how similar two unitaries are is $\\mathrm{Tr}(V^{\\dagger}U)$, which we can naturally visualize as a tensor network:",
"(V.H & U).graph(color=['U3', gate2, 'U_TARGET'])",
"For our loss function we'll normalize this and negate it (since the optimizer minimizes).",
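As a quick dense-matrix sanity check of this normalization (a sketch in plain NumPy, with a random unitary standing in for the circuit unitary; the helper below is an illustration, not part of quimb's API):

```python
import numpy as np

n = 3            # number of qubits (illustrative; the notebook itself uses n = 6)
dim = 2 ** n

# Build a random unitary via QR decomposition -- a stand-in for V.
rng = np.random.default_rng(42)
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
Q, _ = np.linalg.qr(A)

# When V equals the target U, |Tr(V^dag U)| / 2**n is exactly 1,
# so the loss 1 - |Tr(V^dag U)| / 2**n is 0.
overlap = abs(np.trace(Q.conj().T @ Q)) / dim
print(overlap)  # 1.0 up to floating point
```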
"def loss(V, U):\n return 1 - abs((V.H & U).contract(all, optimize='auto-hq')) / 2**n\n\n# check our current unitary 'infidelity':\nloss(V, U)\n\n# use the autograd/jax based optimizer\nimport quimb.tensor.optimize_autograd as qto\n\ntnopt = qto.TNOptimizer(\n V, # the tensor network we want to optimize\n loss, # the function we want to minimize\n loss_constants={'U': U}, # supply U to the loss function as a constant TN\n constant_tags=[gate2], # within V we also want to keep all the CZ gates constant\n autograd_backend='jax', # use 'autograd' for non-compiled optimization\n optimizer='L-BFGS-B', # the optimization algorithm\n)",
"We could call optimize for pure gradient-based optimization, but since unitary circuits can be tricky we'll use optimize_basinhopping, which combines gradient descent with 'hopping' to escape local minima:",
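To see the basin-hopping idea in isolation (a minimal sketch using SciPy's general-purpose optimizer on a hypothetical 1-D objective, independent of quimb and of the circuit loss above):

```python
import numpy as np
from scipy.optimize import basinhopping

# A 1-D objective with many local minima; a plain local minimizer
# started at x0 = 1.0 would get stuck in a nearby shallow basin.
def f(x):
    return np.cos(14.5 * x - 0.3) + (x + 0.2) * x

# Basin hopping alternates local minimization with random 'hops',
# letting the search escape local minima.
result = basinhopping(f, x0=1.0, niter=200, seed=0)
print(result.x, result.fun)  # global minimum is near x = -0.195
```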
"# allow 10 hops with 500 steps in each 'basin'\nV_opt = tnopt.optimize_basinhopping(n=500, nhop=10)",
"The optimized tensor network still contains PTensor instances but now with optimized parameters. \nFor example, here's the tensor of the U3 gate acting on qubit-2 in round-4:",
"V_opt['U3', 'I2', 'ROUND_4']",
"We can see the parameters have been updated by the training:",
"# the initial values\nV['U3', 'ROUND_4', 'I2'].params\n\n# the optimized values\nV_opt['U3', 'ROUND_4', 'I2'].params",
"We can see what gate these parameters would generate:",
"qu.U_gate(*V_opt['U3', 'ROUND_4', 'I2'].params)",
"A final sanity check we can perform is to try evolving a random state with the target unitary and trained circuit and check the fidelity between the resulting states.\nFirst we turn the tensor network version of $V$ into a dense matrix:",
"V_opt_dense = V_opt.to_dense([f'k{i}' for i in range(n)], [f'b{i}' for i in range(n)])",
"Next we create a random initial state, and evolve it with both the exact propagator and the trained circuit:",
"psi0 = qu.rand_ket(2**n)\n\n# this is the exact state we want\npsif_exact = U_dense @ psi0\n\n# this is the state our circuit will produce if fed `psi0`\npsif_apprx = V_opt_dense @ psi0",
"The (in)fidelity should broadly match our training loss:",
"f\"Fidelity: {100 * qu.fidelity(psif_apprx, psif_exact):.2f} %\""
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
|
ex24-Visualize CO2 Time Series with Python.ipynb
|
mit
|
[
"Visualize CO2 Time Series with Python\nNowadays, when people talk about the rise of our planet's average surface temperature, they will inevitably mention carbon dioxide and other Greenhouse Gases (GHGs). We can easily check the latest CO2 data using Python. CO2 data can be downloaded from esrl, covering the period from Mar/1958 to Apr/2018. CO2 is expressed as a mole fraction in dry air (micromol/mol, abbreviated as ppm).\nThese are typical time-series data, one of the most common data types. One powerful yet simple method for analyzing and predicting periodic data is the additive model. The idea is straightforward: represent a time series as a combination of patterns at different scales, such as daily, weekly, seasonal, and yearly, along with an overall trend.\nIn this notebook, we will introduce some common techniques used in time-series analysis and walk through the iterative steps required to manipulate and visualize time-series data.\n1. Load all needed libraries",
"import pandas as pd\nimport statsmodels.api as sm\nfrom matplotlib import pyplot as plt\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline\n\n# Set some parameters to apply to all plots. These can be overridden\nimport matplotlib\n# Plot size to 12\" x 7\"\nmatplotlib.rc('figure', figsize = (12, 7))\n# Font size to 14\nmatplotlib.rc('font', size = 14)\n# Do not display top and right frame lines\nmatplotlib.rc('axes.spines', top = False, right = False)\n# Remove grid lines\nmatplotlib.rc('axes', grid = False)\n# Set background color to white\nmatplotlib.rc('axes', facecolor = 'white')",
"2. Read CO2 time series data\n2.1 Load data",
"co2 = pd.read_csv('data\\co2_mm_mlo.txt', \n skiprows=72,\n header=None, \n comment = \"#\", \n delim_whitespace = True, \n names = [\"year\", \"month\", \"decimal_date\", \"average\", \"interpolated\", \"trend\", \"days\"],\n na_values =[-99.99, -1])\n\nco2['Date'] = co2['year']*100 + co2['month']\nco2['Date'] = pd.to_datetime(co2['Date'], format='%Y%m', errors='ignore')\nco2.set_index('Date', inplace=True)",
"2.2 Drop other columns, only keep the original data",
"co2.drop([\"year\", \"month\", \"decimal_date\", \"interpolated\", \"trend\", \"days\"], axis=1, inplace=True)\n\nco2.head()",
"2.3 Handle missing values\nReal-world data tend to be messy. Data can have missing values for a number of reasons, such as observations that were not recorded and data corruption. Handling missing data is important because many data analysis algorithms do not support data with missing values.\nThe simplest way to reveal missing data is the isnull command.",
"co2.isnull().sum()",
"There are 7 months with missing values in our time series.\nThe simplest strategy for handling missing data is to drop the records that contain a missing value. Pandas provides the dropna() function, which can be used to drop either columns or rows with missing data. The syntax for dropping rows with missing values looks like: dataset.dropna(inplace=True).\nHowever, we should \"fill in\" missing values if they are not too numerous so that we don’t have gaps in the data. This can be done using the fillna() command in pandas. The filling methods consist of\n* backfill\n* bfill\n* pad\n* ffill\n* None (default)\nFor simplicity, missing values are filled with the closest non-null value in the CO2 time series, although it is important to note that a rolling mean would sometimes be preferable.",
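The difference between backward and forward filling is easiest to see on a tiny series (the values below are illustrative, not real CO2 observations):

```python
import numpy as np
import pandas as pd

# A small series with gaps, mimicking missing monthly readings.
s = pd.Series([315.7, np.nan, 317.4, np.nan, 318.9])

backfilled = s.bfill()    # each gap takes the NEXT valid observation
forwardfilled = s.ffill() # each gap takes the PREVIOUS valid observation

print(backfilled.tolist())     # [315.7, 317.4, 317.4, 318.9, 318.9]
print(forwardfilled.tolist())  # [315.7, 315.7, 317.4, 317.4, 318.9]
```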
"co2 = co2.fillna(co2.bfill())",
"Now the number of missing values should be 0.",
"co2.isnull().sum()",
"3. Visualizing CO2 Time-series Data\n3.1 Start with a quick plot\nIt is very easy to use Pandas to plot the co2 time series. Moreover, deeper analysis always starts with the first view of data.",
"co2.plot(title='Monthly CO2 (ppm)')",
"From the above image, we can see that there may be a linear trend, but it is hard to be sure from eye-balling. Moreover, there is an obvious seasonality pattern whose amplitude (the height of the cycles) appears to be stable, suggesting that the data are suitable for an additive model.\nWe can also visualize our data using a method called time-series decomposition. As its name suggests, time-series decomposition allows us to decompose a time series into three distinct components: trend, seasonality, and noise.\n3.2 Decompose time-series\nThe seasonal_decompose function provided by statsmodels is applied to perform seasonal decomposition of the CO2 data.",
"decomposition = sm.tsa.seasonal_decompose(co2, model='additive')\nfig = decomposition.plot()",
"Each component of decomposition is accessible via:\n\ndecomposition.resid\ndecomposition.seasonal\ndecomposition.trend\n\nFor example, we can check the trend in 1991.",
"decomposition.trend['1991']",
"Summary\nThe plot above clearly shows an upward trend of the monthly CO2, along with a stable seasonality using time-series decomposition. \nReferences\nSeabold, Skipper, and Josef Perktold. “Statsmodels: Econometric and statistical modeling with python.” Proceedings of the 9th Python in Science Conference. 2010.\nData Structures for Statistical Computing in Python; presented at SciPy 2010\npandas: a Foundational Python Library for Data Analysis and Statistics; presented at PyHPC2011\nhttp://www.statsmodels.org/dev/generated/statsmodels.tsa.seasonal.seasonal_decompose.html\nhttps://climatedataguide.ucar.edu/climate-data-tools-and-analysis/trend-analysis"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/community/gapic/custom/showcase_custom_image_classification_batch_explain.ipynb
|
apache-2.0
|
[
"# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Vertex client library: Custom training image classification model for batch prediction with explanation\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_image_classification_batch_explain.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_image_classification_batch_explain.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use the Vertex client library for Python to train and deploy a custom image classification model for batch prediction with explanation.\nDataset\nThe dataset used for this tutorial is the CIFAR10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.\nObjective\nIn this tutorial, you create a custom model, with a training pipeline, from a Python script in a Google prebuilt Docker container using the Vertex client library, and then do a batch prediction with explanations on the uploaded model. 
You can alternatively create custom models using gcloud command-line tool or online using Google Cloud Console.\nThe steps performed include:\n\nCreate a Vertex custom job for training a model.\nTrain the TensorFlow model.\nRetrieve and load the model artifacts.\nView the model evaluation.\nSet explanation parameters.\nUpload the model as a Vertex Model resource.\nMake a batch prediction with explanations.\n\nCosts\nThis tutorial uses billable components of Google Cloud (GCP):\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nInstallation\nInstall the latest version of Vertex client library.",
"import os\nimport sys\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install -U google-cloud-aiplatform $USER_FLAG",
"Install the latest GA version of google-cloud-storage library as well.",
"! pip3 install -U google-cloud-storage $USER_FLAG\n\nif os.environ[\"IS_TESTING\"]:\n ! pip3 install -U tensorflow\n\nif os.environ[\"IS_TESTING\"]:\n ! pip3 install -U opencv-python",
"Restart the kernel\nOnce you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.",
"if not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)",
"Before you begin\nGPU runtime\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex APIs and Compute Engine APIs.\n\n\nThe Google Cloud SDK is already installed in Google Cloud Notebook.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.",
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID",
"Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation",
"REGION = \"us-central1\" # @param {type: \"string\"}",
"Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.",
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"Authenticate your Google Cloud account\nIf you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.",
"# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you submit a custom training job using the Vertex client library, you upload a Python package\ncontaining your training code to a Cloud Storage bucket. Vertex runs\nthe code from this package. In this tutorial, Vertex also saves the\ntrained model that results from your job in the same bucket. You can then\ncreate an Endpoint resource based on this output in order to serve\nonline predictions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.",
"BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"-aip-\" + TIMESTAMP",
"Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.",
"! gsutil mb -l $REGION $BUCKET_NAME",
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"! gsutil ls -al $BUCKET_NAME",
"Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport Vertex client library\nImport the Vertex client library into our Python environment.",
"import time\n\nimport google.cloud.aiplatform_v1beta1 as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.struct_pb2 import Value",
"Vertex constants\nSetup up the following constants for Vertex:\n\nAPI_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.\nPARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.",
"# API service endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION",
"Hardware Accelerators\nSet the hardware accelerators (e.g., GPU), if any, for training and prediction.\nSet the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:\n(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)\n\nFor GPU, available accelerators include:\n - aip.AcceleratorType.NVIDIA_TESLA_K80\n - aip.AcceleratorType.NVIDIA_TESLA_P100\n - aip.AcceleratorType.NVIDIA_TESLA_P4\n - aip.AcceleratorType.NVIDIA_TESLA_T4\n - aip.AcceleratorType.NVIDIA_TESLA_V100\nOtherwise specify (None, None) to use a container image to run on a CPU.\nNote: TF releases before 2.3 with GPU support will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.",
"if os.getenv(\"IS_TESTING_TRAIN_GPU\"):\n TRAIN_GPU, TRAIN_NGPU = (\n aip.AcceleratorType.NVIDIA_TESLA_K80,\n int(os.getenv(\"IS_TESTING_TRAIN_GPU\")),\n )\nelse:\n TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)\n\nif os.getenv(\"IS_TESTING_DEPOLY_GPU\"):\n DEPLOY_GPU, DEPLOY_NGPU = (\n aip.AcceleratorType.NVIDIA_TESLA_K80,\n int(os.getenv(\"IS_TESTING_DEPOLY_GPU\")),\n )\nelse:\n DEPLOY_GPU, DEPLOY_NGPU = (None, None)",
"Container (Docker) image\nNext, we will set the Docker container images for training and prediction\n\nTensorFlow 1.15\ngcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest\nTensorFlow 2.1\ngcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest\nTensorFlow 2.2\ngcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest\nTensorFlow 2.3\ngcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest\nTensorFlow 2.4\ngcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest\nXGBoost\ngcr.io/cloud-aiplatform/training/xgboost-cpu.1-1\nScikit-learn\ngcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest\nPytorch\ngcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest\ngcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest\ngcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest\ngcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest\n\nFor the latest list, see Pre-built containers for training.\n\nTensorFlow 1.15\ngcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest\ngcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest\nTensorFlow 2.1\ngcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest\ngcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest\nTensorFlow 2.2\ngcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest\ngcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest\nTensorFlow 
2.3\ngcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest\ngcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest\nXGBoost\ngcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest\ngcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest\ngcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest\ngcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest\nScikit-learn\ngcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest\ngcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest\ngcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest\n\nFor the latest list, see Pre-built containers for prediction",
"if os.getenv(\"IS_TESTING_TF\"):\n TF = os.getenv(\"IS_TESTING_TF\")\nelse:\n TF = \"2-1\"\n\nif TF[0] == \"2\":\n if TRAIN_GPU:\n TRAIN_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n TRAIN_VERSION = \"tf-cpu.{}\".format(TF)\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf2-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf2-cpu.{}\".format(TF)\nelse:\n if TRAIN_GPU:\n TRAIN_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n TRAIN_VERSION = \"tf-cpu.{}\".format(TF)\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf-cpu.{}\".format(TF)\n\nTRAIN_IMAGE = \"gcr.io/cloud-aiplatform/training/{}:latest\".format(TRAIN_VERSION)\nDEPLOY_IMAGE = \"gcr.io/cloud-aiplatform/prediction/{}:latest\".format(DEPLOY_VERSION)\n\nprint(\"Training:\", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)\nprint(\"Deployment:\", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)",
"Machine Type\nNext, set the machine type to use for training and prediction.\n\nSet the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.\nmachine type\nn1-standard: 3.75GB of memory per vCPU.\nn1-highmem: 6.5GB of memory per vCPU\nn1-highcpu: 0.9 GB of memory per vCPU\n\n\nvCPUs: one of [2, 4, 8, 16, 32, 64, 96]\n\nNote: The following is not supported for training:\n\nstandard: 2 vCPUs\nhighcpu: 2, 4 and 8 vCPUs\n\nNote: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.",
"if os.getenv(\"IS_TESTING_TRAIN_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_TRAIN_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nTRAIN_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Train machine type\", TRAIN_COMPUTE)\n\nif os.getenv(\"IS_TESTING_DEPLOY_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_DEPLOY_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nDEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Deploy machine type\", DEPLOY_COMPUTE)",
"Tutorial\nNow you are ready to start creating your own custom model and training for CIFAR10.\nSet up clients\nThe Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.\nYou will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.\n\nModel Service for Model resources.\nEndpoint Service for deployment.\nJob Service for batch jobs and custom training.\nPrediction Service for serving.",
"# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_job_client():\n client = aip.JobServiceClient(client_options=client_options)\n return client\n\n\ndef create_model_client():\n client = aip.ModelServiceClient(client_options=client_options)\n return client\n\n\ndef create_endpoint_client():\n client = aip.EndpointServiceClient(client_options=client_options)\n return client\n\n\ndef create_prediction_client():\n client = aip.PredictionServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"job\"] = create_job_client()\nclients[\"model\"] = create_model_client()\nclients[\"endpoint\"] = create_endpoint_client()\nclients[\"prediction\"] = create_prediction_client()\n\nfor client in clients.items():\n print(client)",
"Train a model\nThere are two ways you can train a custom model using a container image:\n\n\nUse a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.\n\n\nUse your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.\n\n\nPrepare your custom job specification\nNow that your clients are ready, your first step is to create a Job Specification for your custom training job. The job specification will consist of the following:\n\nworker_pool_spec : The specification of the type of machine(s) you will use for training and how many (single or distributed)\npython_package_spec : The specification of the Python package to be installed with the pre-built container.\n\nPrepare your machine specification\nNow define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training.\n - machine_type: The type of GCP instance to provision -- e.g., n1-standard-8.\n - accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU.\n - accelerator_count: The number of accelerators.",
"if TRAIN_GPU:\n machine_spec = {\n \"machine_type\": TRAIN_COMPUTE,\n \"accelerator_type\": TRAIN_GPU,\n \"accelerator_count\": TRAIN_NGPU,\n }\nelse:\n machine_spec = {\"machine_type\": TRAIN_COMPUTE, \"accelerator_count\": 0}",
"Prepare your disk specification\n(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training.\n\nboot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.\nboot_disk_size_gb: Size of disk in GB.",
"DISK_TYPE = \"pd-ssd\" # [ pd-ssd, pd-standard]\nDISK_SIZE = 200 # GB\n\ndisk_spec = {\"boot_disk_type\": DISK_TYPE, \"boot_disk_size_gb\": DISK_SIZE}",
"Define the worker pool specification\nNext, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following:\n\nreplica_count: The number of instances to provision of this machine type.\nmachine_spec: The hardware specification.\n\ndisk_spec : (optional) The disk storage specification.\n\n\npython_package: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.\n\n\nLet's dive deeper now into the python package specification:\n-executor_image_spec: This is the docker image which is configured for your custom training job.\n-package_uris: This is a list of the locations (URIs) of your python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the docker image.\n-python_module: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking trainer.task.py -- note that it was not necessary to append the .py suffix.\n-args: The command line arguments to pass to the corresponding Python module. In this example, you will be setting:\n - \"--model-dir=\" + MODEL_DIR : The Cloud Storage location where to store the model artifacts. There are two ways to tell the training script where to save the model artifacts:\n - direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or\n - indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.\n - \"--epochs=\" + EPOCHS: The number of epochs for training.\n - \"--steps=\" + STEPS: The number of steps (batches) per epoch.\n - \"--distribute=\" + TRAIN_STRATEGY: The training distribution strategy to use for single or distributed training.\n - \"single\": single device.\n - \"mirror\": all GPU devices on a single compute instance.\n - \"multi\": all GPU devices on all compute instances.",
"JOB_NAME = \"custom_job_\" + TIMESTAMP\nMODEL_DIR = \"{}/{}\".format(BUCKET_NAME, JOB_NAME)\n\nif not TRAIN_NGPU or TRAIN_NGPU < 2:\n TRAIN_STRATEGY = \"single\"\nelse:\n TRAIN_STRATEGY = \"mirror\"\n\nEPOCHS = 20\nSTEPS = 100\n\nDIRECT = True\nif DIRECT:\n CMDARGS = [\n \"--model-dir=\" + MODEL_DIR,\n \"--epochs=\" + str(EPOCHS),\n \"--steps=\" + str(STEPS),\n \"--distribute=\" + TRAIN_STRATEGY,\n ]\nelse:\n CMDARGS = [\n \"--epochs=\" + str(EPOCHS),\n \"--steps=\" + str(STEPS),\n \"--distribute=\" + TRAIN_STRATEGY,\n ]\n\nworker_pool_spec = [\n {\n \"replica_count\": 1,\n \"machine_spec\": machine_spec,\n \"disk_spec\": disk_spec,\n \"python_package_spec\": {\n \"executor_image_uri\": TRAIN_IMAGE,\n \"package_uris\": [BUCKET_NAME + \"/trainer_cifar10.tar.gz\"],\n \"python_module\": \"trainer.task\",\n \"args\": CMDARGS,\n },\n }\n]",
"Assemble a job specification\nNow assemble the complete description for the custom job specification:\n\ndisplay_name: The human readable name you assign to this custom job.\njob_spec: The specification for the custom job.\nworker_pool_specs: The specification for the machine VM instances.\nbase_output_directory: This tells the service the Cloud Storage location where to save the model artifacts (when variable DIRECT = False). The service will then pass the location to the training script as the environment variable AIP_MODEL_DIR, and the path will be of the form: <output_uri_prefix>/model",
"if DIRECT:\n job_spec = {\"worker_pool_specs\": worker_pool_spec}\nelse:\n job_spec = {\n \"worker_pool_specs\": worker_pool_spec,\n \"base_output_directory\": {\"output_uri_prefix\": MODEL_DIR},\n }\n\ncustom_job = {\"display_name\": JOB_NAME, \"job_spec\": job_spec}",
"Examine the training package\nPackage layout\nBefore you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.\n\nPKG-INFO\nREADME.md\nsetup.cfg\nsetup.py\ntrainer\n__init__.py\ntask.py\n\nThe files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.\nThe file trainer/task.py is the Python script for executing the custom training job. Note, when we referred to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and dropped the file suffix (.py).\nPackage Assembly\nIn the following cells, you will assemble the training package.",
"# Make folder for Python training script\n! rm -rf custom\n! mkdir custom\n\n# Add package information\n! touch custom/README.md\n\nsetup_cfg = \"[egg_info]\\n\\ntag_build =\\n\\ntag_date = 0\"\n! echo \"$setup_cfg\" > custom/setup.cfg\n\nsetup_py = \"import setuptools\\n\\nsetuptools.setup(\\n\\n    install_requires=[\\n\\n        'tensorflow_datasets==1.3.0',\\n\\n    ],\\n\\n    packages=setuptools.find_packages())\"\n! echo \"$setup_py\" > custom/setup.py\n\npkg_info = \"Metadata-Version: 1.0\\n\\nName: CIFAR10 image classification\\n\\nVersion: 0.0.0\\n\\nSummary: Demonstration training script\\n\\nHome-page: www.google.com\\n\\nAuthor: Google\\n\\nAuthor-email: aferlitsch@google.com\\n\\nLicense: Public\\n\\nDescription: Demo\\n\\nPlatform: Vertex\"\n! echo \"$pkg_info\" > custom/PKG-INFO\n\n# Make the training subfolder\n! mkdir custom/trainer\n! touch custom/trainer/__init__.py",
"Task.py contents\nIn the next cell, you write the contents of the training script task.py. We won't go into detail; it's just there for you to browse. In summary, the script:\n\nGets the directory where to save the model artifacts from the command line (--model-dir), and if not specified, then from the environment variable AIP_MODEL_DIR.\nLoads the CIFAR10 dataset from TF Datasets (tfds).\nBuilds a model using the TF.Keras model API.\nCompiles the model (compile()).\nSets a training distribution strategy according to the argument args.distribute.\nTrains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps.\nSaves the trained model (save(args.model_dir)) to the specified model directory.",
"%%writefile custom/trainer/task.py\n# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10\n\nimport tensorflow_datasets as tfds\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\nimport argparse\nimport os\nimport sys\ntfds.disable_progress_bar()\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--model-dir', dest='model_dir',\n default=os.getenv(\"AIP_MODEL_DIR\"), type=str, help='Model dir.')\nparser.add_argument('--lr', dest='lr',\n default=0.01, type=float,\n help='Learning rate.')\nparser.add_argument('--epochs', dest='epochs',\n default=10, type=int,\n help='Number of epochs.')\nparser.add_argument('--steps', dest='steps',\n default=200, type=int,\n help='Number of steps per epoch.')\nparser.add_argument('--distribute', dest='distribute', type=str, default='single',\n help='distributed training strategy')\nargs = parser.parse_args()\n\nprint('Python Version = {}'.format(sys.version))\nprint('TensorFlow Version = {}'.format(tf.__version__))\nprint('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))\nprint('DEVICES', device_lib.list_local_devices())\n\n# Single Machine, single compute device\nif args.distribute == 'single':\n if tf.test.is_gpu_available():\n strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n else:\n strategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")\n# Single Machine, multiple compute device\nelif args.distribute == 'mirror':\n strategy = tf.distribute.MirroredStrategy()\n# Multiple Machine, multiple compute device\nelif args.distribute == 'multi':\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n\n# Multi-worker configuration\nprint('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))\n\n# Preparing dataset\nBUFFER_SIZE = 10000\nBATCH_SIZE = 64\n\n\ndef make_datasets_unbatched():\n\n # Scaling CIFAR10 data from (0, 255] to (0., 1.]\n def scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255.0\n return 
image, label\n\n\n datasets, info = tfds.load(name='cifar10',\n with_info=True,\n as_supervised=True)\n return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()\n\n\n# Build the Keras model\ndef build_and_compile_cnn_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Conv2D(32, 3, activation='relu'),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n model.compile(\n loss=tf.keras.losses.sparse_categorical_crossentropy,\n optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),\n metrics=['accuracy'])\n return model\n\n\n# Train the model\nNUM_WORKERS = strategy.num_replicas_in_sync\n# Here the batch size scales up by number of workers since\n# `tf.data.Dataset.batch` expects the global batch size.\nGLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS\ntrain_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)\n\nwith strategy.scope():\n # Creation of dataset, and model building/compiling need to be within\n # `strategy.scope()`.\n model = build_and_compile_cnn_model()\n\nmodel.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)\nmodel.save(args.model_dir)",
"Store training script on your Cloud Storage bucket\nNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.",
"! rm -f custom.tar custom.tar.gz\n! tar cvf custom.tar custom\n! gzip custom.tar\n! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz",
"Train the model\nNow start the training of your custom training job on Vertex. Use this helper function create_custom_job, which takes the following parameter:\n\ncustom_job: The specification for the custom job.\n\nThe helper function calls the job client service's create_custom_job method, with the following parameters:\n\nparent: The Vertex location path to Dataset, Model and Endpoint resources.\ncustom_job: The specification for the custom job.\n\nYou will display a handful of the fields returned in the response object; the two of most interest are:\n\nresponse.name: The Vertex fully qualified identifier assigned to this custom training job. You save this identifier for use in subsequent steps.\nresponse.state: The current state of the custom training job.",
"def create_custom_job(custom_job):\n response = clients[\"job\"].create_custom_job(parent=PARENT, custom_job=custom_job)\n print(\"name:\", response.name)\n print(\"display_name:\", response.display_name)\n print(\"state:\", response.state)\n print(\"create_time:\", response.create_time)\n print(\"update_time:\", response.update_time)\n return response\n\n\nresponse = create_custom_job(custom_job)",
"Now get the unique identifier for the custom job you created.",
"# The full unique ID for the custom job\njob_id = response.name\n# The short numeric ID for the custom job\njob_short_id = job_id.split(\"/\")[-1]\n\nprint(job_id)",
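Vertex fully qualified identifiers are slash-delimited resource paths, so the short ID above is simply the last path segment. A sketch with a made-up job name (the numeric IDs are hypothetical):

```python
# Hypothetical fully qualified custom job identifier, for illustration only
name = "projects/123456/locations/us-central1/customJobs/987654321"

# The short numeric ID is the last segment of the resource path
short_id = name.split("/")[-1]
print(short_id)  # 987654321
```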
"Get information on a custom job\nNext, use this helper function get_custom_job, which takes the following parameter:\n\nname: The Vertex fully qualified identifier for the custom job.\n\nThe helper function calls the job client service's get_custom_job method, with the following parameter:\n\nname: The Vertex fully qualified identifier for the custom job.\n\nIf you recall, you got the Vertex fully qualified identifier for the custom job in the response.name field when you called the create_custom_job method, and saved the identifier in the variable job_id.",
"def get_custom_job(name, silent=False):\n response = clients[\"job\"].get_custom_job(name=name)\n if silent:\n return response\n\n print(\"name:\", response.name)\n print(\"display_name:\", response.display_name)\n print(\"state:\", response.state)\n print(\"create_time:\", response.create_time)\n print(\"update_time:\", response.update_time)\n return response\n\n\nresponse = get_custom_job(job_id)",
"Deployment\nTraining the above model may take upwards of 20 minutes.\nOnce your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the location of the saved model, which the Python script saved in your Cloud Storage bucket at MODEL_DIR + '/saved_model.pb'.",
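The training-time arithmetic described above is a plain timestamp subtraction. A minimal sketch with hypothetical timestamps (the job's timestamp fields subtract the same way as datetimes):

```python
from datetime import datetime

# Hypothetical job timestamps for illustration
create_time = datetime(2021, 6, 1, 12, 0, 0)
update_time = datetime(2021, 6, 1, 12, 23, 30)

# Elapsed training time is the difference between the two timestamps
elapsed = update_time - create_time
print("Training Time:", elapsed)  # Training Time: 0:23:30
```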
"while True:\n response = get_custom_job(job_id, True)\n if response.state != aip.JobState.JOB_STATE_SUCCEEDED:\n print(\"Training job has not completed:\", response.state)\n model_path_to_deploy = None\n if response.state == aip.JobState.JOB_STATE_FAILED:\n break\n else:\n if not DIRECT:\n MODEL_DIR = MODEL_DIR + \"/model\"\n model_path_to_deploy = MODEL_DIR\n print(\"Training Time:\", response.update_time - response.create_time)\n break\n time.sleep(60)\n\nprint(\"model_to_deploy:\", model_path_to_deploy)",
"Load the saved model\nYour model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can evaluate the model and make a prediction.\nTo load it, you use the TF.Keras model.load_model() method, passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.",
"import tensorflow as tf\n\nmodel = tf.keras.models.load_model(MODEL_DIR)",
"Evaluate the model\nNow find out how good the model is.\nLoad evaluation data\nYou will load the CIFAR10 test (holdout) data from tf.keras.datasets, using the method load_data(). This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the image data, and the corresponding labels.\nYou don't need the training data, which is why it is loaded as (_, _).\nBefore you can run the data through evaluation, you need to preprocess it:\nx_test:\n1. Normalize (rescale) the pixel data by dividing each pixel by 255. This will replace each single-byte integer pixel with a 32-bit floating point number between 0 and 1.\ny_test:<br/>\n2. The labels are currently scalar (sparse). If you look back at the compile() step in the trainer/task.py script, you will find that it was compiled for sparse labels. So you don't need to do anything more.",
"import numpy as np\nfrom tensorflow.keras.datasets import cifar10\n\n(_, _), (x_test, y_test) = cifar10.load_data()\nx_test = (x_test / 255.0).astype(np.float32)\n\nprint(x_test.shape, y_test.shape)",
"Perform the model evaluation\nNow evaluate how well the model in the custom job did.",
"model.evaluate(x_test, y_test)",
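evaluate() reports the loss and the accuracy metric the model was compiled with. For sparse labels, accuracy is just the fraction of argmax predictions that match the labels. A minimal numpy sketch with hypothetical softmax outputs (not actual model predictions):

```python
import numpy as np

# Hypothetical softmax outputs for three test images (10 classes)
probs = np.array([
    [0.1, 0.7, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.8, 0.1, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.2, 0.2, 0.6, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
])
labels = np.array([1, 0, 3])  # sparse (scalar) labels

# Accuracy: fraction of argmax predictions that match the labels
accuracy = np.mean(np.argmax(probs, axis=1) == labels)
print(accuracy)  # 2 of 3 predictions correct
```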
"Upload the model for serving\nNext, you will upload your TF.Keras model from the custom job to the Vertex Model service, which will create a Vertex Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.\nHow does the serving function work?\nWhen you send a request to an online prediction server, the request is received by an HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.\nThe serving function consists of two parts:\n\npreprocessing function:\nConverts the input (tf.string) to the input shape and data type of the underlying model (dynamic graph).\nPerforms the same preprocessing of the data that was done during training of the underlying model -- e.g., normalizing, scaling, etc.\npost-processing function:\nConverts the model output to the format expected by the receiving application -- e.g., compresses the output.\nPackages the output for the receiving application -- e.g., add headings, make JSON object, etc.\n\nBoth the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.\nOne consideration when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error during the compile of the serving function, indicating that you are using an EagerTensor which is not supported.\nServing function for image data\nTo pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.\nTo resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).\nWhen you send a prediction or explanation request, the content of the request is base64 decoded into a TensorFlow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model:\n- io.decode_jpeg - Decompresses the JPEG image, which is returned as a TensorFlow tensor with three channels (RGB).\n- image.convert_image_dtype - Changes integer pixel values to float32, and rescales pixel data between 0 and 1.\n- image.resize - Resizes the image to match the input shape for the model.\nAt this point, the data can be passed to the model (m_call).\nXAI Signatures\nWhen the serving function is saved back with the underlying model (tf.saved_model.save), you specify the input layer of the serving function as the signature serving_default.\nFor XAI image models, you need to save two additional signatures from the serving function:\n\nxai_preprocess: The preprocessing function in the serving function.\nxai_model: The concrete function for calling the model.",
"CONCRETE_INPUT = \"numpy_inputs\"\n\n\ndef _preprocess(bytes_input):\n decoded = tf.io.decode_jpeg(bytes_input, channels=3)\n decoded = tf.image.convert_image_dtype(decoded, tf.float32)\n resized = tf.image.resize(decoded, size=(32, 32))\n return resized\n\n\n@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])\ndef preprocess_fn(bytes_inputs):\n decoded_images = tf.map_fn(\n _preprocess, bytes_inputs, dtype=tf.float32, back_prop=False\n )\n return {\n CONCRETE_INPUT: decoded_images\n } # User needs to make sure the key matches model's input\n\n\n@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])\ndef serving_fn(bytes_inputs):\n images = preprocess_fn(bytes_inputs)\n prob = m_call(**images)\n return prob\n\n\nm_call = tf.function(model.call).get_concrete_function(\n [tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]\n)\n\ntf.saved_model.save(\n model,\n model_path_to_deploy,\n signatures={\n \"serving_default\": serving_fn,\n # Required for XAI\n \"xai_preprocess\": preprocess_fn,\n \"xai_model\": m_call,\n },\n)",
"Get the serving function signature\nYou can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.\nWhen making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.\nYou also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.",
"loaded = tf.saved_model.load(model_path_to_deploy)\n\nserving_input = list(\n loaded.signatures[\"serving_default\"].structured_input_signature[1].keys()\n)[0]\nprint(\"Serving function input:\", serving_input)\nserving_output = list(loaded.signatures[\"serving_default\"].structured_outputs.keys())[0]\nprint(\"Serving function output:\", serving_output)\n\ninput_name = model.input.name\nprint(\"Model input name:\", input_name)\noutput_name = model.output.name\nprint(\"Model output name:\", output_name)",
"Explanation Specification\nTo get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to a Vertex Model resource. These settings are referred to as the explanation metadata, which consists of:\n\nparameters: This is the specification for the explainability algorithm to use for explanations on your model. You can choose between:\nShapley - Note, not recommended for image data -- can be very long running\nXRAI\nIntegrated Gradients\nmetadata: This is the specification for how the algorithm is applied on your custom model.\n\nExplanation Parameters\nLet's first dive deeper into the settings for the explainability algorithm.\nShapley\nAssigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values.\nUse Cases:\n  - Classification and regression on tabular data.\nParameters:\n\npath_count: This is the number of paths over the features that will be processed by the algorithm. An exact computation of the Shapley values requires M! paths, where M is the number of features. For the CIFAR10 dataset, M would be 3072 (32*32*3).\n\nFor any non-trivial number of features, this is too computationally expensive. You can reduce the number of paths over the features to M * path_count.\nIntegrated Gradients\nA gradients-based method to efficiently compute feature attributions with the same axiomatic properties as the Shapley value.\nUse Cases:\n  - Classification and regression on tabular data.\n  - Classification on image data.\nParameters:\n\nstep_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase it, so does the compute time.\n\nXRAI\nBased on the integrated gradients method, XRAI assesses overlapping regions of the image to create a saliency map, which highlights relevant regions of the image rather than pixels.\nUse Cases:\n\nClassification on image data.\n\nParameters:\n\nstep_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase it, so does the compute time.\n\nIn the next code cell, set the variable XAI to the explainability algorithm you will use on your custom model.",
"XAI = \"ig\" # [ shapley, ig, xrai ]\n\nif XAI == \"shapley\":\n PARAMETERS = {\"sampled_shapley_attribution\": {\"path_count\": 10}}\nelif XAI == \"ig\":\n PARAMETERS = {\"integrated_gradients_attribution\": {\"step_count\": 50}}\nelif XAI == \"xrai\":\n PARAMETERS = {\"xrai_attribution\": {\"step_count\": 50}}\n\nparameters = aip.ExplanationParameters(PARAMETERS)",
"Explanation Metadata\nLet's first dive deeper into the explanation metadata, which consists of:\n\n\noutputs: A scalar value in the output to attribute -- what to explain. For example, in a probability output [0.1, 0.2, 0.7] for classification, one wants an explanation for 0.7. Consider the following formula, where the output is y and that is what we want to explain.\ny = f(x)\n\n\nConsider the following formula, where the outputs are y and z. Since we can only do attribution for one scalar value, we have to pick whether we want to explain the output y or z. Assume in this example the model is object detection and y and z are the bounding box and the object classification. You would want to pick which of the two outputs to explain.\ny, z = f(x)\n\nThe dictionary format for outputs is:\n{ \"outputs\": { \"[your_display_name]\":\n                  \"output_tensor_name\": [layer]\n              }\n}\n\n<blockquote>\n    - [your_display_name]: A human readable name you assign to the output to explain. A common example is \"probability\".<br/>\n    - \"output_tensor_name\": The key/value field to identify the output layer to explain. <br/>\n    - [layer]: The output layer to explain. In a single task model, like a tabular regressor, it is the last (topmost) layer in the model.\n</blockquote>\n\n\n\ninputs: The features for attribution -- how they contributed to the output. Consider the following formula, where a and b are the features. You have to pick which features to explain and how they contributed. Assume that this model is deployed for A/B testing, where a are the data_items for the prediction and b identifies whether the model instance is A or B. You would want to pick a (or some subset of a) as the features, and not b since it does not contribute to the prediction.\ny = f(a,b)\n\n\nThe minimum dictionary format for inputs is:\n{ \"inputs\": { \"[your_display_name]\":\n                 \"input_tensor_name\": [layer]\n             }\n}\n\n<blockquote>\n    - [your_display_name]: A human readable name you assign to the input to explain. 
A common example is \"features\".<br/>\n    - \"input_tensor_name\": The key/value field to identify the input layer for the feature attribution. <br/>\n    - [layer]: The input layer for feature attribution. In a single input tensor model, it is the first (bottom-most) layer in the model.\n</blockquote>\n\nSince the inputs to the model are images, you can specify the following additional field as a reporting/visualization aid:\n<blockquote>\n    - \"modality\": \"image\": Indicates the field values are image data.\n</blockquote>",
"random_baseline = np.random.rand(32, 32, 3)\ninput_baselines = [{\"number_value\": x} for x in random_baseline]\n\nINPUT_METADATA = {\"input_tensor_name\": CONCRETE_INPUT, \"modality\": \"image\"}\n\nOUTPUT_METADATA = {\"output_tensor_name\": serving_output}\n\ninput_metadata = aip.ExplanationMetadata.InputMetadata(INPUT_METADATA)\noutput_metadata = aip.ExplanationMetadata.OutputMetadata(OUTPUT_METADATA)\n\nmetadata = aip.ExplanationMetadata(\n    inputs={\"image\": input_metadata}, outputs={\"class\": output_metadata}\n)\n\nexplanation_spec = aip.ExplanationSpec(metadata=metadata, parameters=parameters)",
"Upload the model\nUse this helper function upload_model to upload your model, stored in SavedModel format, up to the Model service, which will instantiate a Vertex Model resource instance for your model. Once you've done that, you can use the Model resource instance in the same way as any other Vertex Model resource instance, such as deploying to an Endpoint resource for serving predictions.\nThe helper function takes the following parameters:\n\ndisplay_name: A human readable name for the Model resource.\nimage_uri: The container image for the model deployment.\nmodel_uri: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where the trainer/task.py saved the model artifacts, which we specified in the variable MODEL_DIR.\n\nThe helper function calls the Model client service's method upload_model, which takes the following parameters:\n\nparent: The Vertex location root path for Dataset, Model and Endpoint resources.\nmodel: The specification for the Vertex Model resource instance.\n\nLet's now dive deeper into the Vertex model specification. This is a dictionary object that consists of the following fields:\n\ndisplay_name: A human readable name for the Model resource.\nmetadata_schema_uri: Since your model was built without a Vertex Dataset resource, you will leave this blank ('').\nartifact_uri: The Cloud Storage path where the model is stored in SavedModel format.\ncontainer_spec: This is the specification for the Docker container that will be installed on the Endpoint resource, from which the Model resource will serve predictions. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.\nexplanation_spec: This is the specification for enabling explainability for your model.\n\nUploading a model into a Vertex Model resource returns a long running operation, since it may take a few moments. 
You call response.result(), which is a synchronous call and will return when the Vertex Model resource is ready.\nThe helper function returns the Vertex fully qualified identifier for the corresponding Vertex Model instance upload_model_response.model. You will save the identifier for subsequent steps in the variable model_to_deploy_id.",
"IMAGE_URI = DEPLOY_IMAGE\n\n\ndef upload_model(display_name, image_uri, model_uri):\n\n model = aip.Model(\n display_name=display_name,\n artifact_uri=model_uri,\n metadata_schema_uri=\"\",\n explanation_spec=explanation_spec,\n container_spec={\"image_uri\": image_uri},\n )\n\n response = clients[\"model\"].upload_model(parent=PARENT, model=model)\n print(\"Long running operation:\", response.operation.name)\n upload_model_response = response.result(timeout=180)\n print(\"upload_model_response\")\n print(\" model:\", upload_model_response.model)\n return upload_model_response.model\n\n\nmodel_to_deploy_id = upload_model(\n \"cifar10-\" + TIMESTAMP, IMAGE_URI, model_path_to_deploy\n)",
"Get Model resource information\nNow let's get the model information for just your model. Use this helper function get_model, with the following parameter:\n\nname: The Vertex unique identifier for the Model resource.\n\nThis helper function calls the Vertex Model client service's method get_model, with the following parameter:\n\nname: The Vertex unique identifier for the Model resource.",
"def get_model(name):\n response = clients[\"model\"].get_model(name=name)\n print(response)\n\n\nget_model(model_to_deploy_id)",
"Model deployment for batch prediction\nNow deploy the trained Vertex Model resource you created for batch prediction. This differs from deploying a Model resource for on-demand prediction.\nFor online prediction, you:\n\n\nCreate an Endpoint resource for deploying the Model resource to.\n\n\nDeploy the Model resource to the Endpoint resource.\n\n\nMake online prediction requests to the Endpoint resource.\n\n\nFor batch-prediction, you:\n\n\nCreate a batch prediction job.\n\n\nThe job service will provision resources for the batch prediction request.\n\n\nThe results of the batch prediction request are returned to the caller.\n\n\nThe job service will unprovision the resources for the batch prediction request.\n\n\nMake a batch prediction request\nNow do a batch prediction to your deployed model.\nGet test items\nYou will use examples out of the test (holdout) portion of the dataset as test items.",
"test_image_1 = x_test[0]\ntest_label_1 = y_test[0]\ntest_image_2 = x_test[1]\ntest_label_2 = y_test[1]\nprint(test_image_1.shape)",
"Prepare the request content\nYou are going to send the CIFAR10 images as compressed JPEG images, instead of the raw uncompressed bytes:\n\ncv2.imwrite: Use openCV to write the uncompressed image to disk as a compressed JPEG image.\nDenormalize the image data from [0,1) range back to [0,255).\nConvert the 32-bit floating point values to 8-bit unsigned integers.",
"import cv2\n\ncv2.imwrite(\"tmp1.jpg\", (test_image_1 * 255).astype(np.uint8))\ncv2.imwrite(\"tmp2.jpg\", (test_image_2 * 255).astype(np.uint8))",
"Copy test item(s)\nFor the batch prediction, you will copy the test items over to your Cloud Storage bucket.",
"! gsutil cp tmp1.jpg $BUCKET_NAME/tmp1.jpg\n! gsutil cp tmp2.jpg $BUCKET_NAME/tmp2.jpg\n\ntest_item_1 = BUCKET_NAME + \"/tmp1.jpg\"\ntest_item_2 = BUCKET_NAME + \"/tmp2.jpg\"",
"Make the batch input file\nNow make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can only be in JSONL format. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:\n\ninput_name: the name of the input layer of the underlying model.\n'b64': A key that indicates the content is base64 encoded.\ncontent: The compressed JPG image bytes as a base64 encoded string.\n\nEach instance in the prediction request is a dictionary entry of the form:\n  {serving_input: {'b64': content}}\n\nTo pass the image data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification when transmitting binary data over the network.\n\ntf.io.read_file: Read the compressed JPG images into memory as raw bytes.\nbase64.b64encode: Encode the raw bytes into a base64 encoded string.",
"import base64\nimport json\n\ngcs_input_uri = BUCKET_NAME + \"/\" + \"test.jsonl\"\nwith tf.io.gfile.GFile(gcs_input_uri, \"w\") as f:\n bytes = tf.io.read_file(test_item_1)\n b64str = base64.b64encode(bytes.numpy()).decode(\"utf-8\")\n data = {serving_input: {\"b64\": b64str}}\n f.write(json.dumps(data) + \"\\n\")\n bytes = tf.io.read_file(test_item_2)\n b64str = base64.b64encode(bytes.numpy()).decode(\"utf-8\")\n data = {serving_input: {\"b64\": b64str}}\n f.write(json.dumps(data) + \"\\n\")",
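It is worth sanity-checking that each JSONL line round-trips: parsing the line and base64-decoding the payload should recover the original bytes. A minimal sketch with made-up bytes and a hypothetical serving input name (your actual serving_input comes from the signature inspection earlier):

```python
import base64
import json

serving_input = "bytes_inputs"  # hypothetical serving input layer name

# Build one JSONL line the same way the cell above does
raw = b"\xff\xd8fake-jpeg-bytes"
line = json.dumps({serving_input: {"b64": base64.b64encode(raw).decode("utf-8")}})

# Round-trip: parse the line and decode the payload back to raw bytes
parsed = json.loads(line)
assert base64.b64decode(parsed[serving_input]["b64"]) == raw
```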
"Compute instance scaling\nYou have several choices on scaling the compute instances for handling your batch prediction requests:\n\nSingle Instance: The batch prediction requests are processed on a single compute instance.\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.\n\n\nManual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specify.\n\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.\n\n\nAuto Scaling: The batch prediction requests are split across a scalable number of compute instances.\n\nSet the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.\n\nThe minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.",
"MIN_NODES = 1\nMAX_NODES = 1",
"Make batch prediction request\nNow that your batch of two test items is ready, let's do the batch request. Use this helper function create_batch_prediction_job, with the following parameters:\n\ndisplay_name: The human readable name for the prediction job.\nmodel_name: The Vertex fully qualified identifier for the Model resource.\ngcs_source_uri: The Cloud Storage path to the input file -- which you created above.\ngcs_destination_output_uri_prefix: The Cloud Storage path that the service will write the predictions to.\nparameters: Additional filtering parameters for serving prediction results.\n\nThe helper function calls the job client service's create_batch_prediction_job method, with the following parameters:\n\nparent: The Vertex location root path for Dataset, Model and Pipeline resources.\nbatch_prediction_job: The specification for the batch prediction job.\n\nLet's now dive into the specification for the batch_prediction_job:\n\ndisplay_name: The human readable name for the prediction batch job.\nmodel: The Vertex fully qualified identifier for the Model resource.\ndedicated_resources: The compute resources to provision for the batch prediction job.\nmachine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.\nstarting_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.\nmax_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.\nmodel_parameters: Additional filtering parameters for serving prediction results. 
No Additional parameters are supported for custom models.\ninput_config: The input source and format type for the instances to predict.\ninstances_format: The format of the batch prediction request file: csv or jsonl.\ngcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.\noutput_config: The output destination and format for the predictions.\nprediction_format: The format of the batch prediction response file: csv or jsonl.\ngcs_destination: The output destination for the predictions.\n\nThis call is an asychronous operation. You will print from the response object a few select fields, including:\n\nname: The Vertex fully qualified identifier assigned to the batch prediction job.\ndisplay_name: The human readable name for the prediction batch job.\nmodel: The Vertex fully qualified identifier for the Model resource.\ngenerate_explanations: Whether True/False explanations were provided with the predictions (explainability).\nstate: The state of the prediction job (pending, running, etc).\n\nSince this call will take a few moments to execute, you will likely get JobState.JOB_STATE_PENDING for state.",
"BATCH_MODEL = \"cifar10_batch-\" + TIMESTAMP\n\n\ndef create_batch_prediction_job(\n display_name,\n model_name,\n gcs_source_uri,\n gcs_destination_output_uri_prefix,\n parameters=None,\n):\n\n if DEPLOY_GPU:\n machine_spec = {\n \"machine_type\": DEPLOY_COMPUTE,\n \"accelerator_type\": DEPLOY_GPU,\n \"accelerator_count\": DEPLOY_NGPU,\n }\n else:\n machine_spec = {\n \"machine_type\": DEPLOY_COMPUTE,\n \"accelerator_count\": 0,\n }\n\n batch_prediction_job = {\n \"display_name\": display_name,\n # Format: 'projects/{project}/locations/{location}/models/{model_id}'\n \"model\": model_name,\n \"model_parameters\": json_format.ParseDict(parameters, Value()),\n \"input_config\": {\n \"instances_format\": IN_FORMAT,\n \"gcs_source\": {\"uris\": [gcs_source_uri]},\n },\n \"output_config\": {\n \"predictions_format\": OUT_FORMAT,\n \"gcs_destination\": {\"output_uri_prefix\": gcs_destination_output_uri_prefix},\n },\n \"dedicated_resources\": {\n \"machine_spec\": machine_spec,\n \"starting_replica_count\": MIN_NODES,\n \"max_replica_count\": MAX_NODES,\n },\n \"generate_explanation\": True,\n }\n response = clients[\"job\"].create_batch_prediction_job(\n parent=PARENT, batch_prediction_job=batch_prediction_job\n )\n print(\"response\")\n print(\" name:\", response.name)\n print(\" display_name:\", response.display_name)\n print(\" model:\", response.model)\n try:\n print(\" generate_explanation:\", response.generate_explanation)\n except:\n pass\n print(\" state:\", response.state)\n print(\" create_time:\", response.create_time)\n print(\" start_time:\", response.start_time)\n print(\" end_time:\", response.end_time)\n print(\" update_time:\", response.update_time)\n print(\" labels:\", response.labels)\n return response\n\n\nIN_FORMAT = \"jsonl\"\nOUT_FORMAT = \"jsonl\"\n\nresponse = create_batch_prediction_job(\n BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME\n)",
"Now get the unique identifier for the batch prediction job you created.",
"# The full unique ID for the batch job\nbatch_job_id = response.name\n# The short numeric ID for the batch job\nbatch_job_short_id = batch_job_id.split(\"/\")[-1]\n\nprint(batch_job_id)",
"Get information on a batch prediction job\nUse this helper function get_batch_prediction_job, with the following paramter:\n\njob_name: The Vertex fully qualified identifier for the batch prediction job.\n\nThe helper function calls the job client service's get_batch_prediction_job method, with the following paramter:\n\nname: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- batch_job_id\n\nThe helper function will return the Cloud Storage path to where the predictions are stored -- gcs_destination.",
"def get_batch_prediction_job(job_name, silent=False):\n response = clients[\"job\"].get_batch_prediction_job(name=job_name)\n if silent:\n return response.output_config.gcs_destination.output_uri_prefix, response.state\n\n print(\"response\")\n print(\" name:\", response.name)\n print(\" display_name:\", response.display_name)\n print(\" model:\", response.model)\n try: # not all data types support explanations\n print(\" generate_explanation:\", response.generate_explanation)\n except:\n pass\n print(\" state:\", response.state)\n print(\" error:\", response.error)\n gcs_destination = response.output_config.gcs_destination\n print(\" gcs_destination\")\n print(\" output_uri_prefix:\", gcs_destination.output_uri_prefix)\n return gcs_destination.output_uri_prefix, response.state\n\n\npredictions, state = get_batch_prediction_job(batch_job_id)",
"Get the predictions\nWhen the batch prediction is done processing, the job state will be JOB_STATE_SUCCEEDED.\nFinally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name prediction, and under that folder will be a file called prediction.results-xxxxx-of-xxxxx.\nNow display (cat) the contents. You will see multiple JSON objects, one for each prediction.\nFinally you view the explanations stored at the Cloud Storage path you set as output. The explanations will be in a JSONL format, which you indicated at the time you made the batch explanation job, under a subfolder starting with the name prediction, and under that folder will be a file called explanation-results-xxxx-of-xxxx.\nLet's display (cat) the contents. You will a row for each prediction -- in this case, there is just one row. The row is the softmax probability distribution for the corresponding CIFAR10 classes.",
"def get_latest_predictions(gcs_out_dir):\n \"\"\" Get the latest prediction subfolder using the timestamp in the subfolder name\"\"\"\n folders = !gsutil ls $gcs_out_dir\n latest = \"\"\n for folder in folders:\n subfolder = folder.split(\"/\")[-2]\n if subfolder.startswith(\"prediction-\"):\n if subfolder > latest:\n latest = folder[:-1]\n return latest\n\n\nwhile True:\n predictions, state = get_batch_prediction_job(batch_job_id, True)\n if state != aip.JobState.JOB_STATE_SUCCEEDED:\n print(\"The job has not completed:\", state)\n if state == aip.JobState.JOB_STATE_FAILED:\n break\n else:\n folder = get_latest_predictions(predictions)\n ! gsutil ls $folder/explanation.results*\n\n print(\"Results:\")\n ! gsutil cat $folder/explanation.results*\n\n print(\"Errors:\")\n ! gsutil cat $folder/prediction.errors*\n break\n time.sleep(60)",
"Cleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket",
"delete_dataset = True\ndelete_pipeline = True\ndelete_model = True\ndelete_endpoint = True\ndelete_batchjob = True\ndelete_customjob = True\ndelete_hptjob = True\ndelete_bucket = True\n\n# Delete the dataset using the Vertex fully qualified identifier for the dataset\ntry:\n if delete_dataset and \"dataset_id\" in globals():\n clients[\"dataset\"].delete_dataset(name=dataset_id)\nexcept Exception as e:\n print(e)\n\n# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline\ntry:\n if delete_pipeline and \"pipeline_id\" in globals():\n clients[\"pipeline\"].delete_training_pipeline(name=pipeline_id)\nexcept Exception as e:\n print(e)\n\n# Delete the model using the Vertex fully qualified identifier for the model\ntry:\n if delete_model and \"model_to_deploy_id\" in globals():\n clients[\"model\"].delete_model(name=model_to_deploy_id)\nexcept Exception as e:\n print(e)\n\n# Delete the endpoint using the Vertex fully qualified identifier for the endpoint\ntry:\n if delete_endpoint and \"endpoint_id\" in globals():\n clients[\"endpoint\"].delete_endpoint(name=endpoint_id)\nexcept Exception as e:\n print(e)\n\n# Delete the batch job using the Vertex fully qualified identifier for the batch job\ntry:\n if delete_batchjob and \"batch_job_id\" in globals():\n clients[\"job\"].delete_batch_prediction_job(name=batch_job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the custom job using the Vertex fully qualified identifier for the custom job\ntry:\n if delete_customjob and \"job_id\" in globals():\n clients[\"job\"].delete_custom_job(name=job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job\ntry:\n if delete_hptjob and \"hpt_job_id\" in globals():\n clients[\"job\"].delete_hyperparameter_tuning_job(name=hpt_job_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! 
gsutil rm -r $BUCKET_NAME"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.23/_downloads/fb92190904499e5a95e92ab70177abf7/60_make_fixed_length_epochs.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Creating epochs of equal length\nThis tutorial shows how to create equal length epochs and briefly demonstrates\nan example of their use in connectivity analysis.\nFirst, we import necessary modules and read in a sample raw\ndata set. This data set contains brain activity that is event-related, i.e.\nsynchronized to the onset of auditory stimuli. However, rather than creating\nepochs by segmenting the data around the onset of each stimulus, we will\ncreate 30 second epochs that allow us to perform non-event-related analyses of\nthe signal.",
"import os\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport mne\nfrom mne.preprocessing import compute_proj_ecg\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\n\nraw = mne.io.read_raw_fif(sample_data_raw_file)",
"For this tutorial we'll crop and resample the raw data to a manageable size\nfor our web server to handle, ignore EEG channels, and remove the heartbeat\nartifact so we don't get spurious correlations just because of that.",
"raw.crop(tmax=150).resample(100).pick('meg')\necg_proj, _ = compute_proj_ecg(raw, ch_name='MEG 0511') # No ECG chan\nraw.add_proj(ecg_proj)\nraw.apply_proj()",
"To create fixed length epochs, we simply call the function and provide it\nwith the appropriate parameters indicating the desired duration of epochs in\nseconds, whether or not to preload data, whether or not to reject epochs that\noverlap with raw data segments annotated as bad, whether or not to include\nprojectors, and finally whether or not to be verbose. Here, we choose a long\nepoch duration (30 seconds). To conserve memory, we set preload to\nFalse.",
"epochs = mne.make_fixed_length_epochs(raw, duration=30, preload=False)",
"Characteristics of Fixed Length Epochs\nFixed length epochs are generally unsuitable for event-related analyses. This\ncan be seen in an image map of our fixed length\nepochs. When the epochs are averaged, as seen at the bottom of the plot,\nmisalignment between onsets of event-related activity results in noise.",
"event_related_plot = epochs.plot_image(picks=['MEG 1142'])",
"For information about creating epochs for event-related analyses, please see\ntut-epochs-class.\nExample Use Case for Fixed Length Epochs: Connectivity Analysis\nFixed lengths epochs are suitable for many types of analysis, including\nfrequency or time-frequency analyses, connectivity analyses, or\nclassification analyses. Here we briefly illustrate their utility in a sensor\nspace connectivity analysis.\nThe data from our epochs object has shape (n_epochs, n_sensors, n_times)\nand is therefore an appropriate basis for using MNE-Python's envelope\ncorrelation function to compute power-based connectivity in sensor space. The\nlong duration of our fixed length epochs, 30 seconds, helps us reduce edge\nartifacts and achieve better frequency resolution when filtering must\nbe applied after epoching.\nLet's examine the alpha band. We allow default values for filter parameters\n(for more information on filtering, please see tut-filter-resample).",
"epochs.load_data().filter(l_freq=8, h_freq=12)\nalpha_data = epochs.get_data()",
"If desired, separate correlation matrices for each epoch can be obtained.\nFor envelope correlations, this is done by passing combine=None to the\nenvelope correlations function.",
"corr_matrix = mne.connectivity.envelope_correlation(alpha_data, combine=None)",
"Now we can plot correlation matrices. We'll compare the first and last\n30-second epochs of the recording:",
"first_30 = corr_matrix[0]\nlast_30 = corr_matrix[-1]\ncorr_matrices = [first_30, last_30]\ncolor_lims = np.percentile(np.array(corr_matrices), [5, 95])\ntitles = ['First 30 Seconds', 'Last 30 Seconds']\n\nfig, axes = plt.subplots(nrows=1, ncols=2)\nfig.suptitle('Correlation Matrices from First 30 Seconds and Last 30 Seconds')\nfor ci, corr_matrix in enumerate(corr_matrices):\n ax = axes[ci]\n mpbl = ax.imshow(corr_matrix, clim=color_lims)\n ax.set_xlabel(titles[ci])\nfig.subplots_adjust(right=0.8)\ncax = fig.add_axes([0.85, 0.2, 0.025, 0.6])\ncbar = fig.colorbar(ax.images[0], cax=cax)\ncbar.set_label('Correlation Coefficient')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ageron/tensorflow-safari-course
|
06_readers.ipynb
|
apache-2.0
|
[
"Try not to peek at the solutions when you go through the exercises. ;-)\nFirst let's make sure this notebook works well in both Python 2 and Python 3:",
"from __future__ import absolute_import, division, print_function, unicode_literals\n\nimport tensorflow as tf\ntf.__version__",
"From previous notebooks",
"learning_rate = 0.01\nmomentum = 0.8",
"Using Readers",
"filenames = [\"data/life_satisfaction.csv\"]\nn_epochs = 500\n\ngraph = tf.Graph()\nwith graph.as_default():\n reader = tf.TextLineReader(skip_header_lines=1)\n\n filename_queue = tf.train.string_input_producer(filenames, num_epochs=n_epochs)\n record_id, record = reader.read(filename_queue)\n\n record_defaults = [[''], [0.0], [0.0]]\n country, gdp_per_capita, life_satisfaction = tf.decode_csv(record, record_defaults=record_defaults)\n\nbatch_size = 5\nwith graph.as_default():\n X_batch, y_batch = tf.train.batch([gdp_per_capita, life_satisfaction], batch_size=batch_size)\n X_batch_reshaped = tf.reshape(X_batch, [-1, 1])\n y_batch_reshaped = tf.reshape(y_batch, [-1, 1])\n\nwith graph.as_default():\n X = tf.placeholder_with_default(X_batch_reshaped, shape=[None, 1], name=\"X\")\n y = tf.placeholder_with_default(y_batch_reshaped, shape=[None, 1], name=\"y\")\n\n b = tf.Variable(0.0, name=\"b\")\n w = tf.Variable(tf.zeros([1, 1]), name=\"w\")\n y_pred = tf.add(tf.matmul(X / 10000, w), b, name=\"y_pred\") # X @ w + b\n \n mse = tf.reduce_mean(tf.square(y_pred - y), name=\"mse\")\n\n global_step = tf.Variable(0, trainable=False, name='global_step')\n optimizer = tf.train.MomentumOptimizer(learning_rate, momentum)\n training_op = optimizer.minimize(mse, global_step=global_step)\n\n init = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())\n saver = tf.train.Saver()\n\nwith tf.Session(graph=graph) as sess:\n init.run()\n coord = tf.train.Coordinator()\n threads = tf.train.start_queue_runners(coord=coord)\n try:\n while not coord.should_stop():\n _, mse_val, global_step_val = sess.run([training_op, mse, global_step])\n if global_step_val % 100 == 0:\n print(global_step_val, mse_val)\n except tf.errors.OutOfRangeError:\n print(\"End of training\")\n coord.request_stop()\n coord.join(threads)\n saver.save(sess, \"./my_life_satisfaction_model\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jeicher/cobrapy
|
documentation_builder/milp.ipynb
|
lgpl-2.1
|
[
"Mixed-Integer Linear Programming\nIce Cream\nThis example was originally contributed by Joshua Lerman.\nAn ice cream stand sells cones and popsicles. It wants to maximize its profit, but is subject to a budget.\nWe can write this problem as a linear program:\n\nmax cone $\\cdot$ cone_margin + popsicle $\\cdot$ popsicle margin\nsubject to\ncone $\\cdot$ cone_cost + popsicle $\\cdot$ popsicle_cost $\\le$ budget",
"cone_selling_price = 7.\ncone_production_cost = 3.\npopsicle_selling_price = 2.\npopsicle_production_cost = 1.\nstarting_budget = 100.",
"This problem can be written as a cobra.Model",
"from cobra import Model, Metabolite, Reaction\n\ncone = Reaction(\"cone\")\npopsicle = Reaction(\"popsicle\")\n\n# constrainted to a budget\nbudget = Metabolite(\"budget\")\nbudget._constraint_sense = \"L\"\nbudget._bound = starting_budget\ncone.add_metabolites({budget: cone_production_cost})\npopsicle.add_metabolites({budget: popsicle_production_cost})\n\n# objective coefficient is the profit to be made from each unit\ncone.objective_coefficient = \\\n cone_selling_price - cone_production_cost\npopsicle.objective_coefficient = \\\n popsicle_selling_price - popsicle_production_cost\n\nm = Model(\"lerman_ice_cream_co\")\nm.add_reactions((cone, popsicle))\n\nm.optimize().x_dict",
"In reality, cones and popsicles can only be sold in integer amounts. We can use the variable kind attribute of a cobra.Reaction to enforce this.",
"cone.variable_kind = \"integer\"\npopsicle.variable_kind = \"integer\"\nm.optimize().x_dict",
"Now the model makes both popsicles and cones.\nRestaurant Order\nTo tackle the less immediately obvious problem from the following XKCD comic:",
"from IPython.display import Image\nImage(url=r\"http://imgs.xkcd.com/comics/np_complete.png\")",
"We want a solution satisfying the following constraints:\n$\\left(\\begin{matrix}2.15&2.75&3.35&3.55&4.20&5.80\\end{matrix}\\right) \\cdot \\vec v = 15.05$\n$\\vec v_i \\ge 0$\n$\\vec v_i \\in \\mathbb{Z}$\nThis problem can be written as a COBRA model as well.",
"total_cost = Metabolite(\"constraint\")\ntotal_cost._bound = 15.05\n\ncosts = {\"mixed_fruit\": 2.15, \"french_fries\": 2.75,\n \"side_salad\": 3.35, \"hot_wings\": 3.55,\n \"mozarella_sticks\": 4.20, \"sampler_plate\": 5.80}\n\nm = Model(\"appetizers\")\n\nfor item, cost in costs.items():\n r = Reaction(item)\n r.add_metabolites({total_cost: cost})\n r.variable_kind = \"integer\"\n m.add_reaction(r)\n\n# To add to the problem, suppose we want to\n# eat as little mixed fruit as possible.\nm.reactions.mixed_fruit.objective_coefficient = 1\n \nm.optimize(objective_sense=\"minimize\").x_dict",
"There is another solution to this problem, which would have been obtained if we had maximized for mixed fruit instead of minimizing.",
"m.optimize(objective_sense=\"maximize\").x_dict",
"Boolean Indicators\nTo give a COBRA-related example, we can create boolean variables as integers, which can serve as indicators for a reaction being active in a model. For a reaction flux $v$ with lower bound -1000 and upper bound 1000, we can create a binary variable $b$ with the following constraints:\n$b \\in {0, 1}$\n$-1000 \\cdot b \\le v \\le 1000 \\cdot b$\nTo introduce the above constraints into a cobra model, we can rewrite them as follows\n$v \\le b \\cdot 1000 \\Rightarrow v- 1000\\cdot b \\le 0$\n$-1000 \\cdot b \\le v \\Rightarrow v + 1000\\cdot b \\ge 0$",
"import cobra.test\nmodel = cobra.test.create_test_model(\"textbook\")\n\n# an indicator for pgi\npgi = model.reactions.get_by_id(\"PGI\")\n# make a boolean variable\npgi_indicator = Reaction(\"indicator_PGI\")\npgi_indicator.lower_bound = 0\npgi_indicator.upper_bound = 1\npgi_indicator.variable_kind = \"integer\"\n# create constraint for v - 1000 b <= 0\npgi_plus = Metabolite(\"PGI_plus\")\npgi_plus._constraint_sense = \"L\"\n# create constraint for v + 1000 b >= 0\npgi_minus = Metabolite(\"PGI_minus\")\npgi_minus._constraint_sense = \"G\"\n\npgi_indicator.add_metabolites({pgi_plus: -1000,\n pgi_minus: 1000})\npgi.add_metabolites({pgi_plus: 1, pgi_minus: 1})\nmodel.add_reaction(pgi_indicator)\n\n\n# an indicator for zwf\nzwf = model.reactions.get_by_id(\"G6PDH2r\")\nzwf_indicator = Reaction(\"indicator_ZWF\")\nzwf_indicator.lower_bound = 0\nzwf_indicator.upper_bound = 1\nzwf_indicator.variable_kind = \"integer\"\n# create constraint for v - 1000 b <= 0\nzwf_plus = Metabolite(\"ZWF_plus\")\nzwf_plus._constraint_sense = \"L\"\n# create constraint for v + 1000 b >= 0\nzwf_minus = Metabolite(\"ZWF_minus\")\nzwf_minus._constraint_sense = \"G\"\n\nzwf_indicator.add_metabolites({zwf_plus: -1000,\n zwf_minus: 1000})\nzwf.add_metabolites({zwf_plus: 1, zwf_minus: 1})\n\n# add the indicator reactions to the model\nmodel.add_reaction(zwf_indicator)\n",
"In a model with both these reactions active, the indicators will also be active",
"solution = model.optimize()\nprint(\"PGI indicator = %d\" % solution.x_dict[\"indicator_PGI\"])\nprint(\"ZWF indicator = %d\" % solution.x_dict[\"indicator_ZWF\"])\nprint(\"PGI flux = %.2f\" % solution.x_dict[\"PGI\"])\nprint(\"ZWF flux = %.2f\" % solution.x_dict[\"G6PDH2r\"])",
"Because these boolean indicators are in the model, additional constraints can be applied on them. For example, we can prevent both reactions from being active at the same time by adding the following constraint:\n$b_\\text{pgi} + b_\\text{zwf} = 1$",
"or_constraint = Metabolite(\"or\")\nor_constraint._bound = 1\nzwf_indicator.add_metabolites({or_constraint: 1})\npgi_indicator.add_metabolites({or_constraint: 1})\n\nsolution = model.optimize()\nprint(\"PGI indicator = %d\" % solution.x_dict[\"indicator_PGI\"])\nprint(\"ZWF indicator = %d\" % solution.x_dict[\"indicator_ZWF\"])\nprint(\"PGI flux = %.2f\" % solution.x_dict[\"PGI\"])\nprint(\"ZWF flux = %.2f\" % solution.x_dict[\"G6PDH2r\"])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/cccma/cmip6/models/sandbox-3/atmos.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: CCCMA\nSource ID: SANDBOX-3\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:46\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cccma', 'sandbox-3', 'atmos')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Overview\n2. Key Properties --> Resolution\n3. Key Properties --> Timestepping\n4. Key Properties --> Orography\n5. Grid --> Discretisation\n6. Grid --> Discretisation --> Horizontal\n7. Grid --> Discretisation --> Vertical\n8. Dynamical Core\n9. Dynamical Core --> Top Boundary\n10. Dynamical Core --> Lateral Boundary\n11. Dynamical Core --> Diffusion Horizontal\n12. Dynamical Core --> Advection Tracers\n13. Dynamical Core --> Advection Momentum\n14. Radiation\n15. Radiation --> Shortwave Radiation\n16. Radiation --> Shortwave GHG\n17. Radiation --> Shortwave Cloud Ice\n18. Radiation --> Shortwave Cloud Liquid\n19. Radiation --> Shortwave Cloud Inhomogeneity\n20. Radiation --> Shortwave Aerosols\n21. Radiation --> Shortwave Gases\n22. Radiation --> Longwave Radiation\n23. Radiation --> Longwave GHG\n24. Radiation --> Longwave Cloud Ice\n25. Radiation --> Longwave Cloud Liquid\n26. Radiation --> Longwave Cloud Inhomogeneity\n27. Radiation --> Longwave Aerosols\n28. Radiation --> Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --> Boundary Layer Turbulence\n31. Turbulence Convection --> Deep Convection\n32. Turbulence Convection --> Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --> Large Scale Precipitation\n35. Microphysics Precipitation --> Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --> Optical Cloud Properties\n38. Cloud Scheme --> Sub Grid Scale Water Distribution\n39. Cloud Scheme --> Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --> Isscp Attributes\n42. Observation Simulation --> Cosp Attributes\n43. Observation Simulation --> Radar Inputs\n44. Observation Simulation --> Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --> Orographic Gravity Waves\n47. Gravity Waves --> Non Orographic Gravity Waves\n48. Solar\n49. Solar --> Solar Pathways\n50. Solar --> Solar Constant\n51. 
Solar --> Orbital Parameters\n52. Solar --> Insolation Ozone\n53. Volcanos\n54. Volcanos --> Volcanoes Treatment \n1. Key Properties --> Overview\nTop level key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of atmospheric model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.4. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.5. High Top\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the orography.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n",
"4.2. Changes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n",
"5. Grid --> Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Discretisation --> Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n",
"6.3. Scheme Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.4. Horizontal Pole\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal discretisation pole singularity treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7. Grid --> Discretisation --> Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType of vertical coordinate system",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere dynamical core",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the dynamical core of the model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Timestepping Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTimestepping framework type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of the model prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Dynamical Core --> Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTop boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Top Heat\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary heat treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Top Wind\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary wind treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Dynamical Core --> Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nType of lateral boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Dynamical Core --> Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nHorizontal diffusion scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal diffusion scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Dynamical Core --> Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nTracer advection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.3. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.4. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracer advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Dynamical Core --> Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMomentum advection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Scheme Staggering Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Radiation --> Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nShortwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nShortwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Radiation --> Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.3. Other Fluorinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Radiation --> Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18. Radiation --> Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Radiation --> Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Radiation --> Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21. Radiation --> Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Radiation --> Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLongwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLongwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23. Radiation --> Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Other Fluorinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Radiation --> Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25. Radiation --> Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Radiation --> Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27. Radiation --> Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28. Radiation --> Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere convection and turbulence",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. Turbulence Convection --> Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nBoundary layer turbulence scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBoundary layer turbulence scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.3. Closure Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nBoundary layer turbulence scheme closure order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Counter Gradient\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"31. Turbulence Convection --> Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDeep convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Turbulence Convection --> Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nShallow convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nshallow convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nshallow convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n",
"32.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Microphysics Precipitation --> Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.2. Hydrometeors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35. Microphysics Precipitation --> Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLarge scale cloud microphysics processes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the atmosphere cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.3. Atmos Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n",
"36.4. Uses Separate Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.6. Prognostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.7. Diagnostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.8. Prognostic Variables\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37. Cloud Scheme --> Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37.2. Cloud Inhomogeneity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Cloud Scheme --> Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale water distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"38.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale water distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale water distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"38.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale water distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"39. Cloud Scheme --> Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale ice distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"39.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale ice distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"39.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale ice distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"39.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of observation simulator characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Observation Simulation --> Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator ISSCP top height estimation methodUo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. Top Height Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator ISSCP top height direction",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42. Observation Simulation --> Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator COSP run configuration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42.2. Number Of Grid Points\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of grid points",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.3. Number Of Sub Columns\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of sub-cloumns used to simulate sub-grid variability",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.4. Number Of Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of levels",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43. Observation Simulation --> Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nCloud simulator radar frequency (Hz)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43.2. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator radar type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"43.3. Gas Absorption\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses gas absorption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"43.4. Effective Radius\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses effective radius",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"44. Observation Simulation --> Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator lidar ice type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"44.2. Overlap\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator lidar overlap",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"45.2. Sponge Layer\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.3. Background\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBackground wave distribution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.4. Subgrid Scale Orography\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSubgrid scale orography effects taken into account.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46. Gravity Waves --> Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"46.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47. Gravity Waves --> Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"47.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n",
"47.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of solar insolation of the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"49. Solar --> Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"50. Solar --> Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the solar constant.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"50.2. Fixed Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"50.3. Transient Characteristics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nsolar constant transient characteristics (W m-2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51. Solar --> Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"51.2. Fixed Reference Date\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"51.3. Transient Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescription of transient orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51.4. Computation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used for computing orbital parameters.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"52. Solar --> Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"54. Volcanos --> Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kwinkunks/rainbow
|
notebooks/Machine_learning_approach_w_sklearn.ipynb
|
apache-2.0
|
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Deep learning approach: sklearn MLP\nWays to frame problem:\n\nRecover colourmap from pseudocolour, then lookup data from code book.\nRecover data from pseudocolour directly. \n\nMake map data",
"def kernel(sizex, sizey):\n x, y = np.mgrid[-sizex:sizex+1, -sizey:sizey+1]\n g = np.exp(-0.333*(x**2/float(sizex)+y**2/float(sizey)))\n return g / g.sum()\n\nimport scipy.signal\n\ndef make_map(n, nx=100, ny=100, kernel_size=None, seed=None):\n\n # Create the RNG once, so a fixed seed still yields n distinct maps.\n rng = np.random.RandomState(seed=seed)\n imgs = []\n for i in range(n):\n z = rng.rand(nx, ny)\n kernel_size = kernel_size or (30, 30)\n f = kernel(*kernel_size)\n\n z = scipy.signal.convolve(z, f, mode='valid')\n z = (z - z.min())/(z.max() - z.min())\n imgs.append(z)\n \n return np.stack(imgs)\n\nn = 1600\ndata = make_map(n, kernel_size=(29,29))\n\nnv = 20\ndata_val = make_map(nv, kernel_size=(29,29))\n\ndata.shape",
"Make seismic data",
"raw = np.load(\"/home/matt/Dropbox/dev/geocomp-19/data/F3_volume_3x3_16bit.npy\")\n\ndef rms(arr):\n epsilon=1e-6\n return np.sqrt(epsilon + np.sum(arr**2)/arr.size)\n\ndef make_seismic(raw, n=1600, s=41):\n slices = []\n while len(slices) < n:\n x, y, z = [np.random.randint(0, h) for h in np.array(raw.shape)-np.array([s, 0, s])]\n this = raw[x:x+s, y, z:z+s]\n if rms(this) < 1:\n # Make sure it's not blank\n continue\n slices.append(this.T)\n data = np.array(slices, dtype=np.int64)\n\n mi, ma = np.percentile(data, (0.5, 99.5))\n data[data < mi] = mi\n data[data > ma] = ma\n\n data[np.isnan(data)] = 0\n mi, ma = np.min(data), np.max(data)\n data = (data - mi) / (ma - mi)\n\n data = data.astype(np.float32)\n return data\n\nn = 2500\ndata = make_seismic(raw, n=n)\n\nnp.min(data), np.max(data)\n\nplt.imshow(data[313])\n\nnv = 48\ndata_val = make_seismic(raw, n=nv)",
"Make X and y",
"CMAPS = {'Perceptually Uniform Sequential':\n ['viridis', 'inferno', 'plasma', 'magma'],\n 'Sequential': ['Blues', 'BuGn', 'BuPu',\n 'GnBu', 'Greens', 'Greys', 'Oranges', 'OrRd',\n 'PuBu', 'PuBuGn', 'PuRd', 'Purples', 'RdPu',\n 'Reds', 'YlGn', 'YlGnBu', 'YlOrBr', 'YlOrRd',],\n 'Sequential2': ['afmhot', 'autumn', 'bone', 'cool',\n 'copper', 'gist_heat', 'gray', 'hot',\n 'pink', 'spring', 'summer', 'winter'],\n 'Diverging': ['BrBG', 'bwr', 'coolwarm', 'PiYG', 'PRGn', 'PuOr',\n 'RdBu', 'RdGy', 'RdYlBu', 'RdYlGn', 'Spectral',\n 'seismic'],\n 'Seismic': ['bwr', 'RdBu', 'RdGy', 'Greys', 'gray', 'Spectral',\n 'seismic', 'bone', 'bone_r',\n 'bwr_r', 'RdBu_r', 'RdGy_r', 'Greys_r', 'gray_r', 'seismic_r'],\n 'SeismicDiv': ['bwr', 'RdBu', 'RdGy', 'Spectral','seismic', \n 'bwr_r', 'RdBu_r', 'RdGy_r', 'Spectral_r', 'seismic_r'],\n 'SeismicRamp': ['Greys', 'gray_r', 'bone_r', 'Blues'],\n 'Qualitative': ['Accent', 'Dark2', 'Paired', 'Pastel1',\n 'Pastel2', 'Set1', 'Set2', 'Set3'],\n 'Miscellaneous': ['gist_earth', 'terrain', 'ocean', 'gist_stern',\n 'brg', 'CMRmap', 'cubehelix',\n 'gnuplot', 'gnuplot2', 'gist_ncar',\n 'nipy_spectral', 'jet', 'rainbow',\n 'gist_rainbow', 'hsv', 'flag', 'prism'],\n 'Rainbow': ['nipy_spectral', 'jet',\n 'gist_rainbow', 'hsv'],\n }\n\nimport os\nimport logging\nfrom matplotlib import cm\nfrom functools import reduce\n\nlogging.basicConfig(level='INFO')\n\ndef save_image_files(y, path:str) -> None:\n \"\"\"\n Make and save an image (via a matplotlib figure)\n for every image (first dimension slice) of y.\n \n Produces numbered PNG files in the path specified.\n \n Returns:\n None. Saves files as side-effect.\n \"\"\"\n for i, img in enumerate(y):\n fig = plt.figure(frameon=False)\n fig.set_size_inches(5,5)\n\n ax = plt.Axes(fig, [0., 0., 1., 1.])\n ax.set_axis_off()\n fig.add_axes(ax)\n\n # Note: interpolation introduces new colours.\n plt.imshow(img, cmap=\"jet\", aspect='auto', interpolation=\"bicubic\")\n \n if path is None:\n path = \".\"\n fname = os.path.join(path, f'img_{i:03d}.png')\n fig.savefig(fname, dpi=100)\n plt.close()\n logging.info(f\"Saved {fname}\")\n return\n\ndef make_pseudo(data, cmap:str='viridis', steps:int=128):\n cmap = cm.get_cmap(cmap)\n colours = cmap(np.linspace(0, 1, steps))[..., :3]\n pseudocolour = cmap(data)[..., :3]\n return pseudocolour, colours\n\ndef make_X_and_y(data,\n cmap_group:str='Perceptually Uniform Sequential',\n cmap_except:str='Qualitative',\n steps:int=128) -> tuple:\n \"\"\"\n Make a 3-channel pseudo-colour array for each\n slice in y, using the specified colourmap.\n \n Args:\n data (ndarray): M x H x W array for M examples.\n cmap_group (str): Which group of cmaps to use. (Or\n can be a single cmap.)\n steps (int): The number of steps.\n \n Returns:\n tuple: Two ndarrays, the pseudocolour images,\n and the code book.\n \"\"\"\n if cmap_group.lower() == 'all':\n cmaps = reduce(lambda x, y: x + y, CMAPS.values())\n else:\n cmaps = CMAPS.get(cmap_group)\n \n cmaps = [c for c in cmaps if c not in CMAPS.get(cmap_except)]\n \n pseudos, colours = [], []\n for d in data:\n try:\n cmap = np.random.choice(cmaps)\n except:\n cmap = cmap_group\n cmap = cm.get_cmap(cmap)\n \n p, c = make_pseudo(d, cmap, steps)\n \n pseudos.append(p)\n colours.append(c)\n \n return np.stack(pseudos), np.stack(colours)\n\ndef display_slices(data):\n fig, axs = plt.subplots(ncols=6, nrows=2, figsize=(12,4))\n print(axs.shape)\n for ax, d in zip(axs.flatten(), data):\n ax.imshow(d, aspect='auto')\n ax.axis('off')\n plt.show()\n\ndef luminance(arr):\n r, g, b = arr.T\n return np.sqrt(0.299 * r**2. + 0.587 * g**2. + 0.114 * b**2.)\n\nsteps = 24\ngroup = 'SeismicRamp'\n\nX, y = make_X_and_y(data, cmap_group=group, steps=steps)\n\nX.shape, y.shape\n\ndisplay_slices(X)\n\ndisplay_slices(y[:, :, None, :])\n\nplt.imshow(X[3])",
"Feature engineering\nThis is a sequence learning problem.\nWant to constrain the output to only know about the colours in the training data.\nFeels like we should be able to provide the training images and a code book, and ask only for the order of the colours in the image.\nOptions:\n\nNaive neighbours with shifts\nLSTM or spatial LSTM?\nDo a classification first, into discrete colours. Then give those colours to an ordering network.\n'Lock' the colours together into a generated number or triple... but would have to span the full scale, otherwise it's no longer a regression problem. (I doubt this would work)\n\nAnother thought: use HLS space, or some other representation?",
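One of the options floated above — switching to HLS space — can be sketched with the standard library's `colorsys` (illustrative only; the helper name `rgb_to_hls_rows` is not part of the notebook's pipeline):

```python
import colorsys

def rgb_to_hls_rows(rows):
    """Convert a list of (r, g, b) triples in [0, 1] to (h, l, s) triples."""
    return [colorsys.rgb_to_hls(r, g, b) for r, g, b in rows]

# Along many rainbow-like colourmaps, hue varies monotonically, so the hue
# channel alone may carry most of the ordering signal that RGB spreads
# across three channels.
hls = rgb_to_hls_rows([(1.0, 0.0, 0.0),   # red   -> hue 0.0
                       (0.0, 1.0, 0.0)])  # green -> hue ~0.333
```

Whether hue is actually an easier target than RGB would need to be tested per colourmap group.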
"X.shape\n\nXo = X[:, :-1, :-1, :]\nXs = X[:, 1:, 1:, :]\n\nassert Xo.shape == Xs.shape\n\nX = np.hstack([Xo.reshape((n, -1, 3)), Xs.reshape((n, -1, 3))])\n\nX.shape\n\nX_val, y_val = make_X_and_y(data_val, cmap_group=group, steps=steps)\n\nXo = X_val[:, :-1, :-1, :]\nXs = X_val[:, 1:, 1:, :]\n\nX_val = np.hstack([Xo.reshape((nv, -1, 3)), Xs.reshape((nv, -1, 3))])\n\nX_val.shape",
"Training\nDeep neural net",
"n = X.shape[0]\nassert n == y.shape[0]\n\nX_train, y_train = X.reshape((n, -1)), y.reshape((n, -1))\n\nX_val, y_val = X_val.reshape((nv, -1)), y_val.reshape((nv, -1))\n\nfrom sklearn.neural_network import MLPRegressor\n\nhidden = [\n 6*steps,\n 12*steps,\n 6*steps,\n ]\n\nnn = MLPRegressor(hidden_layer_sizes=hidden,\n max_iter=10*n,\n random_state=42,\n )\n\nnn.fit(X_train, y_train)\n\nX_val.shape, X_train.shape\n\ny_pred = nn.predict(X_val)\n\ndisplay_slices(y_val.reshape((nv, -1, 3))[:, :, None, :])\n\ny_pred[y_pred < 0] = 0\ny_pred[y_pred > 1] = 1\ndisplay_slices(y_pred.reshape((nv, -1, 3))[:, :, None, :])\n\nfrom sklearn.metrics import mean_squared_error\n\nnp.sqrt(mean_squared_error(y_pred, y_val))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/examples
|
courses/udacity_intro_to_tensorflow_lite/tflite_c02_transfer_learning.ipynb
|
apache-2.0
|
[
"Copyright 2018 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Transfer Learning with TensorFlow Hub for TFLite\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c02_transfer_learning.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c02_transfer_learning.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n</table>\n\nSetup",
"import os\n\nimport matplotlib.pylab as plt\nimport numpy as np\n\nimport tensorflow as tf\nimport tensorflow_hub as hub\n\nprint(\"Version: \", tf.__version__)\nprint(\"Eager mode: \", tf.executing_eagerly())\nprint(\"Hub version: \", hub.__version__)\nprint(\"GPU is\", \"available\" if tf.config.list_physical_devices('GPU') else \"NOT AVAILABLE\")",
"Select the Hub/TF2 module to use\nHub modules for TF 1.x won't work here; please use one of the selections provided.",
"module_selection = (\"mobilenet_v2\", 224, 1280) #@param [\"(\\\"mobilenet_v2\\\", 224, 1280)\", \"(\\\"inception_v3\\\", 299, 2048)\"] {type:\"raw\", allow-input: true}\nhandle_base, pixels, FV_SIZE = module_selection\nMODULE_HANDLE =\"https://tfhub.dev/google/tf2-preview/{}/feature_vector/4\".format(handle_base)\nIMAGE_SIZE = (pixels, pixels)\nprint(\"Using {} with input size {} and output dimension {}\".format(\n MODULE_HANDLE, IMAGE_SIZE, FV_SIZE))",
"Data preprocessing\nUse TensorFlow Datasets to load the cats and dogs dataset.\nThis tfds package is the easiest way to load pre-defined data. If you have your own data, and are interested in importing using it with TensorFlow see loading image data",
"import tensorflow_datasets as tfds\ntfds.disable_progress_bar()",
"The tfds.load method downloads and caches the data, and returns a tf.data.Dataset object. These objects provide powerful, efficient methods for manipulating data and piping it into your model.\nSince \"cats_vs_dogs\" doesn't define standard splits, use the subsplit feature to divide it into (train, validation, test) with 80%, 10%, 10% of the data respectively.",
"(train_examples, validation_examples, test_examples), info = tfds.load(\n 'cats_vs_dogs',\n split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],\n with_info=True, \n as_supervised=True, \n)\n\nnum_examples = info.splits['train'].num_examples\nnum_classes = info.features['label'].num_classes",
"Format the Data\nUse the tf.image module to format the images for the task.\nResize the images to a fixed input size, and rescale the input channels.",
"def format_image(image, label):\n image = tf.image.resize(image, IMAGE_SIZE) / 255.0\n return image, label",
"Now shuffle and batch the data",
"BATCH_SIZE = 32 #@param {type:\"integer\"}\n\ntrain_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(1)\nvalidation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(1)\ntest_batches = test_examples.map(format_image).batch(1)",
"Inspect a batch",
"for image_batch, label_batch in train_batches.take(1):\n pass\n\nimage_batch.shape",
"Defining the model\nAll it takes is to put a linear classifier on top of the feature_extractor_layer with the Hub module.\nFor speed, we start out with a non-trainable feature_extractor_layer, but you can also enable fine-tuning for greater accuracy.",
"do_fine_tuning = False #@param {type:\"boolean\"}",
"Load TFHub Module",
"feature_extractor = hub.KerasLayer(MODULE_HANDLE,\n input_shape=IMAGE_SIZE + (3,), \n output_shape=[FV_SIZE],\n trainable=do_fine_tuning)\n\nprint(\"Building model with\", MODULE_HANDLE)\nmodel = tf.keras.Sequential([\n feature_extractor,\n tf.keras.layers.Dense(num_classes)\n])\nmodel.summary()\n\n#@title (Optional) Unfreeze some layers\nNUM_LAYERS = 7 #@param {type:\"slider\", min:1, max:50, step:1}\n \nif do_fine_tuning:\n feature_extractor.trainable = True\n \n for layer in model.layers[-NUM_LAYERS:]:\n layer.trainable = True\n\nelse:\n feature_extractor.trainable = False",
"Training the model",
"if do_fine_tuning:\n model.compile(\n optimizer=tf.keras.optimizers.SGD(lr=0.002, momentum=0.9), \n loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\nelse:\n model.compile(\n optimizer='adam', \n loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\nEPOCHS = 5\nhist = model.fit(train_batches,\n epochs=EPOCHS,\n validation_data=validation_batches)",
"Export the model",
"CATS_VS_DOGS_SAVED_MODEL = \"exp_saved_model\"",
"Export the SavedModel",
"tf.saved_model.save(model, CATS_VS_DOGS_SAVED_MODEL)\n\n%%bash -s $CATS_VS_DOGS_SAVED_MODEL\nsaved_model_cli show --dir $1 --tag_set serve --signature_def serving_default\n\nloaded = tf.saved_model.load(CATS_VS_DOGS_SAVED_MODEL)\n\nprint(list(loaded.signatures.keys()))\ninfer = loaded.signatures[\"serving_default\"]\nprint(infer.structured_input_signature)\nprint(infer.structured_outputs)",
"Convert using TFLite's Converter\nLoad the TFLiteConverter with the SavedModel",
"converter = tf.lite.TFLiteConverter.from_saved_model(CATS_VS_DOGS_SAVED_MODEL)",
"Post-training quantization\nThe simplest form of post-training quantization quantizes weights from floating point to 8-bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8-bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency.\nTo further improve latency, hybrid operators dynamically quantize activations to 8-bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.",
"converter.optimizations = [tf.lite.Optimize.DEFAULT]",
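The 8-bit weight quantization described above can be illustrated numerically. This is a simplified symmetric-quantization sketch, not TFLite's actual kernels: weights are mapped to integers in [-127, 127] with a single scale factor, then mapped back to floats at inference.

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: floats -> ints in [-127, 127] plus a scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Map the 8-bit integers back to floats (done once and cached at inference)."""
    return [qi * scale for qi in q]

w = [0.5, -1.27, 0.01]
q, scale = quantize_int8(w)        # q = [50, -127, 1]
w_restored = dequantize(q, scale)  # each value within scale/2 of the original
```

The quantization error is bounded by half the scale step, which is why weights with a small dynamic range survive 8-bit storage well.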
"Post-training integer quantization\nWe can get further latency improvements, reductions in peak memory usage, and access to integer-only hardware accelerators by making sure all model math is quantized. To do this, we need to measure the dynamic range of activations and inputs with a representative dataset. You can simply create an input data generator and provide it to the converter.",
"def representative_data_gen():\n for input_value, _ in test_batches.take(100):\n yield [input_value]\n\nconverter.representative_dataset = representative_data_gen",
"The resulting model will be fully quantized but still take float input and output for convenience.\nOps that do not have quantized implementations will automatically be left in floating point. This allows conversion to occur smoothly but may restrict deployment to accelerators that support float. \nFull integer quantization\nTo require the converter to only output integer operations, one can specify:",
"converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]",
"Finally convert the model",
"tflite_model = converter.convert()\ntflite_model_file = 'converted_model.tflite'\n\nwith open(tflite_model_file, \"wb\") as f:\n f.write(tflite_model)",
"Test the TFLite model using the Python Interpreter",
"# Load TFLite model and allocate tensors.\n \ninterpreter = tf.lite.Interpreter(model_path=tflite_model_file)\ninterpreter.allocate_tensors()\n\ninput_index = interpreter.get_input_details()[0][\"index\"]\noutput_index = interpreter.get_output_details()[0][\"index\"]\n\nfrom tqdm import tqdm\n\n# Gather results for the randomly sampled test images\npredictions = []\n\ntest_labels, test_imgs = [], []\nfor img, label in tqdm(test_batches.take(10)):\n interpreter.set_tensor(input_index, img)\n interpreter.invoke()\n predictions.append(interpreter.get_tensor(output_index))\n \n test_labels.append(label.numpy()[0])\n test_imgs.append(img)\n\n#@title Utility functions for plotting\n# Utilities for plotting\n\nclass_names = ['cat', 'dog']\n\ndef plot_image(i, predictions_array, true_label, img):\n predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n \n img = np.squeeze(img)\n\n plt.imshow(img, cmap=plt.cm.binary)\n\n predicted_label = np.argmax(predictions_array)\n if predicted_label == true_label:\n color = 'green'\n else:\n color = 'red'\n \n plt.xlabel(\"{} {:2.0f}% ({})\".format(class_names[predicted_label],\n 100*np.max(predictions_array),\n class_names[true_label]),\n color=color)\n",
"NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite doesn't have super optimized server CPU kernels. For this reason post-training full-integer quantized models may be slower here than the other kinds of optimized models. But for mobile CPUs, considerable speedup can be observed.",
"#@title Visualize the outputs { run: \"auto\" }\nindex = 0 #@param {type:\"slider\", min:0, max:9, step:1}\nplt.figure(figsize=(6,3))\nplt.subplot(1,2,1)\nplot_image(index, predictions, test_labels, test_imgs)\nplt.show()",
"Download the model.\nNOTE: You might have to run the cell below twice.",
"labels = ['cat', 'dog']\n\nwith open('labels.txt', 'w') as f:\n f.write('\\n'.join(labels))\n\ntry:\n from google.colab import files\n files.download('converted_model.tflite')\n files.download('labels.txt')\nexcept:\n pass",
"Prepare the test images for download (Optional)\nThis part downloads additional test images for the mobile apps, in case you need to try out more samples.",
"!mkdir -p test_images\n\nfrom PIL import Image\n\nfor index, (image, label) in enumerate(test_batches.take(50)):\n image = tf.cast(image * 255.0, tf.uint8)\n image = tf.squeeze(image).numpy()\n pil_image = Image.fromarray(image)\n pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]], index))\n\n!ls test_images\n\n!zip -qq cats_vs_dogs_test_images.zip -r test_images/\n\ntry:\n files.download('cats_vs_dogs_test_images.zip')\nexcept:\n pass"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
slundberg/shap
|
notebooks/genomic_examples/DeepExplainer Genomics Example.ipynb
|
mit
|
[
"This runs DeepExplainer with the model trained on simulated genomic data from the DeepLIFT repo (https://github.com/kundajelab/deeplift/blob/master/examples/genomics/genomics_simulation.ipynb), using a dynamic reference (i.e. the reference varies depending on the input sequence; in this case, the reference is a collection of dinucleotide-shuffled versions of the input sequence)",
"%matplotlib inline\nfrom __future__ import print_function, division",
"Pull in the relevant data",
"! [[ ! -f sequences.simdata.gz ]] && wget https://raw.githubusercontent.com/AvantiShri/model_storage/db919b12f750e5844402153233249bb3d24e9e9a/deeplift/genomics/sequences.simdata.gz\n! [[ ! -f keras2_conv1d_record_5_model_PQzyq_modelJson.json ]] && wget https://raw.githubusercontent.com/AvantiShri/model_storage/b6e1d69/deeplift/genomics/keras2_conv1d_record_5_model_PQzyq_modelJson.json\n! [[ ! -f keras2_conv1d_record_5_model_PQzyq_modelWeights.h5 ]] && wget https://raw.githubusercontent.com/AvantiShri/model_storage/b6e1d69/deeplift/genomics/keras2_conv1d_record_5_model_PQzyq_modelWeights.h5\n! [[ ! -f test.txt.gz ]] && wget https://raw.githubusercontent.com/AvantiShri/model_storage/9aadb769735c60eb90f7d3d896632ac749a1bdd2/deeplift/genomics/test.txt.gz",
"Load the data",
"! pip install simdna\n\nimport simdna.synthetic as synthetic\nimport gzip\ndata_filename = \"sequences.simdata.gz\"\n\n#read in the data in the testing set\ntest_ids_fh = gzip.open(\"test.txt.gz\",\"rb\")\nids_to_load = [x.decode(\"utf-8\").rstrip(\"\\n\") for x in test_ids_fh]\ndata = synthetic.read_simdata_file(data_filename, ids_to_load=ids_to_load)\n\nimport numpy as np\n\n#this is set up for 1d convolutions where examples\n#have dimensions (len, num_channels)\n#the channel axis is the axis for one-hot encoding.\ndef one_hot_encode_along_channel_axis(sequence):\n to_return = np.zeros((len(sequence),4), dtype=np.int8)\n seq_to_one_hot_fill_in_array(zeros_array=to_return,\n sequence=sequence, one_hot_axis=1)\n return to_return\n\ndef seq_to_one_hot_fill_in_array(zeros_array, sequence, one_hot_axis):\n assert one_hot_axis==0 or one_hot_axis==1\n if (one_hot_axis==0):\n assert zeros_array.shape[1] == len(sequence)\n elif (one_hot_axis==1): \n assert zeros_array.shape[0] == len(sequence)\n #will mutate zeros_array\n for (i,char) in enumerate(sequence):\n if (char==\"A\" or char==\"a\"):\n char_idx = 0\n elif (char==\"C\" or char==\"c\"):\n char_idx = 1\n elif (char==\"G\" or char==\"g\"):\n char_idx = 2\n elif (char==\"T\" or char==\"t\"):\n char_idx = 3\n elif (char==\"N\" or char==\"n\"):\n continue #leave that pos as all 0's\n else:\n raise RuntimeError(\"Unsupported character: \"+str(char))\n if (one_hot_axis==0):\n zeros_array[char_idx,i] = 1\n elif (one_hot_axis==1):\n zeros_array[i,char_idx] = 1\n \nonehot_data = np.array([one_hot_encode_along_channel_axis(seq) for seq in data.sequences])",
"Load the model",
"from keras.models import model_from_json\n\n#load the keras model\nkeras_model_weights = \"keras2_conv1d_record_5_model_PQzyq_modelWeights.h5\"\nkeras_model_json = \"keras2_conv1d_record_5_model_PQzyq_modelJson.json\"\n\nkeras_model = model_from_json(open(keras_model_json).read())\nkeras_model.load_weights(keras_model_weights)",
"Install the deeplift package for the dinucleotide shuffling and visualization code",
"!pip install deeplift",
"Compute importance scores\nDefine the function that generates the reference, in this case by performing a dinucleotide shuffle of the given input sequence",
"from deeplift.dinuc_shuffle import dinuc_shuffle, traverse_edges, shuffle_edges, prepare_edges\nfrom collections import Counter\n\ndef onehot_dinuc_shuffle(s): \n s = np.squeeze(s)\n argmax_vals = \"\".join([str(x) for x in np.argmax(s, axis=-1)])\n shuffled_argmax_vals = [int(x) for x in traverse_edges(argmax_vals, \n shuffle_edges(prepare_edges(argmax_vals)))] \n to_return = np.zeros_like(s) \n to_return[list(range(len(s))), shuffled_argmax_vals] = 1 \n return to_return\n\nshuffle_several_times = lambda s: np.array([onehot_dinuc_shuffle(s) for i in range(100)])",
"Run DeepExplainer with the dynamic reference function",
"from deeplift.visualization import viz_sequence\nimport shap\nimport shap.explainers.deep.deep_tf\nfrom importlib import reload  # reload is not a builtin in Python 3\nreload(shap.explainers.deep.deep_tf)\nreload(shap.explainers.deep)\nreload(shap.explainers)\nreload(shap)\nimport numpy as np\nnp.random.seed(1)\nimport random\n\nseqs_to_explain = onehot_data[[0,3,9]] #these three are positive for task 0\ndinuc_shuff_explainer = shap.DeepExplainer((keras_model.input, keras_model.output[:,0]), shuffle_several_times)\nraw_shap_explanations = dinuc_shuff_explainer.shap_values(seqs_to_explain)\n\n#project the importance at each position onto the base that's actually present\ndinuc_shuff_explanations = np.sum(raw_shap_explanations,axis=-1)[:,:,None]*seqs_to_explain\nfor dinuc_shuff_explanation in dinuc_shuff_explanations:\n viz_sequence.plot_weights(dinuc_shuff_explanation, subticks_frequency=20)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/ja/guide/keras/customizing_what_happens_in_fit.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Customizing what happens in Model.fit\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">View on TensorFlow.org</a> </td>\n <td> <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/customizing_what_happens_in_fit.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a> </td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/customizing_what_happens_in_fit.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View source on GitHub</a> </td>\n <td> <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/keras/customizing_what_happens_in_fit.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">Download notebook</a> </td>\n</table>\n\nIntroduction\nWhen you are doing supervised learning, you can use fit() and everything works smoothly.\nWhen you need to write your own training loop from scratch, you can use GradientTape and take control of every little detail.\nBut what if you need a custom training algorithm, yet still want to benefit from the convenient features of fit(), such as callbacks, built-in distribution support, or step fusing?\nA core principle of Keras is progressive disclosure of complexity. You can always get into lower-level workflows in a gradual way, and you shouldn't fall off a cliff if the high-level functionality doesn't exactly match your use case. You can gain more control over the small details while retaining a commensurate amount of high-level convenience.\nWhen you need to customize what fit() does, you should override the training step function of the Model class. This is the function that fit() calls for every batch of data. You will then be able to call fit() as usual, and it will be running your own learning algorithm.\nNote that this pattern does not prevent you from building models with the Functional API. You can do this whether you are building Sequential models, Functional API models, or subclassed models.\nLet's see how that works.\nSetup\nRequires TensorFlow 2.2 or later.",
"import tensorflow as tf\nfrom tensorflow import keras",
"最初の簡単な例\n簡単な例から始めてみましょう。\n\nkeras.Model をサブクラス化する新しいクラスを作成します。\ntrain_step(self, data) メソッドだけをオーバーライドします。\nメトリクス名(損失を含む)をマッピングするディクショナリを現在の値に返します。\n\n入力引数の data は、トレーニングデータとして適合するために渡される値です。\n\nfit(x, y, ...) を呼び出して Numpy 配列を渡す場合は、data はタプル型 (x, y) になります。\nfit(dataset, ...) を呼び出して tf.data.Dataset を渡す場合は、data は各バッチで dataset により生成される値になります。\n\ntrain_step メソッドの本体には、既に使い慣れているものと同様の定期的なトレーニングアップデートを実装しています。重要なのは、損失の計算を self.compiled_loss を介して行っていることで、それによって compile() に渡された損失関数がラップされています。\n同様に、self.compiled_metrics.update_state(y, y_pred) を呼び出して compile() に渡されたメトリクスの状態を更新し、最後に self.metrics の結果をクエリして現在の値を取得しています。",
"class CustomModel(keras.Model):\n def train_step(self, data):\n # Unpack the data. Its structure depends on your model and\n # on what you pass to `fit()`.\n x, y = data\n\n with tf.GradientTape() as tape:\n y_pred = self(x, training=True) # Forward pass\n # Compute the loss value\n # (the loss function is configured in `compile()`)\n loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)\n\n # Compute gradients\n trainable_vars = self.trainable_variables\n gradients = tape.gradient(loss, trainable_vars)\n # Update weights\n self.optimizer.apply_gradients(zip(gradients, trainable_vars))\n # Update metrics (includes the metric that tracks the loss)\n self.compiled_metrics.update_state(y, y_pred)\n # Return a dict mapping metric names to current value\n return {m.name: m.result() for m in self.metrics}\n",
"これを試してみましょう。",
"import numpy as np\n\n# Construct and compile an instance of CustomModel\ninputs = keras.Input(shape=(32,))\noutputs = keras.layers.Dense(1)(inputs)\nmodel = CustomModel(inputs, outputs)\nmodel.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"mae\"])\n\n# Just use `fit` as usual\nx = np.random.random((1000, 32))\ny = np.random.random((1000, 1))\nmodel.fit(x, y, epochs=3)",
"低レベルにする\n当然ながら、compile() に損失関数を渡すことを省略し、代わりに train_step ですべてを手動で実行することは可能です。これはメトリクスの場合でも同様です。\nオプティマイザの構成に compile() のみを使用した、低レベルの例を次に示します。\n\nまず、損失と MAE スコアを追跡する Metric インスタンスを作成します。\nこれらのメトリクスの状態を更新するカスタム train_step() を実装し(メトリクスで update_state() を呼び出します)、現在の平均値を返して進捗バーで表示し、任意のコールバックに渡せるようにメトリクスをクエリします(result() を使用)。\nエポックごとにメトリクスに reset_states() を呼び出す必要があるところに注意してください。呼び出さない場合、result() は通常処理しているエポックごとの平均ではなく、トレーニングを開始してからの平均を返してしまいます。幸いにも、これはフレームワークが行ってくれるため、モデルの metrics プロパティにリセットするメトリクスをリストするだけで実現できます。モデルは、そこにリストされているオブジェクトに対する reset_states() の呼び出しを各 fit() エポックの開始時または evaluate() への呼び出しの開始時に行うようになります。",
"loss_tracker = keras.metrics.Mean(name=\"loss\")\nmae_metric = keras.metrics.MeanAbsoluteError(name=\"mae\")\n\n\nclass CustomModel(keras.Model):\n def train_step(self, data):\n x, y = data\n\n with tf.GradientTape() as tape:\n y_pred = self(x, training=True) # Forward pass\n # Compute our own loss\n loss = keras.losses.mean_squared_error(y, y_pred)\n\n # Compute gradients\n trainable_vars = self.trainable_variables\n gradients = tape.gradient(loss, trainable_vars)\n\n # Update weights\n self.optimizer.apply_gradients(zip(gradients, trainable_vars))\n\n # Compute our own metrics\n loss_tracker.update_state(loss)\n mae_metric.update_state(y, y_pred)\n return {\"loss\": loss_tracker.result(), \"mae\": mae_metric.result()}\n\n @property\n def metrics(self):\n # We list our `Metric` objects here so that `reset_states()` can be\n # called automatically at the start of each epoch\n # or at the start of `evaluate()`.\n # If you don't implement this property, you have to call\n # `reset_states()` yourself at the time of your choosing.\n return [loss_tracker, mae_metric]\n\n\n# Construct an instance of CustomModel\ninputs = keras.Input(shape=(32,))\noutputs = keras.layers.Dense(1)(inputs)\nmodel = CustomModel(inputs, outputs)\n\n# We don't passs a loss or metrics here.\nmodel.compile(optimizer=\"adam\")\n\n# Just use `fit` as usual -- you can use callbacks, etc.\nx = np.random.random((1000, 32))\ny = np.random.random((1000, 1))\nmodel.fit(x, y, epochs=5)\n",
"sample_weight と class_weight をサポートする\n最初の基本的な例では、サンプルの重み付けについては何も言及していないことに気付いているかもしれません。fit() の引数 sample_weight と class_weight をサポートする場合には、次のようにします。\n\ndata 引数から sample_weight をアンパックします。\nそれを compiled_loss と compiled_metrics に渡します(もちろん、 損失とメトリクスが compile() に依存しない場合は手動での適用が可能です)。\nそれがリストです。",
"class CustomModel(keras.Model):\n def train_step(self, data):\n # Unpack the data. Its structure depends on your model and\n # on what you pass to `fit()`.\n if len(data) == 3:\n x, y, sample_weight = data\n else:\n sample_weight = None\n x, y = data\n\n with tf.GradientTape() as tape:\n y_pred = self(x, training=True) # Forward pass\n # Compute the loss value.\n # The loss function is configured in `compile()`.\n loss = self.compiled_loss(\n y,\n y_pred,\n sample_weight=sample_weight,\n regularization_losses=self.losses,\n )\n\n # Compute gradients\n trainable_vars = self.trainable_variables\n gradients = tape.gradient(loss, trainable_vars)\n\n # Update weights\n self.optimizer.apply_gradients(zip(gradients, trainable_vars))\n\n # Update the metrics.\n # Metrics are configured in `compile()`.\n self.compiled_metrics.update_state(y, y_pred, sample_weight=sample_weight)\n\n # Return a dict mapping metric names to current value.\n # Note that it will include the loss (tracked in self.metrics).\n return {m.name: m.result() for m in self.metrics}\n\n\n# Construct and compile an instance of CustomModel\ninputs = keras.Input(shape=(32,))\noutputs = keras.layers.Dense(1)(inputs)\nmodel = CustomModel(inputs, outputs)\nmodel.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"mae\"])\n\n# You can now use sample_weight argument\nx = np.random.random((1000, 32))\ny = np.random.random((1000, 1))\nsw = np.random.random((1000, 1))\nmodel.fit(x, y, sample_weight=sw, epochs=3)",
"独自の評価ステップを提供する\nmodel.evaluate() への呼び出しに同じことをする場合はどうしたらよいでしょう?その場合は、まったく同じ方法で test_step をオーバーライドします。これは次のようになります。",
"class CustomModel(keras.Model):\n def test_step(self, data):\n # Unpack the data\n x, y = data\n # Compute predictions\n y_pred = self(x, training=False)\n # Updates the metrics tracking the loss\n self.compiled_loss(y, y_pred, regularization_losses=self.losses)\n # Update the metrics.\n self.compiled_metrics.update_state(y, y_pred)\n # Return a dict mapping metric names to current value.\n # Note that it will include the loss (tracked in self.metrics).\n return {m.name: m.result() for m in self.metrics}\n\n\n# Construct an instance of CustomModel\ninputs = keras.Input(shape=(32,))\noutputs = keras.layers.Dense(1)(inputs)\nmodel = CustomModel(inputs, outputs)\nmodel.compile(loss=\"mse\", metrics=[\"mae\"])\n\n# Evaluate with our custom test_step\nx = np.random.random((1000, 32))\ny = np.random.random((1000, 1))\nmodel.evaluate(x, y)",
"まとめ: エンドツーエンド GAN の例\nここで学んだことをすべて採り入れたエンドツーエンドの例を見てみましょう。\n以下を検討してみましょう。\n\n28x28x1 の画像を生成するジェネレーターネットワーク。\n28x28x1 の画像を 2 つのクラス(「偽物」と「本物」)に分類するディスクリミネーターネットワーク。\nそれぞれに 1 つのオプティマイザ。\nディスクリミネーターをトレーニングする損失関数。",
"from tensorflow.keras import layers\n\n# Create the discriminator\ndiscriminator = keras.Sequential(\n [\n keras.Input(shape=(28, 28, 1)),\n layers.Conv2D(64, (3, 3), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2D(128, (3, 3), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.GlobalMaxPooling2D(),\n layers.Dense(1),\n ],\n name=\"discriminator\",\n)\n\n# Create the generator\nlatent_dim = 128\ngenerator = keras.Sequential(\n [\n keras.Input(shape=(latent_dim,)),\n # We want to generate 128 coefficients to reshape into a 7x7x128 map\n layers.Dense(7 * 7 * 128),\n layers.LeakyReLU(alpha=0.2),\n layers.Reshape((7, 7, 128)),\n layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2D(1, (7, 7), padding=\"same\", activation=\"sigmoid\"),\n ],\n name=\"generator\",\n)",
"ここにフィーチャーコンプリートの GAN クラスがあります。compile()をオーバーライドして独自のシグネチャを使用することにより、GAN アルゴリズム全体をtrain_stepの 17 行で実装しています。",
"class GAN(keras.Model):\n def __init__(self, discriminator, generator, latent_dim):\n super(GAN, self).__init__()\n self.discriminator = discriminator\n self.generator = generator\n self.latent_dim = latent_dim\n\n def compile(self, d_optimizer, g_optimizer, loss_fn):\n super(GAN, self).compile()\n self.d_optimizer = d_optimizer\n self.g_optimizer = g_optimizer\n self.loss_fn = loss_fn\n\n def train_step(self, real_images):\n if isinstance(real_images, tuple):\n real_images = real_images[0]\n # Sample random points in the latent space\n batch_size = tf.shape(real_images)[0]\n random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))\n\n # Decode them to fake images\n generated_images = self.generator(random_latent_vectors)\n\n # Combine them with real images\n combined_images = tf.concat([generated_images, real_images], axis=0)\n\n # Assemble labels discriminating real from fake images\n labels = tf.concat(\n [tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0\n )\n # Add random noise to the labels - important trick!\n labels += 0.05 * tf.random.uniform(tf.shape(labels))\n\n # Train the discriminator\n with tf.GradientTape() as tape:\n predictions = self.discriminator(combined_images)\n d_loss = self.loss_fn(labels, predictions)\n grads = tape.gradient(d_loss, self.discriminator.trainable_weights)\n self.d_optimizer.apply_gradients(\n zip(grads, self.discriminator.trainable_weights)\n )\n\n # Sample random points in the latent space\n random_latent_vectors = tf.random.normal(shape=(batch_size, self.latent_dim))\n\n # Assemble labels that say \"all real images\"\n misleading_labels = tf.zeros((batch_size, 1))\n\n # Train the generator (note that we should *not* update the weights\n # of the discriminator)!\n with tf.GradientTape() as tape:\n predictions = self.discriminator(self.generator(random_latent_vectors))\n g_loss = self.loss_fn(misleading_labels, predictions)\n grads = tape.gradient(g_loss, self.generator.trainable_weights)\n 
self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights))\n return {\"d_loss\": d_loss, \"g_loss\": g_loss}\n",
"試運転してみましょう。",
"# Prepare the dataset. We use both the training & test MNIST digits.\nbatch_size = 64\n(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()\nall_digits = np.concatenate([x_train, x_test])\nall_digits = all_digits.astype(\"float32\") / 255.0\nall_digits = np.reshape(all_digits, (-1, 28, 28, 1))\ndataset = tf.data.Dataset.from_tensor_slices(all_digits)\ndataset = dataset.shuffle(buffer_size=1024).batch(batch_size)\n\ngan = GAN(discriminator=discriminator, generator=generator, latent_dim=latent_dim)\ngan.compile(\n d_optimizer=keras.optimizers.Adam(learning_rate=0.0003),\n g_optimizer=keras.optimizers.Adam(learning_rate=0.0003),\n loss_fn=keras.losses.BinaryCrossentropy(from_logits=True),\n)\n\n# To limit the execution time, we only train on 100 batches. You can train on\n# the entire dataset. You will need about 20 epochs to get nice results.\ngan.fit(dataset.take(100), epochs=1)",
"ディープラーニングの背後にある考え方は単純なわけですから、当然、実装も単純なのです。"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tpin3694/tpin3694.github.io
|
machine-learning/handling_time_zones.ipynb
|
mit
|
[
"Title: Handling Time Zones\nSlug: handling_time_zones\nSummary: How to handle timezones for machine learning in Python. \nDate: 2017-09-11 12:00\nCategory: Machine Learning\nTags: Preprocessing Dates And Times \nAuthors: Chris Albon\nPreliminaries",
"# Load libraries\nimport pandas as pd\nfrom pytz import all_timezones",
"View Timezones",
"# Show ten time zones\nall_timezones[0:10]",
"Create Timestamp With Time Zone",
"# Create datetime\npd.Timestamp('2017-05-01 06:00:00', tz='Europe/London')",
"Create Timestamp Without Time Zone",
"# Create datetime\ndate = pd.Timestamp('2017-05-01 06:00:00')",
"Add Time Zone",
"# Set time zone\ndate_in_london = date.tz_localize('Europe/London')",
"Convert Time Zone",
"# Change time zone\ndate_in_london.tz_convert('Africa/Abidjan')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rajanshah/dx
|
03_dx_valuation_single_risk.ipynb
|
agpl-3.0
|
[
"<img src=\"http://hilpisch.com/tpq_logo.png\" alt=\"The Python Quants\" width=\"45%\" align=\"right\" border=\"4\">\nSingle-Risk Derivatives Valuation\nThis part introduces into the modeling and valuation of derivatives instruments (contingent claims) based on a single risk factor (e.g. a stock price, stock index level or interest rate). It also shows how to model and value portfolios composed of such instruments.",
"from dx import *\nimport seaborn as sns; sns.set()",
"The following single risk factor valuation classes are available:\n\nvaluation_mcs_european_single for derivatives with European exercise\nvaluation_mcs_american_single for derivatives with American/Bermudan exercise\n\nModeling the Risk Factor\nBefore moving on to the valuation classes, we need to model an instantiate an underlying risk factor, in this case a geometric_brownian_motion object. Background information is provided in the respective part of the documentation about model classes.",
"r = constant_short_rate('r', 0.06)\n\nme = market_environment('me', dt.datetime(2015, 1, 1))\n\nme.add_constant('initial_value', 36.)\nme.add_constant('volatility', 0.2)\nme.add_constant('final_date', dt.datetime(2015, 12, 31))\nme.add_constant('currency', 'EUR')\nme.add_constant('frequency', 'W')\nme.add_constant('paths', 25000)\n\nme.add_curve('discount_curve', r)\n\ngbm = geometric_brownian_motion('gbm', me)",
"valuation_mcs_european_single\nThe first instrument we value is a European call option written on the single relevant risk factor as embodied by the gbm model object. To this end, we add a maturity date to the market environment and a strike price.",
"me.add_constant('maturity', dt.datetime(2015, 12, 31))\nme.add_constant('strike', 40.)",
"To instantiate a the valuation_mcs_european_single class, the following information/data is to be provided:\n\nname as a string object\ninstance of a model class\nmarket environment\npayoff of the instrument a string object and containing \"regular\" Python/NumPy code",
"call_eur = valuation_mcs_european_single(\n name='call_eur',\n underlying=gbm,\n mar_env=me,\n payoff_func='np.maximum(maturity_value - strike, 0)')",
"In this case, the payoff is that of a regular, plain vanilla European call option. If $T$ is the maturity date, $S_T$ the value of the relevant risk factor at that date and $K$ the strike price, the payoff $h_T$ at maturity of such an option is given by\n$$\nh_T = \\max[S_T - K, 0]\n$$\nmaturity_value represents the value vector of the risk factor at maturity. Any other \"sensible\" payoff definition is possible. For instance, the following works as well:",
"payoff = 'np.maximum(np.minimum(maturity_value) * 2 - 50, 0)'",
"Other standardized payoff elemenets include mean_value, max_value and min_value representing maturity value vectors with the pathwise means, maxima and minima. Using these payoff elements allows the easy definition of options with Asian features.\nHaving instantiated the valuation class, the present_value method returns the present value Monte Carlo estimator for the call option.",
"call_eur.present_value()",
"Similarly, the delta and vega methods return the delta and the vega of the option, estimated numerically by a forward difference scheme and Monte Carlo simulation.",
"call_eur.delta()\n\ncall_eur.vega()",
"This approach allows to work with such a valuation object similar to an analytical valuation formula like the one of Black-Scholes-Merton (1973). For example, you can estimate and plot present values, deltas and vegas for a range of different initial values of the risk factor.",
"%%time\ns_list = np.arange(34., 46.1, 2.)\npv = []; de = []; ve = []\nfor s in s_list:\n call_eur.update(s)\n pv.append(call_eur.present_value())\n de.append(call_eur.delta(.5))\n ve.append(call_eur.vega(0.2))\n\n%matplotlib inline",
"There is a little plot helper function available to plot these statistics conveniently.",
"plot_option_stats(s_list, pv, de, ve)",
"valuation_mcs_american_single\nThe modeling and valuation of derivatives with American/Bermudan exercise is almost completely the same as in the more simple case of European exercise.",
"me.add_constant('initial_value', 36.)\n # reset initial_value\n\nput_ame = valuation_mcs_american_single(\n name='put_eur',\n underlying=gbm,\n mar_env=me,\n payoff_func='np.maximum(strike - instrument_values, 0)')",
"The only difference to consider here is that for American options where exercise can take place at any time before maturity, the inner value of the option (payoff of immediate exercise) is relevant over the whole set of dates. Therefore, maturity_value needs to be replaced by instrument_values in the definition of the payoff function.",
"put_ame.present_value()",
"Since DX Analytics relies on Monte Carlo simulation and other numerical methods, the calculation of the delta and vega of such an option is identical to the European exercise case.",
"put_ame.delta()\n\nput_ame.vega()\n\n%%time\ns_list = np.arange(34., 46.1, 2.)\npv = []; de = []; ve = []\nfor s in s_list:\n put_ame.update(s)\n pv.append(put_ame.present_value())\n de.append(put_ame.delta(.5))\n ve.append(put_ame.vega(0.2))\n\nplot_option_stats(s_list, pv, de, ve)",
"Portfolio Valuation\nIn general, market players (asset managers, investment banks, hedge funds, insurance companies, etc.) have to value not only single derivatvies instruments but rather portfolios composed of several derivatives instruments. A consistent derivatives portfolio valuation is particularly important when there are multiple derivatives written on the same risk factor and/or correlations between different risk factors.\nThese are the classes availble for a consistent portfolio valuation:\n\nderivatives_position to model a portfolio position\nderivatives_portfolio to model a derivatives portfolio\n\nderivatives_position\nWe work with the market_environment object from before and add information about the risk factor model we are using.",
"me.add_constant('model', 'gbm')",
"A derivatives position consists of \"data only\" and not instantiated model or valuation objects. The necessary model and valuation objects are instantiated during the portfolio valuation.",
"put = derivatives_position(\n name='put', # name of position\n quantity=1, # number of instruments\n underlyings=['gbm'], # relevant risk factors\n mar_env=me, # market environment\n otype='American single', # the option type\n payoff_func='np.maximum(40. - instrument_values, 0)')\n # the payoff funtion",
"The method get_info prints an overview of the all relevant information stored for the respective derivatives_position object.",
"put.get_info()",
"derivatives_portfolio\nThe derivatives_portfolio class implements the core portfolio valuation tasks. This sub-section illustrates to cases, one with uncorrelated underlyings and another one with correlated underlyings\nUncorrelated Underlyings\nThe first example is based on a portfolio with two single-risk factor instruments on two different risk factors which are not correlated. In addition to the gbm object, we define a jump_diffusion object.",
"me_jump = market_environment('me_jump', dt.datetime(2015, 1, 1))\n\nme_jump.add_environment(me)\nme_jump.add_constant('lambda', 0.8)\nme_jump.add_constant('mu', -0.8)\nme_jump.add_constant('delta', 0.1)\nme_jump.add_constant('model', 'jd')",
"Based on this new risk factor model object, a European call option is defined.",
"call_jump = derivatives_position(\n name='call_jump',\n quantity=3,\n underlyings=['jd'],\n mar_env=me_jump,\n otype='European single',\n payoff_func='np.maximum(maturity_value - 36., 0)')",
"Our relevant market now takes on the following form (defined a dictionary objects):",
"risk_factors = {'gbm': me, 'jd' : me_jump}\npositions = {'put' : put, 'call_jump' : call_jump}",
"To instantiate the derivatives_portfolio class, a valuation environment (instance of market_environment class) is needed.",
"val_env = market_environment('general', dt.datetime(2015, 1, 1))\nval_env.add_constant('frequency', 'M')\nval_env.add_constant('paths', 50000)\nval_env.add_constant('starting_date', val_env.pricing_date)\nval_env.add_constant('final_date', val_env.pricing_date)\nval_env.add_curve('discount_curve', r)",
"For the instantiation, we pass all the elements to the portfolio class.",
"port = derivatives_portfolio(\n name='portfolio', # name \n positions=positions, # derivatives positions\n val_env=val_env, # valuation environment\n risk_factors=risk_factors, # relevant risk factors\n correlations=False, # correlation between risk factors\n fixed_seed=False, # fixed seed for randon number generation\n parallel=False) # parallel valuation of portfolio positions",
"Once instantiated, the method get_statistics provides major portfolio statistics like position values, position deltas ans position vegas.",
"%%time\nstats = port.get_statistics()\n\nstats",
"The method returns a standard pandas DataFrame object with which you can work as you are used to.",
"stats[['pos_value', 'pos_delta', 'pos_vega']].sum()",
"The metod get_values only calculates the present values of the derivatives instruments and positions and is therefore a bit less compute and time intensive.",
"%time port.get_values()",
"The method get_positions provides detailed information about the single derivatives positions of the derivatives_portfolio object.",
"port.get_positions()",
"Correlated Underlyings\nThe second example case is exactly the same but now with a highly positive correlation between the two relevant risk factors. Correlations are to be provided as a list of list objects using the risk factor model names to reference them.",
"correlations = [['gbm', 'jd', 0.9]]",
"Except from now passing this new object, the application and usage remains the same.",
"port = derivatives_portfolio(\n name='portfolio',\n positions=positions,\n val_env=val_env,\n risk_factors=risk_factors,\n correlations=correlations,\n fixed_seed=True,\n parallel=False)\n\nport.get_statistics()",
"The Cholesky matrix has been added to the valuation environment (which gets passed to the risk factor model objects).",
"port.val_env.lists['cholesky_matrix']",
"Let us pick two specific simulated paths, one for each risk factor, and let us visualize these.",
"path_no = 0\npaths1 = port.underlying_objects['gbm'].get_instrument_values()[:, path_no]\npaths2 = port.underlying_objects['jd'].get_instrument_values()[:, path_no]",
"The plot illustrates that the two paths are indeed highly positively correlated. However, in this case a large jump occurs for the jump_diffusion object.",
"plt.figure(figsize=(10, 6))\nplt.plot(port.time_grid, paths1, 'r', label='gbm')\nplt.plot(port.time_grid, paths2, 'b', label='jd')\nplt.gcf().autofmt_xdate()\nplt.legend(loc=0); plt.grid(True)\n# highly correlated underlyings\n# -- with a large jump for one risk factor",
"Copyright, License & Disclaimer\n© Dr. Yves J. Hilpisch | The Python Quants GmbH\nDX Analytics (the \"dx library\") is licensed under the GNU Affero General Public License\nversion 3 or later (see http://www.gnu.org/licenses/).\nDX Analytics comes with no representations\nor warranties, to the extent permitted by applicable law.\n<img src=\"http://hilpisch.com/tpq_logo.png\" alt=\"The Python Quants\" width=\"35%\" align=\"right\" border=\"0\"><br>\nhttp://tpq.io | team@tpq.io | http://twitter.com/dyjh\nQuant Platform |\nhttp://quant-platform.com\nDerivatives Analytics with Python (Wiley Finance) |\nhttp://derivatives-analytics-with-python.com\nPython for Finance (O'Reilly) |\nhttp://python-for-finance.com"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
goodwordalchemy/thinkstats_notes_and_exercises
|
code/chap05ex.ipynb
|
gpl-3.0
|
[
"Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>\nAllen Downey",
"from __future__ import division\n\nimport thinkstats2\nimport thinkplot\nimport numpy as np\n\n%matplotlib inline",
"Exercise 5.1\nIn the BRFSS (see Section 5.4), the distribution of heights is roughly normal with parameters µ = 178 cm and σ = 7.7 cm for men, and µ = 163 cm and σ = 7.3 cm for women.\nIn order to join Blue Man Group, you have to be male between 5’10” and 6’1” (see http://bluemancasting.com). What percentage of the U.S. male population is in this range? Hint: use scipy.stats.norm.cdf.\n<tt>scipy.stats</tt> contains objects that represent analytic distributions",
"import scipy.stats",
"For example <tt>scipy.stats.norm</tt> represents a normal distribution.",
"mu = 178\nsigma = 7.7\ndist = scipy.stats.norm(loc=mu, scale=sigma)\ntype(dist)",
"A \"frozen random variable\" can compute its mean and standard deviation.",
"dist.mean(), dist.std()",
"It can also evaluate its CDF. How many people are more than one standard deviation below the mean? About 16%",
"dist.cdf(mu-sigma)",
"How many people are between 5'10\" and 6'1\"?",
"def heightToCentimeters(ft, inches):\n height_in = ft * 12 + inches\n return height_in * 2.54\n\nminHeight = heightToCentimeters(5,10)\nminPercentile = dist.cdf(minHeight)\nprint('minPercentile', minPercentile )\n\nmaxHeight = heightToCentimeters(6,1)\nmaxPercentile = dist.cdf(maxHeight)\nprint('maxPercentile', maxPercentile)\n\nprint('population percent', maxPercentile - minPercentile)\nprint('my Answer: %d%%' % round((maxPercentile - minPercentile) * 100, 2))",
"Exercise 5.2\nTo get a feel for the Pareto distribution, let’s see how different the world would be if the distribution of human height were Pareto. With the parameters $x_m = 1$ m and $α = 1.7$, we get a distribution with a reasonable minimum, 1 m, and median, 1.5 m.\nPlot this distribution. What is the mean human height in Pareto world? What fraction of the population is shorter than the mean? If there are 7 billion people in Pareto world, how many do we expect to be taller than 1 km? How tall do we expect the tallest person to be?\n<tt>scipy.stats.pareto</tt> represents a pareto distribution. In Pareto world, the distribution of human heights has parameters alpha=1.7 and xmin=1 meter. So the shortest person is 100 cm and the median is 150.",
"alpha = 1.7\nxmin = 1\ndist = scipy.stats.pareto(b=alpha, scale=xmin)\ndist.median()\n\nxs, ps = thinkstats2.RenderParetoCdf(xmin, alpha, 0, 10.0, n=100) \nthinkplot.Plot(xs, ps, label=r'$\\alpha=%g$' % alpha)\nthinkplot.Config(xlabel='height (m)', ylabel='CDF')",
"What is the mean height in Pareto world?",
"pMean = dist.mean()\npMean",
"What fraction of people are shorter than the mean?",
"dist.cdf(pMean)",
"Out of 7 billion people, how many do we expect to be taller than 1 km? You could use <tt>dist.cdf</tt> or <tt>dist.sf</tt>.",
"fracTall = 1 - dist.cdf(1000)\nfracTall * 7e9",
"How tall do we expect the tallest person to be? Hint: find the height that yields about 1 person.",
"\"\"\"\nthe probability that one is that tall is 1 in 7 billion.\nI need to find the probability that corresponds to 1 - that height.\n\"\"\"\ntallestProb = 1 - (1 / 7e9)\ndist.ppf(tallestProb)\n\ndist.sf(618349.61067595053) * 7e9 ",
"Exercise 5.3\nThe Weibull distribution is a generalization of the exponential distribution that comes up in failure analysis (see http://wikipedia.org/wiki/Weibull_distribution). Its CDF is\n$CDF(x) = 1 − \\exp(−(x / λ)^k)$ \nCan you find a transformation that makes a Weibull distribution look like a straight line? What do the slope and intercept of the line indicate?\nUse random.weibullvariate to generate a sample from a Weibull distribution and use it to test your transformation.",
"import random, math\nWB_sample = [random.weibullvariate(3, 5) for i in xrange(1000)]\nWB_cdf = thinkstats2.Cdf(WB_sample)\nWB_sample2 = [w for w in WB_sample if -math.log(WB_cdf.Prob(w)) > 0]\nWB_sample2.sort()\n\nt_sample = [math.log(x) for x in WB_sample2] \nt_cdf = [math.log(-math.log(1 - WB_cdf.Prob(y))) for y in WB_sample2]\n\nthinkplot.plot(t_sample, t_cdf)\n\n",
"Exercise 5.4\nFor small values of n, we don’t expect an empirical distribution to fit an analytic distribution exactly. One way to evaluate the quality of fit is to generate a sample from an analytic distribution and see how well it matches the data.\nFor example, in Section 5.1 we plotted the distribution of time between births and saw that it is approximately exponential. But the distribution is based on only 44 data points. To see whether the data might have come from an exponential distribution, generate 44 values from an exponential distribution with the same mean as the data, about 33 minutes between births.\nPlot the distribution of the random values and compare it to the actual distribution. You can use random.expovariate to generate the values.",
"import analytic\n\ndf = analytic.ReadBabyBoom()\ndiffs = df.minutes.diff()\ncdf = thinkstats2.Cdf(diffs, label='actual')\n\nsampMean = 33\nlam = 44.0 / 24 / 60\nrandSamp = [random.expovariate(lam) for i in range(44)]\n\nsampDiffs = np.diff(randSamp)\ncdfSamp = thinkstats2.Cdf(sampDiffs, label='sample')\n\nthinkplot.Cdfs([cdf, cdfSamp], complement=True)\nthinkplot.Config(yscale='log')\n",
"Exercise 5.5\n\nmystery0 --> linear and weibull\nmystery1 --> weibull and normal are both pretty good\nmystery2 --> expo and weibull\nmystery3 --> normal (and pareto)\nmystery4 --> lognormal (b/c) looks like normal, but ?. e + n\nmystery5 --> paretomystery6 --> normal and weibull\nmystery7 --> lognormal and expo\n\nExercise 5.6",
"import hinc\n\nincome = hinc.ReadData()\n\ninc_freq = dict(zip(inc.income, income.freq))\ninc_hist = thinkstats2.Hist(inc_freq, label='income distribution')\ninc_cdf = thinkstats2.Cdf(inc_freq, label='income distribution cdf')\nprint 'done'\n\n#exponential:\nprint('starting...')\nthinkplot.Cdf(inc_cdf, \n complement=True, \n yscale='log',\n label=\"Exponential\")\nthinkplot.Show()\n\n\n\n##normal\ninc_list = [inc for i in range(freq) for inc, freq in inc_freq.iteritems()]\ninc_list.sort()\nrand_samp = np.random.normal(0,1,len(inc_list))\nrand_samp.sort()\nthinkplot.Plot(inc_list, rand_samp, label='Normal')\nthinkplot.Show()\n\n##lognormal\nloginc_list = [math.log(inc) for i in range(freq) for inc, freq in inc_freq.iteritems()]\nloginc_list.sort()\nlogrand_samp = [np.random.normal(0,1) for i in range(len(loginc_list))]\nlogrand_samp.sort()\nthinkplot.PrePlot(2, rows=2)\nthinkplot.SubPlot(1)\nthinkplot.Plot(loginc_list, logrand_samp, label='lognormal')\nthinkplot.Show()\n\nlog_cdf = thinkstats2.Cdf(loginc_list)\nthinkplot.SubPlot(2)\nthinkplot.Cdf(log_cdf, label='lognormal plotted as normal')\nthinkplot.Show()\n\n##Pareto\ninc_list = [inc for i in range(freq) for inc, freq in inc_freq.iteritems()]\ninc_cdf = thinkstats2.Cdf(inc_list)\nthinkplot.Cdf(inc_cdf, transform=\"pareto\", label='Pareto')\nthinkplot.Show()\n\n##Weibull\nthinkplot.figure()\nthinkplot.Cdf(inc_cdf, transform='weibull')\n\nimport hinc_soln\nhinc_soln.main()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tata-antares/tagging_LHCb
|
Stefania_files/vertex-based-new-loss.ipynb
|
apache-2.0
|
[
"%pylab inline",
"Import",
"import pandas\nimport numpy\n\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import roc_curve, roc_auc_score\n\nfrom rep.metaml import FoldingClassifier\nfrom rep.data import LabeledDataStorage\nfrom rep.report import ClassificationReport\nfrom rep.report.metrics import RocAuc\n\nfrom utils import get_N_B_events, get_events_number, get_events_statistics",
"Reading initial data",
"import root_numpy\ndata = pandas.DataFrame(root_numpy.root2array('datasets/1016_vtx.root'))",
"Define label\nlabel = signB * signVtx > 0\n* same sign of B and vtx -> label = 1\n* opposite sign of B and vtx -> label = 0",
"event_id_column = 'event_id'\ndata[event_id_column] = data.runNum.apply(str) + '_' + (data.evtNum.apply(int)).apply(str)\n# reconstructing sign of B\ndata['signB'] = data.tagAnswer * (2 * data.iscorrect - 1)\n# assure sign is +1 or -1\ndata['signVtx'] = (data.signVtx.values > 0) * 2 - 1\ndata['label'] = (data.signVtx.values * data.signB.values > 0) * 1\n\ndata.head()\n\nget_events_statistics(data)\n\nN_pass = get_events_number(data)\ntagging_efficiency = 1. * N_pass / get_N_B_events()\ntagging_efficiency_delta = sqrt(N_pass) / get_N_B_events()\nprint tagging_efficiency, tagging_efficiency_delta\n\nBdata_tracks = pandas.read_csv('models/Bdata_tracks_PID_less.csv')\n\nBdata_tracks.index = Bdata_tracks.event_id\n\ndata['initial_pred'] = Bdata_tracks.ix[data.event_id, 'track_relation_prob'].values",
"Define B-like events for training and others for prediction",
"sweight_threshold = 1.\ndata_sw_passed = data[data.N_sig_sw > sweight_threshold]\ndata_sw_not_passed = data[data.N_sig_sw <= sweight_threshold]\nget_events_statistics(data_sw_passed)",
"Define features",
"features = ['mult', 'nnkrec', 'ptB', 'vflag', 'ipsmean', 'ptmean', 'vcharge', \n 'svm', 'svp', 'BDphiDir', 'svtau', 'docamax']",
"Find good vtx to define sign B\nTrying to guess the sign of B based on the sign of the vtx. If the guess is good, the vtx will be used in the next step to train the classifier.\n2-folding random forest selection for right tagged events",
"data_sw_passed_lds = LabeledDataStorage(data_sw_passed, data_sw_passed.label, data_sw_passed.N_sig_sw)",
"Training on all vtx\nin this case we don't use preselection with RandomForest\nDT full",
"from hep_ml.decisiontrain import DecisionTrainClassifier\nfrom hep_ml.losses import LogLossFunction\n\nfrom hep_ml.losses import HessianLossFunction\nfrom hep_ml.commonutils import check_sample_weight\nfrom scipy.special import expit\n\nclass LogLossFunctionTagging(HessianLossFunction):\n \"\"\"Logistic loss function (logloss), aka binomial deviance, aka cross-entropy,\n aka log-likelihood loss.\n \"\"\" \n def fit(self, X, y, sample_weight):\n self.sample_weight = check_sample_weight(y, sample_weight=sample_weight,\n normalize=True, normalize_by_class=True)\n self.initial_pred = numpy.log(X['initial_pred'].values)\n self.y_signed = 2 * y - 1\n self.minus_y_signed = - self.y_signed\n self.y_signed_times_weights = self.y_signed * self.sample_weight\n HessianLossFunction.fit(self, X, y, sample_weight=self.sample_weight)\n return self\n\n def __call__(self, y_pred):\n y_pred = y_pred + self.initial_pred\n return numpy.sum(self.sample_weight * numpy.logaddexp(0, self.minus_y_signed * y_pred))\n\n def negative_gradient(self, y_pred):\n y_pred = y_pred + self.initial_pred\n return self.y_signed_times_weights * expit(self.minus_y_signed * y_pred)\n\n def hessian(self, y_pred):\n y_pred = y_pred + self.initial_pred\n expits = expit(y_pred)\n return self.sample_weight * expits * (1 - expits)\n\n def prepare_tree_params(self, y_pred):\n y_pred = y_pred + self.initial_pred\n return self.y_signed * expit(self.minus_y_signed * y_pred), self.sample_weight\n\ntt_base = DecisionTrainClassifier(learning_rate=0.02, n_estimators=1500, depth=6, \n max_features=8, loss=LogLossFunctionTagging(regularization=100), train_features=features)\ntt_folding = FoldingClassifier(tt_base, n_folds=2, random_state=11,\n features=features + ['initial_pred'])\n%time tt_folding.fit_lds(data_sw_passed_lds)\npass\n\nfrom scipy.special import expit, logit\n\ndata_temp = data_sw_not_passed\n\nprint roc_auc_score(data_temp.signB.values, data_temp['initial_pred'].values, 
sample_weight=data_temp.N_sig_sw.values)\n\np = tt_folding.predict_proba(data_temp)[:, 1]\nprint roc_auc_score(data_temp.signB.values,\n log(data_temp['initial_pred'].values) + logit(p) * data_temp.signB.values,\n sample_weight=data_temp.N_sig_sw.values)\n\nhist(tt_folding.estimators[0].loss.initial_pred, )\npass",
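The custom `LogLossFunctionTagging` above shifts each prediction by a fixed `initial_pred` and reuses the standard logloss identities. A standalone sanity check of those identities — with synthetic values, not the notebook's data, and `expit` defined inline to match `scipy.special.expit` — might look like:

```python
# Standalone check (synthetic values, not the notebook's data) of the
# logloss identities reused by LogLossFunctionTagging:
#   loss(pred) = sum log(1 + exp(-y_signed * pred))
#   d loss / d pred = -y_signed * expit(-y_signed * pred)
import numpy as np

def expit(x):  # same as scipy.special.expit
    return 1.0 / (1.0 + np.exp(-x))

y_signed = np.array([1., -1., 1.])   # labels mapped to +-1
pred = np.array([0.3, -1.2, 2.0])    # arbitrary raw scores

def loss(p):
    # numerically stable log(1 + exp(-y * p))
    return np.logaddexp(0, -y_signed * p).sum()

# Finite-difference gradient vs. the closed form used in negative_gradient
eps = 1e-6
num_grad = np.array([(loss(pred + eps * e) - loss(pred - eps * e)) / (2 * eps)
                     for e in np.eye(3)])
ana_grad = -y_signed * expit(-y_signed * pred)
```

The class's `negative_gradient` returns the (weighted) negation of `ana_grad`, which is what gradient boosting fits the next tree against.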
"Report for all vtx",
"report = ClassificationReport({'tt': tt_folding}, data_sw_passed_lds)\n\nreport.learning_curve(RocAuc())\n\nreport.compute_metric(RocAuc())\n\nreport.roc()",
"Calibrating results $p(\\text{vrt same sign}|B)$ and combining them",
"models = []\n\nfrom utils import get_result_with_bootstrap_for_given_part\n\nmodels.append(get_result_with_bootstrap_for_given_part(tagging_efficiency, tagging_efficiency_delta, tt_folding, \n [data_sw_passed, data_sw_not_passed], \n logistic=True, name=\"tt-log\",\n sign_part_column='signVtx', part_name='vertex'))\n\nmodels.append(get_result_with_bootstrap_for_given_part(tagging_efficiency, tagging_efficiency_delta, tt_folding, \n [data_sw_passed, data_sw_not_passed], \n logistic=False, name=\"tt-iso\",\n sign_part_column='signVtx', part_name='vertex'))",
"Comparison of different models",
"pandas.concat(models)\n\npandas.concat(models)",
"Implementing the best vertex model\nand saving its predictions",
"from utils import prepare_B_data_for_given_part\n\nBdata_prepared = prepare_B_data_for_given_part(tt_folding, [data_sw_passed, data_sw_not_passed], logistic=False,\n sign_part_column='signVtx', part_name='vertex')\n\nBdata_prepared.to_csv('models/Bdata_vertex_new_loss.csv', header=True, index=False)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
Sessions/Session08/Day1/OOP Lecture.ipynb
|
mit
|
[
"Object Oriented Programming\nJ. S. Oishi",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt",
"Programming \"Paradigms\"\nWays of organizing programs\n\nProcedural (e.g. FORTRAN, C)\nFunctional (e.g. LISP, Haskell)\nObject Oriented (e.g. C++, Java)\n\nPython is...all of these\nPython is a multi-paradigmatic language; this is why you may have programmed for years in python and not know what an object is.\nProblem\nMake a list of the first n squares.\nProcedural",
"def square(n):\n squares = []\n for i in range(n):\n squares.append(i**2)\n return squares\nprint(square(10))",
"Functional",
"sq = lambda n: [i**2 for i in range(n)]\n\nprint(sq(10)) #actually this isn't really functional! printing is a \"side effect\"",
"Objects\nObjects have \n\ndata called attributes\nfunctions to act on their data called methods",
"class Observation(): # \"object\" and \"class\" are interchangable!\n def __init__(self, data): # method\n self.data = data #attribute\n def average(self): # method\n dsum = 0\n for i,d in enumerate(self.data):\n dsum += d\n average=dsum/(i+1)\n return average",
"Instances\nInstances are not the same thing as objects",
"obs1 = Observation([0,1,2])\nobs2 = Observation([4,5,6])\n\nprint(\"Avg 1 = {:e}; Avg 2 = {:e}\".format(obs1.average(), obs2.average()))\nprint(\"Type of Avg 1 = {:}; Type of Avg 2 = {:}\".format(type(obs1), type(obs2)))\n\nprint(obs1.data)\nprint(obs2.data)",
"Inheritance\nWe can make new objects by adding to existing objects. This is called inheritance",
"class TimeSeries(Observation): # inherits all the methods and attributes from Observation\n def __init__(self, time, data):\n self.time = time\n Observation.__init__(self, data) # this calls the constructor of the base class\n if len(self.time) != len(self.data):\n raise ValueError(\"Time and data must have same length!\")\n def stop_time(self):\n return self.time[-1] # unclear why you would want this\n\ntobs = TimeSeries([0,1,2],[3,4,5])\nprint(tobs)\nprint(\"Stop time = {:e}\".format(tobs.stop_time())) # new method\nprint(\"tobs average = {:e}\".format(tobs.average())) # but tobs also has methods from Observation",
"Objects in practice\nIn python everything is an object:",
"print(print) #functions are objects!\ndont_do_this = print # this is the object representing a function!\ndont_do_this(\"dont do this!\")",
"Example: Matplotlib\nMatplotlib has a completely object oriented way of dealing with plots, which is very well suited to complex figures.\nFirst, some dumb fake data.",
"x = np.linspace(0,2*np.pi, 1000) \ny_theory = np.sinc(x)\ny = y_theory + np.random.rand(1000)\n\nfig = plt.figure(figsize=(8,8)) # create a figure object\nax_data = fig.add_axes([0.1,0.4,0.8,0.8]) # figure objects have an add_axes method\nax_residual = fig.add_axes([0.1,0.1,0.8,0.3]) \n\n# this is one axis\nax_data.plot(x,y, label='sinc(x)') # a plot is a **method** of a set of axes!\nax_data.legend() # so is a legend\nax_data.set_ylabel('f(x)') # the labels are attributes, set_ylabel sets it\n# this is another...just refer to them by name!\nax_residual.plot(x, y-y_theory, label='residual')\nax_residual.legend()\nax_residual.set_xlabel('x')\nax_residual.set_ylabel('residual')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/making_with_ml
|
instafashion/scripts/getMatches.ipynb
|
apache-2.0
|
[
"from pyvisionproductsearch import ProductSearch, ProductCategories\nfrom google.cloud import storage\nfrom google.cloud import firestore\nimport pandas as pd\nfrom google.cloud import vision\nfrom google.cloud.vision import types\nfrom utils import detectLabels, detectObjects\nimport io\nfrom tqdm.notebook import tqdm\nimport os\nfrom dotenv import load_dotenv\nload_dotenv()\n\n# Fill these out with your own values\n# GCP config\nGCP_PROJECTID=\"YOUR_PROJECT_ID\"\nBUCKET=\"YOUR_BUCKET\"\nCREDS=\"key.json\"\nPRODUCT_SET=\"YOUR_PRODUCT_SET\"\nINSPO_BUCKET = \"YOUR_INSPO_PIC_BUCKET\"\n# If your inspiration pictures are in a subfolder, list it here:\nINSPO_SUBFOLDER = \"YOUR_SUBFOLDER_NAME\"\n\n# To use this notebook, make a copy of .env_template --> .env and fill out the fields!\nps = ProductSearch(GCP_PROJECTID, CREDS, BUCKET)\nproductSet = ps.getProductSet(PRODUCT_SET)",
"Download fashion influence pics and filter them by \"Fashion\" images",
"# For each fashion inspiration pic, check to make sure that it's \n# a \"fashion\" picture. Ignore all other pics\nstorage_client = storage.Client()\nblobs = list(storage_client.list_blobs(INSPO_BUCKET, prefix=INSPO_SUBFOLDER))\nuris = [os.path.join(\"gs://\", blobs[0].bucket.name, x.name)\n for x in blobs if '.jpg' in x.name]\nurls = [x.public_url for x in blobs if '.jpg' in x.name]\n\nfashionPics = []\nfor uri, url in tqdm(list(zip(uris, urls))):\n labels = detectLabels(image_uri=uri)\n if any([x.description == \"Fashion\" for x in labels]):\n fashionPics.append((uri, url))\nfashion_pics = pd.DataFrame(fashionPics, columns=[\"uri\", \"url\"])\n\n# Run this line to verify you can actually search your product set using a picture\nproductSet.search(\"apparel\", image_uri=fashion_pics['uri'].iloc[0])",
"Example Response:\n{'score': 0.7648860812187195,\n 'label': 'Shoe',\n 'matches': [{'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x14992d2e0>,\n 'score': 0.35719582438468933,\n 'image': 'projects/yourprojectid/locations/us-west1/products/high_rise_white_jeans_pants/referenceImages/6550f579-6b26-433a-8fa6-56e5bbca95c1'},\n {'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x14992d5b0>,\n 'score': 0.32596680521965027,\n 'image': 'projects/yourprojectid/locations/us-west1/products/white_boot_shoe/referenceImages/56248bb2-9d5e-4004-b397-6c3b2fb0edc3'},\n {'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x14a423850>,\n 'score': 0.26240724325180054,\n 'image': 'projects/yourprojectid/locations/us-west1/products/tan_strap_sandal_shoe/referenceImages/f970af65-c51e-42e8-873c-d18080f00430'}],\n 'boundingBox': [x: 0.6475263833999634\n y: 0.8726409077644348\n , x: 0.7815263271331787\n y: 0.8726409077644348\n , x: 0.7815263271331787\n y: 0.9934644103050232\n , x: 0.6475263833999634\n y: 0.9934644103050232\n ]},\n {'score': 0.8066604733467102,\n 'label': 'Shorts',\n 'matches': [{'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x106a4fa60>,\n 'score': 0.27552375197410583,\n 'image': 'projects/yourprojectid/locations/us-west1/products/white_sneaker_shoe_*/referenceImages/a109b530-56ff-42bc-ac73-d60578b7f363'},\n {'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x106a4f400>,\n 'score': 0.2667400538921356,\n 'image': 'projects/yourprojectid/locations/us-west1/products/grey_vneck_tee_top_*/referenceImages/cc6f873c-328e-481a-86fb-a2116614ce80'},\n {'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x106a4f8e0>,\n 'score': 0.2606571912765503,\n 'image': 'projects/yourprojectid/locations/us-west1/products/high_rise_white_jeans_pants_*/referenceImages/360b26d8-a844-4a83-bf97-ef80f2243fdb'},\n {'product': 
<pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x106a4fb80>],\n 'boundingBox': [x: 0.4181176424026489\n y: 0.40305882692337036\n , x: 0.6837647557258606\n y: 0.40305882692337036\n , x: 0.6837647557258606\n y: 0.64000004529953\n , x: 0.4181176424026489\n y: 0.64000004529953\n ]}]\n\nThe response above returns a set of matches for each item identified in your inspiration photo.\nIn the example above, \"Shorts\" and \"Shoes\" were recognized. For each of those items, a bounding box is returned that indicates where the item is in the picture.\nFor each matched item in your closet, a Product object is returned along with its image id and a confidence score.\nGet clothing matches\nWe want to make sure that when we recommend users similar items that we respect clothing type. \nFor example, the Product Search API might (accidentally) return a dress as a match for a shirt, but we wouldn't want to expose that to the end user. So this function--getBestMatch--sorts through the results returned by the API and makes sure that a. only the highest confidence match for each item is returned and b. that the item types match.",
"# The API sometimes uses different names for similar items, so this\n# function tells you whether two labels are roughly equivalent\ndef isTypeMatch(label1, label2):\n # everything in a single match group is more or less synonymous\n # (note the trailing commas: single-element groups must be tuples,\n # otherwise `in` does a substring check on a plain string)\n matchGroups = [(\"skirt\", \"miniskirt\"), \n (\"jeans\", \"pants\"), \n (\"shorts\",),\n (\"jacket\", \"vest\", \"outerwear\", \"coat\", \"suit\"),\n (\"top\", \"shirt\"),\n (\"dress\",),\n (\"swimwear\", \"underpants\"),\n (\"footwear\", \"sandal\", \"boot\", \"high heels\"),\n (\"handbag\", \"suitcase\", \"satchel\", \"backpack\", \"briefcase\"),\n (\"sunglasses\", \"glasses\"),\n (\"bracelet\",),\n (\"scarf\", \"bowtie\", \"tie\"),\n (\"earrings\",),\n (\"necklace\",),\n (\"sock\",),\n (\"hat\", \"cowboy hat\", \"straw hat\", \"fedora\", \"sun hat\", \"sombrero\")]\n for group in matchGroups:\n if label1.lower() in group and label2.lower() in group:\n return True\n return False\n\ndef getBestMatch(searchResponse):\n label = searchResponse['label']\n matches = searchResponse['matches']\n viableMatches = [match for match in matches if any([isTypeMatch(label, match['product'].labels['type'])])]\n return max(viableMatches, key= lambda x: x['score']) if len(viableMatches) else None\n",
"After we run getBestMatch above, we're left with a bunch of items from our own closet that match our inspiration picture. But the next step is transform those matches into an \"outfit,\" and outfits have rules: you can't wear a dress and pants at the same time (probably). You usually only wear one type of shoe. This next function, canAddItem, allows us to add clothing items to an outfit one at a time without breaking any of the \"rules\" of fashion.",
"def canAddItem(existingArray, newType):\n bottoms = {\"pants\", \"skirt\", \"shorts\", \"dress\"}\n newType = newType.lower()\n # Don't add the same item type twice\n if newType in existingArray:\n return False\n if newType == \"shoe\":\n return True\n # Only add one type of bottom (pants, skirt, etc)\n if newType in bottoms and len(bottoms.intersection(existingArray)):\n return False\n # You can't wear both a top and a dress\n if newType == \"top\" and \"dress\" in existingArray:\n return False\n return True",
"Finally, we need a function that allows us to evaluate how \"good\" an outfit recommendation is. We'll do this by creating a score function. This part is creative, and you can do it however you like. Here are some example score functions:",
"# Option 1: average the confidence scores of the closet items matched to the inspo photo\ndef scoreOutfit1(matches):\n if not matches:\n return 0\n return sum([match['score'] for match in matches]) / len(matches)\n\n# Option 2: Sum up the confidence scores only of items that matched with the inspo photo \n# with confidence > 0.3. Also, because shoes will match most images _twice_ \n# (because people have two feet), only count the shoe confidence score once\ndef scoreOutfit2(matches):\n if not len(matches):\n return 0\n \n noShoeSum = sum([x['score'] for x in matches if (x['score'] > 0.3 and not isTypeMatch(\"shoe\", x[\"label\"]))])\n shoeScore = 0\n try:\n shoeScore = max([x['score'] for x in matches if isTypeMatch(\"shoe\", x[\"label\"])])\n except:\n pass\n return noShoeSum + shoeScore * 0.5 # half the weight for shoes",
"Great--now that we have all our helper functions written, let's combine them into one big function for \nconstructing an outfit and computing its score!",
"def getOutfit(imgUri, verbose=False):\n # 1. Search for matching items\n response = productSet.search(\"apparel\", image_uri=imgUri)\n if verbose:\n print(\"Found matching \" + \", \".join([x['label'] for x in response]) + \" in closet.\")\n\n clothes = []\n # 2. For each item in the inspo pic, find the best match in our closet and add it to \n # the outfit array\n for item in response:\n bestMatch = getBestMatch(item)\n if not bestMatch:\n if verbose:\n print(f\"No good match found for {item['label']}\")\n continue\n if verbose:\n print(f\"Best match for {item['label']} was {bestMatch['product'].displayName}\")\n clothes.append(bestMatch)\n\n # 3. Sort the items by highest confidence score first\n clothes.sort(key=lambda x: x['score'], reverse=True)\n\n # 4. Add as many items as possible to the outfit while still\n # maintaining a logical outfit\n outfit = []\n addedTypes = []\n for item in clothes:\n itemType = item['product'].labels['type'] # i.e. shorts, top, etc\n if canAddItem(addedTypes, itemType):\n addedTypes.append(itemType)\n outfit.append(item)\n if verbose:\n print(f\"Added a {itemType} to the outfit\")\n\n # 5. Now that we have a whole outfit, compute its score!\n score1 = scoreOutfit1(outfit)\n score2 = scoreOutfit2(outfit)\n if verbose:\n print(\"Algorithm 1 score: %0.3f\" % score1)\n print(\"Algorithm 2 score: %0.3f\" % score2)\n return (outfit, score1, score2)\n \n\ngetOutfit(fashion_pics.iloc[0]['uri'], verbose=True)",
"Output:\n Found matching Shorts, Shoe in closet.\n Best match for Shorts was high_rise_white_shorts_*\n No good match found for Shoe\n Added a shorts to the outfit\n Algorithm 1 score: 0.247\n Algorithm 2 score: 0.000\n {'outfit': [{'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x149fa6760>,\n 'score': 0.24715223908424377,\n 'image': 'projects/yourprojectid/locations/us-west1/products/high_rise_white_shorts_*/referenceImages/71cc9936-2a35-4a81-8f43-75e1bf50fc22'}],\n 'score1': 0.24715223908424377,\n 'score2': 0.0}\n\nAdd Data to Firestore\nNow that we have a way of constructing and scoring outfits, let's add them to Firestore\nso we can later use them in our app.",
"db = firestore.Client()\nuserid = u\"youruserd\" # I like to store all data in Firestore as users, incase I decide to add more in the future!\nthisUser = db.collection(u'users').document(userid)\noutfits = thisUser.collection(u'outfitsDEMO')\n\n# Go through all of the inspo pics and compute matches.\nfor row in fashion_pics.iterrows():\n srcUrl = row[1]['url']\n srcUri = row[1]['uri']\n (outfit, score1, score2) = getOutfit(srcUri, verbose=False)\n \n # Construct a name for the source image--a key we can use to store it in the database\n srcId = srcUri[len(\"gs://\"):].replace(\"/\",\"-\")\n \n # Firestore writes json to the database, so let's construct an object and fill it with data\n fsMatch = {\n \"srcUrl\": srcUrl,\n \"srcUri\": srcUri,\n \"score1\": score1,\n \"score2\": score2,\n }\n # Go through all of the outfit matches and put them into json that can be\n # written to firestore\n theseMatches = []\n for match in outfit:\n image = match['image']\n imgName = match['image'].split('/')[-1]\n name = match['image'].split('/')[-3]\n # The storage api makes these images publicly accessible through url\n imageUrl = f\"https://storage.googleapis.com/{BUCKET}/\" + imgName\n label = match['product'].labels['type']\n score = match['score']\n\n theseMatches.append({\n \"score\": score,\n \"image\": image,\n \"imageUrl\": imageUrl,\n \"label\": label\n })\n fsMatch[\"matches\"] = theseMatches\n # Add the outfit to firestore!\n outfits.document(srcId).set(fsMatch)",
"Voila! Now you have a bunch of matches to recommend in Firestore! Just build a nice frontend to back it up!"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
opesci/devito
|
examples/compiler/04_iet-B.ipynb
|
mit
|
[
"In this tutorial we will learn how to build, compose, and transform Iteration/Expression Trees (IETs).\nPart II - Bottom Up\nDimensions are the building blocks of both Iterations and Expressions.",
"from devito import SpaceDimension, TimeDimension\n\ndims = {'i': SpaceDimension(name='i'),\n 'j': SpaceDimension(name='j'),\n 'k': SpaceDimension(name='k'),\n 't0': TimeDimension(name='t0'),\n 't1': TimeDimension(name='t1')}\n\ndims",
"Elements such as Scalars, Constants and Functions are used to build SymPy equations.",
"from devito import Grid, Constant, Function, TimeFunction\nfrom devito.types import Array, Scalar\n\ngrid = Grid(shape=(10, 10))\nsymbs = {'a': Scalar(name='a'),\n 'b': Constant(name='b'),\n 'c': Array(name='c', shape=(3,), dimensions=(dims['i'],)).indexify(),\n 'd': Array(name='d', \n shape=(3,3), \n dimensions=(dims['j'],dims['k'])).indexify(),\n 'e': Function(name='e', \n shape=(3,3,3), \n dimensions=(dims['t0'],dims['t1'],dims['i'])).indexify(),\n 'f': TimeFunction(name='f', grid=grid).indexify()}\nsymbs",
"An IET Expression wraps a SymPy equation. Below, DummyEq is a subclass of sympy.Eq with some metadata attached. What, when and how metadata are attached is here irrelevant.",
"from devito.ir.iet import Expression\nfrom devito.ir.equations import DummyEq\nfrom devito.tools import pprint\n\ndef get_exprs(a, b, c, d, e, f):\n return [Expression(DummyEq(a, b + c + 5.)),\n Expression(DummyEq(d, e - f)),\n Expression(DummyEq(a, 4 * (b * a))),\n Expression(DummyEq(a, (6. / b) + (8. * a)))]\n\nexprs = get_exprs(symbs['a'],\n symbs['b'],\n symbs['c'],\n symbs['d'],\n symbs['e'],\n symbs['f'])\n\npprint(exprs)",
"An Iteration typically wraps one or more Expressions.",
"from devito.ir.iet import Iteration\n\ndef get_iters(dims):\n return [lambda ex: Iteration(ex, dims['i'], (0, 3, 1)),\n lambda ex: Iteration(ex, dims['j'], (0, 5, 1)),\n lambda ex: Iteration(ex, dims['k'], (0, 7, 1)),\n lambda ex: Iteration(ex, dims['t0'], (0, 4, 1)),\n lambda ex: Iteration(ex, dims['t1'], (0, 4, 1))]\n\niters = get_iters(dims)",
"Here, we can see how blocks of Iterations over Expressions can be used to build loop nests.",
"def get_block1(exprs, iters):\n # Perfect loop nest:\n # for i\n # for j\n # for k\n # expr0\n return iters[0](iters[1](iters[2](exprs[0])))\n \ndef get_block2(exprs, iters):\n # Non-perfect simple loop nest:\n # for i\n # expr0\n # for j\n # for k\n # expr1\n return iters[0]([exprs[0], iters[1](iters[2](exprs[1]))])\n\ndef get_block3(exprs, iters):\n # Non-perfect non-trivial loop nest:\n # for i\n # for s\n # expr0\n # for j\n # for k\n # expr1\n # expr2\n # for p\n # expr3\n return iters[0]([iters[3](exprs[0]),\n iters[1](iters[2]([exprs[1], exprs[2]])),\n iters[4](exprs[3])])\n\nblock1 = get_block1(exprs, iters)\nblock2 = get_block2(exprs, iters)\nblock3 = get_block3(exprs, iters)\n\npprint(block1), print('\\n')\npprint(block2), print('\\n')\npprint(block3)",
"And, finally, we can build Callable kernels that will be used to generate C code. Note that Operator is a subclass of Callable.",
"from devito.ir.iet import Callable\n\nkernels = [Callable('foo', block1, 'void', ()),\n Callable('foo', block2, 'void', ()),\n Callable('foo', block3, 'void', ())]\n\nprint('kernel no.1:\\n' + str(kernels[0].ccode) + '\\n')\nprint('kernel no.2:\\n' + str(kernels[1].ccode) + '\\n')\nprint('kernel no.3:\\n' + str(kernels[2].ccode) + '\\n')",
"An IET is immutable. It can be \"transformed\" by replacing or dropping some of its inner nodes, but what this actually means is that a new IET is created. IETs are transformed by Transformer visitors. A Transformer takes as input a dictionary encoding replacement rules.",
"from devito.ir.iet import Transformer\n\n# Replaces a Function's body with another\ntransformer = Transformer({block1: block2})\nkernel_alt = transformer.visit(kernels[0])\nprint(kernel_alt)",
"Specific Expressions within the loop nest can also be substituted.",
"# Replaces an expression with another\ntransformer = Transformer({exprs[0]: exprs[1]})\nnewblock = transformer.visit(block1)\nnewcode = str(newblock.ccode)\nprint(newcode)\n\nfrom devito.ir.iet import Block\nimport cgen as c\n\n# Creates a replacer for replacing an expression\nline1 = '// Replaced expression'\nreplacer = Block(c.Line(line1))\ntransformer = Transformer({exprs[1]: replacer})\nnewblock = transformer.visit(block2)\nnewcode = str(newblock.ccode)\nprint(newcode)\n\n# Wraps an expression in comments\nline1 = '// This is the opening comment'\nline2 = '// This is the closing comment'\nwrapper = lambda n: Block(c.Line(line1), n, c.Line(line2))\ntransformer = Transformer({exprs[0]: wrapper(exprs[0])})\nnewblock = transformer.visit(block1)\nnewcode = str(newblock.ccode)\nprint(newcode)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
frankbearzou/Data-analysis
|
Recent Grads/Recent Grads.ipynb
|
mit
|
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline",
"Data Exploration",
"recent_grads = pd.read_csv('recent-grads.csv')\n\nrecent_grads.head()\n\nrecent_grads.tail()\n\nrecent_grads.describe()\n\nrecent_grads.shape",
"How many rows contain null values?",
"recent_grads.shape[0] - recent_grads.dropna().shape[0]",
"Data Visualization\nLet's compare ShareWomen and Unemployment_rate.",
"from pandas.tools.plotting import scatter_matrix\n\nscatter_matrix(recent_grads[['ShareWomen', 'Unemployment_rate']], figsize=(12,8))\nplt.show()\n\nsns.pairplot(recent_grads[['ShareWomen', 'Unemployment_rate']].dropna(), size=4)\nsns.plt.show()",
"Let's compare the share of men and women in engineering majors.",
"grads_eng_share = recent_grads[recent_grads['Major_category'] == 'Engineering']\n\ngrads_eng_share['ShareMen'] = 1 - grads_eng_share['ShareWomen']\n\n\ngrads_eng_share = grads_eng_share.set_index('Major')\n\ngrads_eng_share = grads_eng_share[['ShareMen', 'ShareWomen']]\n\ngrads_eng_share.head()\n\ngrads_eng_share.plot(kind='bar', figsize=(12,12))\nplt.show()",
"From the plot above, we found that most students in engineering majors are male.\nLet's compare the number of men and women in engineering majors.",
"grads_eng_num = recent_grads[recent_grads['Major_category'] == 'Engineering']\n\ngrads_eng_num.head()\n\ngrads_eng_num = grads_eng_num.set_index('Major')\n\ngrads_eng_num = grads_eng_num[['Men', 'Women']]\n\ngrads_eng_num.head()\n\ngrads_eng_num.plot(kind='bar', stacked=True, figsize=(12,12))\nplt.show()",
"From the stacked bar plot above, we found that most students study mechanical and electrical engineering, and we can also see the ratio of men to women in each major."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
thinkingmachines/deeplearningworkshop
|
codelab_7_ml_engine/2. ML Engine - Deployment.ipynb
|
mit
|
[
"Deploying a trained model in ML Engine\nAfter training our CNN model, we can now deploy it to ML Engine and run our predictions on the cloud!\nDeploy a version from your trained model",
"%%bash\ncd cifar10\n\nMODEL_NAME=\"cifar10\"\nVERSION_NAME=\"v1\"\nJOB_DIR=\"gs://dost_deeplearning_cifar10/cifar10_train_1499931245\" # Change this to your own\n\ngcloud ml-engine models create $MODEL_NAME\ngcloud ml-engine versions create \\\n $VERSION_NAME \\\n --model $MODEL_NAME \\\n --origin $JOB_DIR/model",
"Predict with your deployed model\nLet's try predicting with our deployed model! We've prepared a input json instance containing an image of a frog for testing.",
"%%bash\ncd cifar10\n\nMODEL_NAME=\"cifar10\"\nVERSION_NAME=\"v1\"\n\ngcloud ml-engine predict \\\n --model $MODEL_NAME \\\n --version $VERSION_NAME \\\n --json-instances predict_test.json",
"It should output 6, which is the label index for the frog class.\nEmojify\nLet's run a web application that will use our deployed model to \"emojify\" arbitrary images!\nInstall dependencies",
"!pip install -r emojify/requirements.txt",
"Run server",
"import os\nimport subprocess\nimport IPython\nfrom google.datalab.utils import pick_unused_port\n\nport = pick_unused_port()\n\n# Config is reckoned from env vars\nenv = {\n 'PROJECT_ID': 'dost-deeplearning', # Change this to your project id\n 'MODEL_NAME': 'cifar10',\n 'PORT': str(port),\n}\n\nargs = ['python', 'emojify/emojify.py']\nsubprocess.Popen(args, env=env)\n \nurl = '/_proxy/%d/' % port\nhtml = 'Running emojify! Click <a href=\"%s\" target=\"_blank\">here</a> to access it.' % url\nIPython.display.display_html(html, raw=True)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
alsam/Claw.jl
|
src/euler/Euler_approximate.ipynb
|
mit
|
[
"Approximate solvers for the Euler equations of gas dynamics\nIn this chapter we discuss approximate solvers for the one-dimensional Euler equations:\n\\begin{align}\n \\rho_t + (\\rho u)_x & = 0 \\\n (\\rho u)_t + (\\rho u^2 + p)_x & = 0 \\\n E_t + ((E+p)u)_x & = 0.\n\\end{align}\nAs in Euler, we focus on the case of an ideal gas, for which the total energy is given by\n\\begin{align} \\label{EA:EOS}\n E = \\frac{p}{\\gamma-1} + \\frac{1}{2}\\rho u^2.\n\\end{align}\nTo examine the Python code for this chapter, and for the exact Riemann solution, see:\n\nexact_solvers/euler.py ...\n on github.\n\nRoe solver\nWe first derive a Roe solver for the Euler equations, following the same approach as in Shallow_water_approximate. Namely, we assume that $\\hat{A} = f'(\\hat{q})$ for some average state $\\hat{q}$, and impose the condition of conservation:\n\\begin{align} \\label{EA:cons}\n f'(\\hat{q}) (q_r - q_\\ell) & = f(q_r) - f(q_\\ell).\n\\end{align}\nWe will need the following quantities:\n\\begin{align}\nq & = \\begin{pmatrix} \\rho \\ \\rho u \\ E \\end{pmatrix}, \\ \\ \\ \\ \\ \\ f(q) = \\begin{pmatrix} \\rho u \\ \\rho u^2 + p \\ H u \\rho \\end{pmatrix}, \\\nf'(\\hat{q}) & = \\begin{pmatrix} \n 0 & 1 & 0 \\ \n \\frac{\\gamma-3}{2}\\hat{u}^2 & (3-\\gamma)\\hat{u} & \\gamma-1 \\\n \\frac{\\gamma-1}{2}\\hat{u}^3 - \\hat{u}\\hat{H} & \\hat{H} - (\\gamma-1)\\hat{u}^2 & \\gamma \\hat{u} \\end{pmatrix}.\n\\end{align}\nHere $H = \\frac{E+p}{\\rho}$ is the enthalpy. We have rewritten most expressions involving $E$ in terms of $H$ because it simplifies the derivation that follows. We now solve (\\ref{EA:cons}) to find $\\hat{u}$ and $\\hat{H}$. It turns out that, for the case of a polytropic ideal gas, the average density $\\hat{\\rho}$ plays no role in the Roe solver.\nThe first equation of (\\ref{EA:cons}) is an identity, satisfied independently of our choice of $\\hat{q}$. 
The second equation is (using (\\ref{EA:EOS}))\n\\begin{align}\n \\frac{\\gamma-3}{2}\\hat{u}^2 (\\rho_r - \\rho_\\ell) + (3-\\gamma)\\hat{u}(\\rho_r u_r - \\rho_\\ell u_\\ell) \\ + (\\gamma-1)\\left( \\frac{p_r-p_\\ell}{\\gamma-1} + \\frac{1}{2}(\\rho_r u_r^2 - \\rho_\\ell u_\\ell^2) \\right) & = \\rho_r u_r^2 - \\rho_\\ell u_\\ell^2 + p_r - p_\\ell,\n\\end{align}\nwhich simplifies to a quadratic equation for $\\hat{u}$:\n\\begin{align} \\label{EA:u_quadratic}\n (\\rho_r - \\rho_\\ell)\\hat{u}^2 - 2(\\rho_r u_r - \\rho_\\ell u_\\ell) \\hat{u} + (\\rho_r u_r^2 - \\rho_\\ell u_\\ell^2) & = 0,\n\\end{align}\nwith roots\n\\begin{align}\n \\hat{u}_\\pm & = \\frac{\\rho_r u_r - \\rho_\\ell u_\\ell \\pm \\sqrt{\\rho_r \\rho_\\ell} (u_\\ell - u_r)}{\\rho_r - \\rho_\\ell} = \\frac{\\sqrt{\\rho_r} u_r \\pm \\sqrt{\\rho_\\ell} u_\\ell}{\\sqrt{\\rho_r}\\pm\\sqrt{\\rho_\\ell}}.\n\\end{align}\nNotice that this is identical to the Roe average of the velocity for the shallow water equations, if we replace the density $\\rho$ with depth $h$. 
As before, we choose the root $\\hat{u}_+$ since it is well defined for all values of $\\rho_r, \\rho_\\ell$.\nNext we find $\\hat{H}$ by solving the last equation of (\\ref{EA:cons}), which reads\n\\begin{align}\n \\left( \\frac{\\gamma-1}{2}\\hat{u}^3 - \\hat{u}\\hat{H} \\right)(\\rho_r - \\rho_\\ell) \\ + \\left( \\hat{H} - (\\gamma-1)\\hat{u}^2 \\right)(\\rho_r u_r - \\rho_\\ell u_\\ell) + \\gamma \\hat{u}(E_r - E_\\ell) & = H_r u_r \\rho_r - H_\\ell u_\\ell \\rho_\\ell.\n\\end{align}\nWe can simplify this using the equality $\\gamma E = \\rho H + \\frac{\\gamma-1}{2}\\rho u^2$ and solve for $\\hat{H}$ to find\n\\begin{align}\n \\hat{H}_{\\pm} & = \\frac{\\rho_r H_r (u_r - \\hat{u}_\\pm) - \\rho_\\ell H_\\ell (u_\\ell - \\hat{u}_\\pm)}{\\rho_r u_r - \\rho_\\ell u_\\ell - \\hat{u}_\\pm(\\rho_r -\\rho_\\ell)} \\\n & = \\frac{\\rho_r H_r (u_r - \\hat{u}_\\pm) - \\rho_\\ell H_\\ell (u_\\ell - \\hat{u}_\\pm)}{\\pm\\sqrt{\\rho_r \\rho_\\ell}(u_r-u_\\ell)} \\\n & = \\frac{\\rho_r H_r - \\rho_\\ell H_\\ell \\mp\\sqrt{\\rho_r \\rho_\\ell}(H_r - H_\\ell)}{\\rho_r - \\rho_\\ell} \\\n & = \\frac{\\sqrt{\\rho_r}H_r \\pm \\sqrt{\\rho_\\ell} H_\\ell}{\\sqrt{\\rho_r}\\pm\\sqrt{\\rho_\\ell}}.\n\\end{align}\nOnce more, we take the plus sign in the final expression for $\\hat{H}$, giving the Roe averages\n$$\n\\hat{u} = \\frac{\\sqrt{\\rho_r} u_r + \\sqrt{\\rho_\\ell} u_\\ell}{\\sqrt{\\rho_r} + \\sqrt{\\rho_\\ell}},\n\\qquad \\hat{H} = \\frac{\\sqrt{\\rho_r}H_r + \\sqrt{\\rho_\\ell} H_\\ell}{\\sqrt{\\rho_r} + \\sqrt{\\rho_\\ell}}.\n$$\nTo implement the Roe solver, we also need the eigenvalues and eigenvectors of the averaged flux Jacobian $f'(\\hat{q})$. 
These are just the eigenvalues of the true Jacobian, evaluated at the averaged state:\n\\begin{align}\n \\lambda_1 & = \\hat{u} - \\hat{c}, & \\lambda_2 & = \\hat{u}, & \\lambda_3 & = \\hat{u} + \\hat{c},\n\\end{align}\n\\begin{align}\nr_1 & = \\begin{bmatrix} 1 \\ \\hat{u}-\\hat{c} \\ \\hat{H}-\\hat{u}\\hat{c}\\end{bmatrix} &\nr_2 & = \\begin{bmatrix} 1 \\ \\hat{u} \\ \\frac{1}{2}\\hat{u}^2 \\end{bmatrix} &\nr_3 & = \\begin{bmatrix} 1 \\ \\hat{u}+\\hat{c} \\ \\hat{H}+\\hat{u}\\hat{c}\\end{bmatrix}.\n\\end{align}\nHere $\\hat{c} = \\sqrt{(\\gamma-1)(\\hat{H}-\\hat{u}^2/2)}$.\nSolving the system of equations\n\\begin{align}\nq_r - q_\\ell & = \\sum_{p=1}^3 {\\mathcal W}_p = \\sum_{p=1}^3 \\alpha_p r_p\n\\end{align}\nfor the wave strengths gives\n\\begin{align}\n \\alpha_2 & = \\frac{\\gamma-1}{\\hat{c}^2}\\left( (\\hat{H}-\\hat{u}^2)\\delta_1 + \\hat{u}\\delta_2 - \\delta_3 \\right) \\\n \\alpha_3 & = \\frac{\\delta_2 + (\\hat{c}-\\hat{u})\\delta_1 - \\hat{c}\\alpha_2}{2\\hat{c}} \\\n \\alpha_1 & = \\delta_1 - \\alpha_2 - \\alpha_3,\n\\end{align}\nwhere $\\delta = q_r - q_\\ell$. We now have everything we need to implement the Roe solver.",
"%matplotlib inline\n\n%config InlineBackend.figure_format = 'svg'\nimport numpy as np\nfrom exact_solvers import euler\nfrom utils import riemann_tools as rt\nfrom ipywidgets import interact\nfrom ipywidgets import widgets\nState = euler.Primitive_State\n\ndef roe_averages(q_l, q_r, gamma=1.4):\n rho_sqrt_l = np.sqrt(q_l[0])\n rho_sqrt_r = np.sqrt(q_r[0])\n p_l = (gamma-1.)*(q_l[2]-0.5*(q_l[1]**2)/q_l[0])\n p_r = (gamma-1.)*(q_r[2]-0.5*(q_r[1]**2)/q_r[0])\n denom = rho_sqrt_l + rho_sqrt_r\n u_hat = (q_l[1]/rho_sqrt_l + q_r[1]/rho_sqrt_r)/denom\n H_hat = ((q_l[2]+p_l)/rho_sqrt_l + (q_r[2]+p_r)/rho_sqrt_r)/denom\n c_hat = np.sqrt((gamma-1)*(H_hat-0.5*u_hat**2))\n \n return u_hat, c_hat, H_hat\n \n \ndef Euler_roe(q_l, q_r, gamma=1.4):\n \"\"\"\n Approximate Roe solver for the Euler equations.\n \"\"\"\n \n rho_l = q_l[0]\n rhou_l = q_l[1]\n u_l = rhou_l/rho_l\n rho_r = q_r[0]\n rhou_r = q_r[1]\n u_r = rhou_r/rho_r\n \n u_hat, c_hat, H_hat = roe_averages(q_l, q_r, gamma)\n \n dq = q_r - q_l\n \n s1 = u_hat - c_hat\n s2 = u_hat\n s3 = u_hat + c_hat\n \n alpha2 = (gamma-1.)/c_hat**2 *((H_hat-u_hat**2)*dq[0]+u_hat*dq[1]-dq[2])\n alpha3 = (dq[1] + (c_hat - u_hat)*dq[0] - c_hat*alpha2) / (2.*c_hat)\n alpha1 = dq[0] - alpha2 - alpha3\n \n r1 = np.array([1., u_hat-c_hat, H_hat - u_hat*c_hat])\n r2 = np.array([1., u_hat, 0.5*u_hat**2])\n q_l_star = q_l + alpha1*r1\n q_r_star = q_l_star + alpha2*r2\n \n states = np.column_stack([q_l,q_l_star,q_r_star,q_r])\n speeds = [s1, s2, s3]\n wave_types = ['contact','contact', 'contact']\n \n def reval(xi):\n rho = (xi<s1)*states[0,0] + (s1<=xi)*(xi<s2)*states[0,1] + \\\n (s2<=xi)*(xi<s3)*states[0,2] + (s3<=xi)*states[0,3]\n mom = (xi<s1)*states[1,0] + (s1<=xi)*(xi<s2)*states[1,1] + \\\n (s2<=xi)*(xi<s3)*states[1,2] + (s3<=xi)*states[1,3]\n E = (xi<s1)*states[2,0] + (s1<=xi)*(xi<s2)*states[2,1] + \\\n (s2<=xi)*(xi<s3)*states[2,2] + (s3<=xi)*states[2,3]\n return rho, mom, E\n \n return states, speeds, reval, wave_types",
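As a sanity check on the derivation, we can verify numerically that the Roe averages really satisfy the conservation condition (\ref{EA:cons}). The sketch below is self-contained (it does not reuse the solver above): the helper names `flux` and `roe_jacobian` are ours, written directly from the flux and Jacobian formulas earlier in this section, and the two test states are arbitrary.

```python
import numpy as np

gamma = 1.4

def flux(q):
    # Euler flux f(q) for q = (rho, rho*u, E), with the ideal-gas pressure.
    rho, mom, E = q
    u = mom/rho
    p = (gamma-1.)*(E - 0.5*rho*u**2)
    return np.array([mom, rho*u**2 + p, (E+p)*u])

def roe_jacobian(u_hat, H_hat):
    # f'(q_hat), written in terms of the Roe averages u_hat and H_hat.
    return np.array([
        [0., 1., 0.],
        [0.5*(gamma-3.)*u_hat**2, (3.-gamma)*u_hat, gamma-1.],
        [0.5*(gamma-1.)*u_hat**3 - u_hat*H_hat,
         H_hat - (gamma-1.)*u_hat**2, gamma*u_hat]])

# Two arbitrary states q = (rho, rho*u, E):
q_l = np.array([3., 0.6, 4.5])
q_r = np.array([1., -0.2, 2.0])

# Roe averages, exactly as derived above.
sl, sr = np.sqrt(q_l[0]), np.sqrt(q_r[0])
p_l = (gamma-1.)*(q_l[2] - 0.5*q_l[1]**2/q_l[0])
p_r = (gamma-1.)*(q_r[2] - 0.5*q_r[1]**2/q_r[0])
u_hat = (q_l[1]/sl + q_r[1]/sr)/(sl + sr)
H_hat = ((q_l[2]+p_l)/sl + (q_r[2]+p_r)/sr)/(sl + sr)

# The Roe condition: f'(q_hat)(q_r - q_l) = f(q_r) - f(q_l).
lhs = roe_jacobian(u_hat, H_hat) @ (q_r - q_l)
rhs = flux(q_r) - flux(q_l)
print(np.allclose(lhs, rhs))
```

The check succeeds for any pair of physical states, since $\hat{u}$ was chosen as a root of (\ref{EA:u_quadratic}) and $\hat{H}$ was chosen to satisfy the third equation exactly.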
"An implementation of this solver for use in Clawpack can be found here. Recall that an exact Riemann solver for the Euler equations appears in exact_solvers/euler.py.\nExamples\nLet's compare the Roe approximation to the exact solution. As a first example, we use the Sod shock tube.",
"def compare_solutions(left_state, right_state, solvers=['Exact','HLLE']):\n q_l = np.array(euler.primitive_to_conservative(*left_state))\n q_r = np.array(euler.primitive_to_conservative(*right_state))\n\n outputs = []\n states = {}\n\n for solver in solvers:\n if solver.lower() == 'exact':\n outputs.append(euler.exact_riemann_solution(q_l,q_r))\n if solver.lower() == 'hlle':\n outputs.append(Euler_hlle(q_l, q_r))\n states['hlle'] = outputs[-1][0]\n if solver.lower() == 'roe':\n outputs.append(Euler_roe(q_l, q_r))\n states['roe'] = outputs[-1][0]\n\n plot_function = \\\n rt.make_plot_function([val[0] for val in outputs],\n [val[1] for val in outputs],\n [val[2] for val in outputs],\n [val[3] for val in outputs],\n solvers, layout='vertical',\n variable_names=euler.primitive_variables,\n derived_variables=euler.cons_to_prim,\n vertical_spacing=0.15,\n show_time_legend=True)\n \n interact(plot_function,\n t=widgets.FloatSlider(min=0,max=0.9,step=0.1,value=0.4));\n \n return states\n\nleft = State(Density = 3.,\n Velocity = 0.,\n Pressure = 3.)\nright = State(Density = 1.,\n Velocity = 0.,\n Pressure = 1.)\n\nstates = compare_solutions(left, right, solvers=['Exact','Roe'])\n\neuler.phase_plane_plot(left, right, approx_states=states['roe'])",
"Recall that in the true solution the middle wave is a contact discontinuity and carries only a jump in the density. For that reason the three-dimensional phase space plot is generally shown projected onto the pressure-velocity plane as shown above: The two intermediate states in the true solution have the same pressure and velocity, and so are denoted by a single Middle state in the phase plane plot. \nThe Roe solver, on the other hand, generates a middle wave that carries a jump in all 3 variables and there are two green dots appearing in the plot above for the two middle states (though the pressure jump is quite small in this example). For a Riemann problem like this one with zero initial velocity on both sides, the Roe average velocity must also be zero, so the middle wave is stationary; this is of course not typically true in the exact solution, even when $u_\\ell=u_r=0$.\nHere is a second example. Experiment with the initial states to explore how the Roe solution compares to the exact solution.",
"left = State(Density = 0.1,\n Velocity = 0.,\n Pressure = 0.1)\nright = State(Density = 1.,\n Velocity = 1.,\n Pressure = 1.)\n\nstates = compare_solutions(left, right, solvers=['Exact','Roe'])\n\neuler.phase_plane_plot(left, right, approx_states=states['roe'])",
"Single-shock solution\nNext we demonstrate the exactness property of the Roe solver by applying it to a case where the left and right states are connected by a single shock wave.",
"M = 2. # Mach number of the shock wave\ngamma = 1.4\nmu = 2*(M**2-1)/(M*(gamma+1.))\nright = State(Density = 1.,\n Velocity = 0.,\n Pressure = 1.)\nc_r = np.sqrt(gamma*right.Pressure/right.Density)\n\nrho_l = right.Density * M/(M-mu)\np_l = right.Pressure * ((2*M**2-1)*gamma+1)/(gamma+1)\nu_l = mu*c_r\n\nleft = State(Density = rho_l,\n Velocity = u_l,\n Pressure = p_l)\n\nstates = compare_solutions(left, right, solvers=['Exact','Roe'])\n\neuler.phase_plane_plot(left, right, approx_states=states['roe'])",
"It is evident that the solution consists of a single right-going shock. The exact solution cannot be seen because it coincides exactly with the Roe solution. The path of the shock in the first plot also cannot be seen since it is plotted under the path of the rightmost Roe solution wave. The two solutions differ only in the wave speeds predicted for the other two waves, but since these waves have zero strength this makes no difference.\nTransonic rarefactions and an entropy fix\nHere is an example of a Riemann problem whose solution includes a transonic 2-rarefaction:",
"left = State(Density = 0.1,\n Velocity = -2.,\n Pressure = 0.1)\nright = State(Density = 1.,\n Velocity = -1.,\n Pressure = 1.)\n\nstates = compare_solutions(left, right, solvers=['Exact','Roe'])",
"Notice that in the exact solution, the right edge of the rarefaction travels to the right. In the Roe solution, all waves travel to the left. As in the case of the shallow water equations, here too this behavior can lead to unphysical solutions when this approximate solver is used in a numerical discretization. In order to correct this, we can split the single wave into two when a transonic rarefaction is present, in a way similar to what is done in the shallow water equations. We do not go into details here.\nHLLE Solver\nRecall that an HLL solver uses only two waves with a constant state between them. The Euler equations are our first example for which the number of waves in the true solution is larger than the number of waves in the approximate solution. As one might expect, this leads to noticeable inaccuracy in solutions produced by the solver.\nAgain following Einfeldt, the left-going wave speed is chosen to be the minimum of the Roe speed for the 1-wave and the characteristic speed $\\lambda^1$ in the left state $q_\\ell$. The right-going wave speed is chosen to be the maximum of the Roe speed for the 3-wave and the characteristic speed $\\lambda^3$ in the right state $q_r$. Effectively, this means that\n\\begin{align}\n s_1 & = \\min(u_\\ell - c_\\ell, \\hat{u}-\\hat{c}) \\\n s_2 & = \\max(u_r + c_r, \\hat{u}+\\hat{c})\n\\end{align}\nRecall that once we have chosen these two wave speeds, conservation dictates the value of the intermediate state:\n\\begin{align} \\label{SWA:hll_middle_state}\nq_m = \\frac{f(q_r) - f(q_\\ell) - s_2 q_r + s_1 q_\\ell}{s_1 - s_2}.\n\\end{align}",
"def Euler_hlle(q_l, q_r, gamma=1.4):\n \"\"\"HLLE approximate solver for the Euler equations.\"\"\"\n \n rho_l = q_l[0]\n rhou_l = q_l[1]\n u_l = rhou_l/rho_l\n rho_r = q_r[0]\n rhou_r = q_r[1]\n u_r = rhou_r/rho_r\n E_r = q_r[2]\n E_l = q_l[2]\n \n u_hat, c_hat, H_hat = roe_averages(q_l, q_r, gamma)\n p_r = (gamma-1.) * (E_r - rho_r*u_r**2/2.)\n p_l = (gamma-1.) * (E_l - rho_l*u_l**2/2.)\n H_r = (E_r+p_r) / rho_r\n H_l = (E_l+p_l) / rho_l\n c_r = np.sqrt((gamma-1.)*(H_r-u_r**2/2.))\n c_l = np.sqrt((gamma-1.)*(H_l-u_l**2/2.))\n \n s1 = min(u_l-c_l,u_hat-c_hat)\n s2 = max(u_r+c_r,u_hat+c_hat)\n \n rho_m = (rhou_r - rhou_l - s2*rho_r + s1*rho_l)/(s1-s2)\n rhou_m = (rho_r*u_r**2 - rho_l*u_l**2 \\\n + p_r - p_l - s2*rhou_r + s1*rhou_l)/(s1-s2)\n E_m = ( u_r*(E_r+p_r) - u_l*(E_l+p_l) - s2*E_r + s1*E_l)/(s1-s2)\n q_m = np.array([rho_m, rhou_m, E_m])\n \n states = np.column_stack([q_l,q_m,q_r])\n speeds = [s1, s2]\n wave_types = ['contact','contact']\n \n def reval(xi):\n rho = (xi<s1)*rho_l + (s1<=xi)*(xi<=s2)*rho_m + (s2<xi)*rho_r\n mom = (xi<s1)*rhou_l + (s1<=xi)*(xi<=s2)*rhou_m + (s2<xi)*rhou_r\n E = (xi<s1)*E_l + (s1<=xi)*(xi<=s2)*E_m + (s2<xi)*E_r\n return rho, mom, E\n\n return states, speeds, reval, wave_types",
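The middle-state formula guarantees conservation by construction: summing the Rankine-Hugoniot jump conditions across the two waves recovers the full flux difference. Here is a small self-contained check of that identity; the helper names `flux` and `hll_middle_state` are ours, and the states and speeds are arbitrary choices rather than values from the examples.

```python
import numpy as np

gamma = 1.4

def flux(q):
    # Euler flux for q = (rho, rho*u, E), ideal gas.
    rho, mom, E = q
    u = mom/rho
    p = (gamma-1.)*(E - 0.5*rho*u**2)
    return np.array([mom, rho*u**2 + p, (E+p)*u])

def hll_middle_state(q_l, q_r, s1, s2):
    # The middle state dictated by conservation, as in the formula above.
    return (flux(q_r) - flux(q_l) - s2*q_r + s1*q_l)/(s1 - s2)

q_l = np.array([3., 0., 7.5])   # a Sod-like left state
q_r = np.array([1., 0., 2.5])   # right state
s1, s2 = -1.5, 1.5              # any two distinct wave speeds

q_m = hll_middle_state(q_l, q_r, s1, s2)

# Conservation: the total flux difference equals the sum of jumps times speeds.
lhs = flux(q_r) - flux(q_l)
rhs = s1*(q_m - q_l) + s2*(q_r - q_m)
print(np.allclose(lhs, rhs))
```

Rearranging the definition of $q_m$ shows this identity holds for any choice of $s_1 \ne s_2$, which is exactly why the HLL construction is conservative.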
"Examples",
"left = State(Density = 3.,\n Velocity = 0.,\n Pressure = 3.)\nright = State(Density = 1.,\n Velocity = 0.,\n Pressure = 1.)\n \nstates = compare_solutions(left, right, solvers=['Exact','HLLE'])\n\neuler.phase_plane_plot(left, right, approx_states=states['hlle'])",
"Preservation of positivity\nJust as we saw in the case of the shallow water equations, the Roe solver (or any linearized solver) for the Euler equations fails to preserve positivity of the pressure and/or density in some situations. Here is one example.",
"left = State(Density = 1.,\n Velocity = -5.,\n Pressure = 1.)\nright = State(Density = 1.,\n Velocity = 1.,\n Pressure = 1.)\n\nstates = compare_solutions(left, right, solvers=['Exact', 'Roe'])",
"As we can see, in this example each Roe solver wave moves much more slowly than the leading edge of the corresponding true rarefaction. In order to maintain conservation, this implies that the middle Roe state must have lower density than the true middle state. This leads to a negative density. Note that the velocity and pressure take huge values in the intermediate state.\nThe HLLE solver, on the other hand, guarantees positivity of the density and pressure. Since the HLLE wave speed in the case of a rarefaction is always the speed of the leading edge of the true rarefaction, and since the HLLE solution is conservative, the density in a rarefaction will always be at least as great as that of the true solution. This can be seen clearly in the example below.",
"left = State(Density = 1.,\n Velocity = -10.,\n Pressure = 1.)\nright = State(Density = 1.,\n Velocity = 1.,\n Pressure = 1.)\n\nstates = compare_solutions(left, right, solvers=['Exact', 'HLLE']);\n\neuler.phase_plane_plot(left,right,approx_states=states['hlle'])",
"Again recall that we are only considering a single Riemann solution in this chapter. In FV_compare we observe the effect of using these approximate solvers in a full discretization."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
MartyWeissman/Python-for-number-theory
|
P3wNT Notebook 6.ipynb
|
gpl-3.0
|
[
"Part 6: Ciphers and Key exchange in Python 3.x\nIn this notebook, we introduce cryptography -- how to communicate securely over insecure channels. We begin with a study of two basic ciphers, the Caesar cipher and its fancier variant, the Vigenère cipher. The Vigenère cipher uses a key to turn plaintext (i.e., the message) into ciphertext (the coded message), and uses the same key to turn the ciphertext back into plaintext. Therefore, two parties can communicate securely if they -- and only they -- possess the key. \nIf the security of communication rests on possession of a common key, then we're left with a new problem: how do the two parties agree on a common key, especially if they are far apart and communicating over an insecure channel? \nA clever solution to this problem was published in 1976 by Whitfield Diffie and Martin Hellman, and so it's called Diffie-Hellman key exchange. It takes advantage of modular arithmetic: the existence of a primitive root (modulo a prime) and the difficulty of solving the discrete logarithm problem. \nThis part complements Chapter 6 of An Illustrated Theory of Numbers.\nTable of Contents\n\nCiphers\nKey exchange\n\n<a id='cipher'></a>\nCiphers\nA cipher is a way of transforming a message, called the plaintext into a different form, the ciphertext, which conceals the meaning to all but the intended recipient(s). A cipher is a code, and can take many forms. A substitution cipher might simply change every letter to a different letter in the alphabet. This is the idea behind \"Cryptoquip\" puzzles. These are not too hard for people to solve, and are easy for computers to solve, using frequency analysis (understanding how often different letters or letter-combinations occur).\nASCII code and the Caesar cipher\nEven though substitution ciphers are easy to break, they are a good starting point. To implement substitution ciphers in Python, we need to study the string type in a bit more detail. 
To declare a string variable, just put your string in quotes. You can use any letters, numbers, spaces, and many symbols inside a string. You can enclose your string by single quotes, like 'Hello' or double-quotes, like \"Hello\". This flexibility is convenient, if you want to use quotes within your string. For example, the string Prince's favorite prime is 1999 should be described in Python with double-quotes \"Prince's favorite prime is 1999\" so that the apostrophe doesn't confuse things. \nStrings are indexed, and their letters can be retrieved as if the string were a list of letters. Python experts will note that strings are immutable while lists are mutable objects, but we aren't going to worry about that here.",
"W = \"Hello\"\nprint(W)\nfor j in range(len(W)): # len(W) is the length of the string W.\n print(W[j]) # Access the jth character of the string.",
"Each \"letter\" of a string again belongs to the string type. A string of length one is called a character.",
"print(type(W))\nprint(type(W[0])) # W[0] is a character.",
"Since computers store data in binary, the designers of early computers (1960s) created a code called ASCII (American Standard Code for Information Interchange) to associate to each character a number between 0 and 127. Every number between 0 and 127 is represented in binary by 7 bits (between 0000000 and 1111111), and so each character is stored with 7 bits of memory. Later, ASCII was extended with another 128 characters, so that codes between 0 and 255 were used, requiring 8 bits. 8 bits of memory is called a byte. One byte of memory suffices to store one (extended ASCII) character.\nYou might notice that there are 256 ASCII codes available, but there are fewer than 256 characters available on your keyboard, even once you include symbols like # and ;. Some of these \"extra\" codes are for accented letters, and others are relics of old computers. For example, ASCII code 7 (0000111) stands for the \"bell\", and readers born in the 1970s or earlier might remember making the Apple II computer beep by pressing Control-G on the keyboard (\"G\" is the 7th letter). You can look up a full ASCII table if you're curious. \nNowadays, the global community of computer users requires far more than 256 \"letters\" -- there are many alphabets around the world! So instead of ASCII, we can access over 100 thousand unicode characters. Scroll through a unicode table to see what is possible. With emoji, unicode tables have entered unexpected territory. Python version 3.x fully supports Unicode in all strings.\nBut here we stay within old-fashioned ASCII codes, since they will suffice for basic English messages. Python has built-in commands chr and ord for converting from code-number (0--255) to character and back again.",
"chr(65)\n\nord('A')",
"The following code will produce a table of the ASCII characters with codes between 32 and 126. This is a good range which includes all the most common English characters and symbols on a U.S. keyboard. Note that ASCII code 32 corresponds to an empty space (an important character for long messages!)",
"for a in range(32,127):\n c = chr(a)\n print(\"ASCII {} is {}\".format(a, c))",
"Since we only work with the ASCII range between 32 and 126, it will be useful to \"cycle\" other numbers into this range. For example, we will interpret 127 as 32, 128 as 33, etc., when we convert out-of-range numbers into characters.\nThe following function forces a number into a given range, using the mod operator. It's a common trick, to make lists loop around cyclically.",
"def inrange(n, range_min, range_max):\n '''\n The input number n can be any integer.\n The output number will be between range_min and range_max (inclusive),\n and congruent to n modulo the length of the range.\n If the input number is already within range, it will not change.\n '''\n range_len = range_max - range_min + 1\n a = (n - range_min) % range_len + range_min\n return a\n\ninrange(13,1,10)\n\ninrange(17,5,50)",
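As a quick property check (not part of the original exposition), we can confirm that `inrange` always lands in the target interval and preserves the input modulo the range length, which is exactly what the cipher below relies on. The function is restated here so the cell stands alone.

```python
def inrange(n, range_min, range_max):
    # Cycle n into [range_min, range_max], preserving n modulo the range length.
    range_len = range_max - range_min + 1
    return (n - range_min) % range_len + range_min

for n in range(-300, 300):
    a = inrange(n, 32, 126)    # the printable ASCII range used below
    assert 32 <= a <= 126      # always lands in range
    assert (a - n) % 95 == 0   # congruent to n mod 95
print('inrange is cyclic on ASCII 32..126')
```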
"Now we can implement a substitution cipher by converting characters to their ASCII codes, shuffling the codes, and converting back. One of the simplest substitution ciphers is called a Caesar cipher, in which each character is shifted -- by a fixed amount -- down the list. For example, a Caesar cipher of shift 3 would send 'A' to 'D' and 'B' to 'E', etc.. Near the end of the list, characters are shifted back to the beginning -- the list is considered cyclicly, using our inrange function. \nHere is an implementation of the Caesar cipher, using the ASCII range between 32 and 126. We begin with a function to shift a single character.",
"def Caesar_shift(c, shift):\n '''\n Shifts the character c by shift units\n within the ASCII table between 32 and 126. \n The shift parameter can be any integer!\n '''\n ascii = ord(c)\n a = ascii + shift # Now we have a number between 32+shift and 126+shift.\n a = inrange(a,32,126) # Put the number back in range.\n return chr(a)",
"Let's see the effect of the Caesar cipher on our ASCII table.",
"for a in range(32,127):\n c = chr(a)\n print(\"ASCII {} is {}, which shifts to {}\".format(a, c, Caesar_shift(c,5))) # Shift by 5.",
"Now we can use the Caesar cipher to encrypt strings.",
"def Caesar_cipher(plaintext, shift):\n ciphertext = ''\n for c in plaintext: # Iterate through the characters of a string.\n ciphertext = ciphertext + Caesar_shift(c,shift) \n return ciphertext\n\nprint(Caesar_cipher('Hello! Can you read this?', 5)) # Shift forward 5 units in ASCII.",
"As designed, the Caesar cipher turns plaintext into ciphertext by using a shift of the ASCII table. To decipher the ciphertext, one can just use the Caesar cipher again, with the negative shift.",
"print(Caesar_cipher('Mjqqt&%%Hfs%~tz%wjfi%ymnxD', -5)) # Shift back 5 units in ASCII.",
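A property worth confirming: shifting by $s$ and then by $-s$ always recovers the original message, for any integer shift. The check below restates the three functions compactly (with an equivalent one-line `inrange`) so that it runs on its own; the shifts tested are arbitrary.

```python
def inrange(n, range_min, range_max):
    # Cycle n into [range_min, range_max], preserving congruence.
    range_len = range_max - range_min + 1
    return (n - range_min) % range_len + range_min

def Caesar_shift(c, shift):
    # Shift one character cyclically within printable ASCII 32..126.
    return chr(inrange(ord(c) + shift, 32, 126))

def Caesar_cipher(plaintext, shift):
    return ''.join(Caesar_shift(c, shift) for c in plaintext)

message = 'Hello! Can you read this?'
for shift in [1, 5, 47, 95, -13, 1000]:
    # A shift of 95 is a full cycle, so that ciphertext equals the message.
    assert Caesar_cipher(Caesar_cipher(message, shift), -shift) == message
print('Every shift round-trips back to the plaintext.')
```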
"The Vigenère cipher\nThe Caesar cipher is pretty easy to break, by a brute force attack (shift by all possible values) or a frequency analysis (compare the frequency of characters in a message to the frequency of characters in typical English messages, to make a guess). \nThe Vigenère cipher is a variant of the Caesar cipher which uses an encryption key to vary the shift-parameter throughout the encryption process. For example, to encrypt the message \"This is very secret\" using the key \"Key\", you line up the characters of the message above repeated copies of the key.\nT | h | i | s | | i | s | | v | e | r | y | | s | e | c | r | e | t\n--|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--\nK | e | y | K | e | y | K | e | y | K | e | y | K | e | y | K | e | y | K\nThen, you turn everything into ASCII (or your preferred numerical system), and use the bottom row to shift the top row.\nASCII message | 84 | 104 | 105 | 115 | 32 | 105 | 115 | 32 | 118 | 101 | 114 | 121 | 32 | 115 | 101 | 99 | 114 | 101 | 116 \n---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---\nShift | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75 | 101 | 121 | 75 \nASCII shifted | 159 | 205 | 226 | 190 | 133 | 226 | 190 | 133 | 239 | 176 | 215 | 242 | 107 | 216 | 222 | 174 | 215 | 222 | 191 \nASCII shifted in range | 64 | 110 | 36 | 95 | 38 | 36 | 95 | 38 | 49 | 81 | 120 | 52 | 107 | 121 | 32 | 79 | 120 | 32 | 96 \nFinally, the shifted ASCII codes are converted back into characters for transmission. In this case, the codes 64,110,36,95, etc., are converted to the ciphertext \"@n$_&$_&1Qx4ky Ox \\`\".\nThe Vigenère cipher is much harder to crack than the Caesar cipher, if you don't have the key. Indeed, the varying shifts make frequency analysis more difficult. 
The Vigenère cipher is weak by today's standards (see Wikipedia for a description of 19th century attacks), but illustrates the basic actors in a symmetric key cryptosystem: the plaintext, ciphertext, and a single key. Today, symmetric key cryptosystems like AES and 3DES are used all the time for secure communication.\nBelow, we implement the Vigenère cipher.",
"def Vigenere_cipher(plaintext, key):\n ciphertext = '' # Start with an empty string\n for j in range(len(plaintext)): \n c = plaintext[j] # the jth letter of the plaintext\n key_index = j % len(key) # Cycle through letters of the key.\n shift = ord(key[key_index]) # How much we shift c by.\n ciphertext = ciphertext + Caesar_shift(c,shift) # Add new letter to ciphertext\n return ciphertext\n\nprint(Vigenere_cipher('This is very secret', 'Key')) # 'Key' is probably a bad key!!",
"The Vigenère cipher is called a symmetric cryptosystem, because the same key that is used to encrypt the plaintext can be used to decrypt the ciphertext. All we do is subtract the shift at each stage.",
"def Vigenere_decipher(ciphertext, key):\n plaintext = '' # Start with an empty string\n for j in range(len(ciphertext)): \n c = ciphertext[j] # the jth letter of the ciphertext\n key_index = j % len(key) # Cycle through letters of the key.\n shift = - ord(key[key_index]) # Note the negative sign to decipher!\n plaintext = plaintext + Caesar_shift(c,shift) # Add new letter to plaintext\n return plaintext\n\nprint(Vigenere_decipher('@n$_&$_&1Qx4ky Ox `', 'Key'))\n\n# Try a few cipher/deciphers yourself to get used to the Vigenere system.\n",
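To tie the code back to the worked table above, here is a compact encrypt/decrypt pair; the single function `Vigenere` with a sign parameter is our rewrite, not from the original notebook. Encrypting 'This is very secret' with the key 'Key' reproduces the ciphertext computed by hand in the table, and decrypting recovers the plaintext.

```python
def inrange(n, range_min, range_max):
    # Cycle n into [range_min, range_max], preserving congruence.
    range_len = range_max - range_min + 1
    return (n - range_min) % range_len + range_min

def Caesar_shift(c, shift):
    # Shift one character cyclically within printable ASCII 32..126.
    return chr(inrange(ord(c) + shift, 32, 126))

def Vigenere(text, key, sign):
    # sign = +1 encrypts, sign = -1 decrypts.
    return ''.join(Caesar_shift(c, sign * ord(key[j % len(key)]))
                   for j, c in enumerate(text))

ciphertext = Vigenere('This is very secret', 'Key', +1)
print(ciphertext)                        # matches the hand computation above
print(Vigenere(ciphertext, 'Key', -1))   # recovers the plaintext
```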
"The Vigenère cipher becomes an effective way for two parties to communicate securely, as long as they share a secret key. In the 19th century, this often meant that the parties would require an initial in-person meeting to agree upon a key, or a well-guarded messenger would carry the key from one party to the other. \nToday, as we wish to communicate securely over long distances on a regular basis, the process of agreeing on a key is more difficult. It seems like a chicken-and-egg problem, where we need a shared secret to communicate securely, but we can't share a secret without communicating securely in the first place! \nRemarkably, this secret-sharing problem can be solved with some modular arithmetic tricks. This is the subject of the next section.\nExercises\n\n\nA Caesar cipher was used to encode a message, with the resulting ciphertext: 'j!\\'1r$v1\"$v&&+1t}v(v$2'. Use a loop (brute force attack) to figure out the original message. \n\n\nImagine that you encrypt a long message (e.g., 1000 words of standard English) with a Vigenère cipher. How might you detect the length of the key, if it is short (e.g. 3 or 4 characters)?\n\n\nConsider running a plaintext message through a Vigenère cipher with a 3-character key, and then running the ciphertext through a Vigenère cipher with a 4-character key. Explain how this is equivalent to running the original message through a single cipher with a 12-character key.\n\n\n<a id='keyexchange'></a>\nKey exchange\nNow we study Diffie-Hellman key exchange, a remarkable way for two parties to share a secret without ever needing to directly communicate the secret with each other. Their method is based on properties of modular exponentiation and the existence of a primitive root modulo prime numbers. 
\nPrimitive roots and Sophie Germain primes\nIf $p$ is a prime number, and $GCD(a,p) = 1$, then recall Fermat's Little Theorem: $$a^{p-1} \\equiv 1 \\text{ mod } p.$$\nIt may be the case that $a^\\ell \\equiv 1$ mod $p$ for some smaller (positive) value of $\\ell$ however. The smallest such positive value of $\\ell$ is called the order (multiplicative order, to be precise) of $a$ modulo $p$, and it is always a divisor of $p-1$.\nThe following code determines the order of a number, mod $p$, with a brute force approach.",
"def mult_order(a,p):\n '''\n Determines the (multiplicative) order of an integer\n a, modulo p. Here p is prime, and GCD(a,p) = 1.\n If bad inputs are used, this might lead to a \n never-ending loop!\n '''\n current_number = a % p\n current_exponent = 1\n while current_number != 1:\n current_number = (current_number * a)%p\n current_exponent = current_exponent + 1\n return current_exponent\n \n\nfor j in range(1,37):\n print(\"The multiplicative order of {} modulo 37 is {}\".format(j,mult_order(j,37)))\n # These orders should all be divisors of 36.",
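The claim that every order divides $p-1$ can be checked directly, and the elements of maximal order are exactly the primitive roots. The self-contained cell below (our addition, restating `mult_order`) confirms this for $p = 37$ and recovers the twelve primitive roots mod 37.

```python
def mult_order(a, p):
    # Multiplicative order of a mod p; assumes p prime and gcd(a, p) = 1.
    current, exponent = a % p, 1
    while current != 1:
        current = (current * a) % p
        exponent += 1
    return exponent

p = 37
orders = {a: mult_order(a, p) for a in range(1, p)}
assert all((p - 1) % d == 0 for d in orders.values())  # every order divides p-1

prim_roots = sorted(a for a, d in orders.items() if d == p - 1)
print(prim_roots)  # the 12 primitive roots mod 37
```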
"A theorem of Gauss states that, if $p$ is prime, there exists an integer $b$ whose order is precisely $p-1$ (as big as possible!). Such an integer is called a primitive root modulo $p$. For example, the previous computation found 12 primitive roots modulo $37$: they are 2,5,13,15,17,18,19,20,22,24,32,35. To see these illustrated (mod 37), check out this poster (yes, that is blatant self-promotion!)\nFor everything that follows, suppose that $p$ is a prime number. Not only do primitive roots exist mod $p$, but they are pretty common. In fact, the number of primitive roots mod $p$ equals $\\phi(p-1)$, where $\\phi$ denotes Euler's totient. On average, $\\phi(n)$ is about $6 / \\pi^2$ times $n$ (for positive integers $n$). While numbers of the form $p-1$ are not \"average\", one still expects that $\\phi(p-1)$ is a not-very-small fraction of $p-1$. You should not have to look very far if you want to find a primitive root.\nThe more difficult part, in practice, is determining whether a number $b$ is or is not a primitive root modulo $p$. When $p$ is very large (like hundreds or thousands of digits), $p-1$ is also very large. It is certainly not practical to cycle all the powers (from $1$ to $p-1$) of $b$ to determine whether $b$ is a primitive root!\nThe better approach, sometimes, is to use the fact that the multiplicative order of $b$ must be a divisor of $p-1$. If one can find all the divisors of $p-1$, then one can just check whether $b^d \\equiv 1$ mod $p$ for each divisor $d$. This makes the problem of determining whether $b$ is a primitive root just about as hard as the problem of factoring $p-1$. This is a hard problem, in general!\nBut, for the application we're interested in, we will want to have a large prime number $p$ and a primitive root mod $p$. The easiest way to do this is to use a Sophie Germain prime $q$. A Sophie Germain prime is a prime number $q$ such that $2q + 1$ is also prime. 
When $q$ is a Sophie Germain prime, the resulting prime $p = 2q + 1$ is called a safe prime.\nObserve that when $p$ is a safe prime, the prime decomposition of $p-1$ is \n$$p-1 = 2 \\cdot q.$$\nThat's it. So the possible multiplicative orders of an element $b$, mod $p$, are the divisors of $2q$, which are\n$$1, 2, q, \\text{ or } 2q.$$\nIn order to check whether $b$ is a primitive root, modulo a safe prime $p = 2q + 1$, we must check just three things: is $b \\equiv 1$, is $b^2 \\equiv 1$, or is $b^q \\equiv 1$, mod $p$? If the answer to these three questions is NO, then $b$ is a primitive root mod $p$.",
"def is_primroot_safe(b,p):\n '''\n Checks whether b is a primitive root modulo p,\n when p is a safe prime. If p is not safe,\n the results will not be good!\n '''\n q = (p-1) // 2 # q is the Sophie Germain prime\n if b%p == 1: # Is the multiplicative order 1?\n return False\n if (b*b)%p == 1: # Is the multiplicative order 2?\n return False\n if pow(b,q,p) == 1: # Is the multiplicative order q?\n return False\n return True # If not, then b is a primitive root mod p.",
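We can test this shortcut against a brute-force order computation. For the safe prime $p = 23 = 2 \cdot 11 + 1$, the three quick checks should agree with the definition of a primitive root for every $b$, and the number of primitive roots should equal $\phi(22) = 10$. The cell restates both functions so it runs independently of the rest of the notebook.

```python
def is_primroot_safe(b, p):
    # Primitive-root test for a safe prime p = 2q + 1: b is a primitive
    # root iff none of b, b^2, b^q is congruent to 1 mod p.
    q = (p - 1) // 2
    if b % p == 1 or (b * b) % p == 1 or pow(b, q, p) == 1:
        return False
    return True

def mult_order(a, p):
    # Brute-force multiplicative order of a mod p (p prime, gcd(a,p) = 1).
    current, exponent = a % p, 1
    while current != 1:
        current = (current * a) % p
        exponent += 1
    return exponent

p = 23  # safe prime, with Sophie Germain prime q = 11
for b in range(1, p):
    assert is_primroot_safe(b, p) == (mult_order(b, p) == p - 1)

num_roots = sum(is_primroot_safe(b, p) for b in range(1, p))
print(num_roots)  # equals phi(p-1) = phi(22) = 10
```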
"This would not be very useful if we couldn't find Sophie Germain primes. Fortunately, they are not so rare. The first few are 2, 3, 5, 11, 23, 29, 41, 53, 83, 89. It is expected, but unproven that there are infinitely many Sophie Germain primes. In practice, they occur fairly often. If we consider numbers of magnitude $N$, about $1 / \\log(N)$ of them are prime. Among such primes, we expect about $1.3 / \\log(N)$ to be Sophie Germain primes. In this way, we can expect to stumble upon Sophie Germain primes if we search for a bit (and if $\\log(N)^2$ is not too large).\nThe code below tests whether a number $p$ is a Sophie Germain prime. We construct it by simply testing whether $p$ and $2p+1$ are both prime. We use the Miller-Rabin test (the code from the previous Python notebook) in order to test whether each is prime.",
"from random import randint # randint chooses random integers.\n\ndef Miller_Rabin(p, base):\n '''\n Tests whether p is prime, using the given base.\n The result False implies that p is definitely not prime.\n The result True implies that p **might** be prime.\n It is not a perfect test!\n '''\n result = 1\n exponent = p-1\n modulus = p\n bitstring = bin(exponent)[2:] # Chop off the '0b' part of the binary expansion of exponent\n for bit in bitstring: # Iterates through the \"letters\" of the string. Here the letters are '0' or '1'.\n sq_result = result*result % modulus # We need to compute this in any case.\n if sq_result == 1:\n if (result != 1) and (result != exponent): # Note that exponent is congruent to -1, mod p.\n return False # a ROO violation occurred, so p is not prime\n if bit == '0':\n result = sq_result \n if bit == '1':\n result = (sq_result * base) % modulus\n if result != 1:\n return False # a FLT violation occurred, so p is not prime.\n \n return True # If we made it this far, no violation occurred and p might be prime.\n\ndef is_prime(p, witnesses=50): # witnesses is a parameter with a default value.\n '''\n Tests whether a positive integer p is prime.\n For p < 2^64, the test is deterministic, using known good witnesses.\n Good witnesses come from a table at Wikipedia's article on the Miller-Rabin test,\n based on research by Pomerance, Selfridge and Wagstaff, Jaeschke, Jiang and Deng.\n For larger p, a number (by default, 50) of witnesses are chosen at random.\n '''\n if (p%2 == 0): # Might as well take care of even numbers at the outset!\n if p == 2:\n return True\n else:\n return False \n \n if p > 2**64: # We use the probabilistic test for large p.\n trial = 0\n while trial < witnesses:\n trial = trial + 1\n witness = randint(2,p-2) # A good range for possible witnesses\n if Miller_Rabin(p,witness) == False:\n return False\n return True\n \n else: # We use a determinisic test for p <= 2**64.\n verdict = Miller_Rabin(p,2)\n if p < 2047:\n 
return verdict # The witness 2 suffices.\n verdict = verdict and Miller_Rabin(p,3)\n if p < 1373653:\n return verdict # The witnesses 2 and 3 suffice.\n verdict = verdict and Miller_Rabin(p,5)\n if p < 25326001:\n return verdict # The witnesses 2,3,5 suffice.\n verdict = verdict and Miller_Rabin(p,7)\n if p < 3215031751:\n return verdict # The witnesses 2,3,5,7 suffice.\n verdict = verdict and Miller_Rabin(p,11)\n if p < 2152302898747:\n return verdict # The witnesses 2,3,5,7,11 suffice.\n verdict = verdict and Miller_Rabin(p,13)\n if p < 3474749660383:\n return verdict # The witnesses 2,3,5,7,11,13 suffice.\n verdict = verdict and Miller_Rabin(p,17)\n if p < 341550071728321:\n return verdict # The witnesses 2,3,5,7,11,13,17 suffice.\n verdict = verdict and Miller_Rabin(p,19) and Miller_Rabin(p,23)\n if p < 3825123056546413051:\n return verdict # The witnesses 2,3,5,7,11,13,17,19,23 suffice.\n verdict = verdict and Miller_Rabin(p,29) and Miller_Rabin(p,31) and Miller_Rabin(p,37)\n return verdict # The witnesses 2,3,5,7,11,13,17,19,23,29,31,37 suffice for testing up to 2^64. \n \n\ndef is_SGprime(p):\n '''\n Tests whether p is a Sophie Germain prime\n '''\n if is_prime(p): # A bit faster to check whether p is prime first.\n if is_prime(2*p + 1): # and *then* check whether 2p+1 is prime.\n return True\n return False",
"Let's test this out by finding the Sophie Germain primes up to 100, and their associated safe primes.",
"for j in range(1,100):\n if is_SGprime(j):\n print(j, 2*j+1)",
"Next, we find the first 100-digit Sophie Germain prime! This might take a minute!",
"test_number = 10**99 # Start looking at the first 100-digit number, which is 10^99.\nwhile not is_SGprime(test_number):\n test_number = test_number + 1\nprint(test_number)",
"In the seconds or minutes your computer was running, it checked the primality of almost 90 thousand numbers, each with 100 digits. Not bad!\nThe Diffie-Hellman protocol\nWhen we study protocols for secure communication, we must keep track of the communicating parties (often called Alice and Bob), and who has knowledge of what information. We assume at all times that the \"wire\" between Alice and Bob is tapped -- anything they say to each other is actively monitored, and is therefore public knowledge. We also assume that what happens on Alice's private computer is private to Alice, and what happens on Bob's private computer is private to Bob. Of course, these last two assumptions are big assumptions -- they point towards the danger of computer viruses which infect computers and can violate such privacy!\nThe goal of the Diffie-Hellman protocol is -- at the end of the process -- for Alice and Bob to share a secret without ever having communicated the secret with each other. The process involves a series of modular arithmetic calculations performed on each of Alice and Bob's computers.\nThe process begins when Alice or Bob creates and publicizes a large prime number p and a primitive root g modulo p. It is best, for efficiency and security, to choose a safe prime p. Alice and Bob can create their own safe prime, or choose one from a public list online, e.g., from the RFC 3526 memo. Nowadays, it's common to take p with 2048 bits, i.e., a prime which is between $2^{2046}$ and $2^{2047}$ (a number with 617 decimal digits!).\nFor the purposes of this introduction, we use a smaller safe prime, with about 256 bits. We use the SystemRandom functionality of the random package to create a good random prime. It is not so much of an issue here, but in general one must be very careful in cryptography that one's \"random\" numbers are really \"random\"! 
The SystemRandom function uses chaotic properties of your computer's innards in order to initialize a random number generator, and is considered cryptographically secure.",
"from random import SystemRandom # Import the necessary package.\n\nr = SystemRandom().getrandbits(256)\nprint(\"The random integer is {}\".format(r))\nprint(\"with binary expansion {}\".format(bin(r))) # r is an integer constructed from 256 random bits.\nprint(\"with bit-length {}.\".format(len(bin(r)) - 2)) # In case you want to check. Remember '0b' is at the beginning.\n\ndef getrandSGprime(bitlength):\n '''\n Creates a random Sophie Germain prime p with about \n bitlength bits.\n '''\n while True:\n p = SystemRandom().getrandbits(bitlength) # Choose a really random number.\n if is_SGprime(p):\n return p ",
"The function above searches and searches among random numbers until it finds a Sophie Germain prime. The (possibly endless!) search is performed with a while True: loop that may look strange. The idea is to stay in the loop until such a prime is found. Then the return p command returns the found prime as output and halts the loop. One must be careful with while True loops, since they are structured to run forever -- if there's not a loop-breaking command like return or break inside the loop, your computer will be spinning for a long time.",
"q = getrandSGprime(256) # A random ~256 bit Sophie Germain prime\np = 2*q + 1 # And its associated safe prime\n\nprint(\"p is \",p) # Just to see what we're working with.\nprint(\"q is \",q)",
"Next we find a primitive root, modulo the safe prime p.",
"def findprimroot_safe(p):\n '''\n Finds a primitive root, \n modulo a safe prime p.\n '''\n b = 2 # Start trying with 2.\n while True: # We just keep on looking.\n if is_primroot_safe(b,p):\n return b\n b = b + 1 # Try the next base. Shouldn't take too long to find one!\n\ng = findprimroot_safe(p)\nprint(g)",
"The pair of numbers $(g, p)$, the primitive root and the safe prime, chosen by either Alice or Bob, is now made public. They can post their $g$ and $p$ on a public website or shout it in the streets. It doesn't matter. They are just tools for their secret-creation algorithm below.\nAlice and Bob's private secrets\nNext, Alice and Bob invent private secret numbers $a$ and $b$. They do not tell anyone these numbers. Not each other. Not their family. Nobody. They don't write them on a chalkboard, or leave them on a thumbdrive that they lose. These are really secret.\nBut they don't use their phone numbers, or social security numbers. It's best for Alice and Bob to use a secure random number generator on their separate private computers to create $a$ and $b$. They are often 256 bit numbers in practice, so that's what we use below.",
"a = SystemRandom().getrandbits(256) # Alice's secret number\nb = SystemRandom().getrandbits(256) # Bob's secret number\n\nprint(\"Only Alice should know that a = {}\".format(a))\nprint(\"Only Bob should know that b = {}\".format(b))\n\nprint(\"But everyone can know p = {} and g = {}\".format(p,g))",
"Now Alice and Bob use their secrets to generate new numbers. Alice computes the number \n$$A = g^a \\text{ mod } p,$$\nand Bob computes the number\n$$B = g^b \\text{ mod } p.$$",
"A = pow(g,a,p) # This would be computed on Alice's computer.\nB = pow(g,b,p) # This would be computed on Bob's computer.",
"Now Alice and Bob do something that seems very strange at first. Alice sends Bob her new number $A$ and Bob sends Alice his new number $B$. Since they are far apart, and the channel is insecure, we can assume everyone in the world now knows $A$ and $B$.",
"print(\"Everyone knows A = {} and B = {}.\".format(A,B))",
"Now Alice, on her private computer, computes $B^a$ mod $p$. She can do that because everyone knows $B$ and $p$, and she knows $a$ too.\nSimilarly, Bob, on his private computer, computes $A^b$ mod $p$. He can do that because everyone knows $A$ and $p$, and he knows $b$ too.\nAlice and Bob do not share the results of their computations!",
"print(pow(B,a,p)) # This is what Alice computes.\n\nprint(pow(A,b,p)) # This is what Bob computes.",
"Woah! What happened? In terms of exponents, it's elementary. For\n$$B^a = (g^{b})^a = g^{ba} = g^{ab} = (g^a)^b = A^b.$$\nSo these two computations yield the same result (mod $p$, the whole way through).\nIn the end, we find that Alice and Bob share a secret. We call this secret number $S$.\n$$S = B^a = A^b.$$",
"S = pow(B,a,p) # Or we could have used pow(A,b,p)\nprint(S)",
"This common secret $S$ can be used as a key for Alice and Bob to communicate hereafter. For example, they might use $S$ (converted to a string, if needed) as the key for a Vigenère cipher, and chat with each other knowing that only they have the secret key to encrypt and decrypt their messages.",
"# We use the triple-quotes for a long string, that occupies multiple lines.\n# The backslash at the end of the line tells Python to ignore the newline character.\n# Imagine that Alice has a secret message she wants to send to Bob. \n# She writes the plaintext on her computer. \n\nplaintext = '''Did you hear that the American Mathematical Society has an annual textbook sale? \\\n It's 40 percent off for members and 25 percent off for everyone else.'''\n\n# Now Alice uses the secret S (as a string) to encrypt. \nciphertext = Vigenere_cipher(plaintext, str(S))\nprint(ciphertext)\n# Alice sends the following ciphertext to Bob, over an insecure channel.\n\n# When Bob receives the ciphertext, he decodes it with the secret S again.\nprint(Vigenere_decipher(ciphertext, str(S)))",
"To have confidence in this protocol, one needs to be convinced that their secret is truly secret! The public has a lot of information: they know \n1. the prime $p$, \n2. the primitive root $g$, \n3. the number $A = g^a$ (mod $p$), and \n4. the number $B = g^b$ (mod $p$). \nIf the public could figure out either $a$ or $b$, they could figure out the secret (by raising $A^b$ or $B^a$ like Alice and Bob did). \nThis is the essence of the discrete logarithm problem. If we know the value of $g^a$ mod $p$, can we figure out the possible value(s) of $a$? If this were ordinary arithmetic, we would say that $a = \\log_g(A)$. But this is modular arithmetic, and there's no easy way to figure out such logarithms. The values of $g^a$ tend to bounce all over the place, mod $p$, especially since we chose $a$ to be pretty large (256 bits!). \nThe security of the Diffie-Hellman protocol, i.e., the security of Alice and Bob's shared secret, depends on the difficulty of the discrete logarithm problem. When $p$ is a large (e.g. 2048 bits) safe prime, and $a$ and $b$ are suitably large (roughly 256 bits), there seems to be no way to solve the discrete logarithm problem mod $p$ in any reasonable amount of time. Someday we might have quantum computers to quickly solve discrete logarithm problems, and the Diffie-Hellman protocol will not be secure. But for now, Alice and Bob's secret key seems safe.\nExercises\n\n\nHow many Sophie Germain primes are there between 1 and 1000000? What proportion of primes in this range are Sophie Germain primes?\n\n\nIt is expected that there are infinitely primes $p$ such that 2 is a primitive root mod $p$. Study the density of such primes. For example, among the primes up to 1000, how often is $2$ a primitive root? Does this density seem to change? \n\n\nAdapt Diffie-Hellman to work with a group of three parties who wish to share a common secret. 
Hint: the common secret will have the form $g^{abc}$, and other exponents like $g^a$, $g^b$, $g^c$, $g^{ab}$, $g^{bc}$, $g^{ac}$ will be public information.\n\n\nSadly, Alice and Bob have agreed to use the primitive root $g = 3$ and the prime $p = 65537$. Listening into their conversation, you intercept the following: $A = 40360$ and $B = 21002$ and the ciphertext is $;6HWD;P5LVJ99W+EH9JVx=I:V7ESpGC^. If you know that they use a protocol with a Vigenère cipher, with key equal the string associated to their secret $S$, what is the plaintext message? Hint: you should be able to solve the discrete logarithm problem with a brute force attack."
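Regarding the last exercise's hint, a brute-force discrete-logarithm search can be sketched as follows (demonstrated on the toy values $g = 5$, $p = 23$ rather than the exercise's numbers):

```python
# Brute-force discrete logarithm: find a with g^a congruent to A (mod p)
# by trying exponents one at a time. Feasible only for small p --
# which is exactly why small primes are insecure.
def brute_force_dlog(g, A, p):
    x = 1
    for a in range(1, p):
        x = (x * g) % p      # x is now g^a mod p
        if x == A:
            return a
    return None              # A is not a power of g

print(brute_force_dlog(5, 8, 23))  # 6, since 5^6 is congruent to 8 (mod 23)
```

Note that each step multiplies by $g$ once instead of recomputing pow(g, a, p), so the loop costs one modular multiplication per candidate exponent.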
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kongjy/hyperAFM
|
Notebooks/multiple regression_1-varun.ipynb
|
mit
|
[
"import matplotlib.pyplot as plt\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport matplotlib.mlab as mlab\nimport math\n\nmu = 0\nmu2 = 0.5\nmu3 = 0.75\n\nvariance = 0.5\nvariance2 = 1\nvariance3 = 1.5\n\nsigma = math.sqrt(variance)\nsigma2 = math.sqrt(variance2)\nsigma3 = math.sqrt(variance3)\n\nx = np.linspace(mu-3*variance,mu+3*variance, 40)\nx2 = np.linspace(mu2-3*variance2, mu+3*variance2, 40)\nx3 = np.linspace(mu2-3*variance3, mu+3*variance3, 40)\n\nA = np.zeros((559,1))\nA[20:60] = mlab.normpdf(x, mu, sigma).reshape(40,1)\nA[230:270] = mlab.normpdf(x2, mu2, sigma2).reshape(40,1)\nA[420:460] = mlab.normpdf(x3, mu3, sigma3).reshape(40,1)\nA = A.reshape(559)\n\nB = np.zeros((559,1))\nB[23:63] = mlab.normpdf(x, mu, sigma).reshape(40,1)\nB[400:440] = mlab.normpdf(x3, mu3, sigma3).reshape(40,1)\nB[470:510] = mlab.normpdf(x2, mu2, sigma2).reshape(40,1)\nB = B.reshape(559)\n\n\nC = np.zeros((559, 1))\nC[320:360] = mlab.normpdf(x2, mu2, sigma2).reshape(40,1)\nC[433:473] = mlab.normpdf(x, mu, sigma).reshape(40,1)\nC[128:168] = mlab.normpdf(x3, mu3, sigma3).reshape(40,1)\nC = C.reshape(559)\n\nspectralmatrix = np.zeros((256, 256, 559))\nfunctionalmatrix = np.zeros((256, 256))\nAmatrix = np.zeros((256, 256))\nBmatrix = np.zeros((256, 256))\nCmatrix =np.zeros((256, 256))\nxaxis = spectralmatrix.shape[0]\nyaxis = spectralmatrix.shape[1]\n\nnp.random.seed(122)\na=np.random.rand(1)\nb=np.random.rand(1)\nc=np.random.rand(1)\nspatialfrequency = (2*np.pi)/64\nfor x in range(xaxis):\n for y in range(yaxis):\n a = abs(np.sin(y*spatialfrequency))\n b = abs(np.sin(x*spatialfrequency) + np.sin(y*spatialfrequency))\n c = np.sin(x*spatialfrequency)**2\n #can make a, b, c as a function of x and y with some random noise\n spectralmatrix[x,y,:] = a*A + b*B + c*C\n functionalmatrix[x][y] = 2*a + b + 9*c\n Amatrix[x][y]=a\n Bmatrix[x][y]=b\n Cmatrix[x][y]=c\n 
\n\n#spectralmatrix[1,2,:]\nspectralmatrix.shape\n\nfunctionalmatrix.shape\n\n#LinearRegression\n#model: Y = 2a+b+9c\n\npts=256\na=Amatrix\nb=Bmatrix\nc=Cmatrix\nB0=0\nB1=2\nB2=1\nB3=9\nyactual=B0+B1*a[0]+B2*b[0]+B3*c[0]",
"In the above cell, I have used the first element of the array for calculating 'yactual' value",
"len(Amatrix[0])\n\n#performing multiple simple linear regression for only the a,Amatrix, because of error of the .fit function\n\nfrom sklearn import linear_model\nregr=linear_model.LinearRegression()#performing the simple linear regression\nregr.fit(a[0].reshape(len(a),1),yactual.reshape(len(yactual),1))",
"The .fit function is throwing out an error saying that first argument in that function must be 2 Dimensional or lesser.\nWhen I try to put in all the three matrixes A, B, C, it is giving an error saying that the first argument is four dimensional, which I could'nt resolve\nHence, to see how it works out for a single matrix, I have used the fit function",
"plt.scatter(yactual.reshape(len(yactual),1),a[0].reshape(len(yactual),1)) \nplt.plot([0,2],[0,23],lw=4,color='red')#the line Y=2a+b+9c\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
pyrdr/charlas
|
periodicos-dominicanos/noticias csv/Presentación_Mineria_De_Texto.ipynb
|
mit
|
[
"Minería de Texto\n\nLa minería de texto es el proceso de obtener información de alta calidad a partir del texto.\n¿Qué clase de información?\n-Palabras Clave: Soplan los vientos, Leonel 2020.\n-Sentimiento: El Iphone X es un disparate.\n-Agrupaciones: Todos esos tweets son bien parecidos.\nY muchos más.\n\nEl texto es el dato más abundante, ya que es generado cada milisegundo en un sitio que todos visitamos, la internet.\n\nTenemos datos infinitos con los que jugar. Pero como lo conseguimos?\nLos textos son datos NO Estructurados.\n\nLos metodos convencionales para analizar datos no funcionan aquí. ¿Qué Procede?\nProceso de Minería de Texto.\n\nCaso de Uso, minería de texto de Periodicos Dominicanos\n \nPrimer Paso: Obtener los datos\nComo los sitios web de los periodicos no son muy amigables para navegar hacia al pasado, y también tienen estructuras de portadas diferentes, recurrimos a un vínculo en común: Facebook.",
"def testFacebookPageFeedData(page_id, access_token):\n \n # construct the URL string\n base = \"https://graph.facebook.com/v2.10\"\n node = \"/\" + page_id + \"/feed\" # changed\n parameters = \"/?fields=message,created_time,reactions.type(LOVE).limit(0).summary(total_count).as(reactions_love),reactions.type(WOW).limit(0).summary(total_count).as(reactions_wow),reactions.type(HAHA).limit(0).summary(total_count).as(reactions_haha),reactions.type(ANGRY).limit(0).summary(total_count).as(reactions_angry),reactions.type(SAD).limit(0).summary(total_count).as(reactions_sad),reactions.type(LIKE).limit(0).summary(total_count).as(reactions_like)&limit={}&access_token={}\".format(100, access_token) # changed\n url = base + node + parameters\n \n # retrieve data\n data = json.loads(request_until_succeed(url))\n \n return data\n\ndef Get_News(limit = 10):\n result = {}\n nex = None\n for i in range(limit):\n range_dates = []\n range_messages = []\n range_ids= []\n if i == 0:\n data = testFacebookPageFeedData(page_id,access_token)\n nex = data['paging']['next']\n for d in data['data']:\n range_dates.append(d['created_time'])\n range_messages.append(d['message'])\n range_ids.append(d['id'])\n result['dates'] = range_dates\n result['messages'] = range_messages\n result['angry'] = range_angry\n result['id'] = range_ids\n \n else:\n data = json.loads(request_until_succeed(nex))\n try:\n nex = data['paging']['next']\n except:\n break\n for d in data['data']:\n try:\n range_messages.append(d['message'])\n range_dates.append(d['created_time'])\n range_ids.append(d['id'])\n \n except:\n print(d)\n result['dates'].extend(range_dates)\n result['messages'].extend(range_messages)\n result['id'].extend(range_ids)\n \n \n result_df = pd.DataFrame(result)\n return result_df\n\nimport pandas as pd\npd.set_option('chained_assignment',None)\ndiario_libre_fb = pd.read_csv('diario_libre_fb.csv',encoding='latin1')\n\ndef get_url(url):\n urls = 
re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', url)\n try:\n result = urls[0]\n except:\n result = 'Not found'\n return result\n\ndiario_libre_fb.head()",
"Segundo paso, obtener el contenido.\nDe los datos de Facebok, solo tenemos los titulos y los urls. Necesitamos los articulos. Para esto, necesitamos acceder a los urls y extraer los datos de la página web. Esto es Web Scraping. \n\nNada que ver aquí, Pedro presenta el Web Scraping en R.\nTercer paso: Analizar el texto\nYa con el texto guardado y estructurado, solo falta analizarlo.",
"import os\npath = os.getcwd()\ncsv_files =[]\nfor file in os.listdir(path):\n if file.endswith(\".csv\") and 'diario_libre_fb' not in file:\n csv_files.append(os.path.join(path, file))\n\nfrom matplotlib import rcParams\nrcParams['figure.figsize'] = (8, 4) # Size of plot\nrcParams['figure.dpi'] = 100 #Dots per inch of plot\nrcParams['lines.linewidth'] = 2 # Width of lines of the plot\nrcParams['axes.facecolor'] = 'white' #Color of the axes\nrcParams['font.size'] = 12 # Size of the text.\nrcParams['patch.edgecolor'] = 'white' #Patch edge color.\nrcParams['font.family'] = 'StixGeneral' #Font of the plot text.\n\ndiarios = ['Diario Libre','El Dia','Hoy','Listin Diario','El Nacional']\nnoticias_df_all = None\nfor i,periodico in enumerate(csv_files):\n \n noticias_df = pd.read_csv(csv_files[0],encoding = 'latin1').iloc[:,1:]\n noticias_df['Diario'] = diarios[i]\n if noticias_df_all is None:\n noticias_df_all = noticias_df\n else:\n noticias_df_all = noticias_df_all.append(noticias_df)\n\nnoticias_df_all.reset_index(drop = True,inplace = True)\nnoticias_df_all.describe() \n\nnoticias_df_completas = noticias_df_all.loc[pd.notnull(noticias_df_all.contenidos)]\nnoticias_df_completas.shape",
"Ya casi podemos comenzar a analizar. Vamos a utilizar el modelo de bolsa de palabras (bag of words). En este modelo contamos la ocurrencia de cada palabra en cada texto.\n\nPero para lograr esto de la manera más efectiva hay que limpiar el texto:\n\n\nConvertir a minuscula: Santiago -> santiago\n\n\nEliminar caracteres no alfabeticos -> No pararon. -> No pararon\n\n\nEliminar tildes -> República Dominicana -> Republica Dominicana\n\n\nEliminar palabras sin ningun valor análitico -> Falleció la mañana de este sábado -> Falleció mañana sabado\n\n\nPara facilitar esto, vamos a utilizar la librería de texto Natural Language Toolkit o NLTK. Contiene un número inmenso de funcionalidades como :\n\n\nCorpus de texto\n\n\nConversión de oraciones a las partes de texto (POS).\n\n\nTokenización de palabras y oraciones.\n\n\nY mucho más...",
"pd.options.mode.chained_assignment = None \n\nimport nltk\nspanish_stops = set(nltk.corpus.stopwords.words('Spanish'))\nlist(spanish_stops)[:10]\n\nimport unicodedata\nimport re\ndef strip_accents(s):\n return ''.join(c for c in unicodedata.normalize('NFD', s)\n if unicodedata.category(c) != 'Mn')\n\n\ndef Clean_Text(text):\n \n words = text.lower().split()\n removed_stops = [strip_accents(w) for w in words if w not in spanish_stops and len(w)!=1]\n stops_together = \" \".join(removed_stops)\n letters_only = re.sub(\"[^a-zA-Z]\",\" \", stops_together)\n \n \n return letters_only\n\nnoticias_df_completas['contenido limpio'] = noticias_df_completas.contenidos.apply(Clean_Text)\nnoticias_df_completas[['contenidos','contenido limpio']].head()",
"Una alternativa para estandarizar las palabras es stemming. Esto devuelve a una palabra a la raíz de su familia",
"from nltk.stem.snowball import SnowballStemmer\nspanish_stemmer = SnowballStemmer(\"spanish\")\nprint(spanish_stemmer.stem(\"corriendo\"))\nprint(spanish_stemmer.stem(\"correr\"))\n\ndef stem_text(text):\n stemmed_text = [spanish_stemmer.stem(word) for word in text.split()]\n return \" \".join(stemmed_text)\n\nnoticias_df_completas['contenido stemmed'] = noticias_df_completas['contenido limpio'].apply(stem_text)\nnoticias_df_completas.head()",
"Cuenta de Palabras",
"import itertools\ndef Create_ngrams(all_text,number=1):\n result = {}\n for text in all_text:\n text = [w for w in text.split() if len(w) != 1]\n for comb in list(itertools.combinations(text, number)):\n found = False\n temp_dict = {}\n i =0\n while not found and i < len(comb):\n if comb[i] not in temp_dict:\n temp_dict[comb[i]] = \"Found\"\n else:\n found = True\n i += 1\n if not found:\n if comb not in result:\n result[comb]= 1\n else:\n result[comb]+=1\n df = pd.DataFrame({ str(number) + \"-Combinations\": list(result.keys()),\"Count\":list(result.values())})\n return df.sort_values(by=\"Count\",ascending=False)\n\none_ngrams = Create_ngrams(noticias_df_completas['contenido limpio'])\none_ngrams.head()",
"Para verlo de manera más facil para los ojos",
"from matplotlib import rcParams\nrcParams['figure.figsize'] = (8, 4) # Size of plot\nrcParams['figure.dpi'] = 100 #Dots per inch of plot\nrcParams['lines.linewidth'] = 2 # Width of lines of the plot\nrcParams['axes.facecolor'] = 'white' #Color of the axes\nrcParams['font.size'] = 12 # Size of the text.\nrcParams['patch.edgecolor'] = 'white' #Patch edge color.\nrcParams['font.family'] = 'StixGeneral' #Font of the plot text.\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\ndef Plot_nCombination(comb_df,n,title):\n sns.barplot(x=str(n) + \"-Combinations\",y = \"Count\",data = comb_df.head(10))\n plt.title(title)\n plt.xlabel(\"Combination\")\n plt.ylabel(\"Count\")\n plt.xticks(rotation = \"75\")\n plt.show()\n \nPlot_nCombination(one_ngrams,1,\"Top 10 palabras más comunes, noticias.\")\n\ntwo_ngrams = Create_ngrams(noticias_df_completas['contenido limpio'],2)\nPlot_nCombination(two_ngrams,2,\"Top 10 pares de palabras más comunes.\")",
"Un metodo muy util para medir la importancia de las palabras es TF-IDF.",
"from sklearn.feature_extraction.text import TfidfVectorizer\nimport numpy as np\ndef Calculate_tfidf(text):\n corpus = text\n vectorizer = TfidfVectorizer( min_df = 0.025, max_df = 0.25)\n vector_weights = vectorizer.fit_transform(corpus)\n weights= list(np.asarray(vector_weights.mean(axis=0)).ravel())\n df = pd.DataFrame({\"Word\":vectorizer.get_feature_names(),\"Score\":weights})\n df = df.sort_values(by = \"Score\" ,ascending = False)\n return df,vector_weights.toarray()\n\n\ndef Plot_Score(data,title):\n sns.barplot(x=\"Word\",y = \"Score\",data = data.head(10))\n plt.title(title)\n plt.xlabel(\"Palabra\")\n plt.ylabel(\"Score\")\n plt.xticks(rotation = \"75\")\n plt.show()\n \n\nText_TfIdf,Text_Vector = Calculate_tfidf(noticias_df_completas['contenido limpio'])\nPlot_Score(Text_TfIdf,\"TF-IDF Top 10 palabras\")",
"Word Clouds\nLos word clouds o nubes de palabras nos ayudan a visualizar el texto de manera más intuitiva. Las palabras más grandes son las más frecuentes.",
"noticias_df_completas = noticias_df_completas.loc[pd.notnull(noticias_df_completas.fechas)]\nnoticias_df_completas.fechas = pd.to_datetime(noticias_df_completas.fechas)\nnoticias_df_completas['Mes'] = noticias_df_completas.fechas.dt.month\nnoticias_df_completas['Año'] = noticias_df_completas.fechas.dt.year\nnoticias_df_completas.head()\n\nfrom wordcloud import WordCloud\nrcParams['figure.dpi'] = 600\ndef crear_wordcloud_mes_anio(data,mes,anio):\n data = data.loc[(data.Mes == mes) & (data.Año == anio)]\n print(\"Existen {} articulos en los datos para el mes {} del año {}.\".format(data.shape[0],mes,anio))\n wordcloud = WordCloud(background_color='white',max_words=200,\n max_font_size=40,random_state=42).generate(str(data['contenido limpio']))\n \n fig = plt.figure(1)\n plt.imshow(wordcloud)\n plt.axis('off')\n plt.show()\n\ncrear_wordcloud_mes_anio(noticias_df_completas,9,2017)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rastala/mmlspark
|
notebooks/samples/202 - Amazon Book Reviews - Word2Vec.ipynb
|
mit
|
[
"202 - Training and Evaluaiting CNTK Models in Spark ML Pipelines\nYet again, now using the Word2Vec Estimator from Spark. We can use the tree-based\nlearners from spark in this scenario due to the lower dimensionality representation of\nfeatures.",
"import pandas as pd\nimport mmlspark\nfrom pyspark.sql.types import IntegerType, StringType, StructType, StructField\n\ndataFile = \"BookReviewsFromAmazon10K.tsv\"\ntextSchema = StructType([StructField(\"rating\", IntegerType(), False),\n StructField(\"text\", StringType(), False)])\nimport os, urllib\nif not os.path.isfile(dataFile):\n urllib.request.urlretrieve(\"https://mmlspark.azureedge.net/datasets/\"+dataFile, dataFile)\ndata = spark.createDataFrame(pd.read_csv(dataFile, sep=\"\\t\", header=None), textSchema)\ndata.limit(10).toPandas()",
"Modify the label column to predict a rating greater than 3.",
"processedData = data.withColumn(\"label\", data[\"rating\"] > 3) \\\n .select([\"text\", \"label\"])\nprocessedData.limit(5).toPandas()",
"Split the dataset into train, test and validation sets.",
"train, test, validation = processedData.randomSplit([0.60, 0.20, 0.20])",
"Use Tokenizer and Word2Vec to generate the features.",
"from pyspark.ml import Pipeline\nfrom pyspark.ml.feature import Tokenizer, Word2Vec\ntokenizer = Tokenizer(inputCol=\"text\", outputCol=\"words\")\npartitions = train.rdd.getNumPartitions()\nword2vec = Word2Vec(maxIter=4, seed=42, inputCol=\"words\", outputCol=\"features\",\n numPartitions=partitions)\ntextFeaturizer = Pipeline(stages = [tokenizer, word2vec]).fit(train)",
"Transform each of the train, test and validation datasets.",
"ptrain = textFeaturizer.transform(train).select([\"label\", \"features\"])\nptest = textFeaturizer.transform(test).select([\"label\", \"features\"])\npvalidation = textFeaturizer.transform(validation).select([\"label\", \"features\"])\nptrain.limit(5).toPandas()",
"Generate several models with different parameters from the training data.",
"from pyspark.ml.classification import LogisticRegression, RandomForestClassifier, GBTClassifier\nfrom mmlspark.TrainClassifier import TrainClassifier\nimport itertools\n\nlrHyperParams = [0.05, 0.2]\nlogisticRegressions = [LogisticRegression(regParam = hyperParam)\n for hyperParam in lrHyperParams]\nlrmodels = [TrainClassifier(model=lrm, labelCol=\"label\").fit(ptrain)\n for lrm in logisticRegressions]\n\nrfHyperParams = itertools.product([5, 10], [3, 5])\nrandomForests = [RandomForestClassifier(numTrees=hyperParam[0], maxDepth=hyperParam[1])\n for hyperParam in rfHyperParams]\nrfmodels = [TrainClassifier(model=rfm, labelCol=\"label\").fit(ptrain)\n for rfm in randomForests]\n\nrfHyperParams = itertools.product([8, 16], [3, 5])\ngbtclassifiers = [GBTClassifier(maxBins=hyperParam[0], maxDepth=hyperParam[1])\n for hyperParam in rfHyperParams]\ngbtmodels = [TrainClassifier(model=gbt, labelCol=\"label\").fit(ptrain)\n for gbt in gbtclassifiers]\n\ntrainedModels = lrmodels + rfmodels + gbtmodels",
"Find the best model for the given test dataset.",
"from mmlspark import FindBestModel\nbestModel = FindBestModel(evaluationMetric=\"AUC\", models=trainedModels).fit(ptest)",
"Get the accuracy from the validation dataset.",
"from mmlspark.ComputeModelStatistics import ComputeModelStatistics\npredictions = bestModel.transform(pvalidation)\nmetrics = ComputeModelStatistics().transform(predictions)\nprint(\"Best model's accuracy on validation set = \"\n + \"{0:.2f}%\".format(metrics.first()[\"accuracy\"] * 100))\nprint(\"Best model's AUC on validation set = \"\n + \"{0:.2f}%\".format(metrics.first()[\"AUC\"] * 100))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Intel-tensorflow/tensorflow
|
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Post-training integer quantization with int16 activations\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n  <td>\n    <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/performance/post_training_integer_quant_16x8\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n  </td>\n  <td>\n    <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n  </td>\n  <td>\n    <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n  </td>\n  <td>\n    <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n  </td>\n</table>\n\nOverview\nTensorFlow Lite now supports\nconverting activations to 16-bit integer values and weights to 8-bit integer values during model conversion from TensorFlow to TensorFlow Lite's flat buffer format. We refer to this mode as the \"16x8 quantization mode\". This mode can improve accuracy of the quantized model significantly, when activations are sensitive to the quantization, while still achieving almost 3-4x reduction in model size. Moreover, this fully quantized model can be consumed by integer-only hardware accelerators. \nSome examples of models that benefit from this mode of the post-training quantization include: \n*    super-resolution, \n*    audio signal processing such\nas noise cancelling and beamforming, \n*    image de-noising, \n*    HDR reconstruction\nfrom a single image\nIn this tutorial, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the model into a TensorFlow Lite FlatBuffer using this mode. At the end you check the accuracy of the converted model and compare it to the original float32 model. Note that this example demonstrates the usage of this mode and doesn't show benefits over other available quantization techniques in TensorFlow Lite.\nBuild an MNIST model\nSetup",
"import logging\nlogging.getLogger(\"tensorflow\").setLevel(logging.DEBUG)\n\nimport tensorflow as tf\nfrom tensorflow import keras\nimport numpy as np\nimport pathlib",
"Check that the 16x8 quantization mode is available",
"tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8",
"Train and export the model",
"# Load MNIST dataset\nmnist = keras.datasets.mnist\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n\n# Normalize the input image so that each pixel value is between 0 to 1.\ntrain_images = train_images / 255.0\ntest_images = test_images / 255.0\n\n# Define the model architecture\nmodel = keras.Sequential([\n keras.layers.InputLayer(input_shape=(28, 28)),\n keras.layers.Reshape(target_shape=(28, 28, 1)),\n keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),\n keras.layers.MaxPooling2D(pool_size=(2, 2)),\n keras.layers.Flatten(),\n keras.layers.Dense(10)\n])\n\n# Train the digit classification model\nmodel.compile(optimizer='adam',\n loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\nmodel.fit(\n train_images,\n train_labels,\n epochs=1,\n validation_data=(test_images, test_labels)\n)",
"For the example, you trained the model for just a single epoch, so it only trains to ~96% accuracy.\nConvert to a TensorFlow Lite model\nUsing the TensorFlow Lite Converter, you can now convert the trained model into a TensorFlow Lite model.\nNow, convert the model using TFLiteConverter into default float32 format:",
"converter = tf.lite.TFLiteConverter.from_keras_model(model)\ntflite_model = converter.convert()",
"Write it out to a .tflite file:",
"tflite_models_dir = pathlib.Path(\"/tmp/mnist_tflite_models/\")\ntflite_models_dir.mkdir(exist_ok=True, parents=True)\n\ntflite_model_file = tflite_models_dir/\"mnist_model.tflite\"\ntflite_model_file.write_bytes(tflite_model)",
"To instead quantize the model to 16x8 quantization mode, first set the optimizations flag to use default optimizations. Then specify that 16x8 quantization mode is the required supported operation in the target specification:",
"converter.optimizations = [tf.lite.Optimize.DEFAULT]\nconverter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]",
"As in the case of int8 post-training quantization, it is possible to produce a fully integer quantized model by setting the converter options inference_input_type and inference_output_type to tf.int16.\nSet the calibration data:",
"mnist_train, _ = tf.keras.datasets.mnist.load_data()\nimages = tf.cast(mnist_train[0], tf.float32) / 255.0\nmnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)\ndef representative_data_gen():\n for input_value in mnist_ds.take(100):\n # Model has only one input so each data point has one element.\n yield [input_value]\nconverter.representative_dataset = representative_data_gen",
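As mentioned above, a fully integer model with int16 inputs and outputs would additionally set the converter's I/O types before calling `convert()`. A configuration fragment, not executed in this notebook (which keeps float I/O for convenience):

```python
# Hypothetical: request int16 inputs/outputs in addition to the 16x8
# internals configured above. Assumes `converter` already has the
# optimizations, target_spec, and representative_dataset set as shown.
converter.inference_input_type = tf.int16
converter.inference_output_type = tf.int16
tflite_16x8_int_io_model = converter.convert()
```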
"Finally, convert the model as usual. Note that, by default, the converted model will still use float inputs and outputs for invocation convenience.",
"tflite_16x8_model = converter.convert()\ntflite_model_16x8_file = tflite_models_dir/\"mnist_model_quant_16x8.tflite\"\ntflite_model_16x8_file.write_bytes(tflite_16x8_model)",
"Note how the resulting file is approximately 1/3 the size.",
"!ls -lh {tflite_models_dir}",
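The `ls` comparison above can also be done directly in Python with `os.path.getsize`. A minimal sketch — the two files below are stand-ins with hypothetical byte counts, since the real `.tflite` files live in `/tmp/mnist_tflite_models/`:

```python
import os
import tempfile

def size_ratio(path_a: str, path_b: str) -> float:
    """Return size(path_a) / size(path_b) for two files on disk."""
    return os.path.getsize(path_a) / os.path.getsize(path_b)

# Stand-in files with hypothetical sizes: 84 KB "float" vs 25 KB "quantized".
with tempfile.TemporaryDirectory() as tmp:
    float_path = os.path.join(tmp, "mnist_model.tflite")
    quant_path = os.path.join(tmp, "mnist_model_quant_16x8.tflite")
    with open(float_path, "wb") as f:
        f.write(b"\0" * 84_000)
    with open(quant_path, "wb") as f:
        f.write(b"\0" * 25_000)
    ratio = size_ratio(float_path, quant_path)
    print(round(ratio, 2))  # → 3.36
```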
"Run the TensorFlow Lite models\nRun the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.\nLoad the model into the interpreters",
"interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))\ninterpreter.allocate_tensors()\n\ninterpreter_16x8 = tf.lite.Interpreter(model_path=str(tflite_model_16x8_file))\ninterpreter_16x8.allocate_tensors()",
"Test the models on one image",
"test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)\n\ninput_index = interpreter.get_input_details()[0][\"index\"]\noutput_index = interpreter.get_output_details()[0][\"index\"]\n\ninterpreter.set_tensor(input_index, test_image)\ninterpreter.invoke()\npredictions = interpreter.get_tensor(output_index)\n\nimport matplotlib.pylab as plt\n\nplt.imshow(test_images[0])\ntemplate = \"True:{true}, predicted:{predict}\"\n_ = plt.title(template.format(true= str(test_labels[0]),\n predict=str(np.argmax(predictions[0]))))\nplt.grid(False)\n\ntest_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)\n\ninput_index = interpreter_16x8.get_input_details()[0][\"index\"]\noutput_index = interpreter_16x8.get_output_details()[0][\"index\"]\n\ninterpreter_16x8.set_tensor(input_index, test_image)\ninterpreter_16x8.invoke()\npredictions = interpreter_16x8.get_tensor(output_index)\n\nplt.imshow(test_images[0])\ntemplate = \"True:{true}, predicted:{predict}\"\n_ = plt.title(template.format(true= str(test_labels[0]),\n predict=str(np.argmax(predictions[0]))))\nplt.grid(False)",
"Evaluate the models",
"# A helper function to evaluate the TF Lite model using \"test\" dataset.\ndef evaluate_model(interpreter):\n input_index = interpreter.get_input_details()[0][\"index\"]\n output_index = interpreter.get_output_details()[0][\"index\"]\n\n # Run predictions on every image in the \"test\" dataset.\n prediction_digits = []\n for test_image in test_images:\n # Pre-processing: add batch dimension and convert to float32 to match with\n # the model's input data format.\n test_image = np.expand_dims(test_image, axis=0).astype(np.float32)\n interpreter.set_tensor(input_index, test_image)\n\n # Run inference.\n interpreter.invoke()\n\n # Post-processing: remove batch dimension and find the digit with highest\n # probability.\n output = interpreter.tensor(output_index)\n digit = np.argmax(output()[0])\n prediction_digits.append(digit)\n\n # Compare prediction results with ground truth labels to calculate accuracy.\n accurate_count = 0\n for index in range(len(prediction_digits)):\n if prediction_digits[index] == test_labels[index]:\n accurate_count += 1\n accuracy = accurate_count * 1.0 / len(prediction_digits)\n\n return accuracy\n\nprint(evaluate_model(interpreter))",
"Repeat the evaluation on the 16x8 quantized model:",
"# NOTE: This quantization mode is an experimental post-training mode,\n# it does not have any optimized kernel implementations or\n# specialized machine learning hardware accelerators. Therefore,\n# it could be slower than the float interpreter.\nprint(evaluate_model(interpreter_16x8))",
"In this example, you have quantized a model to 16x8 with no difference in accuracy, but with a 3x reduction in size."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
YeEmrick/learning
|
cs231/assignment/assignment2/.ipynb_checkpoints/Dropout-checkpoint.ipynb
|
apache-2.0
|
[
"Dropout\nDropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.\n[1] Geoffrey E. Hinton et al, \"Improving neural networks by preventing co-adaptation of feature detectors\", arXiv 2012",
"# As usual, a bit of setup\n\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.iteritems():\n print '%s: ' % k, v.shape",
"Dropout forward pass\nIn the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.\nOnce you have done so, run the cell below to test your implementation.",
"x = np.random.randn(500, 500) + 10\n\nfor p in [0.3, 0.6, 0.1]:\n out, cache = dropout_forward(x, {'mode': 'train', 'p': p})\n out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})\n\n print 'Running tests with p = ', p\n print 'Mean of input: ', x.mean()\n print 'Mean of train-time output: ', out.mean()\n print 'Mean of test-time output: ', out_test.mean()\n print 'Fraction of train-time output set to zero: ', (out == 0).mean()\n print 'Fraction of test-time output set to zero: ', (out_test == 0).mean()\n print",
"Dropout backward pass\nIn the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.",
"x = np.random.randn(10, 10) + 10\ndout = np.random.randn(*x.shape)\n\ndropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}\nout, cache = dropout_forward(x, dropout_param)\ndx = dropout_backward(dout, cache)\ndx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)\n\nprint 'dx relative error: ', rel_error(dx, dx_num)",
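For reference, here is a minimal inverted-dropout sketch of what `dropout_forward` and `dropout_backward` implement; the assignment's `cs231n/layers.py` is the authoritative version, and note this sketch assumes `p` is the keep probability (some versions of the assignment define it as the drop probability — check your docstring):

```python
import numpy as np

def dropout_forward(x, dropout_param):
    """Inverted dropout: scale kept units at train time so test time is a no-op."""
    p, mode = dropout_param['p'], dropout_param['mode']
    if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])
    mask = None
    if mode == 'train':
        # Keep each unit with probability p, rescaling survivors by 1/p.
        mask = (np.random.rand(*x.shape) < p) / p
        out = x * mask
    else:
        out = x  # test mode: identity
    return out, (dropout_param, mask)

def dropout_backward(dout, cache):
    """Gradients flow only through the units kept in the forward pass."""
    dropout_param, mask = cache
    if dropout_param['mode'] == 'train':
        return dout * mask
    return dout

x = np.ones((1000, 100))
out, cache = dropout_forward(x, {'mode': 'train', 'p': 0.5, 'seed': 0})
print('fraction zeroed:', round((out == 0).mean(), 2))
```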
"Fully-connected nets with Dropout\nIn the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.",
"N, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor dropout in [0, 0.25, 1.0]:\n print 'Running check with dropout = ', dropout\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n weight_scale=5e-2, dtype=np.float64,\n dropout=dropout, seed=123)\n\n loss, grads = model.loss(X, y)\n print 'Initial loss: ', loss\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))\n print",
"Regularization experiment\nAs an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time.",
"# Train two identical nets, one with dropout and one without\n\nnum_train = 500\nsmall_data = {\n  'X_train': data['X_train'][:num_train],\n  'y_train': data['y_train'][:num_train],\n  'X_val': data['X_val'],\n  'y_val': data['y_val'],\n}\n\nsolvers = {}\ndropout_choices = [0, 0.75]\nfor dropout in dropout_choices:\n  model = FullyConnectedNet([500], dropout=dropout)\n  print dropout\n\n  solver = Solver(model, small_data,\n                  num_epochs=200, batch_size=100,\n                  update_rule='adam',\n                  optim_config={\n                    'learning_rate': 5e-4,\n                  },\n                  verbose=True, print_every=500)\n  solver.train()\n  solvers[dropout] = solver\n\n# Plot train and validation accuracies of the two models\n\ntrain_accs = []\nval_accs = []\nfor dropout in dropout_choices:\n  solver = solvers[dropout]\n  train_accs.append(solver.train_acc_history[-1])\n  val_accs.append(solver.val_acc_history[-1])\n\nplt.subplot(3, 1, 1)\nfor dropout in dropout_choices:\n  plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)\nplt.title('Train accuracy')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.legend(ncol=2, loc='lower right')\n  \nplt.subplot(3, 1, 2)\nfor dropout in dropout_choices:\n  plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)\nplt.title('Val accuracy')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.legend(ncol=2, loc='lower right')\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()",
"Question\nExplain what you see in this experiment. What does it suggest about dropout?\nAnswer"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
adrn/TriandRRLyrae
|
notebooks/Target selection.ipynb
|
mit
|
[
"import astropy.coordinates as coord\nfrom astropy.io import ascii\nimport astropy.table as at\nimport astropy.units as u\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom gary.observation.rrlyrae import M_V\nfrom gary.observation import distance\n\n# this contains all Catalina RR Lyrae stars\ntbl = ascii.read(\"/Users/adrian/projects/streams/data/catalog/Catalina_all_RRLyr.txt\")\ntbl.remove_column(\"Num\")\n\n# this contains Catalina RR Lyrae with measured radial velocities\nrvtbl = ascii.read(\"/Users/adrian/projects/streams/data/catalog/Catalina_vgsr_RRLyr.txt\")\nrvtbl = rvtbl.filled(np.nan)\nrvtbl = rvtbl[~np.isnan(rvtbl['Vgsr'])]\n\ntbl.colnames\n\nrvtbl.colnames\n\n# join the RV star table with the normal table to get positions for stars with RV's\njoined = at.join(tbl, rvtbl, keys=(\"ID\"))\n\n# j = joined\nc = coord.SkyCoord(ra=tbl['RAdeg'].data*u.deg, dec=tbl['DEdeg'].data*u.deg)\ncj = coord.SkyCoord(ra=joined['RAdeg'].data*u.deg, dec=joined['DEdeg'].data*u.deg)\n\ngal = c.galactic\ngalj = cj.galactic\n\nbox = [100,160,-35,-15]*u.degree\n\nix = ((c.galactic.l > box[0]) & (c.galactic.l < box[1]) & \n (c.galactic.b > box[2]) & (c.galactic.b < box[3]))\ntriand = tbl[ix].filled()\n_c_triand = c[ix]\n\nix = ((cj.galactic.l > box[0]) & (cj.galactic.l < box[1]) & \n (cj.galactic.b > box[2]) & (cj.galactic.b < box[3]))\ntriandj = joined[ix].filled()\n_c_triandj = cj[ix]\n\nprint(\"{} RR Lyrae stars in this region -- {} has a measured radial velocity.\".format(len(triand), len(triandj)))\n\nfig = plt.figure(figsize=(14,12))\nax = fig.add_subplot(111, projection='hammer')\nax.plot((coord.Angle(360*u.deg) - _c_triand.icrs.ra).wrap_at(180*u.deg).radian, \n _c_triand.icrs.dec.radian, linestyle='none')\n\ngplane = coord.SkyCoord(l=np.linspace(0,360,1000)*u.deg, b=np.zeros(1000)*u.deg, frame=coord.Galactic)\n\nax.plot((coord.Angle(360*u.deg) - gplane.icrs.ra).wrap_at(180*u.deg).radian, \n gplane.icrs.dec.radian, linestyle='none')",
"Now a distance cut:",
"d = triand['dh'].data\nd_cut = (d > 15) & (d < 21)\n\ntriand_dist = triand[d_cut]\nc_triand = _c_triand[d_cut]\nprint(len(triand_dist))\n\nplt.hist(triand_dist['<Vmag>'].data)",
"Stars I actually observed",
"ptf_triand = ascii.read(\"/Users/adrian/projects/streams/data/observing/triand.txt\")\nptf_c = coord.SkyCoord(ra=ptf_triand['ra']*u.deg, dec=ptf_triand['dec']*u.deg)\n\nprint ptf_triand.colnames, len(ptf_triand)\nobs_dist = distance(ptf_triand['Vmag'].data)\n((obs_dist > 12*u.kpc) & (obs_dist < 25*u.kpc)).sum()\n\nptf_triand[0]",
"Data for the observed stars",
"rrlyr_d = np.genfromtxt(\"/Users/adrian/projects/triand-rrlyrae/data/RRL_ALL.txt\", \n skiprows=2, dtype=None, names=['l','b','vhel','vgsr','src','ra','dec','name','dist'])\nobs_rrlyr = rrlyr_d[rrlyr_d['src'] == 'PTF']",
"Comparison of stars observed with Catalina",
"fig,ax = plt.subplots(1,1,figsize=(10,8))\n# ax.plot(c.galactic.l.degree, c.galactic.b.degree, linestyle='none',\n# marker='o', markersize=4, alpha=0.75) # ALL RR LYRAE\nax.plot(c_triand.galactic.l.degree, c_triand.galactic.b.degree, linestyle='none',\n marker='o', markersize=5, alpha=0.75)\nax.plot(ptf_c.galactic.l.degree, ptf_c.galactic.b.degree, linestyle='none', \n marker='o', markerfacecolor='none', markeredgewidth=2, markersize=12, alpha=0.75)\nax.plot(obs_rrlyr['l'], obs_rrlyr['b'], linestyle='none', mec='r',\n marker='o', markerfacecolor='none', markeredgewidth=2, markersize=12, alpha=0.75)\n\n# x = np.linspace(-10,40,100)\n# x[x < 0] += 360.\n# y = np.linspace(30,45,100)\n# x,y = map(np.ravel, np.meshgrid(x,y))\n# ccc = coord.SkyCoord(ra=x*u.deg,dec=y*u.deg)\n# ax.plot(ccc.galactic.l.degree, ccc.galactic.b.degree, linestyle='none')\n\nax.set_xlim(97,162)\nax.set_ylim(-37,-13)\n\nax.set_xlabel(\"$l$ [deg]\")\nax.set_ylabel(\"$b$ [deg]\")",
"Issues\nWhy are some of the PTF RR Lyrae missing from Catalina? Because they are too faint! (R>18)\nWhy are Catalina stars missing from PTF? More observations, larger selection window.",
"fig,ax = plt.subplots(1,1,figsize=(10,8))\n\nax.plot(c_triand.galactic.l.degree, c_triand.galactic.b.degree, linestyle='none',\n marker='o', markersize=4, alpha=0.75)\nax.plot(ptf_c.galactic.l.degree, ptf_c.galactic.b.degree, linestyle='none', \n marker='o', markerfacecolor='none', markeredgewidth=2, markersize=8, alpha=0.75)\nax.plot(obs_rrlyr['l'], obs_rrlyr['b'], linestyle='none', mec='r',\n marker='o', markerfacecolor='none', markeredgewidth=2, markersize=8, alpha=0.75)\n\nax.plot(c_triand.galactic.l.degree[10], c_triand.galactic.b.degree[10], linestyle='none',\n marker='o', markersize=25, alpha=0.75)\n\nax.set_xlim(97,162)\nax.set_ylim(-37,-13)\n\nc_triand.icrs[10]",
"Possible Blaschko stars:\n* R_13322281016459551106\n* R_13879390364114107826",
"brani = ascii.read(\"/Users/adrian/projects/triand-rrlyrae/brani_sample/TriAnd.dat\")\n\nblaschko = brani[(brani['objectID'] == \"13322281016459551106\") | (brani['objectID'] == \"13879390364114107826\")]\n\nfor b in blaschko:\n row = ptf_triand[np.argmin(np.sqrt((ptf_triand['ra'] - b['ra'])**2 + (ptf_triand['dec'] - b['dec'])**2))]\n print(row['name'])\n print(coord.SkyCoord(ra=row['ra']*u.deg, dec=row['dec']*u.deg).galactic)\n\nzip(obs_rrlyr['l'], obs_rrlyr['b'])\n\nd = V_to_dist(triand['<Vmag>'].data).to(u.kpc).value\n\nbins = np.arange(1., 60+5, 3)\n\nplt.figure(figsize=(10,8))\nn,bins,patches = plt.hist(triand['dh'].data, bins=bins, alpha=0.5, label='Catalina')\nfor pa in patches:\n if pa.xy[0] < 15. or pa.xy[0] > 40.:\n pa.set_alpha(0.2)\n\n# other_bins = np.arange(0, 15+2., 2.)\n# plt.hist(V_to_dist(triand['<Vmag>'].data), bins=other_bins, alpha=0.2, color='k')\n\n# other_bins = np.arange(40, 60., 2.)\n# plt.hist(V_to_dist(triand['<Vmag>'].data), bins=other_bins, alpha=0.2, color='k')\n\nplt.hist(V_to_dist(ptf_triand['Vmag'].data), \n bins=bins, alpha=0.5, label='PTF/MDM')\nplt.xlabel(\"Distance [kpc]\")\nplt.ylabel(\"Number\")\n# plt.ylim(0,35)\nplt.legend(fontsize=20)\nplt.axvline(18.)\nplt.axvline(28.)",
"For Kathryn's proposal",
"import emcee\nimport triangle\nfrom scipy.misc import logsumexp\n\n((distance(triand['<Vmag>'].data) > (15.*u.kpc)) & (distance(triand['<Vmag>'].data) < (40.*u.kpc))).sum()\n\n!head -n3 /Users/adrian/projects/triand-rrlyrae/data/triand_giants.txt\n\nd = np.loadtxt(\"/Users/adrian/projects/triand-rrlyrae/data/triand_giants.txt\", skiprows=1)\nd2 = np.genfromtxt(\"/Users/adrian/projects/triand-rrlyrae/data/TriAnd_Mgiant.txt\", skiprows=2)\n\nplt.plot(d[:,0], d[:,2], linestyle='none')\nplt.plot(d2[:,0], d2[:,3], linestyle='none')\n\nix = (d[:,2] < 100) & (d[:,2] > -50)\nix = np.ones_like(ix).astype(bool)\nplt.plot(d[ix,0], d[ix,2], linestyle='none')\nplt.plot(d[ix,0], -1*d[ix,0] + 170, marker=None)\nplt.xlabel('l [deg]')\nplt.ylabel('v_r [km/s]')\n\nplt.figure()\nplt.plot(d[ix,0], d[ix,1], linestyle='none')\nplt.xlabel('l [deg]')\nplt.ylabel('b [deg]')\n\ndef ln_normal(x, mu, sigma):\n    return -0.5*np.log(2*np.pi) - np.log(sigma) - 0.5*((x-mu)/sigma)**2\n\n# def ln_prior(p):\n#     m,b,V = p\n    \n#     if m > 0. or m < -50:\n#         return -np.inf\n    \n#     if b < 0 or b > 500:\n#         return -np.inf\n    \n#     if V <= 0.:\n#         return -np.inf\n    \n#     return -np.log(V)\n\n# def ln_likelihood(p, l, vr, sigma_vr):\n#     m,b,V = p\n#     sigma = np.sqrt(sigma_vr**2 + V**2)\n#     return ln_normal(vr, m*l + b, sigma)\n\n# mixture model - f_ol is outlier fraction\ndef ln_prior(p):\n    m,b,V,f_ol = p\n    \n    if m > 0. or m < -50:\n        return -np.inf\n    \n    if b < 0 or b > 500:\n        return -np.inf\n    \n    if V <= 0.:\n        return -np.inf\n    \n    if f_ol > 1. or f_ol < 0.:\n        return -np.inf\n    \n    return -np.log(V)\n\ndef likelihood(p, l, vr, sigma_vr):\n    m,b,V,f_ol = p\n    sigma = np.sqrt(sigma_vr**2 + V**2)\n    term1 = ln_normal(vr, m*l + b, sigma)\n    term2 = ln_normal(vr, 0., 120.)\n    return np.array([term1, term2])\n\ndef ln_likelihood(p, *args):\n    m,b,V,f_ol = p\n    x = likelihood(p, *args)\n    \n    # coefficients\n    b = np.zeros_like(x)\n    b[0] = 1-f_ol\n    b[1] = f_ol\n    \n    return logsumexp(x,b=b, axis=0)\n    \ndef ln_posterior(p, *args):\n    lnp = ln_prior(p)\n    if np.isinf(lnp):\n        return -np.inf\n    \n    return lnp + ln_likelihood(p, *args).sum()\n\ndef outlier_prob(p, *args):\n    m,b,V,f_ol = p\n    p1,p2 = likelihood(p, *args)\n    return f_ol*np.exp(p2) / ((1-f_ol)*np.exp(p1) + f_ol*np.exp(p2))\n\nvr_err = 2 # km/s\nnwalkers = 32\nsampler = emcee.EnsembleSampler(nwalkers=nwalkers, dim=4, lnpostfn=ln_posterior, \n                                args=(d[ix,0],d[ix,2],vr_err))\n\np0 = np.zeros((nwalkers,sampler.dim))\np0[:,0] = np.random.normal(-1, 0.1, size=nwalkers)\np0[:,1] = np.random.normal(150, 0.1, size=nwalkers)\np0[:,2] = np.random.normal(25, 0.5, size=nwalkers)\np0[:,3] = np.random.normal(0.1, 0.01, size=nwalkers)\n\nfor pp in p0:\n    lnp = ln_posterior(pp, *sampler.args)\n    if not np.isfinite(lnp):\n        print(\"you suck\")\n\npos,prob,state = sampler.run_mcmc(p0, N=100)\nsampler.reset()\npos,prob,state = sampler.run_mcmc(pos, N=1000)\n\nfig = triangle.corner(sampler.flatchain,\n                      labels=[r'$\\mathrm{d}v/\\mathrm{d}l$', r'$v_0$', r'$\\sigma_v$', r'$f_{\\rm halo}$'])\n\nfigsize = (12,8)\n\nMAP = sampler.flatchain[sampler.flatlnprobability.argmax()]\npout = outlier_prob(MAP, d[ix,0], d[ix,2], vr_err)\n\nplt.figure(figsize=figsize)\ncl = plt.scatter(d[ix,0], d[ix,2], c=(1-pout), s=30, cmap='RdYlGn', vmin=0, vmax=1)\ncbar = plt.colorbar(cl)\ncbar.set_clim(0,1)\n\n# plt.plot(d[ix,0], d[ix,2], linestyle='none', marker='o', ms=4)\nplt.xlabel(r'$l\\,[{\\rm deg}]$')\nplt.ylabel(r'$v_r\\,[{\\rm km\\,s}^{-1}]$')\n\nls = np.linspace(d[ix,0].min(), d[ix,0].max(), 100)\nfor i in np.random.randint(len(sampler.flatchain), size=100):\n    m,b,V,f_ol = sampler.flatchain[i]\n    plt.plot(ls, m*ls+b, color='#555555', alpha=0.1, marker=None)\n\nbest_m,best_b,best_V,best_f_ol = MAP\nplt.plot(ls, best_m*ls + best_b, color='k', alpha=1, marker=None)\nplt.plot(ls, best_m*ls + best_b + best_V, color='k', alpha=1, marker=None, linestyle='--')\nplt.plot(ls, best_m*ls + best_b - best_V, color='k', alpha=1, marker=None, linestyle='--')\nplt.xlim(ls.max()+2, ls.min()-2)\nplt.title(\"{:.1f}% halo stars\".format(best_f_ol*100.))\n\nprint(((1-pout) > 0.75).tolist())\n\nprint best_m, best_b, best_V\n\nprint \"MAP velocity dispersion: {:.2f} km/s\".format(best_V)\n\nhigh_p = (1-pout) > 0.8\n\nplt.figure(figsize=figsize)\ncl = plt.scatter(d[high_p,0], d[high_p,1], c=d[high_p,2]-d[high_p,2].mean(), s=30, cmap='coolwarm', vmin=-40, vmax=40)\ncbar = plt.colorbar(cl)\nax = plt.gca()\nax.set_axis_bgcolor('#555555')\n\nplt.xlim(ls.max()+2,ls.min()-2)\nplt.ylim(-50,-10)\nplt.xlabel(r'$l\\,[{\\rm deg}]$')\nplt.ylabel(r'$b\\,[{\\rm deg}]$')\nplt.title(r'$P_{\\rm TriAnd} > 0.8$', y=1.02)",
"Now read in RR Lyrae data, compute prob for each star",
"rrlyr_d = np.genfromtxt(\"/Users/adrian/projects/triand-rrlyrae/data/RRL_ALL.txt\", skiprows=2, dtype=None)\n\n!cat \"/Users/adrian/projects/triand-rrlyrae/data/RRL_ALL.txt\"\n\nrrlyr_d = np.genfromtxt(\"/Users/adrian/projects/triand-rrlyrae/data/RRL_ALL.txt\", skiprows=2)\nrrlyr_vr_err = 10.\n\nMAP = sampler.flatchain[sampler.flatlnprobability.argmax()]\npout = outlier_prob(MAP, rrlyr_d[:,0], rrlyr_d[:,3], rrlyr_vr_err)\n\nplt.figure(figsize=figsize)\ncl = plt.scatter(rrlyr_d[:,0], rrlyr_d[:,1], c=(1-pout), s=30, cmap='RdYlGn', vmin=0, vmax=1)\ncbar = plt.colorbar(cl)\ncbar.set_clim(0,1)\n\n# plt.plot(d[ix,0], d[ix,2], linestyle='none', marker='o', ms=4)\nplt.xlabel(r'$l\\,[{\\rm deg}]$')\nplt.ylabel(r'$b\\,[{\\rm deg}]$')\n\nplt.xlim(ls.max()+2,ls.min()-2)\nplt.ylim(-50,-10)\n\nplt.title(\"RR Lyrae\")\n\nMAP = sampler.flatchain[sampler.flatlnprobability.argmax()]\npout = outlier_prob(MAP, rrlyr_d[:,0], rrlyr_d[:,3], rrlyr_vr_err)\n\nplt.figure(figsize=figsize)\ncl = plt.scatter(rrlyr_d[:,0], rrlyr_d[:,3], c=(1-pout), s=30, cmap='RdYlGn', vmin=0, vmax=1)\ncbar = plt.colorbar(cl)\ncbar.set_clim(0,1)\n\n# plt.plot(d[ix,0], d[ix,2], linestyle='none', marker='o', ms=4)\nplt.xlabel(r'$l\\,[{\\rm deg}]$')\nplt.ylabel(r'$v_r\\,[{\\rm km\\,s}^{-1}]$')\n\nls = np.linspace(d[ix,0].min(), d[ix,0].max(), 100)\n\nbest_m,best_b,best_V,best_f_ol = MAP\nplt.plot(ls, best_m*ls + best_b, color='k', alpha=1, marker=None)\nplt.plot(ls, best_m*ls + best_b + best_V, color='k', alpha=1, marker=None, linestyle='--')\nplt.plot(ls, best_m*ls + best_b - best_V, color='k', alpha=1, marker=None, linestyle='--')\nplt.xlim(ls.max()+2, ls.min()-2)\n\nplt.title(\"RR Lyrae\")"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cliburn/sta-663-2017
|
notebook/Extras_02_Functional_Word_Counting.ipynb
|
mit
|
[
"Bonus Material: Word count\nThe word count problem is the 'Hello world' equivalent of distributed programming. Word count is also the basic process by which text is converted into features for text mining and topic modeling. We show a variety of ways to solve the word count problem in Python to familiarize you with different coding approaches.",
"text = ''''Twas brillig, and the slithy toves\n Did gyre and gimble in the wabe;\n All mimsy were the borogoves,\n And the mome raths outgrabe.\n\n 'Beware the Jabberwock, my son!\n The jaws that bite, the claws that catch!\n Beware the Jubjub bird, and shun\n The frumious Bandersnatch!'\n\n He took his vorpal sword in hand:\n Long time the manxome foe he sought--\n So rested he by the Tumtum tree,\n And stood awhile in thought.\n\n And as in uffish thought he stood,\n The Jabberwock, with eyes of flame,\n Came whiffling through the tulgey wood,\n And burbled as it came!\n\n One, two! One, two! And through and through\n The vorpal blade went snicker-snack!\n He left it dead, and with its head\n He went galumphing back.\n\n 'And hast thou slain the Jabberwock?\n Come to my arms, my beamish boy!\n O frabjous day! Callooh! Callay!'\n He chortled in his joy.\n\n 'Twas brillig, and the slithy toves\n Did gyre and gimble in the wabe;\n All mimsy were the borogoves,\n And the mome raths outgrabe.'''",
"Convert to list of words",
"import string\n\ntable = dict.fromkeys(map(ord, string.punctuation))\nwords = text.translate(table).strip().lower().split()\n\nwords[:10]",
"Slower version without translate",
"for char in string.punctuation:\n text = text.replace(char, '')\nwords2 = text.strip().lower().split()\n\nwords2[:10]",
"Using a regular dictionary",
"c1 = {}\nfor word in words:\n c1[word] = c1.get(word, 0) + 1\n\nsorted(c1.items(), key=lambda x: x[1], reverse=True)[:3]",
"Using a default dictionary",
"from collections import defaultdict\n\nc2 = defaultdict(int)\nfor word in words:\n c2[word] += 1\n\nsorted(c2.items(), key=lambda x: x[1], reverse=True)[:3]",
"Using a Counter",
"from collections import Counter\n\nc3 = Counter(words)\n\nc3.most_common(3)",
"Using a third-party function",
"from toolz import frequencies\n\nc4 = frequencies(words)\n\nsorted(c4.items(), key=lambda x: x[1], reverse=True)[:3]",
"Counting without dictionaries",
"from itertools import groupby\n\nc5 = map(lambda x: (x[0], sum(1 for item in x[1])), \n groupby(sorted(words)))\n\nsorted(c5, key=lambda x: x[1], reverse=True)[:3]",
"Vectorized version",
"import numpy as np\n\nvalues, counts = np.unique(words, return_counts=True)\n\nc6 = dict(zip(values, counts))\n\nsorted(c6.items(), key=lambda x: x[1], reverse=True)[:3]"
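One more purely functional variant using only the standard library: fold the word list into a dict with `functools.reduce`. (The short sample list here stands in for the Jabberwocky `words` list built above.)

```python
from functools import reduce

words = "the quick brown the lazy the".split()  # stand-in sample

def add_word(counts, word):
    # Accumulator step: bump the count for one word and return the dict.
    counts[word] = counts.get(word, 0) + 1
    return counts

c7 = reduce(add_word, words, {})

print(sorted(c7.items(), key=lambda x: x[1], reverse=True)[:1])  # → [('the', 3)]
```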
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/empirical_calibration
|
notebooks/causal_inference_kang_schafer.ipynb
|
apache-2.0
|
[
"#@title Copyright 2019 The Empirical Calibration Authors.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ============================================================================",
"Causal Inference of Kang-Schafer simulation.\n<table align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/google/empirical_calibration/blob/master/notebooks/causal_inference_kang_schafer.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/google/empirical_calibration/blob/master/notebooks/causal_inference_kang_schafer.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\n\nCausal Inference of Kang-Schafer simulation.\n\nImports\nCorrectly Specified Model\nMisspecified Model\nBenchmark Execution Time\n\n\nWe illustrate empirical calibration to estimate the average treatment effect on the treated (ATT) on Kang-Schafer simulation under both correctly specified and misspecified models, and benchmark the execution time. For details of simulation setup, please refer to kang_schafer_population_mean.ipynb.\nImports",
"from matplotlib import pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport patsy\nimport seaborn as sns\nimport timeit\n\n# install and import ec\n!pip install -q git+https://github.com/google/empirical_calibration\nimport empirical_calibration as ec\n\nsns.set_style('whitegrid')\n%config InlineBackend.figure_format='retina'\n",
"Correctly Specified Model\nWe run the simulation 1000 times under correctly specified logistic propensity score and linear outcome regression. For each simulation we estimate the average treatment effect on the treated (ATT) using empirical calibration.",
"def estimate_att(formula):\n simulation = ec.data.kang_schafer.Simulation(size=1000)\n\n t = simulation.treatment\n y = simulation.outcome\n\n df = pd.DataFrame(\n np.column_stack(\n [simulation.covariates, simulation.transformed_covariates]))\n df.columns = [\"z1\", \"z2\", \"z3\", \"z4\", \"x1\", \"x2\", \"x3\", \"x4\"]\n x = patsy.dmatrix(formula, df, return_type=\"dataframe\").values\n\n\n weights = ec.maybe_exact_calibrate(covariates=x[t == 0],\n target_covariates=x[t == 1])[0]\n\n return (np.mean(y[t == 1]) - np.mean(y[t == 0]),\n np.mean(y[t == 1]) - np.sum(y[t == 0] * weights))\n\n\ndef show_estimates(estimates, col='weighted'):\n ax = estimates[col].hist(bins=20, alpha=0.8, edgecolor='none')\n plt.axvline(estimates[col].mean(), linestyle='dashed', color='red')\n print('bias of {} is {}'.format(col, estimates[col].mean()))\n print('rmse of {} is {}'.format(col, np.sqrt(np.mean((estimates[col] - 0.) ** 2))))\n\nestimates = pd.DataFrame(\n [estimate_att(\"-1 + z1 + z2 + z3 + z4\") for i in xrange(1000)])\nestimates.columns = ['raw', 'weighted']",
"The mean of the 1000 ATT estimates after weight correction is very close to the true zero ATT.",
"show_estimates(estimates,'raw')\n\nshow_estimates(estimates,'weighted')",
"Misspecified Model\nIf the transformed covariates are observed in place of the true covariates, both the propensity score model and outcome regression model become misspecified. We run 1000 simulations and for each simulation estimate the ATT by balancing the transformed covariates. The causal estimate is no longer unbiased.",
"estimates_miss = pd.DataFrame([estimate_att(\"-1 + x1 + x2 + x3 + x4\") for i in xrange(1000)])\nestimates_miss.columns = ['raw', 'weighted']\n\nshow_estimates(estimates_miss)",
"One reasonable strategy is to expand the set of balancing covariates and hope it will make the model less \"misspecified\". If we additional balance the two-way interactions and the log transformation, the bias indeed reduces.",
"formula = (\"-1 + (x1 + x2 + x3 + x4)**2 + I(np.log(x1)) + I(np.log(x2)) + \"\n \"I(np.log(x3)) + I(np.log(x4))\")\n\nestimates_expanded = pd.DataFrame([estimate_att(formula) for i in xrange(1000)])\nestimates_expanded.columns = ['raw', 'weighted']\n\nshow_estimates(estimates_expanded)",
"If the model was misspecified in the sense that more covariates are included than necessary, the causal estimate remains unbiased.",
"formula = \"-1 + z1 + z2 + z3 + z4 + x1 + x2 + x3 + x4\"\nestimates_redundant = pd.DataFrame([estimate_att(formula) for i in range(1000)])\nestimates_redundant.columns = ['raw', 'weighted']\n\nshow_estimates(estimates_redundant)",
"Benchmark Execution Time\nThe execution time is generally linear with respect to the sample size. With 1 million control units, it takes around 1 second to find the weights.",
"np.random.seed(123)\nsimulation = ec.data.kang_schafer.Simulation(size=2000)\nx1 = simulation.covariates[simulation.treatment == 1]\nx0 = simulation.covariates[simulation.treatment == 0]\npd.Series(timeit.repeat(\n 'ec.maybe_exact_calibrate(x0, x1)',\n setup='from __main__ import x1, x0, ec',\n repeat=100,\n number=1)).describe()\n\nnp.random.seed(123)\nsimulation = ec.data.kang_schafer.Simulation(size=20000)\nx1 = simulation.covariates[simulation.treatment == 1]\nx0 = simulation.covariates[simulation.treatment == 0]\npd.Series(timeit.repeat(\n 'ec.maybe_exact_calibrate(x0, x1)',\n setup='from __main__ import x1, x0, ec',\n repeat=100,\n number=1)).describe()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nyoungb2/CLdb
|
doc/examples/Methanosarcina/Setup.ipynb
|
gpl-2.0
|
[
"This notebook describes the setup of CLdb with a set of Methanosarcina genomes.\n\nDataset Notes\n\nThe CRISPR systems were classified according to Vestergaard G, Garrett RA, Shah SA. (2014). CRISPR adaptive immune systems of Archaea. RNA Biol 11: 156–167\n\n\nGeneral Notes\n\nThis notebook assumed that you have CLdb in your PATH",
"# path to raw files\n## CHANGE THIS!\nrawFileDir = \"~/perl/projects/CLdb/data/Methanosarcina/\"\n# directory where the CLdb database will be created\n## CHANGE THIS!\nworkDir = \"~/t/CLdb_Methanosarcina/\"\n\n# viewing file links\nimport os\nimport zipfile\nimport csv\nfrom IPython.display import FileLinks\n# pretty viewing of tables\n## get from: http://epmoyer.github.io/ipy_table/\nfrom ipy_table import *\n\nrawFileDir = os.path.expanduser(rawFileDir)\nworkDir = os.path.expanduser(workDir)",
"The required files are in '../ecoli_raw/':\n\na loci table\narray files\ngenome nucleotide sequences\ngenbank (preferred) or fasta format\n\nLet's look at the provided files for this example:",
"FileLinks(rawFileDir)",
"Checking that CLdb is installed in PATH",
"!CLdb -h",
"Setting up the CLdb directory",
"# this makes the working directory\nif not os.path.isdir(workDir):\n os.makedirs(workDir)\n\n# unarchiving files in the raw folder over to the newly made working folder\nfiles = ['array.zip','loci.zip', 'accessions.txt.zip']\nfiles = [os.path.join(rawFileDir, x) for x in files]\nfor f in files:\n if not os.path.isfile(f):\n raise IOError, 'Cannot find file: {}'.format(f)\n else:\n zip = zipfile.ZipFile(f)\n zip.extractall(path=workDir) \n\nprint 'unzipped raw files:' \nFileLinks(workDir) ",
"Downloading the genome genbank files. Using the 'GIs.txt' file\n\nGIs.txt is just a list of GIs and taxon names.",
"# making genbank directory\ngenbankDir = os.path.join(workDir, 'genbank')\nif not os.path.isdir(genbankDir):\n os.makedirs(genbankDir) \n\n# downloading genomes\n!cd $genbankDir; \\\n CLdb -- accession-GI2fastaGenome -format genbank -fork 9 < ../accessions.txt\n \n# checking files\n!cd $genbankDir; \\\n ls -thlc *.gbk",
"Creating/loading CLdb of E. coli CRISPR data",
"!CLdb -- makeDB -h",
"Making CLdb sqlite file",
"!cd $workDir; \\\n CLdb -- makeDB -r -drop\n \nCLdbFile = os.path.join(workDir, 'CLdb.sqlite')\nprint 'CLdb file location: {}'.format(CLdbFile)",
"Setting up CLdb config\n\nThis way, the CLdb script will know where the CLdb database is located.\nOtherwise, you would have to keep telling the CLdb script where the database is.",
"s = 'DATABASE = ' + CLdbFile\nconfigFile = os.path.join(os.path.expanduser('~'), '.CLdb')\n\nwith open(configFile, 'wb') as outFH:\n outFH.write(s)\n \nprint 'Config file written: {}'.format(configFile)\n\n# checking that the config is set\n!CLdb --config-params",
"Loading loci\n\nThe next step is loading the loci table.\nThis table contains the user-provided info on each CRISPR-CAS system in the genomes.\nLet's look at the table before loading it in CLdb\n\nChecking out the CRISPR loci table",
"lociFile = os.path.join(workDir, 'loci', 'loci.txt')\n\n# reading in file\ntbl = []\nwith open(lociFile, 'rb') as f:\n reader = csv.reader(f, delimiter='\\t')\n for row in reader:\n tbl.append(row)\n\n# making table\nmake_table(tbl)\napply_theme('basic')",
"Notes on the loci table:\n* As you can see, not all of the fields have values. Some are not required (e.g., 'fasta_file').\n* You will get an error if you try to load a table with missing values in required fields.\n* For a list of required columns, see the documentation for CLdb -- loadLoci -h.\nLoading loci info into database",
"!CLdb -- loadLoci -h\n\n!CLdb -- loadLoci < $lociFile",
"Notes on loading\n\nA lot is going on here:\nVarious checks on the input files\nExtracting the genome fasta sequence from each genbank file \nThe genome fasta is required\n\n\nLoading of the loci information into the sqlite database\n\nNotes on the command\n\nWhy didn't I use the 'required' -database flag for CLdb -- loadLoci???\nI didn't have to use the -database flag because it is provided via the .CLdb config file that was previously created.",
"# This is just a quick summary of the database \n## It should show 10 loci for the 'loci' rows\n!CLdb -- summary",
"The summary doesn't show anything for spacers, DRs, genes or leaders!\nThat's because we haven't loaded that info yet...\nLoading CRISPR arrays\n\nThe next step is to load the CRISPR array tables.\nThese are tables in 'CRISPRFinder format' that have CRISPR array info.\nLet's take a look at one of the array files before loading them all.",
"# an example array file (obtained from CRISPRFinder)\narrayFile = os.path.join(workDir, 'array', 'Methanosarcina_acetivorans_C2A_1.txt')\n!head $arrayFile",
"Note: the array file consists of 4 columns:\n\nspacer start\nspacer sequence\ndirect-repeat sequence\ndirect-repeat stop\n\nAll extra columns ignored!",
"# loading CRISPR array info\n!CLdb -- loadArrays \n\n# This is just a quick summary of the database \n!CLdb -- summary",
"Note: The output should show 75 spacer & 85 DR entries in the database\nLoading CAS genes\n\nTechnically, all coding seuqences in the region specified in the loci table (CAS_start, CAS_end) will be loaded.\nThis requires 2 subcommands:\nThe 1st gets the gene info\nThe 2nd loads the info into CLdb",
"geneDir = os.path.join(workDir, 'genes')\nif not os.path.isdir(geneDir):\n os.makedirs(geneDir)\n\n!cd $geneDir; \\\n CLdb -- getGenesInLoci 2> CAS.log > CAS.txt\n \n# checking output \n!cd $geneDir; \\\n head -n 5 CAS.log; \\\n echo -----------; \\\n tail -n 5 CAS.log; \\\n echo -----------; \\\n head -n 5 CAS.txt\n\n# loading gene table into the database\n!cd $geneDir; \\\n CLdb -- loadGenes < CAS.txt ",
"Setting array sense strand\n\nThe strand that is transcribed needs to be defined in order to have the correct sequence for downstream analyses (e.g., blasting spacers and getting PAM regions)\nThe sense (reading) strand is defined by (order of precedence):\nThe leader region (if defined; in this case, no).\nArray_start,Array_end in the loci table\nThe genome negative strand will be used if array_start > array_end",
"!CLdb -- setSenseStrand ",
"Spacer and DR clustering\n\nClustering of spacer and/or DR sequences accomplishes:\nA method of comparing within and between CRISPRs\nA reducing redundancy for spacer and DR blasting",
"!CLdb -- clusterArrayElements -s -r",
"Database summary",
"# summary\n!cd $workDir; \\\n CLdb -- summary -name -subtype > summary.txt\n\n# checking output\n!cd $workDir; \\\n cat summary.txt",
"Next Steps\n\narrayBlast\nBlast spacers (& DRs), get protospacers, PAM regions, mismatches to the protospacer & SEED sequence\nTODO: spacers_shared\nSpacer sequences shared among CRISPSRs\nTODO: DR_consensus\nConsensus sequences of direct repeats in each CRISPR\nTODO: loci_plots\nPlots of CRISPR arrays and CAS genes"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jeffcarter-github/MachineLearningLibrary
|
MachineLearningLibrary/Cluster/kmeans_example.ipynb
|
mit
|
[
"This notebook is designed for the exploration of the K-Means algorithm...\n1. Arbitrary data sets can be created...\n2. K-Means algo can be run with different intializations ('forgy', 'random', k_means++)...",
"from __future__ import print_function, division\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib notebook\n\nfrom KMeans import KMeans",
"Create Data...",
"np.random.seed(3)\n\n# controls distance between clusters...\nd_scalar = 2.0\n# control size of clusters...\ns_scalar = 0.5\n# number of dimensions...\ndimensions = 4\n# data points per cluster...\nn_data = 500\n# actual number of clusters...\nn_clusters = 5\n\n# create data offsets\nc = [d_scalar * np.random.randn(1, dimensions) for i in range(n_clusters)]\n# create scaled data with offsets\nx = [s_scalar * np.random.randn(n_data, dimensions) - c[i] for i in range(n_clusters)]\n\nX = np.concatenate(x, axis=0)\n\nplt.figure()\nplt.subplot(221)\nplt.title('2D Cluster Slice')\nfor i in range(n_clusters):\n plt.scatter(x[i][:,0], x[i][:,1], marker='.', alpha=0.25)\nplt.xlabel('X_0')\nplt.ylabel('X_1')\nplt.subplot(222)\nplt.title('2D Cluster Slice')\nfor i in range(n_clusters):\n plt.scatter(x[i][:,0], x[i][:,2], marker='.', alpha=0.25)\nplt.xlabel('X_0')\nplt.ylabel('X_2')\nplt.subplot(223)\nfor i in range(n_clusters):\n plt.scatter(x[i][:,0], x[i][:,3], marker='.', alpha=0.25)\nplt.xlabel('X_0')\nplt.ylabel('X_3')\nplt.subplot(224)\nfor i in range(n_clusters):\n plt.scatter(x[i][:,1], x[i][:,2], marker='.', alpha=0.25)\nplt.xlabel('X_1')\nplt.ylabel('X_2')\nplt.tight_layout()",
"Cluster via KMeans",
"trial_clusters = [2, 3, 4, 6, 8, 10, 15, 20]\ninertia_lst = []\n\nfor i in trial_clusters:\n kmeans = KMeans(n_clusters=i, init='kmeans++', max_iter=100)\n kmeans.fit(X)\n inertia_lst.append(kmeans.inertia)\n\nplt.figure()\nplt.plot(trial_clusters, inertia_lst, 'o--')\nplt.xlim(min(trial_clusters) - 1, max(trial_clusters) + 1)\nplt.ylabel('inertia')\nplt.xlabel('n_clusters')",
"Depending on the data set and the degree of algorithm convergence, the 'eblow' should be visable. The 'eblow' is a discontinuity of the second derivative of the inertia as a function of clusters... This 'elbow' is our best guess as the correct estimation of the actual number of clusters..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cochoa0x1/integer-programming-with-python
|
01-introduction/Linear Programming.ipynb
|
mit
|
[
"Introduction\nWhen life was easy\nAt some point in my calculus education I developed a simple rule, when in doubt set the derivative equal to zero and solve for x. You might recall doing this, and the reason for doing it is because for a smooth function its local maximum and minimum are found at places where the derivate is zero. Imagine if you have curve shaped like a hill, now if you go up the hill and at some point go over the top (the max), if you keep going you will start traveling downward. If you went from increasing to decreasing, the calculus argument was that at some point your rate of increase was zero and that would be at the top. Largely this works very well. It leads nicely to many optimization rules, but it breaks down a little bit when we don't have a curve. In particular it breaks down when our domain is the set of integers.\nEnter the real world, everything is a model\nOften times the problems are stated as find the \"best\" route, or find the \"best\" fitting function. For us to solve these problems, we need to model what \"best\" means. We need to mathematically describe a quantity to either maximize or minimize. For example we might seek select the route that minimizes the total distance traveled by a salesman (traveling salesman). Or if we were trying to fit a curve to some data, we might minimize the error between the predicted curve and the data (regression). We might try to select paths in a distribution network that maximize flow (network flow problems) etc. \nWhen we talk about programs, we mean a set of variables, a function of those variables to maximize or minimize, and some constraints that those variables must satisfy.\nformally:\n$$minimize\\: f(x)\\subject\\, to\\: g(x) < 0 $$\nWe will see that the form of this optimization metric $f(x)$ plays a huge role in the difficulty. 
The first best case is when $f(x)$ and the constraints $g(x)$ are linear.\nLinear Programs\nA linear program is one where the metric or objective to minimize is linear and the constraints are linear, i.e.:\n$$f(x) = a_{0}x_{0} + a_{1}x_{1}+ \\dots + a_{n}x_{n}$$\ncontinuous vs integer (and mixed integer!)\nIf all the variables are continuous life is good. If however the variables must take on integer solutions, for example, 1,2,3... then life can be very hectic. Problems with integer only solutions are integer programming problems and they can be very difficult to solve if a solution exists at all! When you have a mix of variable types, you have a mixed integer problem. While many algorithms exist to solve all of these types, the most common for continuous programs is the simplex algorithm and for mixed integer the insanely clever branch cut and bound method works very well. We will talk about these later.\nHow do you know if the program is linear? just make sure the expression and the constraints are a combination of constants multiplied by some variables. If any of variables are multiplied or divided by another, you are in trouble and you have a harder problem. It is also important that the variables be continuous. If for example Most interesting problems are not linear, however there are a number approximations and tricks that can turn non linear programs into linear ones. We will explore this later. For now, lets solve our first program using PuLP.",
"#load all the things! \nfrom pulp import *",
"lets try a small problem we can easily solve by hand\n$$minimize\\: f(x,y,z)=5x+10y+6z \\\nsubject\\, to\\\nx+y+z \\geq 20 \\\n0\\leq x,y,z \\leq 10 \n$$\nIn school we may have learned how to solve these types or problems by writing them in canonical form and throwing some linear algebra at them. PuLP is a library that removes this need, we can code our problem almost exactly as stated above in PuLP and it will do the hard work for us. What PuLP actually does is format the problem into a standard language that is used by many numerical solvers. \n1. Setup the problem",
"prob = LpProblem(\"Hello-Mathematical-Programming-World!\",LpMinimize)",
"2. Setup the variables:\nfor now we will make them manually, but there are convenience methods for when you need to make millions at a time",
"x = LpVariable('x',lowBound=0, upBound=10, cat='Continuous')\ny = LpVariable('y',lowBound=0, upBound=10, cat='Continuous')\nz = LpVariable('z',lowBound=0, upBound=10, cat='Continuous')",
"3. Setup the objective",
"objective = 5*x+10*y+6*z",
"what does this create?",
"print(type(objective))",
"It is an LpAffineExpression. You can actually print LpAffineExpressions to see what you have programmed. Be careful with this on larger problems",
"print(objective)",
"4. Setup the constraints",
"constraint = x + y + z >= 20",
"5. stuff the objective and the constraint into the problem\nTo add constraints and objectives to the problem, we literally just add them to it",
"#add the objective\nprob+= objective\n\n#add the constraints\nprob+=constraint",
"like the LpAffineExpression class, we can print the problem to see what PuLP has generated. This is very useful for small problems, but can print thousands of lines for large problems. Its always a good idea to start small.",
"print(prob)",
"6. Solve it!\nPulp comes packaged with an okay-ish solver. The really fast solvers like cplex and gurobi are either not free or not free for non academic use. I personally like GLPK which is the GNU linear programming solver, except it is for *nix platforms.",
"%time prob.solve()\nprint(LpStatus[prob.status])",
"7. Get the results",
"#get a single variables value\nprint(x.varValue)\n\n#or get all the variables\nfor v in prob.variables():\n print(v, v.varValue)",
"In this example, the optimal objective is {{x.varValue}}\nand the variables that give us that answer are:\n{{[q.varValue for q in prob.variables()]}}"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mitdbg/modeldb
|
client/workflows/demos/census-end-to-end.ipynb
|
mit
|
[
"Logistic Regression with Grid Search (scikit-learn)\n<a href=\"https://colab.research.google.com/github/VertaAI/modeldb/blob/master/client/workflows/demos/census-end-to-end.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"# restart your notebook if prompted on Colab\ntry:\n import verta\nexcept ImportError:\n !pip install verta",
"This example features:\n- scikit-learn's LinearRegression model\n- verta's Python client logging grid search results\n- verta's Python client retrieving the best run from the grid search to calculate full training accuracy\n- predictions against a deployed model",
"HOST = \"app.verta.ai\"\n\nPROJECT_NAME = \"Census Income Classification\"\nEXPERIMENT_NAME = \"Logistic Regression\"\n\n# import os\n# os.environ['VERTA_EMAIL'] = \n# os.environ['VERTA_DEV_KEY'] = ",
"Imports",
"from __future__ import print_function\n\nimport warnings\nfrom sklearn.exceptions import ConvergenceWarning\nwarnings.filterwarnings(\"ignore\", category=ConvergenceWarning)\nwarnings.filterwarnings(\"ignore\", category=FutureWarning)\n\nimport itertools\nimport os\nimport time\n\nimport six\n\nimport numpy as np\nimport pandas as pd\n\nimport sklearn\nfrom sklearn import model_selection\nfrom sklearn import linear_model\nfrom sklearn import metrics\n\ntry:\n import wget\nexcept ImportError:\n !pip install wget # you may need pip3\n import wget",
"Log Workflow\nThis section demonstrates logging model metadata and training artifacts to ModelDB.\nPrepare Data",
"train_data_url = \"http://s3.amazonaws.com/verta-starter/census-train.csv\"\ntrain_data_filename = wget.detect_filename(train_data_url)\nif not os.path.isfile(train_data_filename):\n wget.download(train_data_url)\n\ntest_data_url = \"http://s3.amazonaws.com/verta-starter/census-test.csv\"\ntest_data_filename = wget.detect_filename(test_data_url)\nif not os.path.isfile(test_data_filename):\n wget.download(test_data_url)\n\ndf_train = pd.read_csv(train_data_filename)\nX_train = df_train.iloc[:,:-1]\ny_train = df_train.iloc[:, -1]\n\ndf_train.head()",
"Prepare Hyperparameters",
"hyperparam_candidates = {\n 'C': [1e-6, 1e-4],\n 'solver': ['lbfgs'],\n 'max_iter': [15, 28],\n}\nhyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))\n for values\n in itertools.product(*hyperparam_candidates.values())]",
"Instantiate Client",
"from verta import Client\nfrom verta.utils import ModelAPI\n\nclient = Client(HOST)\nproj = client.set_project(PROJECT_NAME)\nexpt = client.set_experiment(EXPERIMENT_NAME)",
"Train Models",
"def run_experiment(hyperparams):\n # create object to track experiment run\n run = client.set_experiment_run()\n \n # create validation split\n (X_val_train, X_val_test,\n y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,\n test_size=0.2,\n shuffle=True)\n\n # log hyperparameters\n run.log_hyperparameters(hyperparams)\n print(hyperparams, end=' ')\n \n # create and train model\n model = linear_model.LogisticRegression(**hyperparams)\n model.fit(X_train, y_train)\n \n # calculate and log validation accuracy\n val_acc = model.score(X_val_test, y_val_test)\n run.log_metric(\"val_acc\", val_acc)\n print(\"Validation accuracy: {:.4f}\".format(val_acc))\n \n # create deployment artifacts\n model_api = ModelAPI(X_train, model.predict(X_train))\n requirements = [\"scikit-learn\"]\n \n # save and log model\n run.log_model(model, model_api=model_api)\n run.log_requirements(requirements)\n \n # log Git information as code version\n run.log_code()\n \n# NOTE: run_experiment() could also be defined in a module, and executed in parallel\nfor hyperparams in hyperparam_sets:\n run_experiment(hyperparams)",
"Revisit Workflow\nThis section demonstrates querying and retrieving runs via the Client.\nRetrieve Best Run",
"best_run = expt.expt_runs.sort(\"metrics.val_acc\", descending=True)[0]\nprint(\"Validation Accuracy: {:.4f}\".format(best_run.get_metric(\"val_acc\")))\n\nbest_hyperparams = best_run.get_hyperparameters()\nprint(\"Hyperparameters: {}\".format(best_hyperparams))",
"Train on Full Dataset",
"model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)\nmodel.fit(X_train, y_train)",
"Calculate Accuracy on Full Training Set",
"train_acc = model.score(X_train, y_train)\nprint(\"Training accuracy: {:.4f}\".format(train_acc))",
"Deployment and Live Predictions\nThis section demonstrates model deployment and predictions, if supported by your version of ModelDB.",
"model_id = 'YOUR_MODEL_ID'\n\nrun = client.set_experiment_run(id=model_id)",
"Prepare \"Live\" Data",
"df_test = pd.read_csv(test_data_filename)\nX_test = df_test.iloc[:,:-1]",
"Deploy Model",
"run.deploy(wait=True)\n\nrun",
"Query Deployed Model",
"deployed_model = run.get_deployed_model()\nfor x in itertools.cycle(X_test.values.tolist()):\n print(deployed_model.predict([x]))\n time.sleep(.5)",
""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mtasende/Machine-Learning-Nanodegree-Capstone
|
notebooks/dev/n02_separating_the_test_set.ipynb
|
mit
|
[
"On this notebook the test and training sets will be defined.",
"# Basic imports\nimport os\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport datetime as dt\nimport scipy.optimize as spo\nimport sys\n\n%matplotlib inline\n\n%pylab inline\npylab.rcParams['figure.figsize'] = (20.0, 10.0)\n\n%load_ext autoreload\n%autoreload 2\n\nsys.path.append('../')",
"Let's test the scikit learn example for TimeSeriesSplit (with some modifications)",
"from sklearn.model_selection import TimeSeriesSplit\nnum_samples = 30\ndims = 2\n\nX = np.random.random((num_samples,dims))\ny = np.array(range(num_samples))\ntscv = TimeSeriesSplit(n_splits=3)\nprint(tscv) \nTimeSeriesSplit(n_splits=3)\nfor train_index, test_index in tscv.split(X):\n print(\"TRAIN_indexes:\", train_index, \"TEST_indexes:\", test_index)\n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = y[train_index], y[test_index]",
"It may be useful for validation purposes. The test set will be separated before, anyway. The criterion to follow is to always keep causality.\nLet's get the data and preserve one part as the test set.\nNote: The way the test set will be used, is still not defined. Also, the definition of X and y may depend on the length of the base time interval used for training. But, in any case, it is a good practise to separate a fraction of the data for test, that will be untouched regardless of all those decisions.",
"data_df = pd.read_pickle('../../data/data_df.pkl')\nprint(data_df.shape)\ndata_df.head(10)",
"I will save about two years worth of data for the test set (it wouldn't be correct to save a fixed fraction of the total set because the size of the \"optimal\" training set is still to be defined; I may end up using much less than the total dataset).",
"num_test_samples = 252 * 2\n\ndata_train_val_df, data_test_df = data_df.unstack().iloc[:-num_test_samples], data_df.unstack().iloc[-num_test_samples:] \n\ndef show_df_basic(df):\n print(df.shape)\n print('Starting value: %s\\nEnding value: %s' % (df.index.get_level_values(0)[0], df.index.get_level_values(0)[-1]))\n print(df.head())\n\nshow_df_basic(data_train_val_df)\n\nshow_df_basic(data_test_df)",
"I could select the Close values, for example, like below...",
"data_test_df.loc[slice(None),(slice(None),'Close')].head()",
"Or like this...",
"data_test_df.xs('Close', level=1, axis=1).head()",
"But I think it will be more clear if I swap the levels in the columns",
"data_train_val_df = data_train_val_df.swaplevel(0, 1, axis=1).stack().unstack()\nshow_df_basic(data_train_val_df)\ndata_test_df = data_test_df.swaplevel(0, 1, axis=1).stack().unstack()\nshow_df_basic(data_test_df)",
"Now it's very easy to select one of the features:",
"data_train_val_df['Close']",
"Let's pickle the data",
"data_train_val_df.to_pickle('../../data/data_train_val_df.pkl')\ndata_test_df.to_pickle('../../data/data_test_df.pkl')",
"No separate validation set will be needed as I will use \"time\" cross-validation for that."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
iurilarosa/thesis
|
codici/Archiviati/numpy/.ipynb_checkpoints/Hough Numpy-checkpoint.ipynb
|
gpl-3.0
|
[
"Numpy",
"import scipy.io\nimport pandas\nimport numpy\nimport os\nfrom matplotlib import pyplot\nfrom scipy import sparse\nimport multiprocessing\n%matplotlib inline\n\n\n\n#carico file dati\npercorsoFile = \"/home/protoss/Documenti/TESI/DATI/peakmap1.mat.mat\"\n\n#print(picchi.shape)\n#picchi[0]\n#nb: picchi ha 0-tempi\n# 1-frequenze\n# 4-pesi\n\n#ora popolo il dataframe\ntabella = pandas.DataFrame(scipy.io.loadmat(percorsoFile)['PEAKS'])\ntabella.drop(tabella.columns[[2, 3]], axis = 1, inplace=True)\ntabella.columns = [\"tempi\", \"frequenze\",\"pesi\"]\n\n#fascia di sicurezza\nsecurbelt = 4000\n\nheaderFreq= scipy.io.loadmat(percorsoFile)['hm_job'][0,0]['fr'][0]\nheaderSpindown = scipy.io.loadmat(percorsoFile)['hm_job'][0,0]['sd'][0]\nepoca = scipy.io.loadmat(percorsoFile)['basic_info'][0,0]['epoch'][0,0]\n\n#nb: headerFreq ha 0- freq minima,\n# 1- step frequenza, \n# 2- enhancement in risoluzone freq, \n# 3- freq massima, \n#headerSpindown ha 0- spin down iniziale di pulsar\n# 1- step spindown\n# 2- numero di step di spindown\n#Definisco relative variabili per comodità e chiarezza del codice\n\n#frequenze\nminFreq = headerFreq[0]\nmaxFreq = headerFreq[3]\nenhancement = headerFreq[2]\nstepFrequenza = headerFreq[1]\nstepFreqRaffinato = stepFrequenza/enhancement\nprint(minFreq,maxFreq, enhancement, stepFrequenza, stepFreqRaffinato)\n\nfreqIniz = minFreq- stepFrequenza/2 - stepFreqRaffinato\nfreqFin = maxFreq + stepFrequenza/2 + stepFreqRaffinato\nnstepFrequenze = numpy.ceil((freqFin-freqIniz)/stepFreqRaffinato)+securbelt\n\n#spindown\nspindownIniz = headerSpindown[0]\nstepSpindown = headerSpindown[1]\nnstepSpindown = headerSpindown[2].astype(int)\n\n\n# riarrangio gli array in modo che abbia i dati \n# nel formato che voglio io\nfrequenze = tabella['frequenze'].values\nfrequenze = ((frequenze-freqIniz)/stepFreqRaffinato)-round(enhancement/2+0.001)\n\ntempi = tabella['tempi'].values\nprint(numpy.amax(tempi)-numpy.amin(tempi))\ntempi = tempi-epoca\ntempi = 
((tempi)*3600*24/stepFreqRaffinato)\n#tempi = tempi - numpy.amin(tempi)+1\n#tempi = tempi.astype(int)\n\npesi = tabella['pesi'].values\n\n#%reset_selective tabella\n\n#nstepSpindown = 200\nspindowns = numpy.arange(0, nstepSpindown)\nspindowns = numpy.multiply(spindowns,stepSpindown)\nspindowns = numpy.add(spindowns, spindownIniz)\n# so now I have the three arrays of the three quantities\n\n\nnRows = nstepSpindown\nnColumns = nstepFrequenze.astype(int)\nfakeRow = numpy.zeros(frequenze.size)\n\ndef itermatrix(stepIesimo):\n sdPerTempo = spindowns[stepIesimo]*tempi\n appoggio = numpy.round(frequenze-sdPerTempo+securbelt/2).astype(int)\n \n valori = numpy.bincount(appoggio,pesi)\n \n missColumns = (nColumns-valori.size)\n zeros = numpy.zeros(missColumns)\n matrix = numpy.concatenate((valori, zeros))\n return matrix\n\npool = multiprocessing.Pool()\n%time imageMapped = list(pool.map(itermatrix, range(nstepSpindown)))\npool.close()\nimageMapped = numpy.array(imageMapped)\nimageMappedNonsum = imageMapped\n\nsemiLarghezza = numpy.round(enhancement/2+0.001).astype(int)\nimageMapped[:,semiLarghezza*2:nColumns]=imageMapped[:,semiLarghezza*2:nColumns]-imageMapped[:,0:nColumns - semiLarghezza*2]\nimageMapped = numpy.cumsum(imageMapped, axis = 1)",
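The core of each `itermatrix` step above is a weighted histogram: `numpy.bincount(appoggio, pesi)` adds each peak's weight to its spindown-corrected frequency bin, and the row is then zero-padded out to `nColumns`. A minimal sketch with toy data (the bin positions and weights below are invented, not the notebook's arrays):

```python
import numpy as np

# Toy peak bin indices and weights (hypothetical values)
positions = np.array([0, 2, 2, 5])
weights = np.array([0.5, 1.0, 0.25, 2.0])
n_columns = 8

# Each peak adds its weight to its bin; bin 2 accumulates 1.0 + 0.25
row = np.bincount(positions, weights=weights)

# Zero-pad the row out to the full number of columns
row = np.concatenate((row, np.zeros(n_columns - row.size)))
print(row)  # [0.5  0.   1.25 0.   0.   2.   0.   0.  ]
```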
"$$ H_{i\\:bin} = \\left[\\nu_{bin}-\\left(i\\Delta \\dot{T} + \\dot{T}0 \\right)t{bin} + 2000\\right],\\; i = 0,...,n;\\; bin= 0,..., nbins$$\n$$ H_{i\\:bin} = \\nu_{bin}-\\dot{T}'i t{bin} + 2000,\\; i = 0,...,n;\\; bin= 0,..., nbins$$",
"%matplotlib inline\npyplot.figure(figsize=(30,7))\na = pyplot.imshow(imageMapped[:,3400:nColumns-1500], aspect = 50)\npyplot.colorbar(shrink = 1 ,aspect = 10)",
"Utile notebook per imshow\nConfronti\nHough dal programma originale in matlab",
"percorsoFile = \"originale/concumsum.mat\"\npercorsoFile2 = \"originale/senzacumsum.mat\"\nimmagineOriginale = scipy.io.loadmat(percorsoFile)['binh_df0']\nimmagineOriginaleNonsum = scipy.io.loadmat(percorsoFile2)['binh_df0']\n\n#percorsoFile = \"debugExamples/concumsumDB.mat\"\n#imgOrigDB = scipy.io.loadmat(percorsoFile)['binh_df0']\n\npyplot.figure(figsize=(30,7))\npyplot.imshow(immagineOriginale[:,3200:nstepFrequenze.astype(int)-1500],\n #cmap='gray',\n aspect=50)\npyplot.colorbar(shrink = 1,aspect = 10)\n#pyplot.colorbar(immagine)\npyplot.show\n\nmiaVSoriginale = immagineOriginale - imageMapped\n#miaVSoriginale = immagineOriginale - imageParalled\n#matlabVSoriginale = immagineOriginale - imgOrigDB\n#pyplot.figure(figsize=(100, 30))\n\n#verificadoppia = miaVSoriginale - matlabVSoriginale\npyplot.imshow(miaVSoriginale[:,3200:nstepFrequenze.astype(int)-1500],aspect=50)\npyplot.colorbar(shrink = 1,aspect = 10)\nprint(numpy.nonzero(miaVSoriginale))\n\nmiaVSoriginaleNonsum = immagineOriginaleNonsum - imageMapped\n#miaVSoriginale = immagineOriginale - imageParalled\n#matlabVSoriginale = immagineOriginale - imgOrigDB\n#pyplot.figure(figsize=(100, 30))\n\n#verificadoppia = miaVSoriginale - matlabVSoriginale\npyplot.imshow(miaVSoriginaleNonsum[:,3200:nstepFrequenze.astype(int)-1500],aspect=50)\npyplot.colorbar(shrink = 1,aspect = 10)\nprint(numpy.nonzero(miaVSoriginaleNonsum))",
"Hough dal mio programma in matlab",
"percorsoFile = \"matlabbo/miaimgconcumsum.mat\"\n#percorsoFile = \"matlabbo/miaimgnoncumsum.mat\"\n\nprint(numpy.shape(immagineMatlabbo))\nimmagineMatlabbo = scipy.io.loadmat(percorsoFile)['hough']\npyplot.figure(figsize=(30, 7))\npyplot.imshow(immagineMatlabbo[:,3200:nstepFrequenze.astype(int)-1500],\n #cmap='gray',\n aspect=50)\npyplot.colorbar(shrink = 1,aspect = 10)\n#pyplot.colorbar(immagine)\npyplot.show\n\n\n# CONFRONTO\nmiaMatvsorigMat = immagineMatlabbo - immagineOriginale\n#miaMatvsorigMat = immagineMatlabbo - immagineOriginaleNonsum\npyplot.figure(figsize=(30, 7))\npyplot.imshow(miaMatvsorigMat[:,3200:nstepFrequenze.astype(int)-1500],aspect=50)\npyplot.colorbar(shrink = 1,aspect = 10)\n#print(numpy.nonzero(verifica))\n",
"Programma semplificato per domande",
"import numpy\nfrom scipy import sparse\nimport multiprocessing\nfrom matplotlib import pyplot\n\n#first i build a matrix of some x positions vs time datas in a sparse format\nmatrix = numpy.random.randint(2, size = 100).astype(float).reshape(10,10)\nx = numpy.nonzero(matrix)[0]\ntimes = numpy.nonzero(matrix)[1]\nweights = numpy.random.rand(x.size)\n\n#then i define an array of y positions\nnStepsY = 5\ny = numpy.arange(1,nStepsY+1)\n\nnRows = nStepsY\nnColumns = 80\nimage = numpy.zeros((nRows, nColumns))\nfakeRow = numpy.zeros(x.size)\n\ndef itermatrix(ithStep):\n yTimed = y[ithStep]*times\n positions = (numpy.round(x-yTimed)+50).astype(int)\n\n matrix = sparse.coo_matrix((weights, (fakeRow, positions))).todense()\n matrix = numpy.ravel(matrix)\n missColumns = (nColumns-matrix.size)\n zeros = numpy.zeros(missColumns)\n matrix = numpy.concatenate((matrix, zeros))\n return matrix\n\nfor i in numpy.arange(nStepsY):\n image[i] = itermatrix(i)\n\n#or, without initialization of image:\n%time imageSparsed = list(map(itermatrix, range(nStepsY)))\nimageSparsed = numpy.array(imageSparsed)\n\npyplot.imshow(imageSparsed, aspect = 10)\npyplot.colorbar(shrink = 0.75,aspect = 10)\n\n#TEST PARALLELIZZAZIOME MAP\n%time imageSparsed = list(map(itermatrix, range(nStepsY)))\n\npool = multiprocessing.Pool()\n%time imageParSparsed = pool.map(itermatrix, range(nStepsY))\npool.close()\nimageParalled = numpy.array(imageParSparsed)\n#PROBLEMA CON PARALLELIZZAZIONE DA CAPIRE!\n\n\n%matplotlib inline\n#pyplot.figure(figsize=(100, 30))\na = pyplot.imshow(imageParSparsed, aspect = 10)\npyplot.colorbar(shrink = 0.5,aspect = 10)\n\n# riarrangio gli array in modo che abbia i dati \n# nel formato che voglio io\n#nstepSpindown = 200\nspindowns = numpy.arange(0, nstepSpindown)\nspindowns = numpy.multiply(spindowns,stepSpindown)\nspindowns = numpy.add(spindowns, spindownIniz)\n# così ho i tre array delle tre grandezze\nprint(spindowns)\n\n\n\n\n#nstepSpindown = 200\nspindowns = numpy.arange(0, 
nstepSpindown)\nspindowns = numpy.multiply(spindowns,stepSpindown)\nspindowns = numpy.add(spindowns, spindownIniz)\n# così ho i tre array delle tre grandezze\n\n\nnRows = nstepSpindown\nnColumns = nstepFrequenze.astype(int)\nfakeRow = numpy.zeros(frequenze.size)\n\ndef itermatrix(stepIesimo):\n sdPerTempo = spindowns[stepIesimo]*tempi\n appoggio = numpy.round(frequenze-sdPerTempo+securbelt/2).astype(int)\n \n matrix = sparse.coo_matrix((pesi, (fakeRow, appoggio))).todense()\n matrix = numpy.ravel(matrix)\n missColumns = (nColumns-matrix.size)\n zeros = numpy.zeros(missColumns)\n matrix = numpy.concatenate((matrix, zeros))\n return matrix\n\n#PROBLEMA CON PARALLELIZZAZIONE DA CAPIRE!\n%time imageMapped = list(map(itermatrix, range(nstepSpindown)))\nimageMapped = numpy.array(imageMapped)\nimageMappedNonsum = imageMapped\n\nsemiLarghezza = numpy.round(enhancement/2+0.001).astype(int)\nimageMapped[:,semiLarghezza*2:nColumns]=imageMapped[:,semiLarghezza*2:nColumns]-imageMapped[:,0:nColumns - semiLarghezza*2]\nimageMapped = numpy.cumsum(imageMapped, axis = 1)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
BrandonSmithJ/tensorflow-double-DQN
|
Double-DQN/tensorflow-deepq/notebooks/.ipynb_checkpoints/karpathy_game-checkpoint.ipynb
|
mit
|
[
"%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nimport numpy as np\nimport tempfile\nimport tensorflow as tf\n\nfrom tf_rl.controller import HumanController, DDQN as DiscreteDeepQ\nfrom tf_rl.simulation import KarpathyGame\nfrom tf_rl import simulate\nfrom tf_rl.models import MLP\n\nfrom __future__ import print_function\n\nLOG_DIR = tempfile.mkdtemp()\nprint(LOG_DIR)\n\ncurrent_settings = {\n 'objects': [\n 'friend',\n 'enemy',\n ],\n 'colors': {\n 'hero': 'yellow',\n 'friend': 'green',\n 'enemy': 'red',\n },\n 'object_reward': {\n 'friend': 0.1,\n 'enemy': -0.1,\n },\n 'hero_bounces_off_walls': False,\n 'world_size': (700,500),\n 'hero_initial_position': [400, 300],\n 'hero_initial_speed': [0, 0],\n \"maximum_speed\": [50, 50],\n \"object_radius\": 10.0,\n \"num_objects\": {\n \"friend\" : 25,\n \"enemy\" : 25,\n },\n \"num_observation_lines\" : 32,\n \"observation_line_length\": 120.,\n \"tolerable_distance_to_wall\": 50,\n \"wall_distance_penalty\": -0.0,\n \"delta_v\": 50\n}\n\n# create the game simulator\ng = KarpathyGame(current_settings)\n\nhuman_control = False\n\nif human_control:\n # WSAD CONTROL (requires extra setup - check out README)\n current_controller = HumanController({b\"w\": 3, b\"d\": 0, b\"s\": 1,b\"a\": 2,}) \nelse:\n # Tensorflow business - it is always good to reset a graph before creating a new controller.\n tf.ops.reset_default_graph()\n session = tf.InteractiveSession()\n\n # This little guy will let us run tensorboard\n # tensorboard --logdir [LOG_DIR]\n journalist = tf.train.SummaryWriter(LOG_DIR)\n\n # Brain maps from observation to Q values for different actions.\n # Here it is a done using a multi layer perceptron with 2 hidden\n # layers\n brain = MLP([g.observation_size,], [200, 200, g.num_actions], \n [tf.tanh, tf.tanh, tf.identity])\n \n # The optimizer to use. 
Here we use RMSProp as recommended\n # by the publication\n optimizer = tf.train.RMSPropOptimizer(learning_rate= 0.001, decay=0.9)\n\n # DiscreteDeepQ object\n current_controller = DiscreteDeepQ(g.observation_size, g.num_actions, brain, optimizer, session,\n discount_rate=0.99, exploration_period=5000, max_experience=10000, \n store_every_nth=4, train_every_nth=4,\n summary_writer=journalist)\n \n session.run(tf.initialize_all_variables())\n session.run(current_controller.target_network_update)\n # graph was not available when journalist was created \n journalist.add_graph(session.graph_def)\n\nFPS = 30\nACTION_EVERY = 3\n \nfast_mode = False\nif fast_mode:\n WAIT, VISUALIZE_EVERY = False, 20\nelse:\n WAIT, VISUALIZE_EVERY = True, 1\n\n \ntry:\n if True:#with tf.device(\"/cpu:0\"):\n simulate(simulation=g,\n controller=current_controller,\n fps=FPS,\n visualize_every=VISUALIZE_EVERY,\n action_every=ACTION_EVERY,\n wait=WAIT,\n disable_training=False,\n simulation_resolution=0.001,\n save_path=None)\nexcept KeyboardInterrupt:\n print(\"Interrupted\")\n\nsession.run(current_controller.target_network_update)\n\ncurrent_controller.q_network.input_layer.Ws[0].eval()\n\ncurrent_controller.target_q_network.input_layer.Ws[0].eval()",
"Average Reward over time",
"g.plot_reward(smoothing=100)",
"Visualizing what the agent is seeing\nStarting with the ray pointing all the way right, we have one row per ray in clockwise order.\nThe numbers for each ray are the following:\n- first three numbers are normalized distances to the closest visible (intersecting with the ray) object. If no object is visible then all of them are $1$. If there's many objects in sight, then only the closest one is visible. The numbers represent distance to friend, enemy and wall in order.\n- the last two numbers represent the speed of moving object (x and y components). Speed of wall is ... zero.\nFinally the last two numbers in the representation correspond to speed of the hero.",
"g.__class__ = KarpathyGame\nnp.set_printoptions(formatter={'float': (lambda x: '%.2f' % (x,))})\nx = g.observe()\nnew_shape = (x[:-2].shape[0]//g.eye_observation_size, g.eye_observation_size)\nprint(x[:-2].reshape(new_shape))\nprint(x[-2:])\ng.to_html()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
librosa/tutorial
|
Librosa tutorial.ipynb
|
cc0-1.0
|
[
"Librosa tutorial\n\nVersion: 0.4.3\nTutorial home: https://github.com/librosa/tutorial\nLibrosa home: http://librosa.github.io/\nUser forum: https://groups.google.com/forum/#!forum/librosa\n\nEnvironments\nWe assume that you have already installed Anaconda.\nIf you don't have an environment, create one by following command:\nbash\nconda create --name YOURNAME scipy jupyter ipython\n(Replace YOURNAME by whatever you want to call the new environment.)\nThen, activate the new environment\nbash\nsource activate YOURNAME\nInstalling librosa\nLibrosa can then be installed by the following [🔗]:\nbash\nconda install -c conda-forge librosa\nNOTE: Windows need to install audio decoding libraries separately. We recommend ffmpeg.\nTest drive\nStart Jupyter:\nbash\njupyter notebook\nand open a new notebook.\nThen, run the following:",
"import librosa\nprint(librosa.__version__)\n\ny, sr = librosa.load(librosa.util.example_audio_file())\nprint(len(y), sr)",
"Documentation!\nLibrosa has extensive documentation with examples.\nWhen in doubt, go to http://librosa.github.io/librosa/\nConventions\n\nAll data are basic numpy types\nAudio buffers are called y\nSampling rate is called sr\nThe last axis is time-like:\n y[1000] is the 1001st sample\n S[:, 100] is the 101st frame of S\nDefaults sr=22050, hop_length=512\n\nRoadmap for today\n\nlibrosa.core\nlibrosa.feature\nlibrosa.display\nlibrosa.beat\nlibrosa.segment\nlibrosa.decompose\n\nlibrosa.core\n\nLow-level audio processes\nUnit conversion\nTime-frequency representations\n\nTo load a signal at its native sampling rate, use sr=None",
"y_orig, sr_orig = librosa.load(librosa.util.example_audio_file(),\n sr=None)\nprint(len(y_orig), sr_orig)",
"Resampling is easy",
"sr = 22050\n\ny = librosa.resample(y_orig, sr_orig, sr)\n\nprint(len(y), sr)",
"But what's that in seconds?",
"print(librosa.samples_to_time(len(y), sr))",
"Spectral representations\nShort-time Fourier transform underlies most analysis.\nlibrosa.stft returns a complex matrix D.\nD[f, t] is the FFT value at frequency f, time (frame) t.",
"D = librosa.stft(y)\nprint(D.shape, D.dtype)",
"Often, we only care about the magnitude.\nD contains both magnitude S and phase $\\phi$.\n$$\nD_{ft} = S_{ft} \\exp\\left(j \\phi_{ft}\\right)\n$$",
"import numpy as np\n\nS, phase = librosa.magphase(D)\nprint(S.dtype, phase.dtype, np.allclose(D, S * phase))",
"Constant-Q transforms\nThe CQT gives a logarithmically spaced frequency basis.\nThis representation is more natural for many analysis tasks.",
"C = librosa.cqt(y, sr=sr)\n\nprint(C.shape, C.dtype)",
"Exercise 0\n\nLoad a different audio file\nCompute its STFT with a different hop length",
"# Exercise 0 solution\n\ny2, sr2 = librosa.load( )\n\nD = librosa.stft(y2, hop_length= )",
"librosa.feature\n\nStandard features:\nlibrosa.feature.melspectrogram\nlibrosa.feature.mfcc\nlibrosa.feature.chroma\nLots more...\n\n\nFeature manipulation:\nlibrosa.feature.stack_memory\nlibrosa.feature.delta\n\n\n\nMost features work either with audio or STFT input",
"melspec = librosa.feature.melspectrogram(y=y, sr=sr)\n\n# Melspec assumes power, not energy as input\nmelspec_stft = librosa.feature.melspectrogram(S=S**2, sr=sr)\n\nprint(np.allclose(melspec, melspec_stft))",
"librosa.display\n\n\nPlotting routines for spectra and waveforms\n\n\nNote: major overhaul coming in 0.5",
"# Displays are built with matplotlib \nimport matplotlib.pyplot as plt\n\n# Let's make plots pretty\nimport matplotlib.style as ms\nms.use('seaborn-muted')\n\n# Render figures interactively in the notebook\n%matplotlib nbagg\n\n# IPython gives us an audio widget for playback\nfrom IPython.display import Audio",
"Waveform display",
"plt.figure()\nlibrosa.display.waveplot(y=y, sr=sr)",
"A basic spectrogram display",
"plt.figure()\nlibrosa.display.specshow(melspec, y_axis='mel', x_axis='time')\nplt.colorbar()",
"Exercise 1\n\n\nPick a feature extractor from the librosa.feature submodule and plot the output with librosa.display.specshow\n\n\nBonus: Customize the plot using either specshow arguments or pyplot functions",
"# Exercise 1 solution\n\nX = librosa.feature.XX()\n\nplt.figure()\n\nlibrosa.display.specshow( )",
"librosa.beat\n\nBeat tracking and tempo estimation\n\nThe beat tracker returns the estimated tempo and beat positions (measured in frames)",
"tempo, beats = librosa.beat.beat_track(y=y, sr=sr)\nprint(tempo)\nprint(beats)",
"Let's sonify it!",
"clicks = librosa.clicks(frames=beats, sr=sr, length=len(y))\n\nAudio(data=y + clicks, rate=sr)",
"Beats can be used to downsample features",
"chroma = librosa.feature.chroma_cqt(y=y, sr=sr)\nchroma_sync = librosa.feature.sync(chroma, beats)\n\nplt.figure(figsize=(6, 3))\nplt.subplot(2, 1, 1)\nlibrosa.display.specshow(chroma, y_axis='chroma')\nplt.ylabel('Full resolution')\nplt.subplot(2, 1, 2)\nlibrosa.display.specshow(chroma_sync, y_axis='chroma')\nplt.ylabel('Beat sync')",
"librosa.segment\n\nSelf-similarity / recurrence\nSegmentation\n\nRecurrence matrices encode self-similarity\nR[i, j] = similarity between frames (i, j)\n\nLibrosa computes recurrence between k-nearest neighbors.",
"R = librosa.segment.recurrence_matrix(chroma_sync)\n\nplt.figure(figsize=(4, 4))\nlibrosa.display.specshow(R)",
"We can include affinity weights for each link as well.",
"R2 = librosa.segment.recurrence_matrix(chroma_sync,\n mode='affinity',\n sym=True)\n\nplt.figure(figsize=(5, 4))\nlibrosa.display.specshow(R2)\nplt.colorbar()",
"Exercise 2\n\nPlot a recurrence matrix using different features\nBonus: Use a custom distance metric",
"# Exercise 2 solution",
"librosa.decompose\n\nhpss: Harmonic-percussive source separation\nnn_filter: Nearest-neighbor filtering, non-local means, Repet-SIM\ndecompose: NMF, PCA and friends\n\nSeparating harmonics from percussives is easy",
"D_harm, D_perc = librosa.decompose.hpss(D)\n\ny_harm = librosa.istft(D_harm)\n\ny_perc = librosa.istft(D_perc)\n\nAudio(data=y_harm, rate=sr)\n\nAudio(data=y_perc, rate=sr)",
"NMF is pretty easy also!",
"# Fit the model\nW, H = librosa.decompose.decompose(S, n_components=16, sort=True)\n\nplt.figure(figsize=(6, 3))\nplt.subplot(1, 2, 1), plt.title('W')\nlibrosa.display.specshow(librosa.logamplitude(W**2), y_axis='log')\nplt.subplot(1, 2, 2), plt.title('H')\nlibrosa.display.specshow(H, x_axis='time')\n\n# Reconstruct the signal using only the first component\nS_rec = W[:, :1].dot(H[:1, :])\n\ny_rec = librosa.istft(S_rec * phase)\n\nAudio(data=y_rec, rate=sr)",
"Exercise 3\n\nCompute a chromagram using only the harmonic component\nBonus: run the beat tracker using only the percussive component\n\nWrapping up\n\n\nThis was just a brief intro, but there's lots more!\n\n\nRead the docs: http://librosa.github.io/librosa/\n\nAnd the example gallery: http://librosa.github.io/librosa_gallery/\nWe'll be sprinting all day. Get involved! https://github.com/librosa/librosa/issues/395"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
NYUDataBootcamp/Projects
|
UG_S16/Yusef-Shaheen-IPO-Underpricing.ipynb
|
mit
|
[
"Abstract\nCompanies seeking public funding partner up with investment banks who bear the responsibility of facilitating the private bidding process. The private company sells all shares to the investment bank who in turn take the outstanding shares public. Because the investment banks take on the risk of the shares, many hypothesize that the IPO price tends to be undervalued so that the bank is not left with excess stock (read: risk).\nIs this myth true? Do IPOs truly tend to be underpriced? If so, is there a pattern that can be identified? Could short term underpricing really just be long-term overvaluation in disguise?\nThese questions will be explored in this report.\nCredits:\n-Yusef '18 (Project)\n-Owen '18 (Help with multiprocessing code)\nLoading Modules:",
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport requests\nimport re\nplt.style.use('ggplot')\nimport matplotlib\n%matplotlib inline",
"Loading Dataset:\nThe data used in this report was originally taken from Google finance.\nThe multiprocessing code used to obtain the original Google finance data can be found here: http://puu.sh/oKMeh/1e7ee7d056.docx\nNote that since the code takes a very long time to run (more than 15,000 calls, necessity of a VPN, etc) the code was ran ahead of time and the data was uploaded to a personal filehosting website. This is what PANDAS will read.",
"# Download the data file from `puu.sh` and save it locally under `file_name`:\nurl = \"http://puu.sh/oBCfW/c006093339.xlsx\" # Script was ran ahead of time and uploaded onto this website. Random sample.\nfile_name = \"./IPO_Expanded_Multiprocessing_d.xlsx\"\n\nreq = requests.get(url)\nfile = open(file_name, 'wb')\nfor chunk in req.iter_content(100000):\n file.write(chunk)\nfile.close()\n\nmy_data = pd.read_excel(file_name,sheetname=\"Nasdaq_IPO_Expanded_Multiproces\")\n\nmy_data.head(3)",
"Cleaning Data",
"df = my_data.copy()\n## These symbols were not available \ndf = df[df.Symbol != \"GAV'U\"]\ndf = df[df.Symbol != \"AGR'A\"]\ndf = df[df.Symbol != \"TAP'A\"]\ndf = df[df.Symbol != \"PED'U\"]\n\ndf.shape\n\ndf[\"First Day Open Price\"] = df[\"First Day Open Price\"].replace(\"-\",np.nan).astype('float')\ndf= df[df[\"First Day Open Price\"]<200]\ndf.shape\n\ndf.Sector.replace(to_replace=\"&\",value=\"\",regex=True,inplace=True) ",
"Calculating New Parameters",
"df[\"Day_Closing\"] = 100 * (df[\"First Day Open Price\"] - df[\"First Day Close Price\"])/(df[\"First Day Open Price\"])\ndf[\"Day30_closing\"] = 100 *(df[\"First Day Open Price\"] - df[\"Thirty Days Later Close Price\"])/(df[\"First Day Open Price\"])\ndf[\"Current_closing\"] = 100 * (df[\"First Day Open Price\"] - df[\"Current Price\"])/(df[\"First Day Open Price\"])",
"a = {}\nb = []\nfor i in df.Symbol:\n try:\n x = Share(i).get_price()\n if x == None:\n b.append(i)\n else:\n a[i] = x\n except:\n print(i)\nData Visualization\nFirst, let's take a look at some general IPO information.",
"plt.figure();\ndf.Symbol.groupby(df[\"IPO Date\"]).count().plot(title = \"Frequency of IPO's Since 1997\",\n figsize=(15,15),color=\"b\")",
"Notice the lack of IPOs following the popping of economic bubbles (dot-com, asset-backed securities). This can be visualized nicely with a chart. Notice where dy/dx approaches 0.",
"df_graph2 = df[\"First Day Close Price\"].groupby(df[\"IPO Date\"]).count()\ndf_graph2 = pd.DataFrame(df_graph2)\ndf_graph2['index1'] = df_graph2.index\n#df_graph2 = df_graph2.reset_index(drop = True)\ndf_graph2.columns = [\"Number\",\"IPO Date\"]\ndf_graph2[\"Number\"].cumsum().plot(title = \"Total IPOs 1997-2016\", figsize = (10,10), color=\"m\")",
"Yikes, is that another plateau coming in 2016? Let's hope not. Anyways, let's take a look at the most common sectors for IPOs. A random sample of around 600 stocks were used.",
"df_graph3 = df.groupby([\"Sector\"]).count()\n\ndf_graph3 = df_graph3.reset_index()\ndf_graph3.index = df_graph3[\"Sector\"]\n\ndf_graph3 = df_graph3[[\"Symbol\"]]\ndf_graph3.columns = [\"Total Number of IPOs\"]\ndf_graph3.plot(kind=\"barh\",title = \"Total Number of IPOs by Sector (Random Sample of 600)\", figsize = (10,10),color=\"c\")",
"Somewhat shockingly, healthcare is the sector that dominates IPOs the most— and by a large margin.\nNow that we have some basic IPO info, let's visualize some of the underpricing.",
"my_colors = 'cbmg'\ndf_graph1 = df[[\"Day_Closing\",\"Day30_closing\",]].groupby(df[\"Sector\"]).mean()\ndf_graph1['index1'] = df_graph1.index\ndf_graph1.reset_index(drop=True)\ndf_graph1.plot(kind = \"bar\",title = \"IPO Underpricing Percent by Period\",\n figsize=(10,10), subplots=False,legend = True,color=my_colors)",
"This bar chart graphs the % of underpricing using the columns generated in the \"Calculating New Paramaters\" section.\nAlmost every sector experiences short-term underpricing. The exceptions for this are Telecommunications Services, which experiences overvaluation in the hyper-short term (first day), and Utilities, which is glaringly overvalued.\nFinally, let's take a look at the long term pricing to see if these sectors are experiencing underpricing or overvaluation.",
"df[[\"First Day Open Price\",\"First Day Close Price\",'Thirty Days Later Close Price',\n 'One Year Later Close Price']].groupby(df[\"Sector\"]).mean().plot(kind = \"bar\",\n legend= True,\n figsize=(15,10),\n title=\"Mean Share Price Per Period, Grouped by Sector\",\n color=my_colors\n )\n\ndf[[\"First Day Open Price\",\"First Day Close Price\",'Thirty Days Later Close Price',\n 'One Year Later Close Price']].groupby(df[\"Market\"]).mean().drop(['American Stock Exchange'], axis=0).plot(kind = \"bar\",\n legend= True,\n figsize=(15,10),\n title=\"Mean Share Price Per Period, Grouped by Market\",\n color=my_colors\n )",
"These two bar chart graph the average share price, grouped first by sector then by market, at First Day open, First Day close, Thirty Days Later close, and One Year Later close.\nAs mentioned earlier, most sectors experience underpricing, except for Utilities which is clearly experiencing overvaluation. However, half of the sectors have One Year Later prices that are significantly less than the First Day Open Price, suggesting that overvaluation of IPOs is a widespread issue.\nPerhaps it is less of a case of investment banks underpricing the IPOs, but rather a case of investment banks selling the companies well and engendering overvaluation.\nFor individual investors, this data can be very useful. In summary, investing in IPOs on opening day is almost always worth it, but some sectors pay off better than others and the Utilities sector should be avoided at all costs. The IPO stocks should not be held longer than thirty days, as the majority of stocks tend to dip below the opening price after only one year. Also, the market where the IPO is announced is important: investors should target IPOs on NASDAQ as the mean share price on that market goes up by nearly 66% on the first day.\nI hope you enjoyed my project and learned more about IPOs!",
"print(\"FIN\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
statsmodels/statsmodels.github.io
|
v0.13.2/examples/notebooks/generated/statespace_tvpvar_mcmc_cfa.ipynb
|
bsd-3-clause
|
[
"TVP-VAR, MCMC, and sparse simulation smoothing",
"%matplotlib inline\n\nfrom importlib import reload\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\nfrom scipy.stats import invwishart, invgamma\n\n# Get the macro dataset\ndta = sm.datasets.macrodata.load_pandas().data\ndta.index = pd.date_range('1959Q1', '2009Q3', freq='QS')",
"Background\nBayesian analysis of linear Gaussian state space models via Markov chain Monte Carlo (MCMC) methods has become both commonplace and relatively straightforward in recent years, due especially to advances in sampling from the joint posterior of the unobserved state vector conditional on the data and model parameters (see especially Carter and Kohn (1994), de Jong and Shephard (1995), and Durbin and Koopman (2002)). This is particularly useful for Gibbs sampling MCMC approaches.\nWhile these procedures make use of the forward/backward application of the recursive Kalman filter and smoother, another recent line of research takes a different approach and constructs the posterior joint distribution of the entire vector of states at once - see in particular Chan and Jeliazkov (2009) for an econometric time series treatment and McCausland et al. (2011) for a more general survey. In particular, the posterior mean and precision matrix are constructed explicitly, with the latter a sparse band matrix. Advantage is then taken of efficient algorithms for Cholesky factorization of sparse band matrices; this reduces memory costs and can improve performance. Following McCausland et al. (2011), we refer to this method as the \"Cholesky Factor Algorithm\" (CFA) approach.\nThe CFA-based simulation smoother has some advantages and some drawbacks compared to that based on the more typical Kalman filter and smoother (KFS).\nAdvantages of CFA:\n\nDerivation of the joint posterior distribution is relatively straightforward and easy to understand.\nIn some cases can be both faster and less memory-intensive than the KFS approach\nIn the Appendix at the end of this notebook, we briefly discuss the performance of the two simulation smoothers for the TVP-VAR model. 
In summary: simple tests on a single machine suggest that for the TVP-VAR model, the CFA and KFS implementations in Statsmodels have about the same runtimes, while both implementations are about twice as fast as the replication code, written in Matlab, provided by Chan and Jeliazkov (2009).\n\n\n\nDrawbacks of CFA:\nThe main drawback is that this method has not (at least so far) reached the generality of the KFS approach. For example:\n\nIt can not be used with models that have reduced-rank error terms in the observation or state equations.\nOne implication of this is that the typical state space model trick of including identities in the state equation to accommodate, for example, higher-order lags in autoregressive models is not applicable. These models can still be handled by the CFA approach, but at the cost of requiring a slightly different implementation for each lag that is included.\nAs an example, standard ways of representing ARMA and VARMA processes in state space form do include identities in the observation and/or state equations, and so the basic formulas presented in Chan and Jeliazkov (2009) do not apply immediately to these models.\n\n\nLess flexibility is available in the state initialization / prior.\n\nImplementation in Statsmodels\nA CFA simulation smoother along the lines of the basic formulas presented in Chan and Jeliazkov (2009) has been implemented in Statsmodels.\nNotes:\n\nTherefore, the CFA simulation smoother in Statsmodels so-far only supports the case that the state transition is truly a first-order Markov process (i.e. 
it does not support a p-th order Markov process that has been stacked using identities into a first-order process).\nBy contrast, the KFS smoother in Statsmodels is fully general and can be used for any state space model, including those with stacked p-th order Markov processes or other identities in the observation and state equations.\n\nEither the KFS or the CFA simulation smoother can be constructed from a state space model using the simulation_smoother method. To show the basic idea, we first consider a simple example.\nLocal level model\nA local level model decomposes an observed series $y_t$ into a persistent trend $\\mu_t$ and a transitory error component\n$$\n\\begin{aligned}\ny_t & = \\mu_t + \\varepsilon_t, \\qquad \\varepsilon_t \\sim N(0, \\sigma_\\text{irregular}^2) \\\\\n\\mu_t & = \\mu_{t-1} + \\eta_t, \\quad ~ \\eta_t \\sim N(0, \\sigma_\\text{level}^2)\n\\end{aligned}\n$$\nThis model satisfies the requirements of the CFA simulation smoother because both the observation error term $\\varepsilon_t$ and the state innovation term $\\eta_t$ are non-degenerate - that is, their covariance matrices are full rank.\nWe apply this model to inflation, and consider simulating draws from the posterior of the joint state vector. That is, we are interested in sampling from\n$$p(\\mu^t \\mid y^t, \\sigma_\\text{irregular}^2, \\sigma_\\text{level}^2)$$\nwhere we define $\\mu^t \\equiv (\\mu_1, \\dots, \\mu_T)'$ and $y^t \\equiv (y_1, \\dots, y_T)'$.\nIn Statsmodels, the local level model falls into the more general class of \"unobserved components\" models, and can be constructed as follows:",
"# Construct a local level model for inflation\nmod = sm.tsa.UnobservedComponents(dta.infl, 'llevel')\n\n# Fit the model's parameters (sigma2_varepsilon and sigma2_eta)\n# via maximum likelihood\nres = mod.fit()\nprint(res.params)\n\n# Create simulation smoother objects\nsim_kfs = mod.simulation_smoother() # default method is KFS\nsim_cfa = mod.simulation_smoother(method='cfa') # can specify CFA method",
"The simulation smoother objects sim_kfs and sim_cfa have simulate methods that perform simulation smoothing. Each time that simulate is called, the simulated_state attribute will be re-populated with a new simulated draw from the posterior.\nBelow, we construct 20 simulated paths for the trend, using the KFS and CFA approaches, where the simulation is at the maximum likelihood parameter estimates.",
"nsimulations = 20\nsimulated_state_kfs = pd.DataFrame(\n np.zeros((mod.nobs, nsimulations)), index=dta.index)\nsimulated_state_cfa = pd.DataFrame(\n np.zeros((mod.nobs, nsimulations)), index=dta.index)\n\nfor i in range(nsimulations):\n # Apply KFS simulation smoothing\n sim_kfs.simulate()\n # Save the KFS simulated state\n simulated_state_kfs.iloc[:, i] = sim_kfs.simulated_state[0]\n\n # Apply CFA simulation smoothing\n sim_cfa.simulate()\n # Save the CFA simulated state\n simulated_state_cfa.iloc[:, i] = sim_cfa.simulated_state[0]",
"Plotting the observed data and the simulations created using each method below, it is not too hard to see that these two methods are doing the same thing.",
"# Plot the inflation data along with simulated trends\nfig, axes = plt.subplots(2, figsize=(15, 6))\n\n# Plot data and KFS simulations\ndta.infl.plot(ax=axes[0], color='k')\naxes[0].set_title('Simulations based on KFS approach, MLE parameters')\nsimulated_state_kfs.plot(ax=axes[0], color='C0', alpha=0.25, legend=False)\n\n# Plot data and CFA simulations\ndta.infl.plot(ax=axes[1], color='k')\naxes[1].set_title('Simulations based on CFA approach, MLE parameters')\nsimulated_state_cfa.plot(ax=axes[1], color='C0', alpha=0.25, legend=False)\n\n# Add a legend, clean up layout\nhandles, labels = axes[0].get_legend_handles_labels()\naxes[0].legend(handles[:2], ['Data', 'Simulated state'])\nfig.tight_layout();",
"Updating the model's parameters\nThe simulation smoothers are tied to the model instance, here the variable mod. Whenever the model instance is updated with new parameters, the simulation smoothers will take those new parameters into account in future calls to the simulate method.\nThis is convenient for MCMC algorithms, which repeatedly (a) update the model's parameters, (b) draw a sample of the state vector, and then (c) draw new values for the model's parameters.\nHere we will change the model to a different parameterization that yields a smoother trend, and show how the simulated values change (for brevity we only show the simulations from the KFS approach, but simulations from the CFA approach would be the same).",
"fig, ax = plt.subplots(figsize=(15, 3))\n\n# Update the model's parameterization to one that attributes more\n# variation in inflation to the observation error and so has less\n# variation in the trend component\nmod.update([4, 0.05])\n\n# Plot simulations\nfor i in range(nsimulations):\n sim_kfs.simulate()\n ax.plot(dta.index, sim_kfs.simulated_state[0],\n color='C0', alpha=0.25, label='Simulated state')\n\n# Plot data\ndta.infl.plot(ax=ax, color='k', label='Data', zorder=-1)\n \n# Add title, legend, clean up layout\nax.set_title('Simulations with alternative parameterization yielding a smoother trend')\nhandles, labels = ax.get_legend_handles_labels()\nax.legend(handles[-2:], labels[-2:])\nfig.tight_layout();",
"Application: Bayesian analysis of a TVP-VAR model by MCMC\nOne of the applications that Chan and Jeliazkov (2009) consider is the time-varying parameters vector autoregression (TVP-VAR) model, estimated with Bayesian Gibbs sampling (MCMC) methods. They apply this to model the co-movements in four macroeconomic time series:\n\nReal GDP growth\nInflation\nUnemployment rate\nShort-term interest rates\n\nWe will replicate their example, using a very similar dataset that is included in Statsmodels.",
"# Subset to the four variables of interest\ny = dta[['realgdp', 'cpi', 'unemp', 'tbilrate']].copy()\ny.columns = ['gdp', 'inf', 'unemp', 'int']\n\n# Convert to real GDP growth and CPI inflation rates\ny[['gdp', 'inf']] = np.log(y[['gdp', 'inf']]).diff() * 100\ny = y.iloc[1:]\n\nfig, ax = plt.subplots(figsize=(15, 5))\ny.plot(ax=ax)\nax.set_title('Evolution of macroeconomic variables included in TVP-VAR exercise');",
"TVP-VAR model\nNote: this section is based on Chan and Jeliazkov (2009) section 3.1, which can be consulted for additional details.\nThe usual (time-invariant) VAR(1) model is typically written:\n$$\n\\begin{aligned}\ny_t & = \\mu + \\Phi y_{t-1} + \\varepsilon_t, \\qquad \\varepsilon_t \\sim N(0, H)\n\\end{aligned}\n$$\nwhere $y_t$ is a $p \\times 1$ vector of variables observed at time $t$ and $H$ is a covariance matrix.\nThe TVP-VAR(1) model generalizes this to allow the coefficients to vary over time. Stacking all the parameters into a vector according to $\\alpha_t = \\text{vec}([\\mu_t : \\Phi_t])$, where $\\text{vec}$ denotes the operation that stacks columns of a matrix into a vector, we model their evolution over time according to:\n$$\\alpha_{i,t+1} = \\alpha_{i, t} + \\eta_{i,t}, \\qquad \\eta_{i, t} \\sim N(0, \\sigma_i^2)$$\nIn other words, each parameter evolves independently according to a random walk.\nNote that there are $p$ coefficients in $\\mu_t$ and $p^2$ coefficients in $\\Phi_t$, so the full state vector $\\alpha$ is shaped $p(p + 1) \\times 1$.\nPutting the TVP-VAR(1) model into state-space form is relatively straightforward, and in fact we just have to re-write the observation equation into SUR form:\n$$\n\\begin{aligned}\ny_t & = Z_t \\alpha_t + \\varepsilon_t, \\qquad \\varepsilon_t \\sim N(0, H) \\\n\\alpha_{t+1} & = \\alpha_t + \\eta_t, \\qquad \\eta_t \\sim N(0, \\text{diag}({\\sigma_i^2}))\n\\end{aligned}\n$$\nwhere\n$$\nZ_t = \\begin{bmatrix}\n1 & y_{t-1}' & 0 & \\dots & & 0 \\\n0 & 0 & 1 & y_{t-1}' & & 0 \\\n\\vdots & & & \\ddots & \\ddots & \\vdots \\\n0 & 0 & 0 & 0 & 1 & y_{t-1}' \\\n\\end{bmatrix}\n$$\nAs long as $H$ is full rank and each of the variances $\\sigma_i^2$ is non-zero, the model satisfies the requirements of the CFA simulation smoother.\nWe also need to specify the initialization / prior for the initial state, $\\alpha_1$. 
Here we will follow Chan and Jeliazkov (2009) in using $\\alpha_1 \\sim N(0, 5 I)$, although we could also model it as diffuse.\nAside from the time-varying coefficients $\\alpha_t$, the other parameters that we will need to estimate are terms in the covariance matrix $H$ and the random walk variances $\\sigma_i^2$.\nTVP-VAR model in Statsmodels\nConstructing this model programmatically in Statsmodels is also relatively straightforward, since there are basically four steps:\n\nCreate a new TVPVAR class as a subclass of sm.tsa.statespace.MLEModel\nFill in the fixed values of the state space system matrices\nSpecify the initialization of $\\alpha_1$\nCreate a method for updating the state space system matrices with new values of the covariance matrix $H$ and the random walk variances $\\sigma_i^2$.\n\nTo do this, first note that the general state space representation used by Statsmodels is:\n$$\n\\begin{aligned}\ny_t & = d_t + Z_t \\alpha_t + \\varepsilon_t, \\qquad \\varepsilon_t \\sim N(0, H_t) \\\n\\alpha_{t+1} & = c_t + T_t \\alpha_t + R_t \\eta_t, \\qquad \\eta_t \\sim N(0, Q_t) \\\n\\end{aligned}\n$$\nThen the TVP-VAR(1) model implies the following specializations:\n\nThe intercept terms are zero, i.e. $c_t = d_t = 0$\nThe design matrix $Z_t$ is time-varying but its values are fixed as described above (i.e. its values contain ones and lags of $y_t$)\nThe observation covariance matrix is not time-varying, i.e. $H_t = H_{t+1} = H$\nThe transition matrix is not time-varying and is equal to the identity matrix, i.e. $T_t = T_{t+1} = I$\nThe selection matrix $R_t$ is not time-varying and is also equal to the identity matrix, i.e. $R_t = R_{t+1} = I$\nThe state covariance matrix $Q_t$ is not time-varying and is diagonal, i.e. $Q_t = Q_{t+1} = \\text{diag}({\\sigma_i^2})$",
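The block-diagonal structure of $Z_t$ shown above can also be written compactly as a Kronecker product, $Z_t = I_p \otimes [1 : y_{t-1}']$. A small numpy sketch (using hypothetical lagged values, not taken from the dataset) confirms that each row of $Z_t$ picks out the intercept and lag coefficients of one equation:

```python
import numpy as np

p = 4  # number of observed variables (e.g. GDP growth, inflation, unemployment, interest rate)
y_lag = np.array([0.5, 1.2, 6.0, 3.1])  # hypothetical values of y_{t-1}

# Z_t = I_p kron [1, y_{t-1}'] gives the block-diagonal SUR design matrix
Z_t = np.kron(np.eye(p), np.r_[1.0, y_lag])

print(Z_t.shape)        # (p, p * (p + 1)) = (4, 20)
print(Z_t[0, :p + 1])   # first block row: [1, y_{t-1}']
```

Each of the $p$ rows contains the same $[1 : y_{t-1}']$ block, offset by $p + 1$ columns, matching the matrix written out above.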
"# 1. Create a new TVPVAR class as a subclass of sm.tsa.statespace.MLEModel\nclass TVPVAR(sm.tsa.statespace.MLEModel):\n # Steps 2-3 are best done in the class \"constructor\", i.e. the __init__ method\n def __init__(self, y):\n # Create a matrix with [y_t' : y_{t-1}'] for t = 2, ..., T\n augmented = sm.tsa.lagmat(y, 1, trim='both', original='in', use_pandas=True)\n # Separate into y_t and z_t = [1 : y_{t-1}']\n p = y.shape[1]\n y_t = augmented.iloc[:, :p]\n z_t = sm.add_constant(augmented.iloc[:, p:])\n\n # Recall that the length of the state vector is p * (p + 1)\n k_states = p * (p + 1)\n super().__init__(y_t, exog=z_t, k_states=k_states)\n\n # Note that the state space system matrices default to contain zeros,\n # so we don't need to explicitly set c_t = d_t = 0.\n\n # Construct the design matrix Z_t\n # Notes:\n # -> self.k_endog = p is the dimension of the observed vector\n # -> self.k_states = p * (p + 1) is the dimension of the state vector\n # -> self.nobs = T is the number of observations in y_t\n self['design'] = np.zeros((self.k_endog, self.k_states, self.nobs))\n for i in range(self.k_endog):\n start = i * (self.k_endog + 1)\n end = start + self.k_endog + 1\n self['design', i, start:end, :] = z_t.T\n\n # Construct the transition matrix T = I\n self['transition'] = np.eye(k_states)\n\n # Construct the selection matrix R = I\n self['selection'] = np.eye(k_states)\n\n # Step 3: Initialize the state vector as alpha_1 ~ N(0, 5I)\n self.ssm.initialize('known', stationary_cov=5 * np.eye(self.k_states))\n\n # Step 4. Create a method that we can call to update H and Q\n def update_variances(self, obs_cov, state_cov_diag):\n self['obs_cov'] = obs_cov\n self['state_cov'] = np.diag(state_cov_diag)\n\n # Finally, it can be convenient to define human-readable names for\n # each element of the state vector. 
These will be available in output\n @property\n def state_names(self):\n state_names = np.empty((self.k_endog, self.k_endog + 1), dtype=object)\n for i in range(self.k_endog):\n endog_name = self.endog_names[i]\n state_names[i] = (\n ['intercept.%s' % endog_name] +\n ['L1.%s->%s' % (other_name, endog_name) for other_name in self.endog_names])\n return state_names.ravel().tolist()",
"The above class defines the state space model for any given dataset. Now we need to create a specific instance of it with the dataset that we created earlier containing real GDP growth, inflation, unemployment, and interest rates.",
"# Create an instance of our TVPVAR class with our observed dataset y\nmod = TVPVAR(y)",
"Preliminary investigation with ad-hoc parameters in H, Q\nIn our analysis below, we will need to begin our MCMC iterations with some initial parameterization. Following Chan and Jeliazkov (2009) we will set $H$ to be the sample covariance matrix of our dataset, and we will set $\\sigma_i^2 = 0.01$ for each $i$.\nBefore discussing the MCMC scheme that will allow us to make inferences about the model, first we can consider the output of the model when simply plugging in these initial parameters. To fill in these parameters, we use the update_variances method that we defined earlier and then perform Kalman filtering and smoothing conditional on those parameters.\nWarning: This exercise is just by way of explanation - we must wait for the output of the MCMC exercise to study the actual implications of the model in a meaningful way.",
"initial_obs_cov = np.cov(y.T)\ninitial_state_cov_diag = [0.01] * mod.k_states\n\n# Update H and Q\nmod.update_variances(initial_obs_cov, initial_state_cov_diag)\n\n# Perform Kalman filtering and smoothing\n# (the [] is just an empty list that in some models might contain\n# additional parameters. Here, we don't have any additional parameters\n# so we just pass an empty list)\ninitial_res = mod.smooth([])",
"The initial_res variable contains the output of Kalman filtering and smoothing, conditional on those initial parameters. In particular, we may be interested in the \"smoothed states\", which are $E[\\alpha_t \\mid y^t, H, {\\sigma_i^2}]$.\nFirst, let's create a function that graphs the coefficients over time, separated into the equations for each of the observed variables.",
"def plot_coefficients_by_equation(states):\n fig, axes = plt.subplots(2, 2, figsize=(15, 8))\n\n # The way we defined Z_t implies that the first 5 elements of the\n # state vector correspond to the first variable in y_t, which is GDP growth\n ax = axes[0, 0]\n states.iloc[:, :5].plot(ax=ax)\n ax.set_title('GDP growth')\n ax.legend()\n\n # The next 5 elements correspond to inflation\n ax = axes[0, 1]\n states.iloc[:, 5:10].plot(ax=ax)\n ax.set_title('Inflation rate')\n ax.legend();\n\n # The next 5 elements correspond to unemployment\n ax = axes[1, 0]\n states.iloc[:, 10:15].plot(ax=ax)\n ax.set_title('Unemployment equation')\n ax.legend()\n\n # The last 5 elements correspond to the interest rate\n ax = axes[1, 1]\n states.iloc[:, 15:20].plot(ax=ax)\n ax.set_title('Interest rate equation')\n ax.legend();\n \n return ax\n",
"Now, we are interested in the smoothed states, which are available in the states.smoothed attribute of our results object initial_res.\nAs the graph below shows, the initial parameterization implies substantial time-variation in some of the coefficients.",
"# Here, for illustration purposes only, we plot the time-varying\n# coefficients conditional on an ad-hoc parameterization\n\n# Recall that `initial_res` contains the Kalman filtering and smoothing,\n# and the `states.smoothed` attribute contains the smoothed states\nplot_coefficients_by_equation(initial_res.states.smoothed);",
"Bayesian estimation via MCMC\nWe will now implement the Gibbs sampler scheme described in Chan and Jeliazkov (2009), Algorithm 2.\nWe use the following (conditionally conjugate) priors:\n$$\n\\begin{aligned}\nH & \\sim \\mathcal{IW}(\\nu_1^0, S_1^0) \\\n\\sigma_i^2 & \\sim \\mathcal{IG} \\left ( \\frac{\\nu_{i2}^0}{2}, \\frac{S_{i2}^0}{2} \\right )\n\\end{aligned}\n$$\nwhere $\\mathcal{IW}$ denotes the inverse-Wishart distribution and $\\mathcal{IG}$ denotes the inverse-Gamma distribution. We set the prior hyperparameters as:\n$$\n\\begin{aligned}\nv_1^0 = T + 3, & \\quad S_1^0 = I \\\nv_{i2}^0 = 6, & \\quad S_{i2}^0 = 0.01 \\qquad \\text{for each} ~ i\\\n\\end{aligned}\n$$",
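Under these conjugate priors, the conditional posteriors used in the Gibbs steps are standard updates: $H \mid \cdot \sim \mathcal{IW}(\nu_1^0 + T, S_1^0 + \sum_t \hat\varepsilon_t \hat\varepsilon_t')$ and $\sigma_i^2 \mid \cdot \sim \mathcal{IG}\big((\nu_{i2}^0 + T - 1)/2, (S_{i2}^0 + \sum_t \hat\eta_{i,t}^2)/2\big)$. The sketch below illustrates these two draws with simulated residuals (the numbers are purely illustrative, not output from the model):

```python
import numpy as np
from scipy.stats import invwishart, invgamma

T, p = 200, 4
rng = np.random.default_rng(0)

# Hypothetical residuals, standing in for the quantities computed in the MCMC loop
resid_obs = rng.standard_normal((T, p))    # observation residuals epsilon_t
resid_state = rng.standard_normal(T - 1)   # state innovations eta_{i,t} for one i

# Conditional posterior draw of H: inverse-Wishart(v_1^0 + T, S_1^0 + resid'resid)
v10, S10 = p + 3, np.eye(p)
H_draw = invwishart.rvs(v10 + T, S10 + resid_obs.T @ resid_obs)

# Conditional posterior draw of sigma_i^2: inverse-Gamma((v_{i2}^0 + T - 1)/2, (S_{i2}^0 + sse_i)/2)
vi20, Si20 = 6, 0.01
sse = np.sum(resid_state**2)
sigma2_draw = invgamma.rvs((vi20 + T - 1) / 2, scale=(Si20 + sse) / 2)

print(H_draw.shape, sigma2_draw > 0)
```

These are the same `invwishart.rvs` and `invgamma.rvs` calls that appear inside the sampling loop further below.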
"# Prior hyperparameters\n\n# Prior for obs. cov. is inverse-Wishart(v_1^0=k + 3, S10=I)\nv10 = mod.k_endog + 3\nS10 = np.eye(mod.k_endog)\n\n# Prior for state cov. variances is inverse-Gamma(v_{i2}^0 / 2 = 3, S_{i2}^0 / 2 = 0.005)\nvi20 = 6\nSi20 = 0.01",
"Before running the MCMC iterations, there are a couple of practical steps:\n\nCreate arrays to store the draws of our state vector, observation covariance matrix, and state error variances.\nPut the initial values for H and Q (described above) into the storage vectors\nConstruct the simulation smoother object associated with our TVPVAR instance to make draws of the state vector",
"# Gibbs sampler setup\nniter = 11000\nnburn = 1000\n\n# 1. Create storage arrays\nstore_states = np.zeros((niter + 1, mod.nobs, mod.k_states))\nstore_obs_cov = np.zeros((niter + 1, mod.k_endog, mod.k_endog))\nstore_state_cov = np.zeros((niter + 1, mod.k_states))\n\n# 2. Put in the initial values\nstore_obs_cov[0] = initial_obs_cov\nstore_state_cov[0] = initial_state_cov_diag\nmod.update_variances(store_obs_cov[0], store_state_cov[0])\n\n# 3. Construct posterior samplers\nsim = mod.simulation_smoother(method='cfa')",
"As before, we could have used either the simulation smoother based on the Kalman filter and smoother or that based on the Cholesky Factor Algorithm.",
"for i in range(niter):\n mod.update_variances(store_obs_cov[i], store_state_cov[i])\n sim.simulate()\n\n # 1. Sample states\n store_states[i + 1] = sim.simulated_state.T\n\n # 2. Simulate obs cov\n fitted = np.matmul(mod['design'].transpose(2, 0, 1), store_states[i + 1][..., None])[..., 0]\n resid = mod.endog - fitted\n store_obs_cov[i + 1] = invwishart.rvs(v10 + mod.nobs, S10 + resid.T @ resid)\n\n # 3. Simulate state cov variances\n resid = store_states[i + 1, 1:] - store_states[i + 1, :-1]\n sse = np.sum(resid**2, axis=0)\n \n for j in range(mod.k_states):\n rv = invgamma.rvs((vi20 + mod.nobs - 1) / 2, scale=(Si20 + sse[j]) / 2)\n store_state_cov[i + 1, j] = rv",
"After removing a number of initial draws, the remaining draws from the posterior allow us to conduct inference. Below, we plot the posterior mean of the time-varying regression coefficients.\n(Note: these plots are different from those in Figure 1 of the published version of Chan and Jeliazkov (2009), but they are very similar to those produced by the Matlab replication code available at http://joshuachan.org/code/code_TVPVAR.html)",
"# Collect the posterior means of each time-varying coefficient\nstates_posterior_mean = pd.DataFrame(\n np.mean(store_states[nburn + 1:], axis=0),\n index=mod._index, columns=mod.state_names)\n\n# Plot these means over time\nplot_coefficients_by_equation(states_posterior_mean);",
"Python also has a number of libraries to assist with exploring Bayesian models. Here we'll just use the arviz package to explore the credible intervals of each of the covariance and variance parameters, although it makes available a much wider set of tools for analysis.",
"import arviz as az\n\n# Collect the observation error covariance parameters\naz_obs_cov = az.convert_to_inference_data({\n ('Var[%s]' % mod.endog_names[i] if i == j else\n 'Cov[%s, %s]' % (mod.endog_names[i], mod.endog_names[j])):\n store_obs_cov[nburn + 1:, i, j]\n for i in range(mod.k_endog) for j in range(i, mod.k_endog)})\n\n# Plot the credible intervals\naz.plot_forest(az_obs_cov, figsize=(8, 7));\n\n# Collect the state innovation variance parameters\naz_state_cov = az.convert_to_inference_data({\n r'$\\sigma^2$[%s]' % mod.state_names[i]: store_state_cov[nburn + 1:, i]\n for i in range(mod.k_states)})\n\n# Plot the credible intervals\naz.plot_forest(az_state_cov, figsize=(8, 7));",
"Appendix: performance\nFinally, we run a few simple tests to compare the performance of the KFS and CFA simulation smoothers by using the %timeit Jupyter notebook magic.\nOne caveat is that the KFS simulation smoother can produce a variety of output beyond just simulations of the posterior state vector, and these additional computations could bias the results. To make the results comparable, we will tell the KFS simulation smoother to only compute simulations of the state by using the simulation_output argument.",
"from statsmodels.tsa.statespace.simulation_smoother import SIMULATION_STATE\n\nsim_cfa = mod.simulation_smoother(method='cfa')\nsim_kfs = mod.simulation_smoother(simulation_output=SIMULATION_STATE)",
"Then we can use the following code to perform a basic timing exercise:\npython\n%timeit -n 10000 -r 3 sim_cfa.simulate()\n%timeit -n 10000 -r 3 sim_kfs.simulate()\nOn the machine this was tested on, this resulted in the following:\n2.06 ms ± 26.5 µs per loop (mean ± std. dev. of 3 runs, 10000 loops each)\n2.02 ms ± 68.4 µs per loop (mean ± std. dev. of 3 runs, 10000 loops each)\nThese results suggest that - at least for this model - there are no noticeable computational gains from the CFA approach relative to the KFS approach. However, this does not rule out the following:\n\nThe Statsmodels implementation of the CFA simulation smoother could possibly be further optimized\nThe CFA approach may only show improvement for certain models (for example with a large number of endog variables)\n\nOne simple way to take a first pass at assessing the first possibility is to compare the runtime of the Statsmodels implementation of the CFA simulation smoother to the Matlab implementation in the replication codes of Chan and Jeliazkov (2009), available at http://joshuachan.org/code/code_TVPVAR.html.\nWhile the Statsmodels version of the CFA simulation smoother is written in Cython and compiled to C code, the Matlab version takes advantage of Matlab's sparse matrix capabilities. As a result, even though it is not compiled code, we might expect it to have relatively good performance.\nOn the machine this was tested on, the Matlab version typically ran the MCMC loop with 11,000 iterations in 70-75 seconds, while the MCMC loop in this notebook using the Statsmodels CFA simulation smoother (see above), also with 11,000 iterations, ran in 40-45 seconds. This is some evidence that the Statsmodels implementation of the CFA smoother already performs relatively well (although it does not rule out that there are additional gains possible).\nBibliography\nCarter, Chris K., and Robert Kohn. \"On Gibbs sampling for state space models.\" Biometrika 81, no. 
3 (1994): 541-553.\nChan, Joshua CC, and Ivan Jeliazkov. \"Efficient simulation and integrated likelihood estimation in state space models.\" International Journal of Mathematical Modelling and Numerical Optimisation 1, no. 1-2 (2009): 101-120.\nDe Jong, Piet, and Neil Shephard. \"The simulation smoother for time series models.\" Biometrika 82, no. 2 (1995): 339-350.\nDurbin, James, and Siem Jan Koopman. \"A simple and efficient simulation smoother for state space time series analysis.\" Biometrika 89, no. 3 (2002): 603-616.\nMcCausland, William J., Shirley Miller, and Denis Pelletier. \"Simulation smoothing for state–space models: A computational efficiency analysis.\" Computational Statistics & Data Analysis 55, no. 1 (2011): 199-212."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hannorein/rebound
|
ipython_examples/User_Defined_Collision_Resolve.ipynb
|
gpl-3.0
|
[
"User Defined Rebound Collision Resolutions\nIn the CloseEncounter example, we discuss methods for resolving collisions in REBOUND through exceptions and the use of the sim.collision_resolve = \"merge\" method.\nUsing the same 3-Body setup, let us explore how to define and implement the same collision resolution function in python and pass it to the sim.collision_resolve function pointer.",
"import rebound\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef setupSimulation():\n ''' Setup the 3-Body scenario'''\n sim = rebound.Simulation()\n sim.integrator = \"ias15\" # IAS15 is the default integrator, so we don't need this line\n sim.add(m=1.)\n sim.add(m=1e-3, a=1., r=np.sqrt(1e-3/3.)) # we now set collision radii!\n sim.add(m=5e-3, a=1.25, r=1.25*np.sqrt(5e-3/3.))\n sim.move_to_com()\n return sim",
"To reiterate the previous method, let's run the built-in merge collision resolution method",
"sim = setupSimulation()\nsim.collision = \"direct\"\nsim.collision_resolve = \"merge\" # Built in function\n\nprint(\"Particles in the simulation at t=%6.1f: %d\"%(sim.t,sim.N))\nprint(\"System Mass: {}\".format([p.m for p in sim.particles]))\nsim.integrate(100.)\nprint(\"Particles in the simulation at t=%6.1f: %d\"%(sim.t,sim.N))\nprint(\"System Mass: {}\".format([p.m for p in sim.particles]))",
"We can see above that two particles merged into one with a combined mass of 0.006.\nLet's now try to implement this collision function ourselves!\nTo do this, we need to write a function which we can pass to sim.collision_resolve. In this case let's define my_merge. \nNow, whenever a collision occurs, REBOUND will pass our function two parameters:\n\nsim_pointer: a pointer to the simulation object which the collision occurred in.\nBecause it is a ctypes pointer, you will need to use the .contents attribute to access the simulation object\ncollision: this structure contains the attributes .p1 and .p2 which are the indices of the two particles involved in the collision\n\nUsing these inputs, we can define the necessary logic to handle the collision. The return value of our function determines how REBOUND proceeds afterwards:\n\n0: Simulation continues without changes\n1: remove p1 from simulation\n2: remove p2 from simulation\n\nLet us look at how this information can be used to implement the logic of the merge method for colliding particles in a totally inelastic collision.",
"def my_merge(sim_pointer, collided_particles_index):\n\n sim = sim_pointer.contents # retrieve the standard simulation object\n ps = sim.particles # easy access to list of particles\n\n i = collided_particles_index.p1 # Note that p1 < p2 is not guaranteed. \n j = collided_particles_index.p2 \n\n # This part is exciting! We can execute additional code during collisions now!\n fig, ax = rebound.OrbitPlot(sim, xlim = (-1.3, 1.3), ylim = (-1.3, 1.3), color=['blue', 'green'])\n ax.set_title(\"Merging particle {} into {}\".format(j, i))\n ax.text(ps[1].x, ps[1].y, \"1\"); \n ax.text(ps[2].x, ps[2].y, \"2\")\n # So we plot the scenario exactly at the timestep that the collision function is triggered\n\n # Merging Logic \n total_mass = ps[i].m + ps[j].m\n merged_planet = (ps[i] * ps[i].m + ps[j] * ps[j].m)/total_mass # conservation of momentum\n\n # merged radius assuming a uniform density\n merged_radius = (ps[i].r**3 + ps[j].r**3)**(1/3)\n\n ps[i] = merged_planet # update p1's state vector (mass and radius will need corrections)\n ps[i].m = total_mass # update to total mass\n ps[i].r = merged_radius # update to joined radius\n\n return 2 # remove particle with index j",
"Now we can set our new collision resolution function in the simulation object.",
"sim = setupSimulation()\nsim.collision = \"direct\"\nps = sim.particles\nsim.collision_resolve = my_merge # user defined collision resolution function\nsim.integrate(100.)",
"Note that we were not only able to resolve the collision, but also to run additional code during the collision, in this case to make a plot, which can be very useful for debugging or logging. Now that you know the basics, you can expand the scenario here and resolve collisions according to the astrophysical problem you are working on."
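As a starting point for such extensions, here is a minimal callback (hypothetical, for illustration only) that follows the same interface as `my_merge` above: it dereferences the simulation pointer, reports the collision, and returns 0 so that both particles are kept:

```python
def log_collision(sim_pointer, collision):
    sim = sim_pointer.contents  # dereference the ctypes pointer to get the Simulation object
    i = collision.p1            # indices of the two colliding particles
    j = collision.p2
    print("Collision between particles {} and {} at t = {:.3f}".format(i, j, sim.t))
    return 0  # 0 -> keep both particles; 1 or 2 would remove p1 or p2 respectively

# Attach it exactly like the merge function above:
# sim.collision_resolve = log_collision
```

Because the callback only reads attributes from the objects REBOUND passes in, it can serve as a template for logging, bookkeeping, or custom physics before deciding which particle (if any) to remove.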
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
CompPhysics/MachineLearning
|
doc/pub/DimRed/ipynb/.ipynb_checkpoints/DimRed-checkpoint.ipynb
|
cc0-1.0
|
[
"<!-- dom:TITLE: Data Analysis and Machine Learning: Preprocessing and Dimensionality Reduction -->\nData Analysis and Machine Learning: Preprocessing and Dimensionality Reduction\n<!-- dom:AUTHOR: Morten Hjorth-Jensen at Department of Physics, University of Oslo & Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University -->\n<!-- Author: -->\nMorten Hjorth-Jensen, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University\nDate: Oct 14, 2019\nCopyright 1999-2019, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license\nReducing the number of degrees of freedom, overarching view\nMany Machine Learning problems involve thousands or even millions of features for each training\ninstance. Not only does this make training extremely slow, it can also make it much harder to find a good\nsolution, as we will see. This problem is often referred to as the curse of dimensionality.\nFortunately, in real-world problems, it is often possible to reduce the number of features considerably,\nturning an intractable problem into a tractable one.\nHere we will discuss some of the most popular dimensionality\nreduction techniques: principal component analysis (PCA), Kernel PCA, and Locally Linear Embedding (LLE).\nPreprocessing our data\nBefore we proceed however, we will discuss how to preprocess our\ndata. Until now, in connection with our previous examples, we have not met many cases\nthat are particularly sensitive to the scaling of the data. Normally the\ndata may need a rescaling and/or may be sensitive to extreme\nvalues. Scaling the data renders our inputs much more suitable for the\nalgorithms we want to employ.\nScikit-Learn has several functions which allow us to rescale the data, normally resulting in much better results in terms of various accuracy scores. 
The StandardScaler function in Scikit-Learn ensures that for each feature/predictor we study the mean value is zero and the variance is one (every column in the design/feature matrix).\nThis scaling has the drawback that it does not ensure that we have a particular maximum or minimum in our data set. Another function included in Scikit-Learn is the MinMaxScaler which ensures that all features are exactly between $0$ and $1$. The Normalizer function rescales each sample (each row of the design matrix) to unit Euclidean norm.\nSimple preprocessing examples, Franke function and regression",
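The behavior of these three transformers is easy to verify on a tiny, made-up design matrix: StandardScaler standardizes each column, MinMaxScaler maps each column to $[0, 1]$, and Normalizer rescales each row to unit Euclidean norm:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, Normalizer

# A small, arbitrary design matrix with two features on very different scales
X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])

X_std = StandardScaler().fit_transform(X)     # each column: mean 0, variance 1
X_minmax = MinMaxScaler().fit_transform(X)    # each column: min 0, max 1
X_norm = Normalizer().fit_transform(X)        # each row: unit Euclidean norm

print(X_std.mean(axis=0), X_std.std(axis=0))
print(X_minmax.min(axis=0), X_minmax.max(axis=0))
print(np.linalg.norm(X_norm, axis=1))
```

Note that StandardScaler and MinMaxScaler operate column-wise (per feature), while Normalizer operates row-wise (per sample), which is why they are suited to quite different situations.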
"%matplotlib inline\n\n# Common imports\nimport os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport sklearn.linear_model as skl\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer\nfrom sklearn.svm import SVR\n\n# Where to save the figures and data files\nPROJECT_ROOT_DIR = \"Results\"\nFIGURE_ID = \"Results/FigureFiles\"\nDATA_ID = \"DataFiles/\"\n\nif not os.path.exists(PROJECT_ROOT_DIR):\n os.mkdir(PROJECT_ROOT_DIR)\n\nif not os.path.exists(FIGURE_ID):\n os.makedirs(FIGURE_ID)\n\nif not os.path.exists(DATA_ID):\n os.makedirs(DATA_ID)\n\ndef image_path(fig_id):\n return os.path.join(FIGURE_ID, fig_id)\n\ndef data_path(dat_id):\n return os.path.join(DATA_ID, dat_id)\n\ndef save_fig(fig_id):\n plt.savefig(image_path(fig_id) + \".png\", format='png')\n\n\ndef FrankeFunction(x,y):\n\tterm1 = 0.75*np.exp(-(0.25*(9*x-2)**2) - 0.25*((9*y-2)**2))\n\tterm2 = 0.75*np.exp(-((9*x+1)**2)/49.0 - 0.1*(9*y+1))\n\tterm3 = 0.5*np.exp(-(9*x-7)**2/4.0 - 0.25*((9*y-3)**2))\n\tterm4 = -0.2*np.exp(-(9*x-4)**2 - (9*y-7)**2)\n\treturn term1 + term2 + term3 + term4\n\n\ndef create_X(x, y, n ):\n\tif len(x.shape) > 1:\n\t\tx = np.ravel(x)\n\t\ty = np.ravel(y)\n\n\tN = len(x)\n\tl = int((n+1)*(n+2)/2)\t\t# Number of elements in beta\n\tX = np.ones((N,l))\n\n\tfor i in range(1,n+1):\n\t\tq = int((i)*(i+1)/2)\n\t\tfor k in range(i+1):\n\t\t\tX[:,q+k] = (x**(i-k))*(y**k)\n\n\treturn X\n\n\n# Making meshgrid of datapoints and compute Franke's function\nn = 5\nN = 1000\nx = np.sort(np.random.uniform(0, 1, N))\ny = np.sort(np.random.uniform(0, 1, N))\nz = FrankeFunction(x, y)\nX = create_X(x, y, n=n) \n# split in training and test data\nX_train, X_test, y_train, y_test = train_test_split(X,z,test_size=0.2)\n\n\nsvm = SVR(gamma='auto',C=10.0)\nsvm.fit(X_train, y_train)\n\n# The mean squared error and R2 score\nprint(\"MSE before scaling: 
{:.2f}\".format(mean_squared_error(svm.predict(X_test), y_test)))\nprint(\"R2 score before scaling {:.2f}\".format(svm.score(X_test,y_test)))\n\nscaler = StandardScaler()\nscaler.fit(X_train)\nX_train_scaled = scaler.transform(X_train)\nX_test_scaled = scaler.transform(X_test)\n\nprint(\"Feature min values before scaling:\\n {}\".format(X_train.min(axis=0)))\nprint(\"Feature max values before scaling:\\n {}\".format(X_train.max(axis=0)))\n\nprint(\"Feature min values after scaling:\\n {}\".format(X_train_scaled.min(axis=0)))\nprint(\"Feature max values after scaling:\\n {}\".format(X_train_scaled.max(axis=0)))\n\nsvm = SVR(gamma='auto',C=10.0)\nsvm.fit(X_train_scaled, y_train)\n\nprint(\"MSE after scaling: {:.2f}\".format(mean_squared_error(svm.predict(X_test_scaled), y_test)))\nprint(\"R2 score for scaled data: {:.2f}\".format(svm.score(X_test_scaled,y_test)))",
"Simple preprocessing examples, breast cancer data and classification\nWe show here a simple classification case on the breast cancer data, using a support vector machine as the classification algorithm",
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.model_selection import train_test_split \nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.svm import SVC\ncancer = load_breast_cancer()\n\nX_train, X_test, y_train, y_test = train_test_split(cancer.data,cancer.target,random_state=0)\nprint(X_train.shape)\nprint(X_test.shape)\n\nsvm = SVC(C=100)\nsvm.fit(X_train, y_train)\nprint(\"Test set accuracy: {:.2f}\".format(svm.score(X_test,y_test)))\n\nfrom sklearn.preprocessing import MinMaxScaler, StandardScaler\n\nscaler = MinMaxScaler()\nscaler.fit(X_train)\nX_train_scaled = scaler.transform(X_train)\nX_test_scaled = scaler.transform(X_test)\n\nprint(\"Feature min values before scaling:\\n {}\".format(X_train.min(axis=0)))\nprint(\"Feature max values before scaling:\\n {}\".format(X_train.max(axis=0)))\n\nprint(\"Feature min values before scaling:\\n {}\".format(X_train_scaled.min(axis=0)))\nprint(\"Feature max values before scaling:\\n {}\".format(X_train_scaled.max(axis=0)))\n\n\nsvm.fit(X_train_scaled, y_train)\nprint(\"Test set accuracy scaled data: {:.2f}\".format(svm.score(X_test_scaled,y_test)))\n\nscaler = StandardScaler()\nscaler.fit(X_train)\nX_train_scaled = scaler.transform(X_train)\nX_test_scaled = scaler.transform(X_test)\n\nsvm.fit(X_train_scaled, y_train)\nprint(\"Test set accuracy scaled data: {:.2f}\".format(svm.score(X_test_scaled,y_test)))",
"More on Cancer Data",
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.model_selection import train_test_split \nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.svm import SVC\nfrom sklearn.linear_model import LogisticRegression\ncancer = load_breast_cancer()\n\nfig, axes = plt.subplots(15,2,figsize=(10,20))\nmalignant = cancer.data[cancer.target == 0]\nbenign = cancer.data[cancer.target == 1]\nax = axes.ravel()\n\nfor i in range(30):\n    _, bins = np.histogram(cancer.data[:,i], bins =50)\n    ax[i].hist(malignant[:,i], bins = bins, alpha = 0.5)\n    ax[i].hist(benign[:,i], bins = bins, alpha = 0.5)\n    ax[i].set_title(cancer.feature_names[i])\n    ax[i].set_yticks(())\nax[0].set_xlabel(\"Feature magnitude\")\nax[0].set_ylabel(\"Frequency\")\nax[0].legend([\"Malignant\", \"Benign\"], loc =\"best\")\nfig.tight_layout()\nplt.show()\nX_train, X_test, y_train, y_test = train_test_split(cancer.data,cancer.target,random_state=0)\nprint(X_train.shape)\nprint(X_test.shape)\n\nlogreg = LogisticRegression()\nlogreg.fit(X_train, y_train)\n\nprint(\"Test set accuracy: {:.2f}\".format(logreg.score(X_test,y_test)))\n\nfrom sklearn.preprocessing import MinMaxScaler, StandardScaler\n\n\nscaler = StandardScaler()\nscaler.fit(X_train)\nX_train_scaled = scaler.transform(X_train)\nX_test_scaled = scaler.transform(X_test)\nlogreg.fit(X_train_scaled, y_train)\n#svm.fit(X_train_scaled, y_train)\nprint(\"Test set accuracy scaled data: {:.2f}\".format(logreg.score(X_test_scaled,y_test)))",
"Principal Component Analysis\nPrincipal Component Analysis (PCA) is by far the most popular dimensionality reduction algorithm.\nFirst it identifies the hyperplane that lies closest to the data, and then it projects the data onto it.\nThe following Python code uses NumPy’s svd() function to obtain all the principal components of the\ntraining set, then extracts the first two principal components",
"X_centered = X - X.mean(axis=0)\nU, s, V = np.linalg.svd(X_centered)\nc1 = V.T[:, 0]\nc2 = V.T[:, 1]",
"PCA assumes that the dataset is centered around the origin. Scikit-Learn’s PCA classes take care of centering\nthe data for you. However, if you implement PCA yourself (as in the preceding example), or if you use other libraries, don’t\nforget to center the data first.\nOnce you have identified all the principal components, you can reduce the dimensionality of the dataset\ndown to $d$ dimensions by projecting it onto the hyperplane defined by the first $d$ principal components.\nSelecting this hyperplane ensures that the projection will preserve as much variance as possible.",
"W2 = V.T[:, :2]\nX2D = X_centered.dot(W2)",
"<!-- !split -->\nPCA and scikit-learn\nScikit-Learn’s PCA class implements PCA using SVD decomposition just like we did before. The\nfollowing code applies PCA to reduce the dimensionality of the dataset down to two dimensions (note\nthat it automatically takes care of centering the data):",
"from sklearn.decomposition import PCA\npca = PCA(n_components = 2)\nX2D = pca.fit_transform(X)",
"After fitting the PCA transformer to the dataset, you can access the principal components using the\ncomponents_ attribute (note that it contains the PCs as horizontal vectors, so, for example, the first\nprincipal component is equal to",
"pca.components_.T[:, 0]).",
"Another very useful piece of information is the explained variance ratio of each principal component,\navailable via the explained_variance_ratio_ attribute. It indicates the proportion of the dataset’s\nvariance that lies along the axis of each principal component. \nMore material to come here.\nMore on the PCA\nInstead of arbitrarily choosing the number of dimensions to reduce down to, it is generally preferable to\nchoose the number of dimensions that add up to a sufficiently large portion of the variance (e.g., 95%).\nUnless, of course, you are reducing dimensionality for data visualization — in that case you will\ngenerally want to reduce the dimensionality down to 2 or 3.\nThe following code computes PCA without reducing dimensionality, then computes the minimum number\nof dimensions required to preserve 95% of the training set’s variance:",
"pca = PCA()\npca.fit(X)\ncumsum = np.cumsum(pca.explained_variance_ratio_)\nd = np.argmax(cumsum >= 0.95) + 1",
"You could then set n_components=d and run PCA again. However, there is a much better option: instead\nof specifying the number of principal components you want to preserve, you can set n_components to be\na float between 0.0 and 1.0, indicating the ratio of variance you wish to preserve:",
"pca = PCA(n_components=0.95)\nX_reduced = pca.fit_transform(X)",
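As a quick, self-contained illustration of the two ideas above (a sketch I am adding on a synthetic dataset, not part of the original notebook), the `explained_variance_ratio_` attribute and the float form of `n_components` can be exercised like this:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic 3D data: most of the variance is deliberately placed on the
# first axis, so a single principal component captures nearly all of it.
rng = np.random.RandomState(0)
X = rng.randn(500, 3) * np.array([20.0, 1.0, 0.5])

pca = PCA()
pca.fit(X)
# Proportions of total variance along each component; they sum to 1 here
# because we kept all components.
print(pca.explained_variance_ratio_)

# Keep just enough components to preserve 95% of the variance.
pca95 = PCA(n_components=0.95)
X_reduced = pca95.fit_transform(X)
print(X_reduced.shape)  # → (500, 1): one component already exceeds 95%
```

The ratios are sorted in decreasing order, so summing them cumulatively (as the `np.cumsum` cell above does) tells you how many dimensions to keep.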
"Incremental PCA\nOne problem with the preceding implementation of PCA is that it requires the whole training set to fit in\nmemory in order for the SVD algorithm to run. Fortunately, Incremental PCA (IPCA) algorithms have\nbeen developed: you can split the training set into mini-batches and feed an IPCA algorithm one mini-batch\nat a time. This is useful for large training sets, and also to apply PCA online (i.e., on the fly, as new\ninstances arrive).\nRandomized PCA\nScikit-Learn offers yet another option to perform PCA, called Randomized PCA. This is a stochastic\nalgorithm that quickly finds an approximation of the first d principal components. Its computational\ncomplexity is $O(m \\times d^2)+O(d^3)$, instead of $O(m \\times n^2) + O(n^3)$, so it is dramatically faster than the\nprevious algorithms when $d$ is much smaller than $n$.\nKernel PCA\nThe kernel trick is a mathematical technique that implicitly maps instances into a\nvery high-dimensional space (called the feature space), enabling nonlinear classification and regression\nwith Support Vector Machines. Recall that a linear decision boundary in the high-dimensional feature\nspace corresponds to a complex nonlinear decision boundary in the original space.\nIt turns out that the same trick can be applied to PCA, making it possible to perform complex nonlinear\nprojections for dimensionality reduction. This is called Kernel PCA (kPCA). It is often good at\npreserving clusters of instances after projection, or sometimes even unrolling datasets that lie close to a\ntwisted manifold.\nFor example, the following code uses Scikit-Learn’s KernelPCA class to perform kPCA with an RBF kernel:",
"from sklearn.decomposition import KernelPCA\nrbf_pca = KernelPCA(n_components = 2, kernel=\"rbf\", gamma=0.04)\nX_reduced = rbf_pca.fit_transform(X)",
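The Incremental PCA approach described above can be sketched as follows (a minimal, assumed example using scikit-learn's `IncrementalPCA`, added for illustration):

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

# Pretend X is too large to decompose in one go: feed it in mini-batches
# so the full matrix never has to sit in memory for a single SVD.
rng = np.random.RandomState(42)
X = rng.randn(1000, 20)

inc_pca = IncrementalPCA(n_components=5)
for X_batch in np.array_split(X, 10):
    inc_pca.partial_fit(X_batch)  # update the components one batch at a time

X_reduced = inc_pca.transform(X)
print(X_reduced.shape)  # → (1000, 5)
```

The same `partial_fit` loop works when batches arrive online, e.g. read from disk with `np.memmap`.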
"LLE\nLocally Linear Embedding (LLE) is another very powerful nonlinear dimensionality reduction\n(NLDR) technique. It is a Manifold Learning technique that does not rely on projections like the previous\nalgorithms. In a nutshell, LLE works by first measuring how each training instance linearly relates to its\nclosest neighbors (c.n.), and then looking for a low-dimensional representation of the training set where\nthese local relationships are best preserved (more details shortly). \nOther techniques\nThere are many other dimensionality reduction techniques, several of which are available in Scikit-Learn.\nHere are some of the most popular:\n* Multidimensional Scaling (MDS) reduces dimensionality while trying to preserve the distances between the instances.\n\n\nIsomap creates a graph by connecting each instance to its nearest neighbors, then reduces dimensionality while trying to preserve the geodesic distances between the instances.\n\n\nt-Distributed Stochastic Neighbor Embedding (t-SNE) reduces dimensionality while trying to keep similar instances close and dissimilar instances apart. It is mostly used for visualization, in particular to visualize clusters of instances in high-dimensional space (e.g., to visualize the MNIST images in 2D).\n\n\nLinear Discriminant Analysis (LDA) is actually a classification algorithm, but during training it learns the most discriminative axes between the classes, and these axes can then be used to define a hyperplane onto which to project the data. The benefit is that the projection will keep classes as far apart as possible, so LDA is a good technique to reduce dimensionality before running another classification algorithm such as a Support Vector Machine (SVM) classifier discussed in the SVM lectures."
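A minimal LLE sketch on scikit-learn's Swiss-roll toy dataset (an illustration I am adding, with assumed hyperparameters, not part of the original text):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# A 3D dataset lying close to a twisted 2D manifold, which LLE can "unroll".
X, color = make_swiss_roll(n_samples=1000, noise=0.1, random_state=41)

# Each point is reconstructed from its 10 nearest neighbors, then a 2D
# embedding is found that best preserves those local relationships.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2, random_state=41)
X_unrolled = lle.fit_transform(X)
print(X_unrolled.shape)  # → (1000, 2)
```

Plotting `X_unrolled` colored by `color` (the position along the roll) would show the manifold flattened out.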
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
betoesquivel/onforums-application
|
summarizer/BuildingASummarizer.ipynb
|
mit
|
[
"First I need to start with an article dictionary",
"from testdataextractor.testdataextractor.extractor import Extractor\next = Extractor('../test_data/1957284403.ofs.gold.xml')\narticle = ext.extract(verbose=True)",
"I then need to put the data in a format that I can query\nMaybe a way to do this is via pandas?\nI want it to look like so:\n<table>\n<tr>\n <td>sentence</td>\n <td>comment</td>\n <td>links</td>\n</tr>\n<tr>\n <td>\n sentence id\n </td>\n <td>\n comment id, if it is from a comment\n </td>\n <td>\n list of sentences its linked to\n </td>\n</tr>\n</table>",
"import pandas as pd\n\nframe_art = pd.DataFrame.from_dict(article['sentences'], orient='index')\n\nframe_art",
"Excellent! Now get sentences with most number of links",
"def calc_row_len(row):\n if 'list' in str(type(row['links'])):\n return len(row['links']) \n else:\n return 0\nframe_num_links = frame_art.apply(\n (lambda row: calc_row_len(row)), axis=1\n)\nframe_with_lengths = pd.concat([frame_art, frame_num_links], axis=1)\n\ntop_sentences = frame_with_lengths.sort_values(by=0, axis=0, ascending=False)[:11]\ntop_sentences.columns = ['text', 'comment', 'links', 'link length']\nprint top_sentences.ix[:, ['links', 'link length']]\n\nprint '\\nCHUNKED SENTENCES'\nfor s in top_sentences['text']:\n print s[:100]\n\n#print \"These are the most linked sentences in the corpus.\"\n#print \"Sentences\\n\", top_sentences['text']\n#print \"Links they have\\n\", top_sentences['links']\n#print \"Number of links they have Links they have\\n\", top_sentences[0]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/timesketch
|
notebooks/MUS2019_CTF.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/google/timesketch/blob/master/notebooks/MUS2019_CTF.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nMagnet User Summit CTF 2019\nThe folks at Magnet Forensics had a conference recently, and as part of it they put together a digital forensics-themed Capture the Flag competition. I wasn't able to attend, but thankfully they released the CTF online a few days after the live competition ended. \nIt looked like a lot of fun and I wanted to take a crack at it using the open source tools we use/build here at Google. \nForensics Preprocessing\nI'm going to focus on how to find the answers to the CTF questions after all the processing has been done. I'll quickly summarize the processing steps I did to get to the state when I pick up my walkthrough.\nI started off by processing the provided E01 image with a basic log2timeline command; nothing special added:\nlog2timeline.py MUS2019-CTF.plaso MUS-CTF-19-DESKTOP-001.E01\nOnce that finished, I went to Timesketch, made a new sketch, and uploaded the MUS2019-CTF.plaso file I just made. The .plaso file is a database containing the results of my log2timeline run; Timesketch can read it and provide a nice, collaborative interface for reviewing and exploring that data.\nMost of what I'm going to show you is done in Colab by accessing the Timesketch API in Python. You can do most of the steps in the Timesketch web interface directly, but I wanted to demonstrate how you can use Python, Colab, Timesketch, and Plaso together to work a case. \nTimesketch & Colab Setup\nFirst, if you want to run this notebook and play along, click the 'Connect' button at the top right of the page. The Timesketch GitHub has Colab (Timesketch and Colab) that walks through how to install, connect, and explore a Sketch using Colab. 
Please check it out if you want a more thorough explanation of the setup; I'm just going to show the commands you need to run to get it working:",
"# Install the TimeSketch API client if you don't have it\n!pip install timesketch-api-client\n\n# Import some things we'll need\nfrom timesketch_api_client import config\nfrom timesketch_api_client import search\nimport pandas as pd\npd.options.display.max_colwidth = 60",
"Connect to Timesketch\nBy default, this will connect to the public demo Timesketch server, which David Cowen has graciously allowed to host a copy of the Plaso timeline of the MUS2019-CTF. Thanks Dave!",
"#@title Client Information \n# @markdown In order to connect to Timesketch you need to first get a Timesketch object, which will require you to answer\n# @markdown some questions the first time you execute this code. The answers are:\n# @markdown + **auth_mode**: timesketch (username/pwd combination)\n# @markdown + **host_uri**: https://demo.timesketch.org\n# @markdown + **username**: demo\n# @markdown + **password**: demo\n\nts_client = config.get_client(confirm_choices=True)",
"Now that we've connected to the Timesketch server, we need to select the Sketch that has the CTF timeline. \nFirst we'll list the available sketches, then print their names:",
"sketches = ts_client.list_sketches()\nctf = None\nfor sketch in sketches:\n print('[{0:d}] {1:s}'.format(sketch.id, sketch.name))\n if sketch.name == 'MUS2019 CTF':\n ctf = sketch",
"Then we'll select the MUS2019-CTF sketch (shown as sketch 3 above; you can change the number below to select a different sketch):",
"print(ctf.name)\nprint(ctf.description)",
"Lastly, I'll briefly explain a few parameters of the explore function, which we'll use heavily when answering questions.\n<sketch_name>.explore() is how we send queries to Timesketch and get results back. query_string, return_fields, and as_pandas are the main parameters I'll be using:\n - query_string: This is the same as the query you'd enter if you were using the Timesketch web interface.\n - return_fields: Here we specify what fields we want back from Timesketch. This is where we can get really specific using Colab and only get the things we're interested in (which varies depending on what data types we're expecting back).\n - as_pandas: This is just a boolean value which tells Timesketch to return a Pandas DataFrame, rather than a dictionary. We'll have this set to True in all our queries, since DataFrames are awesome!\nOkay, enough setup. Let's get to answering questions!\nQuestions\n\nI grouped the questions from the 'Basic - Desktop' section into three categories: NTFS, TeamViewer, and Registry.\nNTFS Questions\nThis first set of questions relates to aspects of NTFS: MFT entries, sequence numbers, USN entries, and VSNs.\nAs a little refresher, the 64-bit file reference address (or number) is made up of the MFT entry (48 bits) and sequence (16 bits) numbers. We often see this represented as something like 1234-2, with 1234 being the MFT entry number and 2 being the sequence number. Plaso calls the MFT entry number the inode, since that's the more generic term that applies across file systems.\nQ: What is the name of the file associated with MFT entry number 102698?\nSince Plaso parses out the MFT entry (or as it calls it, inode) into its own field, let's do a query for all records with that value:",
"search_obj = search.Search(ctf)\nsearch_obj.query_string = 'inode:102698'\nsearch_obj.return_fields='datetime,timestamp_desc,data_type,inode,filename'\nts_results = search_obj.table\nts_results[['datetime','timestamp_desc','data_type','inode','filename']]",
"Multiple results, as is expected since Plaso creates multiple records for different types of timestamps, but they all point to the same filename: /Users/Administrator/Downloads/TeamViewer_Setup.exe",
"ts_results.filename.unique()",
"Q: What is the file name that represented MFT entry 60725 with a sequence number of 10?\nThe quick way to answer this is to just search for the MFT entry number (60725) and look for references to sequence number 10 in the message field:",
"search_obj = search.Search(ctf)\nsearch_obj.query_string = '60725'\nsearch_obj.return_fields='datetime,timestamp_desc,data_type,filename,message'\nts_results = search_obj.table\nts_results[['datetime','timestamp_desc','data_type','filename','message']]",
"That's a bunch of rows, so let's filter it down by searching for messages that contain '60725-10':",
"ts_results[ts_results.message.str.contains('60725-10')]",
"That filename is really long and cut off; let's just select that field, then deduplicate using set():",
"set(ts_results[ts_results.message.str.contains('60725-10')].filename)",
"Another way to solve this is to query for the file reference number directly. That's not as easy as it sounds, since Plaso stores it in the hex form (I'm working on fixing that). We can work with that though! \nLet's do the same query as above, but add the file_reference field:",
"search_obj = search.Search(ctf)\nsearch_obj.query_string = '60725'\nsearch_obj.return_fields='datetime,timestamp_desc,data_type,file_reference,filename,message'\nts_results = search_obj.table\nts_results[['datetime','timestamp_desc','data_type','file_reference','filename','message']]",
"The file_reference value is not the format we want, since it's hard to tell what the sequence number is. We can convert it to a more useful form though:",
"# Drop any rows with NaN, since they aren't what we're looking for and will \n# break the below function.\nts_results = ts_results.dropna()\npd.options.display.max_colwidth = 110\n\n# Replace the file_reference hex value with the human-readable MFT-Seq version. \n# This is basically what Plaso does to display the result in the 'message' \n# string we searched for. \nts_results['file_reference'] = ts_results['file_reference'].map(\n lambda x: '{0:d}-{1:d}'.format(int(x) & 0xffffffffffff, int(x) >> 48))\nts_results[['datetime','timestamp_desc','data_type','file_reference','filename']]",
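As a standalone sanity check of the bit layout used in the cell above (pure Python, no Timesketch needed), here is the 48-bit entry / 16-bit sequence split packed and unpacked by hand:

```python
# The 64-bit NTFS file reference packs the MFT entry number into the low
# 48 bits and the sequence number into the high 16 bits.
entry, sequence = 60725, 10

# Pack the two fields into a single file reference address...
file_reference = (sequence << 48) | entry

# ...and unpack them again, mirroring the lambda used in the pandas snippet.
unpacked_entry = file_reference & 0xFFFFFFFFFFFF
unpacked_sequence = file_reference >> 48
print('{0:d}-{1:d}'.format(unpacked_entry, unpacked_sequence))  # → 60725-10
```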
"There. Now we have the file_reference number in an easier-to-read format, and the history of all filenames that MFT entry 60725 has had! It's easy to look for the entry with a sequence number of 10 and get our answer.\nQ: Which file name represents the USN record where the USN number is 546416480?\nLike other questions, the quick, generic way to answer is to just search for the unique detail; in this case, search in Timesketch for '546416480'. I'll show the more targeted way below, but it's pretty simple:",
"search_obj = search.Search(ctf)\nsearch_obj.query_string = 'update_sequence_number:546416480'\nsearch_obj.return_fields='datetime,timestamp_desc,data_type,update_sequence_number,filename'\nts_results = search_obj.table\nts_results.shape\n#ts_results[['datetime','timestamp_desc','data_type','update_sequence_number','filename']]",
"Q: What is the MFT sequence number associated with the file \"\\Users\\Administrator\\Desktop\\FTK_Imager_Lite_3.1.1\\FTK Imager.exe\"?\nWe'll handle this question like other ones involving the file reference address, except in this case we first need to find the MFT entry number (or inode) from the file name. Searching for the whole file path in Timesketch is problematic (slashes among other things), so let's search for the file name and then verify the path is right:",
"search_obj = search.Search(ctf)\nsearch_obj.query_string = 'FTK Imager.exe'\nsearch_obj.return_fields='datetime,timestamp_desc,data_type,inode,message,filename'\nts_results = search_obj.table\nts_results[['datetime','timestamp_desc','data_type','inode','message']]",
"In the second row of the results, we can find the correct path we're looking for in the message and see that the corresponding inode is 99916. We could do another search, similar to how we answered other questions... or we could just look down a few rows for a USN entry that shows: \"FTK Imager.exe File reference: 99916-4\". There's the answer!",
"ts_results[\n ~ts_results.filename.isna() & (\n ts_results.filename.str.contains(r'Users\\\\Administrator\\\\Desktop\\\\FTK_Imager_Lite_3.1.1\\\\FTK Imager.exe'))][['filename', 'inode']].drop_duplicates()",
"Q: What is the Volume Serial Number of the Desktop's OS volume?\nI know the VSN can be found in multiple places, but the first one I thought of was as part of a Prefetch file, so let's do it that way. \nI'll search for all 'volume creation' Prefetch records, since I don't really care about which particular one, beyond that it's from the OS drive.",
"search_obj = search.Search(ctf)\nsearch_obj.query_string = 'data_type:\"windows:volume:creation\"'\nsearch_obj.return_fields='datetime,timestamp_desc,data_type,device_path,hostname,serial_number,message'\nts_results = search_obj.table\n\npd.options.display.max_colwidth = 70\nts_results[['datetime','timestamp_desc','data_type','device_path','hostname','serial_number','message']]",
"You can see the VSN in a readable format at the end of the device_path or in the message string. I'm only seeing one value here, so we don't need to determine which drive was the OS one. If we did, I'd look for some system processes that need to run from the OS drive to get the right VSN. \nThat's good enough for the question, but let's also convert the serial_number field from an integer to the hex format the answer wants, just to be sure:",
"for serial_nr in ts_results.serial_number.unique():\n print('{0:08X}'.format(serial_nr))",
"TeamViewer Questions\nThe next group of questions involved TeamViewer, a common remote desktop program.\nQ: Which user installed Team Viewer?\nWe can start searching very broadly, then focus in on anything that stands out. Let's just search everything we have for \"TeamViewer\":",
"search_obj = search.Search(ctf)\nsearch_obj.query_string = 'TeamViewer'\nsearch_obj.return_fields='datetime,timestamp_desc,timestamp,data_type,message'\nts_results = search_obj.table\n\nts_results[['datetime','timestamp_desc','data_type','message']]",
"That returned a lot of results (600+). We could page through them all, but why not see if there are any interesting clusters first? That sounds like a job for a visualization!\nYou can do this multiple ways; I'll do it in Python in a second, but the explanation is a bit complicated. The easier way is to do the search in Timesketch, then go to Charts > Histogram:\nHistograms were part of the old UI and have not yet been ported back into the new UI. This picture is therefore from the old UI and needs to be updated.\n\nAnd here's how you'd do something similar in Python:",
"ts_results = ts_results.set_index('datetime')\nts_results['2018':].message.resample('D').count().plot()",
"Or to use grouping/aggregation in pandas",
"ts_results.reset_index(inplace=True)\nts_results['day'] = ts_results.datetime.dt.strftime('%Y%m%d')\ngroup = ts_results[['day', 'timestamp']].groupby('day', as_index=False)\n\ngroup_df = group.count().rename(columns={'timestamp': 'count'})\n\n\ngroup_df.sort_values('count', ascending=False)[:10]",
"Okay, so from the graphs it looks like we have a good cluster at the end of February; let's look closer. I'll slice the results to only show after 2019-02-20:",
"search_obj = search.Search(ctf)\n\ndate_chip = search.DateRangeChip()\ndate_chip.start_time = '2019-02-25T00:00:00'\ndate_chip.end_time = '2019-03-04T23:59:59'\n\nsearch_obj.query_string = 'TeamViewer'\nsearch_obj.add_chip(date_chip)\nsearch_obj.return_fields = '*'\n\nts_results = search_obj.table\n#ts_results = ts_results.set_index('datetime')\n#ts_results['2019-02-20':][['timestamp_desc','data_type','filename','message']]\n\n\nts_results.data_type.value_counts()\n\nts_results.search_string.value_counts()\n\nts_results[ts_results.data_type.str.contains('chrome')][['datetime', 'url', 'domain', 'search_string', 'message', 'title']]\n\nts_results[ts_results.data_type == 'fs:stat'][['datetime', 'display_name', 'timestamp_desc']]",
"So from this, in a short interval starting 2019-02-25T20:39, we can see:\n* a Google search for \"teamviewer\"\n* a visit in Chrome to teamviewer.com,\n* then teamviewer.com/en-us/teamviewer-automatic-download/,\n* and lastly a bunch of TeamViewer related files being created.\nThe web browser and files created were done under the Administrator account (per the path filename), so that's our answer.\nQ: How Many Times\nAt least how many times did the teamviewer_desktop.exe run?\nPrefetch is a great artifact for \"how many times did something run\"-type questions, so let's look for Prefetch execution entries for the program in question:",
"search_obj = search.Search(ctf)\n\nsearch_obj.query_string = 'data_type:\"windows:prefetch:execution\" AND teamviewer_desktop.exe'\nsearch_obj.return_fields = 'datetime,timestamp_desc,data_type,executable,run_count,message'\nts_results = search_obj.table\nts_results[['datetime','timestamp_desc','data_type','executable','run_count','message']]",
"Q: Execute Where\nAfter looking at the TEAMVIEWER_DESKTOP.EXE prefetch file, which path was the executable in at the time of execution?\nWe did all the work for this question with the previous query (the answer is in the message string), but we can explicitly query for the path:",
"search_obj = search.Search(ctf)\n\nsearch_obj.query_string = 'data_type:\"windows:prefetch:execution\" AND teamviewer_desktop.exe'\nsearch_obj.return_fields = '*'\n\nts_results = search_obj.table\nts_results[['datetime','timestamp_desc','data_type','executable','run_count', 'path_hints']]",
"Registry Questions\nThis last set of questions can be answered using the Windows Registry (and one from event logs).\nLots of registry questions depend on the Current Control Set, so let's verify what it is:",
"# Escaping fun: We need to esacpe the slashes in the key_path once for Timesketch and once for Python, so we'll have triple slashes (\\\\\\)\n\nsearch_obj = search.Search(ctf)\nsearch_obj.query_string = 'data_type:\"windows:registry:key_value\" AND key_path:\"HKEY_LOCAL_MACHINE\\\\\\System\\\\\\Select\"'\nsearch_obj.return_fields='datetime,timestamp_desc,data_type,message'\nts_results = search_obj.table\nts_results[['datetime','timestamp_desc','data_type','message']]",
"From the message, the Current control set is 1.\nQ: What was the timezone offset at the time of imaging? and What is the timezone of the Desktop\nI'm combining these, since the answer is in the same query:",
"search_obj = search.Search(ctf)\n\nsearch_obj.query_string = 'data_type:\"windows:registry:timezone\"'\nsearch_obj.return_fields = 'datetime,timestamp_desc,data_type,message'\nts_results = search_obj.table\n\nts_results[['datetime','timestamp_desc','data_type','message']]",
"The message is really long; let's pull it out:",
"message = list(ts_results.message.unique())[0]\nbuffer = []\nfirst = True\nkey = ''\nfor word in message.split():\n if first:\n print(word)\n first = False\n continue\n\n if not word.endswith(':'):\n buffer.append(word)\n continue\n \n if key:\n words = ' '.join(buffer)\n buffer = []\n print(f'{\" \"*4}{key} = {words}')\n\n key = word[:-1]\n\nwords = ' '.join(buffer)\nbuffer = []\nprint(f'{\" \"*4}{key} = {words}')\n",
"The name of the Timezone is in the message string, as is the ActiveTimeBias, which we can use to get the UTC offset:",
"# The ActiveTimeBias is the number of minutes to add to local time to\n# reach UTC, so divide by -60 to get the UTC offset in hours:\n420 / -60",
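To make the sign convention explicit, here is a small helper (my own sketch, not part of the original walkthrough) that turns an ActiveTimeBias value into a conventional UTC offset string:

```python
def bias_to_utc_offset(active_time_bias_minutes):
    # ActiveTimeBias is the number of minutes added to local time to get
    # UTC, so the human-readable offset has the opposite sign.
    offset_minutes = -active_time_bias_minutes
    sign = '+' if offset_minutes >= 0 else '-'
    hours, minutes = divmod(abs(offset_minutes), 60)
    return 'UTC{0}{1:02d}:{2:02d}'.format(sign, hours, minutes)

print(bias_to_utc_offset(420))   # → UTC-07:00 (matches the -7 above)
print(bias_to_utc_offset(-330))  # → UTC+05:30 (e.g. India Standard Time)
```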
"Q: When was the Windows OS installed?\nPlaso actually parses this out as it's own data_type, so querying for it is easy:",
"search_obj = search.Search(ctf)\n\nsearch_obj.query_string = 'data_type:\"windows:registry:installation\"'\nsearch_obj.return_fields = 'datetime,timestamp_desc,data_type,message'\nts_results = search_obj.table\n\nts_results[['datetime','timestamp_desc','data_type','message']]",
"Q: What is the IP address of the Desktop?\nWe already confirmed the Control Set is 001, so let's query for the registry key under that control set that holds the Interface information:",
"search_obj = search.Search(ctf)\nsearch_obj.query_string = 'key_path:\"System\\\\\\ControlSet001\\\\\\Services\\\\\\Tcpip\\\\\\Parameters\\\\\\Interfaces\"'\nsearch_obj.return_fields = 'datetime,timestamp_desc,data_type,message'\n\nts_results = search_obj.table\nts_results[['datetime','timestamp_desc','data_type','message']]",
"There are a few entries, but only the last one has what we want. Reading through it (or using Ctrl+F) we can find the 'IPAddress' is 64.44.141.76.",
"set(ts_results.message)",
"Or we can use str.extract:",
"ts_results.message.str.extract(r'DhcpIPAddress: \\[REG_SZ\\] ([^ ]+)').drop_duplicates()",
"Q: Which User Shutdown Windows on February 25th 2019?\nEvent logs seem like a good place to look for this answer, since a shutdown generates a 1074 event in the System event log. From the question, we have a fairly-narrow timeframe, so let's slice the results down to that after we do our query:",
"search_obj = search.Search(ctf)\nsearch_obj.query_string = 'data_type:\"windows:evtx:record\" AND display_name:\"System.evtx\" AND event_identifier:\"1074\"'\nsearch_obj.return_fields='*'\n\nts_results = search_obj.table\nts_results = ts_results.set_index('datetime')\nts_results['2019-02-25':'2019-02-26'][['timestamp_desc','data_type','username','message']]",
"Wrap Up\nThat's it! Thanks for reading and I hope you found this useful. This walkthrough covered most of the questions from the 'Basic - Desktop' category; I may do other sections as well if there is time/interest. If you found this useful, check out Kristinn's demonstration of Timesketch and Colab.\nYou can get the free, open source tools I used to solve the CTF:\n* Plaso / Log2Timeline: https://github.com/log2timeline/plaso\n* Timesketch: https://github.com/google/timesketch\n* Colab(oratory): https://colab.sandbox.google.com/notebooks/welcome.ipynb"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/graphics
|
tensorflow_graphics/notebooks/reflectance.ipynb
|
apache-2.0
|
[
"Copyright 2019 Google LLC.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Light interaction with materials\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/reflectance.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/reflectance.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nThe world around us is very complex and is made of a wide array of materials ranging from glass to wood. Each material possesses its own intrinsic properties and interacts differently with light. For instance, some are diffuse (e.g. paper or marble) and, given a lighting condition, look the same from any angle. Other materials (e.g. metal) have an appearance that can vary significantly and exhibit view-dependent effects such as specularities.\nModelling exactly how light interacts with materials is a complex process that involves effects like sub-surface scattering (e.g. skin) and refraction (e.g. water). In this Colab, we focus on the most common effect, which is reflection. Bidirectional reflectance distribution functions (BRDFs) are the method of choice when it comes to modelling reflectance. Given the direction of incoming light, BRDFs control the amount of light that bounces in the direction from which the surface is being observed (any gray vector in the image below).\n\nIn this Colab, a light will be shone onto three spheres, each with a material described in the image above, where the specular material is going to be modelled with the Phong specular model.\nNote: This Colab covers an advanced topic and hence focuses on providing a controllable toy example to form a high-level understanding of BRDFs rather than providing step-by-step details. For those interested, these details are nevertheless available in the code.\nSetup & Imports\nIf Tensorflow Graphics is not installed on your system, the following cell can install the Tensorflow Graphics package for you.",
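Before diving into the full TF-Graphics demo below, the Lambertian and Phong terms described above can be sketched for a single surface point in plain NumPy. This is a toy illustration only, not the tensorflow_graphics API: the `toy_brdf` helper and its normalization constants are assumptions chosen for readability, mirroring the normalized Phong model.

```python
import numpy as np

# Toy single-point BRDF evaluation (illustrative only, not the
# tensorflow_graphics API). All direction vectors are unit length.
def toy_brdf(incoming, outgoing, normal, albedo, shininess, spec_frac):
    # Lambertian term: view independent, albedo / pi.
    lambertian = albedo / np.pi
    # Phong term: mirror the incoming direction about the normal and
    # raise its alignment with the view direction to the shininess power.
    reflected = incoming - 2.0 * np.dot(incoming, normal) * normal
    cos_alpha = max(0.0, float(np.dot(reflected, outgoing)))
    phong = albedo * (shininess + 2.0) / (2.0 * np.pi) * cos_alpha ** shininess
    # Blend the two terms, as the demo does with specular_percentage.
    return (1.0 - spec_frac) * lambertian + spec_frac * phong

normal = np.array([0.0, 0.0, 1.0])
incoming = np.array([0.0, 0.0, -1.0])   # light shining straight down
outgoing = np.array([0.0, 0.0, 1.0])    # camera looking straight at the point
albedo = np.array([0.7, 1.0, 1.0])
print(toy_brdf(incoming, outgoing, normal, albedo, 4.0, 0.25))
```

In this head-on configuration the mirrored ray lines up exactly with the view direction, so the specular lobe is at its peak.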
"!pip install tensorflow_graphics",
"Now that Tensorflow Graphics is installed, let's import everything needed to run the demo contained in this notebook.",
"import math as m\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_graphics.rendering.reflectance import lambertian\nfrom tensorflow_graphics.rendering.reflectance import phong\nfrom tensorflow_graphics.rendering.camera import orthographic\nfrom tensorflow_graphics.geometry.representation import grid\nfrom tensorflow_graphics.geometry.representation import ray\nfrom tensorflow_graphics.geometry.representation import vector",
"Controllable lighting of a sphere",
"###############\n# UI controls #\n###############\n#@title Controls { vertical-output: false, run: \"auto\" }\nlight_x_position = -0.4 #@param { type: \"slider\", min: -1, max: 1 , step: 0.05 }\nalbedo_red = 0.7 #@param { type: \"slider\", min: 0.0, max: 1.0 , step: 0.1 }\nalbedo_green = 1 #@param { type: \"slider\", min: 0.0, max: 1.0 , step: 0.1 }\nalbedo_blue = 1 #@param { type: \"slider\", min: 0.0, max: 1.0 , step: 0.1 }\nlight_red = 1 #@param { type: \"slider\", min: 0.0, max: 1.0 , step: 0.1 }\nlight_green = 1 #@param { type: \"slider\", min: 0.0, max: 1.0 , step: 0.1 }\nlight_blue = 1 #@param { type: \"slider\", min: 0.0, max: 1.0 , step: 0.1 }\nspecular_percentage = 0.25 #@param { type: \"slider\", min: 0, max: 1 , step: 0.01 }\nshininess = 4 #@param { type: \"slider\", min: 0, max: 10, step: 1 }\ndiffuse_percentage = 1.0 - specular_percentage\ndtype = np.float64\nalbedo = np.array((albedo_red, albedo_green, albedo_blue), dtype=dtype)\n\ndef compute_intersection_normal_sphere(image_width, image_height, sphere_radius,\n sphere_center, dtype):\n pixel_grid_start = np.array((0.5, 0.5), dtype=dtype)\n pixel_grid_end = np.array((image_width - 0.5, image_height - 0.5), dtype=dtype)\n pixel_nb = np.array((image_width, image_height))\n pixels = grid.generate(pixel_grid_start, pixel_grid_end, pixel_nb)\n\n pixel_ray = tf.math.l2_normalize(orthographic.ray(pixels), axis=-1)\n zero_depth = np.zeros([image_width, image_height, 1])\n pixels_3d = orthographic.unproject(pixels, zero_depth)\n\n intersections_points, normals = ray.intersection_ray_sphere(\n sphere_center, sphere_radius, pixel_ray, pixels_3d)\n intersections_points = np.nan_to_num(intersections_points)\n normals = np.nan_to_num(normals)\n return intersections_points[0, :, :, :], normals[0, :, :, :]\n\n#####################################\n# Setup the image, sphere and light #\n#####################################\n# Image dimensions\nimage_width = 400\nimage_height = 300\n\n# Sphere center and 
radius\nsphere_radius = np.array((100.0,), dtype=dtype)\nsphere_center = np.array((image_width / 2.0, image_height / 2.0, 300.0),\n dtype=dtype)\n\n# Set the light along the image plane\nlight_position = np.array((image_width / 2.0 + light_x_position * image_width,\n image_height / 2.0, 0.0),\n dtype=dtype)\nvector_light_to_sphere_center = light_position - sphere_center\nlight_intensity_scale = vector.dot(\n vector_light_to_sphere_center, vector_light_to_sphere_center,\n axis=-1) * 4.0 * m.pi\nlight_intensity = np.array(\n (light_red, light_green, light_blue)) * light_intensity_scale\n\n################################################################################################\n# For each pixel in the image, estimate the corresponding surface point and associated normal. #\n################################################################################################\nintersection_3d, surface_normal = compute_intersection_normal_sphere(\n image_width, image_height, sphere_radius, sphere_center, dtype)\n\n#######################################\n# Reflectance and radiance estimation #\n#######################################\nincoming_light_direction = tf.math.l2_normalize(\n intersection_3d - light_position, axis=-1)\noutgoing_ray = np.array((0.0, 0.0, -1.0), dtype=dtype)\nalbedo = tf.broadcast_to(albedo, tf.shape(surface_normal))\n\n# Lambertian BRDF\nbrdf_lambertian = diffuse_percentage * lambertian.brdf(incoming_light_direction, outgoing_ray,\n surface_normal, albedo)\n# Phong BRDF\nbrdf_phong = specular_percentage * phong.brdf(incoming_light_direction, outgoing_ray, surface_normal,\n np.array((shininess,), dtype=dtype), albedo)\n# Composite BRDF\nbrdf_composite = brdf_lambertian + brdf_phong\n# Irradiance\ncosine_term = vector.dot(surface_normal, -incoming_light_direction)\ncosine_term = tf.math.maximum(tf.zeros_like(cosine_term), cosine_term)\nvector_light_to_surface = intersection_3d - light_position\nlight_to_surface_distance_squared = vector.dot(\n 
vector_light_to_surface, vector_light_to_surface, axis=-1)\nirradiance = light_intensity / (4 * m.pi *\n light_to_surface_distance_squared) * cosine_term\n# Rendering equation\nzeros = tf.zeros(intersection_3d.shape)\nradiance = brdf_composite * irradiance\nradiance_lambertian = brdf_lambertian * irradiance\nradiance_phong = brdf_phong * irradiance\n\n###############################\n# Display the rendered sphere #\n###############################\n# Saturates radiances at 1 for rendering purposes.\nradiance = np.minimum(radiance, 1.0)\nradiance_lambertian = np.minimum(radiance_lambertian, 1.0)\nradiance_phong = np.minimum(radiance_phong, 1.0)\n# Gamma correction\nradiance = np.power(radiance, 1.0 / 2.2)\nradiance_lambertian = np.power(radiance_lambertian, 1.0 / 2.2)\nradiance_phong = np.power(radiance_phong, 1.0 / 2.2)\n\nplt.figure(figsize=(20, 20))\n\n# Diffuse\nradiance_lambertian = np.transpose(radiance_lambertian, (1, 0, 2))\nax = plt.subplot(\"131\")\nax.axes.get_xaxis().set_visible(False)\nax.axes.get_yaxis().set_visible(False)\nax.grid(False)\nax.set_title(\"Lambertian\")\n_ = ax.imshow(radiance_lambertian)\n\n# Specular\nradiance_phong = np.transpose(radiance_phong, (1, 0, 2))\nax = plt.subplot(\"132\")\nax.axes.get_xaxis().set_visible(False)\nax.axes.get_yaxis().set_visible(False)\nax.grid(False)\nax.set_title(\"Specular - Phong\")\n_ = ax.imshow(radiance_phong)\n\n# Diffuse + specular\nradiance = np.transpose(radiance, (1, 0, 2))\nax = plt.subplot(\"133\")\nax.axes.get_xaxis().set_visible(False)\nax.axes.get_yaxis().set_visible(False)\nax.grid(False)\nax.set_title(\"Combined lambertian and specular\")\n_ = ax.imshow(radiance)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/profit-bidder
|
solution_test/profit_bidder_quickstart.ipynb
|
apache-2.0
|
[
"License",
"# Copyright 2022 Google LLC\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# https://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Run in Colab\nOverview\nThis notebook acts as a quickstart guide to help you understand the different steps involved in the solution. Unlike the production pipeline that you can set up using the complete solution, the notebook runs through all the steps in one place using synthesized test data. Please note that you will not be able to test the final step because the data is synthesized.\nScope of this notebook\nDataset\nWe provide synthesized data sets in the git repo that you will clone and use in the notebook. There are three CSV files:\n* p_Campaign_43939335402485897.csv\n* p_Conversion_43939335402485897.csv\n* client_profit.csv\nIn addition, we also provide the schema for the above files in JSON format, which you will use in the notebook to create the tables in BigQuery.\nObjective\nTo help you become conversant with the following:\n1. Set up your environment (install the libraries, initialize the variables, authenticate to Google Cloud, etc.)\n1. Create a service account and two BigQuery datasets\n1. Transform the data, create batches of the data, and push the data through a REST API call to CM360\nCosts\nThis tutorial uses billable components of Google Cloud:\n* BigQuery\nUse the Pricing Calculator to generate a cost estimate based on your projected usage.\nBefore you begin\nFor this reference guide, you need a Google Cloud project.\nYou can create a new one, or select a project you already created.\nThe following steps are required, regardless of where you are running your notebook (locally or in a Cloud AI Platform Notebook).\n* Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n* Make sure that billing is enabled for your project.\n* (When using non-Google Cloud local environments) Install the Google Cloud SDK\nMandatory variables\nYou must set the variables below:\n* PB_GCP_PROJECT to [Your Google Cloud Project]\n* PB_GCP_APPLICATION_CREDENTIALS to [Full path with the file name to the Service Account JSON file, if you chose to use a Service Account to authenticate to Google Cloud]\nSetup environment\nPip install the appropriate packages",
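Step 3 above (create batches of the data and push them to CM360) hinges on splitting rows into fixed-size chunks so that each REST call stays under the API's payload limit. A minimal sketch of that idea, using a hypothetical `make_batches` helper (the real pipeline later sets `PB_BATCH_SIZE = 100`):

```python
# Hypothetical illustration of the batching done before uploading
# conversions to CM360; plain dicts stand in for BigQuery rows.
def make_batches(rows, batch_size=100):
    """Yield successive lists of at most batch_size rows."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

rows = [{"conversionId": str(i)} for i in range(250)]
batches = list(make_batches(rows))
print([len(b) for b in batches])  # [100, 100, 50]
```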
"%pip install google-cloud-storage # for Storage Account\n%pip install google-cloud # for cloud sdk\n%pip install google-cloud-bigquery # for BigQuery\n%pip install google-cloud-bigquery-storage # for BigQuery Storage client\n%pip install google-api-python-client # for Key management\n%pip install oauth2client # for Key management",
"Initialize all the variables\nRemove all environment variables\nThis comes in handy when troubleshooting.",
"# remove all local variables\n# ^^^^^^^^^^^^^^^^^^^^^\n# beg utils\n# ^^^^^^^^^^^^^^^^^^^^^\n# local scope\nmyvar = [key for key in locals().keys() if not key.startswith('_')]\nprint (len(locals().keys()))\nprint (len(myvar))\n# print (myvar)\nfor eachvar in myvar:\n print (eachvar)\n del locals()[eachvar]\nprint (len(locals().keys()))\n# global scope\nmyvar = [key for key in globals().keys() if not key.startswith('_')]\nprint (len(globals().keys()))\nprint (len(myvar))\n# print (myvar)\nfor eachvar in myvar:\n print (eachvar)\n del globals()[eachvar]\nprint (len(globals().keys()))\n# ^^^^^^^^^^^^^^^^^^^^^\n# end utils\n# ^^^^^^^^^^^^^^^^^^^^^",
"Create Python and shell environment variables",
"# GCP Project\nPB_GCP_PROJECT = \"my-project\" #@param {type:\"string\"}\n\n# Default values\nPB_SOLUTION_PREFIX=\"pb_\" #@param {type:\"string\"}\n# service account\nPB_SERVICE_ACCOUNT_NAME=PB_SOLUTION_PREFIX+\"profit-bidder\" #@param {type:\"string\"}\nPB_SERVICE_ACCOUNT_NAME=PB_SERVICE_ACCOUNT_NAME.replace('_','-')\nPB_SA_ROLES=\"roles/bigquery.dataViewer roles/pubsub.publisher roles/iam.serviceAccountTokenCreator\"\nPB_SA_EMAIL=PB_SERVICE_ACCOUNT_NAME + '@' + PB_GCP_PROJECT + '.iam.gserviceaccount.com'\n\n# BQ DS for SA360/CM360\nPB_DS_SA360=PB_SOLUTION_PREFIX + \"sa360_data\" #@param {type:\"string\"}\n# BQ DS for Business data \nPB_DS_BUSINESS_DATA=PB_SOLUTION_PREFIX + \"business_data\" #@param {type:\"string\"}\n# Client margin table\nPB_CLIENT_MARGIN_DATA_TABLE_NAME=\"client_margin_data_table\" #@param {type:\"string\"}\n# Transformed data table\nPB_CM360_TABLE=\"my_transformed_data\" #@param {type:\"string\"}\nPB_CM360_PROFILE_ID=\"my_cm_profileid\" #@param {type:\"string\"}\nPB_CM360_FL_ACTIVITY_ID=\"my_fl_activity_id\" #@param {type:\"string\"}\nPB_CM360_FL_CONFIG_ID=\"my_fl_config_id\" #@param {type:\"string\"}\n\n# DON'T CHANGE THE BELOW VARIABLES; they are hardcoded to match the test dataset\nPB_SQL_TRANSFORM_ADVERTISER_ID=\"43939335402485897\" # synthesized id for testing\nPB_CAMPAIGN_TABLE_NAME=\"p_Campaign_\" + PB_SQL_TRANSFORM_ADVERTISER_ID\nPB_CONVERSION_TABLE_NAME=\"p_Conversion_\" + PB_SQL_TRANSFORM_ADVERTISER_ID\n\nPB_TIMEZONE=\"America/New_York\"\n\nPB_REQUIRED_KEYS = [\n 'conversionId',\n 'conversionQuantity',\n 'conversionRevenue',\n 'conversionTimestamp',\n 'conversionVisitExternalClickId',\n]\nPB_API_SCOPES = ['https://www.googleapis.com/auth/dfareporting',\n 'https://www.googleapis.com/auth/dfatrafficking',\n 'https://www.googleapis.com/auth/ddmconversions',\n 'https://www.googleapis.com/auth/devstorage.read_write']\nPB_CM360_API_NAME = 'dfareporting'\nPB_CM360_API_VERSION = 'v3.5'\n\nPB_BATCH_SIZE=100\n\n# create a variable that you can pass 
to the bq Cell magic\n# import the variables to the shell\nimport os\nPB_all_args = [key for key in locals().keys() if not key.startswith('_')]\n# print (PB_all_args)\nPB_BQ_ARGS = {}\nfor PB_each_key in PB_all_args:\n # print (f\"{PB_each_key}:{locals()[PB_each_key]}\")\n if PB_each_key.upper().startswith(PB_SOLUTION_PREFIX.upper()):\n PB_BQ_ARGS[PB_each_key] = locals()[PB_each_key]\n os.environ[PB_each_key] = str(PB_BQ_ARGS[PB_each_key])\nprint (PB_BQ_ARGS)",
"Set up your Google Cloud project",
"# set the desired Google Cloud project\n!gcloud config set project $PB_GCP_PROJECT\nimport os\nos.environ['GOOGLE_CLOUD_PROJECT'] = PB_GCP_PROJECT\n# validate that the Google Cloud project has been set properly.\n!echo 'gcloud will use the below project:'\n!gcloud info --format='value(config.project)'",
"Authenticate with Google Cloud\nAuthenticate using a Service Account key file",
"# download the ServiceAccount key and provide the path to the file below\n# PB_GCP_APPLICATION_CREDENTIALS = \"<Full path with the file name to the above downloaded json file>\"\n\n# uncomment the code below when running in a Colab environment\n# authenticate using a service account\n# from google.colab import files\n# # Upload service account key\n# keyfile_upload = files.upload()\n# PB_GCP_APPLICATION_CREDENTIALS = list(keyfile_upload.keys())[0]\n\n# import os\n# os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = PB_GCP_APPLICATION_CREDENTIALS\n# # set the account\n# !echo \"Setting Service Account:\" $PB_GCP_APPLICATION_CREDENTIALS\n# !gcloud auth activate-service-account --key-file=$PB_GCP_APPLICATION_CREDENTIALS",
"Authenticate using OAuth",
"# authenticate using OAuth (takes effect only when running in Colab)\nimport sys\nif 'google.colab' in sys.modules:\n from google.colab import auth as google_auth\n google_auth.authenticate_user()",
"Enable the Google Cloud services required by the solution",
"# set the proper Permission for the required Google Cloud Services\n!gcloud services enable \\\n bigquery.googleapis.com \\\n bigquerystorage.googleapis.com \\\n bigquerydatatransfer.googleapis.com \\\n doubleclickbidmanager.googleapis.com \\\n doubleclicksearch.googleapis.com \\\n storage-api.googleapis.com ",
"Utility functions\nDelete a dataset in BigQuery (DDL)",
"# delete the BigQuery dataset...!!! BE CAREFUL !!!\ndef delete_dataset(dataset_id):\n \"\"\"Deletes a BigQuery dataset\n This is not recommended for use in a production environment.\n Comes in handy in the iterative development and testing phases of the SDLC.\n !!! BE CAREFUL !!!!\n Args:\n dataset_id(:obj:`str`): The BigQuery dataset name that we want to delete\n \"\"\"\n # [START bigquery_delete_dataset]\n from google.cloud import bigquery\n # Construct a BigQuery client object.\n client = bigquery.Client()\n # dataset_id = 'your-project.your_dataset'\n # Use the delete_contents parameter to delete a dataset and its contents.\n # Use the not_found_ok parameter to not receive an error if the\n # dataset has already been deleted.\n client.delete_dataset(\n dataset_id, delete_contents=True, not_found_ok=True\n ) # Make an API request.\n print(\"Deleted dataset '{}'.\".format(dataset_id))",
"Delete a table in BigQuery (DDL)",
"# delete BigQuery table if not needed...!!! BE CAREFUL !!!\ndef delete_table(table_id):\n \"\"\"Deletes a BigQuery table\n This is not recommended for use in a production environment.\n Comes in handy in the iterative development and testing phases of the SDLC.\n !!! BE CAREFUL !!!!\n Args:\n table_id(:obj:`str`): The BigQuery table name that we want to delete\n \"\"\"\n from google.cloud import bigquery\n # Construct a BigQuery client object.\n client = bigquery.Client()\n # client.delete_table(table_id, not_found_ok=True) # Make an API request.\n client.delete_table(table_id) # Make an API request.\n print(\"Deleted table '{}'.\".format(table_id))",
"Delete a service account",
"# delete a service account\ndef delete_service_account(PB_GCP_PROJECT: str,\n PB_ACCOUNT_NAME: str\n ):\n \"\"\"Deletes a service account\n\n This is not recommended for use in a production environment.\n Comes in handy in the iterative development and testing phases of the SDLC.\n !!! BE CAREFUL !!!!\n\n Args:\n PB_GCP_PROJECT:(:obj:`str`): Google Cloud project for deployment\n PB_ACCOUNT_NAME:(:obj:`str`): Name of the service account.\n \"\"\"\n\n from googleapiclient import discovery\n from oauth2client.client import GoogleCredentials\n\n credentials = GoogleCredentials.get_application_default()\n\n service = discovery.build('iam', 'v1', credentials=credentials)\n\n # The resource name of the service account in the following format:\n # `projects/{PROJECT_ID}/serviceAccounts/{ACCOUNT}`.\n # Using `-` as a wildcard for the `PROJECT_ID` will infer the project from\n # the account. The `ACCOUNT` value can be the `email` address or the\n # `unique_id` of the service account.\n name = f'projects/{PB_GCP_PROJECT}/serviceAccounts/{PB_ACCOUNT_NAME}@{PB_GCP_PROJECT}.iam.gserviceaccount.com'\n\n print(\"Going to delete service account '{}'.\".format(name)) \n request = service.projects().serviceAccounts().delete(name=name)\n request.execute() \n print(\"Account deleted\")",
"Profit bidder solution\nCreate the service account and BigQuery datasets:\n\nService account (the same one used to push the conversions to SA360/CM360)\nBQ DS for SA360/CM360\nBQ DS for Business data",
"%%bash\n# create the service account\n# and add necessary iam roles\nfunction get_roles {\n gcloud projects get-iam-policy ${PB_GCP_PROJECT} --flatten=\"bindings[].members\" --format='table(bindings.role)' --filter=\"bindings.members:${PB_SA_EMAIL}\"\n}\nfunction create_service_account {\n echo \"Creating service account $PB_SA_EMAIL\"\n gcloud iam service-accounts describe $PB_SA_EMAIL > /dev/null 2>&1\n RETVAL=$?\n if (( ${RETVAL} != \"0\" )); then\n gcloud iam service-accounts create ${PB_SERVICE_ACCOUNT_NAME} --description 'Profit Bidder Service Account' --project ${PB_GCP_PROJECT}\n fi\n for role in ${PB_SA_ROLES}; do\n echo -n \"Adding ${PB_SERVICE_ACCOUNT_NAME} to ${role} \"\n if get_roles | grep $role &> /dev/null; then\n echo \"already added.\"\n else\n gcloud projects add-iam-policy-binding ${PB_GCP_PROJECT} --member=\"serviceAccount:${PB_SA_EMAIL}\" --role=\"${role}\"\n echo \"added.\"\n fi\n done \n}\n# Creates the service account and adds necessary permissions\ncreate_service_account\n\nfunction create_bq_ds {\n dataset=$1\n echo \"Creating BQ dataset: '${dataset}'\" \n bq --project_id=${PB_GCP_PROJECT} show --dataset ${dataset} > /dev/null 2>&1\n RETVAL=$?\n if (( ${RETVAL} != \"0\" )); then\n bq --project_id=${PB_GCP_PROJECT} mk --dataset ${dataset}\n else\n echo \"Reusing ${dataset}.\"\n fi\n}\n#create the BQ DSs\ncreate_bq_ds $PB_DS_SA360\ncreate_bq_ds $PB_DS_BUSINESS_DATA",
"Download the test data\nThe test data is in the 'solution_test' folder.",
"%%bash\n# Download the test data from gitrepo\nDIR=$HOME/solutions/profit-bidder\nif [ -d \"$DIR\" ]\nthen\n echo $DIR already exists.\nelse\n mkdir -p $HOME/solutions/profit-bidder\n cd $HOME/solutions/profit-bidder\n git clone https://github.com/google/profit-bidder.git .\nfi\nexport PB_TEST_DATA_DIR=$DIR/solution_test\nls -ltrah $PB_TEST_DATA_DIR\necho $PB_TEST_DATA_DIR folder contains the test data.",
"Upload the test data to BigQuery",
"%%bash\n# uploads the test data into BigQuery\nfunction create_bq_table {\n dataset=$1\n table_name=$2\n schema_name=$3\n\n sql_result=$(list_bq_table $1 $2)\n echo \"Creating BQ table: '${dataset}.${table_name}'\" \n if [[ \"$sql_result\" == *\"1\"* ]]; then\n echo \"Reusing ${dataset}.${table_name}.\"\n else\n bq --project_id=${PB_GCP_PROJECT} mk -t --schema ${schema_name} --time_partitioning_type DAY ${dataset}.${table_name}\n fi \n}\n\nfunction delete_bq_table {\n dataset=$1\n table_name=$2\n sql_result=$(list_bq_table $1 $2)\n echo \"Deleting BQ table: '${dataset}.${table_name}'\" \n if [[ \"$sql_result\" == *\"1\"* ]]; then\n bq rm -f -t $PB_GCP_PROJECT:$dataset.$table_name\n else\n echo \"${dataset}.${table_name} doesn't exist.\"\n fi \n}\n\nfunction list_bq_table {\n dataset=$1\n table_name=$2\n echo \"Checking if BQ table exists: '${dataset}.${table_name}'\" \n sql_query='SELECT\n COUNT(1) AS cnt\n FROM \n `<myproject>`.<mydataset>.__TABLES_SUMMARY__\n WHERE table_id = \"<mytable_name>\"'\n sql_query=\"${sql_query/<myproject>/${PB_GCP_PROJECT}}\"\n sql_query=\"${sql_query/<mydataset>/${dataset}}\"\n sql_query=\"${sql_query/<mytable_name>/${table_name}}\"\n\n bq_qry_cmd=\"bq query --use_legacy_sql=false --format=csv '<mysql_qery>'\"\n bq_qry_cmd=\"${bq_qry_cmd/<mysql_qery>/${sql_query}}\"\n sql_result=$(eval $bq_qry_cmd) \n if [[ \"$sql_result\" == *\"1\"* ]]; then\n echo \"${dataset}.${table_name} exists\"\n echo \"1\"\n else\n echo \"${dataset}.${table_name} doesn't exist\"\n echo \"0\"\n fi \n}\n\nfunction load_bq_table {\n dataset=$1\n table_name=$2\n data_file=$3\n schema_name=$4\n sql_result=$(list_bq_table $1 $2)\n echo \"Loading data to BQ table: '${dataset}.${table_name}'\" \n if [[ \"$sql_result\" == *\"1\"* ]]; then\n delete_bq_table $dataset $table_name\n fi \n if [[ \"$schema_name\" == *\"autodetect\"* ]]; then\n bq --project_id=${PB_GCP_PROJECT} load \\\n --autodetect \\\n --source_format=CSV \\\n $dataset.$table_name \\\n $data_file \n 
else\n create_bq_table $dataset $table_name $schema_name\n bq --project_id=${PB_GCP_PROJECT} load \\\n --source_format=CSV \\\n --time_partitioning_type=DAY \\\n --skip_leading_rows=1 \\\n ${dataset}.${table_name} \\\n ${data_file}\n fi \n}\n\n# save the current working directory\ncurrent_working_dir=`pwd`\n\n# change to the test data directory\nDIR=$HOME/solutions/profit-bidder\nexport PB_TEST_DATA_DIR=$DIR/solution_test\nls -ltrah $PB_TEST_DATA_DIR\necho $PB_TEST_DATA_DIR folder contains the test data.\ncd $PB_TEST_DATA_DIR\npwd\n\n# create campaign table\n# load test data to campaign table\nload_bq_table $PB_DS_SA360 $PB_CAMPAIGN_TABLE_NAME \"p_Campaign_${PB_SQL_TRANSFORM_ADVERTISER_ID}.csv\" \"p_Campaign_schema.json\"\n# create conversion table\n# load test data to conversion table\nload_bq_table $PB_DS_SA360 $PB_CONVERSION_TABLE_NAME \"p_Conversion_${PB_SQL_TRANSFORM_ADVERTISER_ID}.csv\" \"${PB_TEST_DATA_DIR}/p_Conversion_schema.json\"\n# load test profit data\nload_bq_table $PB_DS_BUSINESS_DATA $PB_CLIENT_MARGIN_DATA_TABLE_NAME \"client_profit.csv\" \"autodetect\"\n\n# change to the original working directory\ncd $current_working_dir\npwd\n\n",
"Create a BigQuery client, import the libraries, load the bigquery Cell magic",
"# create a BigQuery client\nfrom google.cloud import bigquery\nbq_client = bigquery.Client(project=PB_GCP_PROJECT)\n# load the bigquery Cell magic\n# %load_ext google.cloud.bigquery\n%reload_ext google.cloud.bigquery\n\n# test that BigQuery client works\nsql = \"\"\"\n SELECT name\n FROM `bigquery-public-data.usa_names.usa_1910_current`\n WHERE state = 'TX'\n LIMIT 100\n\"\"\"\n\n# Run a Standard SQL query using the environment's default project\ndf = bq_client.query(sql).to_dataframe()\ndf",
"Transform and aggregate",
"# The below query transforms the data from Campaign, Conversion, \n# and profit tables.\naggregate_sql = f\"\"\"\n-- Copyright 2021 Google LLC\n--\n-- Licensed under the Apache License, Version 2.0 (the \"License\");\n-- you may not use this file except in compliance with the License.\n-- You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing, software\n-- distributed under the License is distributed on an \"AS IS\" BASIS,\n-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-- See the License for the specific language governing permissions and\n-- limitations under the License.\n\n-- ****** TEMPLATE CODE ******\n-- NOTE: Please thoroughly review and test your version of this query before launching your pipeline\n-- The resulting data from this script should provide all the necessary columns for upload via \n-- the CM360 API and the SA360 API\n\n-- \n-- the below placeholders must be replaced with appropriate values.\n-- install.sh does so\n-- project_id as: {PB_GCP_PROJECT}\n-- sa360_dataset_name as: {PB_DS_SA360}\n-- advertiser_id as: {PB_SQL_TRANSFORM_ADVERTISER_ID}\n-- timezone as: America/New_York e.g. 
America/New_York\n-- floodlight_name as: My Sample Floodlight Activity\n-- account_type as: Other engines\n-- gmc_dataset_name as: pb_gmc_data\n-- gmc_account_id as: mygmc_account_id\n-- business_dataset_name as: {PB_DS_BUSINESS_DATA}\n-- client_margin_data_table as: {PB_CLIENT_MARGIN_DATA_TABLE_NAME}\n-- client_profit_data_sku_col as: sku\n-- client_profit_data_profit_col as: profit\n-- target_floodlight_name as: My Sample Floodlight Activity\n-- product_sku_var as: u9\n-- product_quantity_var as: u10\n-- product_unit_price_var as: u11\n-- product_sku_regex as: (.*?);\n-- product_quantity_regex as: (.*?);\n-- product_unit_price_regex as: (.*?);\n-- product_sku_delim as: |\n-- product_quantity_delim as: |\n-- product_unit_price_delim as: |\n-- \n\nWITH\ncampaigns AS (\n -- Example: Extracting all campaign names and IDs if needed for filtering for\n -- conversions for a subset of campaigns\n SELECT\n campaign,\n campaignId,\n row_number() OVER (partition BY campaignId ORDER BY lastModifiedTimestamp DESC) as row_num -- for de-duping\n FROM `{PB_GCP_PROJECT}.{PB_DS_SA360}.p_Campaign_{PB_SQL_TRANSFORM_ADVERTISER_ID}`\n -- Be sure to replace the Timezone with what is appropriate for your use case\n WHERE EXTRACT(DATE FROM _PARTITIONTIME) >= DATE_SUB(CURRENT_DATE('America/New_York'), INTERVAL 7 DAY)\n)\n,expanded_conversions AS (\n -- Parses out all relevant product data from a conversion request string\n SELECT\n conv.*,\n campaign,\n -- example of U-Variables that are parsed to extract product purchase data\n SPLIT(REGEXP_EXTRACT(floodlightEventRequestString, \"u9=(.*?);\"),\"|\") AS u9,\n SPLIT(REGEXP_EXTRACT(floodlightEventRequestString, \"u10=(.*?);\"),\"|\") AS u10,\n SPLIT(REGEXP_EXTRACT(floodlightEventRequestString, \"u11=(.*?);\"),\"|\") AS u11,\n FROM `{PB_GCP_PROJECT}.{PB_DS_SA360}.p_Conversion_{PB_SQL_TRANSFORM_ADVERTISER_ID}` AS conv\n LEFT JOIN (\n SELECT campaign, campaignId\n FROM campaigns\n WHERE row_num = 1\n GROUP BY 1,2\n ) AS camp\n USING 
(campaignId)\n WHERE\n -- Filter for conversions that occurred in the previous day\n -- Be sure to replace the Timezone with what is appropriate for your use case\n floodlightActivity IN ('My Sample Floodlight Activity')\n AND accountType = 'Other engines' -- filter by Account Type as needed\n)\n,flattened_conversions AS (\n -- Flattens the extracted product data for each conversion, which leaves us with a row\n -- of data for each product purchased as part of a given conversion\n SELECT\n advertiserId,\n campaignId,\n conversionId,\n skuId,\n pos1,\n quantity,\n pos2,\n cost,\n pos3\n FROM expanded_conversions,\n UNNEST(expanded_conversions.u9) AS skuId WITH OFFSET pos1,\n UNNEST(expanded_conversions.u10) AS quantity WITH OFFSET pos2,\n UNNEST(expanded_conversions.u11) AS cost WITH OFFSET pos3\n WHERE pos1 = pos2 AND pos1 = pos3 AND skuId != ''\n GROUP BY 1,2,3,4,5,6,7,8,9\n ORDER BY conversionId\n)\n,inject_gmc_margin AS (\n -- Merges Margin data with the products found in the conversion data\n SELECT \n advertiserId,\n campaignId,\n conversionId,\n skuId,\n quantity,\n IF(cost = '', '0', cost) as cost,\n pos1,\n pos2,\n pos3,\n -- PLACEHOLDER MARGIN, X% for unclassified items\n CASE\n WHEN profit IS NULL THEN 0.0\n ELSE profit\n END AS margin,\n sku,\n FROM flattened_conversions\n LEFT JOIN `{PB_GCP_PROJECT}.{PB_DS_BUSINESS_DATA}.{PB_CLIENT_MARGIN_DATA_TABLE_NAME}`\n ON flattened_conversions.skuId = sku\ngroup by 1,2,3,4,5,6,7,8,9,10,11\n)\n,all_conversions as (\n -- Rolls up all previously expanded conversion data while calculating profit based on the matched \n -- margin value. 
Also assigns timestamp in millis and micros \n SELECT\n e.account,\n e.accountId,\n e.accountType,\n e.advertiser,\n igm.advertiserId,\n e.agency,\n e.agencyId,\n igm.campaignId,\n e.campaign,\n e.conversionAttributionType,\n e.conversionDate,\n -- '00' may be changed to any string value that will help you identify these\n -- new conversions in reporting\n CONCAT(igm.conversionId, '00') as conversionId,\n e.conversionLastModifiedTimestamp,\n -- Note: Rounds float quantity and casts to INT, change based on use case\n -- This is done to support the CM360 API\n CAST(ROUND(e.conversionQuantity) AS INT64) AS conversionQuantity,\n e.conversionRevenue,\n SUM(\n FLOOR(CAST(igm.cost AS FLOAT64))\n ) AS CALCULATED_REVENUE,\n -- PROFIT CALCULATED HERE, ADJUST LOGIC AS NEEDED FOR YOUR USE CASE\n ROUND(\n SUM(\n -- multiply item cost by class margin\n SAFE_MULTIPLY(\n CAST(igm.cost AS FLOAT64),\n igm.margin)\n ),2\n ) AS CALCULATED_PROFIT,\n e.conversionSearchTerm,\n e.conversionTimestamp,\n -- SA360 timestamp should be in millis\n UNIX_MILLIS(e.conversionTimestamp) as conversionTimestampMillis,\n -- CM360 Timestamp should be in micros\n UNIX_MICROS(e.conversionTimestamp) as conversionTimestampMicros,\n e.conversionType,\n e.conversionVisitExternalClickId,\n e.conversionVisitId,\n e.conversionVisitTimestamp,\n e.deviceSegment,\n e.floodlightActivity,\n e.floodlightActivityId,\n e.floodlightActivityTag,\n e.floodlightEventRequestString,\n e.floodlightOrderId,\n e.floodlightOriginalRevenue,\n status\n FROM inject_gmc_margin AS igm\n LEFT JOIN expanded_conversions AS e\n ON igm.advertiserID = e.advertiserId AND igm.campaignId = e.campaignID AND igm.conversionId = e.conversionId\n GROUP BY 1,2,3,4,5,6,8,7,9,10,11,12,13,14,15,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33\n)\n-- The columns below represent the original conversion data with their new profit\n-- values calculated (assigned to conversionRevenue column) along with any original \n-- floodlight data that the client wishes to 
keep for troubleshooting.\nSELECT \n account,\n accountId,\n accountType,\n advertiser,\n advertiserId,\n agency,\n agencyId,\n campaignId,\n campaign,\n conversionId,\n conversionAttributionType,\n conversionDate,\n conversionTimestamp,\n conversionTimestampMillis,\n conversionTimestampMicros,\n CALCULATED_PROFIT AS conversionRevenue,\n conversionQuantity,\n -- The below is used only for troubleshooting purposes.\n \"My Sample Floodlight Activity\" AS floodlightActivity,\n conversionSearchTerm,\n conversionType,\n conversionVisitExternalClickId,\n conversionVisitId,\n conversionVisitTimestamp,\n deviceSegment,\n CALCULATED_PROFIT,\n CALCULATED_REVENUE,\n -- Please prefix any original conversion values you wish to keep with \"original\". \n -- These values may help with troubleshooting\n conversionRevenue AS originalConversionRevenue,\n floodlightActivity AS originalFloodlightActivity,\n floodlightActivityId AS originalFloodlightActivityId,\n floodlightActivityTag AS originalFloodlightActivityTag,\n floodlightOriginalRevenue AS originalFloodlightRevenue,\n floodlightEventRequestString,\n floodlightOrderId\nFROM all_conversions\nWHERE CALCULATED_PROFIT > 0.0\nORDER BY account ASC\n\"\"\"\n# execute the transform query \ndf = bq_client.query(aggregate_sql).to_dataframe()\n# print a couple of records of the transformed query\ndf.head()\n\n# write the data to a table\ndf.to_gbq(f'{PB_DS_BUSINESS_DATA}.{PB_CM360_TABLE}', \n project_id=PB_GCP_PROJECT,\n if_exists='replace', \n progress_bar=True,)",
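The transform above leans on `SPLIT(REGEXP_EXTRACT(...))` to pull pipe-delimited product fields (the u9/u10/u11 variables) out of the floodlight request string. A hypothetical Python equivalent is handy for sanity-checking those regexes against a sample request string; both the string and the `extract_uvar` helper below are made up for illustration:

```python
import re

# Made-up floodlight request string: u9/u10/u11 carry SKU, quantity and
# unit price as pipe-delimited lists, mirroring the SQL above.
request = "src=123;u9=SKU1|SKU2;u10=1|2;u11=9.99|4.50;ord=abc"

def extract_uvar(s, var):
    # Same idea as REGEXP_EXTRACT(s, "u9=(.*?);") followed by SPLIT(..., "|").
    match = re.search(re.escape(var) + "=(.*?);", s)
    return match.group(1).split("|") if match else []

print(extract_uvar(request, "u9"))   # ['SKU1', 'SKU2']
print(extract_uvar(request, "u11"))  # ['9.99', '4.50']
```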
"Formulate the payload and push to CM360",
"# Reads from the transformed table, chunks the data, \n# and uploads the data to CM360\n# We need to chunk the data so as to adhere \n# to the payload limit of the CM360 REST API.\nimport pytz\nimport datetime\nimport decimal\nimport logging\nimport json\nimport google.auth\nimport google.auth.impersonated_credentials\nimport google_auth_httplib2\nfrom googleapiclient import discovery\n\ndef today_date(timezone):\n \"\"\"Returns today's date using the timezone\n Args:\n timezone(:obj:`str`): The timezone with default to America/New_York\n Returns:\n Date: today's date\n \"\"\"\n tz = pytz.timezone(timezone)\n return datetime.datetime.now(tz).date()\n\ndef time_now_str(timezone):\n \"\"\"Returns the current time as a formatted string using the timezone\n Args:\n timezone(:obj:`str`): The timezone with default to America/New_York\n Returns:\n str: the current time formatted as a string\n \"\"\"\n # set correct timezone for datetime check\n tz = pytz.timezone(timezone)\n return datetime.datetime.now(tz).strftime(\"%m-%d-%Y, %H:%M:%S\")\n\ndef pluralize(count):\n \"\"\"A utility function \n Args:\n count(:obj:`int`): A number\n Returns:\n str: 's' or empty\n \"\"\"\n if count > 1:\n return 's'\n return '' \n\ndef get_data(table_ref_name, cloud_client, batch_size):\n \"\"\"Returns the data from the transformed table.\n Args:\n table_ref_name(:obj:`google.cloud.bigquery.table.Table`): Reference to the table\n cloud_client(:obj:`google.cloud.bigquery.client.Client`): BigQuery client\n batch_size(:obj:`int`): Batch size\n Returns:\n Array[]: list/rows of data\n \"\"\"\n\n current_batch = []\n table = cloud_client.get_table(table_ref_name)\n print(f'Downloading {table.num_rows} rows from table {table_ref_name}')\n skip_stats = {}\n for row in cloud_client.list_rows(table_ref_name):\n missing_keys = []\n for key in PB_REQUIRED_KEYS:\n val = row.get(key)\n if val is None:\n missing_keys.append(key)\n count = skip_stats.get(key, 0)\n count += 1\n skip_stats[key] = count\n if len(missing_keys) > 0:\n row_as_dict = 
dict(row.items())\n logging.debug(f'Skipped row: missing values for keys {missing_keys} in row {row_as_dict}')\n continue\n result = {}\n conversionTimestamp = row.get('conversionTimestamp')\n # convert floating point seconds to microseconds since the epoch\n result['conversionTimestampMicros'] = int(conversionTimestamp.timestamp() * 1_000_000)\n for key in row.keys():\n value = row.get(key)\n if type(value) == datetime.datetime or type(value) == datetime.date:\n result[key] = value.strftime(\"%y-%m-%d \")\n elif type(value) == decimal.Decimal:\n result[key] = float(value)\n else:\n result[key] = value\n current_batch.append(result)\n if len(current_batch) >= batch_size:\n yield current_batch\n current_batch = []\n if len(current_batch) > 0:\n yield current_batch\n pretty_skip_stats = ', '.join([f'{val} row{pluralize(val)} missing key \"{key}\"' for key, val in skip_stats.items()])\n logging.info(f'Processed {table.num_rows} from table {table_ref_name} skipped {pretty_skip_stats}')\n\ndef setup(sa_email, api_scopes, api_name, api_version):\n \"\"\"Impersonates a service account, authenticates with a Google service,\n and returns a discovery api for further communication with Google Services.\n Args:\n sa_email(:obj:`str`): Service Account to impersonate\n api_scopes(:obj:`Any`): An array of scopes that the service account \n expects to have permission for in CM360\n api_name(:obj:`str`): CM360 API Name\n api_version(:obj:`str`): CM360 API version\n Returns:\n module:discovery: to interact with Google Services.\n \"\"\"\n\n source_credentials, project_id = google.auth.default()\n\n target_credentials = google.auth.impersonated_credentials.Credentials(\n source_credentials=source_credentials,\n target_principal=sa_email,\n target_scopes=api_scopes,\n delegates=[],\n lifetime=500)\n\n http = google_auth_httplib2.AuthorizedHttp(target_credentials)\n # setup API service here\n try: \n return discovery.build(\n api_name,\n api_version,\n cache_discovery=False,\n 
http=http)\n except:\n print('Could not authenticate') \n\n\ndef upload_data(timezone, rows, profile_id, fl_configuration_id, fl_activity_id):\n \"\"\"POSTs the conversion data using CM360 API\n Args:\n timezone(:obj:`Timezone`): Current timezone or defaulted to America/New_York \n rows(:obj:`Any`): An array of conversion data\n profile_id(:obj:`str`): Profile id - should be gathered from the CM360\n fl_configuration_id(:obj:`str`): Floodlight config id - should be gathered from the CM360\n fl_activity_id(:obj:`str`): Floodlight activity id - should be gathered from the CM360\n \"\"\"\n \n print('Starting conversions for ' + time_now_str(timezone))\n if not fl_activity_id or not fl_configuration_id:\n print('Please make sure to provide a value for both floodlightActivityId and floodlightConfigurationId!!')\n return\n # Build the API connection\n try: \n service = setup(PB_SA_EMAIL, PB_API_SCOPES, \n PB_CM360_API_NAME, PB_CM360_API_VERSION)\n # upload_log = ''\n print('Authorization successful')\n currentrow = 0\n all_conversions = \"\"\"{\"kind\": \"dfareporting#conversionsBatchInsertRequest\", \"conversions\": [\"\"\"\n while currentrow < len(rows):\n for row in rows[currentrow:min(currentrow+100, len(rows))]:\n conversion = json.dumps({\n 'kind': 'dfareporting#conversion',\n 'gclid': row['conversionVisitExternalClickId'],\n 'floodlightActivityId': fl_activity_id, # (Use short form CM Floodlight Activity Id )\n 'floodlightConfigurationId': fl_configuration_id, # (Can be found in CM UI)\n 'ordinal': row['conversionId'],\n 'timestampMicros': row['conversionTimestampMicros'],\n 'value': row['conversionRevenue'],\n 'quantity': row['conversionQuantity'] #(Alternatively, this can be hardcoded to 1)\n })\n # print('Conversion: ', conversion) # uncomment if you want to output each conversion\n all_conversions = all_conversions + conversion + ','\n all_conversions = all_conversions[:-1] + ']}'\n payload = json.loads(all_conversions)\n print(f'CM360 request payload: 
{payload}')\n request = service.conversions().batchinsert(profileId=profile_id, body=payload)\n print('[{}] - CM360 API Request: '.format(time_now_str(timezone)), request)\n response = request.execute()\n print('[{}] - CM360 API Response: '.format(time_now_str(timezone)), response)\n if not response['hasFailures']:\n print('Successfully inserted batch of 100.')\n else:\n status = response['status']\n for line in status:\n try:\n if line['errors']:\n for error in line['errors']:\n print('Error in line ' + json.dumps(line['conversion']))\n print('\\t[%s]: %s' % (error['code'], error['message']))\n except:\n print('Conversion with gclid ' + line['gclid'] + ' inserted.')\n print('Either finished or found errors.')\n currentrow += 100\n all_conversions = \"\"\"{\"kind\": \"dfareporting#conversionsBatchInsertRequest\", \"conversions\": [\"\"\"\n except:\n print('Could not authenticate') \n\ndef partition_and_distribute(cloud_client, table_ref_name, batch_size, timezone, \n profile_id, fl_configuration_id, fl_activity_id):\n \"\"\"Partitions the data to chunks of batch size and\n uploads to the CM360\n Args:\n table_ref_name(:obj:`google.cloud.bigquery.table.Table`): Reference to the table\n cloud_client(:obj:`google.cloud.bigquery.client.Client`): BigQuery client\n batch_size(:obj:`int`): Batch size\n timezone(:obj:`Timezone`): Current timezone or defaulted to America/New_York \n profile_id(:obj:`str`): Profile id - should be gathered from the CM360\n fl_configuration_id(:obj:`str`): Floodlight config id - should be gathered from the CM360\n fl_activity_id(:obj:`str`): Floodlight activity id - should be gathered from the CM360\n \"\"\"\n for batch in get_data(table_ref_name, cloud_client, batch_size):\n # print(f'Batch size: {len(batch)} batch: {batch}')\n upload_data(timezone, batch, profile_id, fl_configuration_id, \n fl_activity_id)\n # DEBUG BREAK!\n if batch_size == 1:\n break\n\ntry: \n table = bq_client.get_table(f'{PB_DS_BUSINESS_DATA}.{PB_CM360_TABLE}')\nexcept:\n print ('Could 
not find table with the provided table name: {}.'.format(f'{PB_DS_BUSINESS_DATA}.{PB_CM360_TABLE}')) \n table = None\n\ntodays_date = today_date(PB_TIMEZONE)\n\nif table is not None:\n table_ref_name = table.full_table_id.replace(':', '.')\n if table.modified.date() == todays_date or table.created.date() == todays_date:\n print('[{}] is up-to-date. Continuing with upload...'.format(table_ref_name))\n partition_and_distribute(bq_client, table_ref_name, PB_BATCH_SIZE,\n PB_TIMEZONE, PB_CM360_PROFILE_ID, \n PB_CM360_FL_CONFIG_ID, PB_CM360_FL_ACTIVITY_ID) \n else:\n print('[{}] data may be stale. Please check workflow to verify that it has run correctly. Upload is aborted!'.format(table_ref_name))\nelse:\n print('Table not found! Please double check your workflow for any errors.')",
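The upload cell above assembles its JSON payload by string concatenation, which is easy to get subtly wrong. A hedged alternative sketch (the row keys mirror the transformed table's columns; the floodlight IDs are placeholders) builds each batch of at most 100 conversions from plain dicts instead:

```python
import json

def make_batches(rows, batch_size=100):
    """Yield successive slices of `rows`, each no larger than batch_size."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

def build_payload(batch, fl_activity_id, fl_configuration_id):
    """Assemble one conversionsBatchInsertRequest body from a batch of rows."""
    conversions = [{
        'kind': 'dfareporting#conversion',
        'gclid': row['conversionVisitExternalClickId'],
        'floodlightActivityId': fl_activity_id,
        'floodlightConfigurationId': fl_configuration_id,
        'ordinal': row['conversionId'],
        'timestampMicros': row['conversionTimestampMicros'],
        'value': row['conversionRevenue'],
        'quantity': row['conversionQuantity'],
    } for row in batch]
    return {'kind': 'dfareporting#conversionsBatchInsertRequest',
            'conversions': conversions}
```

Because the payload is a dict until the very end, `json.dumps` (or the discovery client's own serialization) handles all quoting and trailing commas for you.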
"Clean up - !!! BE CAREFUL!!!\nDelete the transformed table",
"# deletes the transformed table\ndelete_table(f'{PB_DS_BUSINESS_DATA}.{PB_CM360_TABLE}')",
"Delete the SA and BQ DSs:\n\nService account (the same one used to push the conversion to the SA360/CM360)\nBQ DS for SA360/CM360\nBQ DS for Business data",
"# deletes the service account\ndelete_service_account(PB_GCP_PROJECT, PB_SERVICE_ACCOUNT_NAME)\n# deletes the dataset\ndelete_dataset(PB_DS_SA360)\ndelete_dataset(PB_DS_BUSINESS_DATA)",
"Delete the Google Cloud Project\nTo avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial, delete the project.\nThe easiest way to eliminate billing is to delete the project you created for the tutorial.\nCaution: Deleting a project has the following effects:\n* Everything in the project is deleted. If you used an existing project for this tutorial, when you delete it, you also delete any other work you've done in the project.\n* <b>Custom project IDs are lost.</b> When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. \nIf you plan to explore multiple tutorials and quickstarts, reusing projects can help you avoid exceeding project quota limits.\n<br>\n<ol type=\"1\">\n <li>In the Cloud Console, go to the <b>Manage resources</b> page.</li>\n Go to the <a href=\"https://console.cloud.google.com/iam-admin/projects\">Manage resources page</a>\n <li>In the project list, select the project that you want to delete and then click <b>Delete</b> (trash icon).</li>\n <li>In the dialog, type the project ID and then click <b>Shut down</b> to delete the project. </li>\n</ol>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Diyago/Machine-Learning-scripts
|
time series regression/facebook_prophet_review.ipynb
|
apache-2.0
|
[
"<center>\n<img src=\"../../img/ods_stickers.jpg\">\nOpen Machine Learning Course\n</center>\nAuthor: Maria Mansurova, analyst-developer on the Yandex.Metrica team. The material is distributed under the Creative Commons CC BY-NC-SA 4.0 license. You may use it for any purpose (edit, adjust, and build upon) except commercial, with mandatory attribution of the author.\n<center>Companion notebook for the Habr article \"Predicting the Future with the Facebook Prophet Library\"",
"import warnings\nwarnings.filterwarnings('ignore')\nimport os\nimport pandas as pd\n\nfrom plotly import __version__\nprint(__version__) # need 1.9.0 or greater\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\nfrom plotly import graph_objs as go\nimport requests\nimport pandas as pd\n\ninit_notebook_mode(connected = True)\n\ndef plotly_df(df, title = ''):\n data = []\n \n for column in df.columns:\n trace = go.Scatter(\n x = df.index,\n y = df[column],\n mode = 'lines',\n name = column\n )\n data.append(trace)\n \n layout = dict(title = title)\n fig = dict(data = data, layout = layout)\n iplot(fig, show_link=False)\n \n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom scipy import stats\nimport statsmodels.api as sm",
"Loading and preprocessing the data\nData from the competition on predicting the popularity of an article on Habrahabr.",
"habr_df = pd.read_csv('../../data/howpop_train.csv')\n\nhabr_df['published'] = pd.to_datetime(habr_df.published)\nhabr_df = habr_df[['published', 'url']]\nhabr_df = habr_df.drop_duplicates()\n\naggr_habr_df = habr_df.groupby('published')[['url']].count()\naggr_habr_df.columns = ['posts']\n\naggr_habr_df = aggr_habr_df.resample('D').apply(sum)\nplotly_df(aggr_habr_df.resample('W').apply(sum), \n title = 'Published posts on Habrahabr')",
"Building a forecast with Prophet",
"# pip install pystan\n# pip install fbprophet\nfrom fbprophet import Prophet\n\npredictions = 30\n\ndf = aggr_habr_df.reset_index()\ndf.columns = ['ds', 'y']\ndf.tail()\n\ntrain_df = df[:-predictions]\n\nm = Prophet()\nm.fit(train_df)\n\nfuture = m.make_future_dataframe(periods=30)\nfuture.tail()\n\nforecast = m.predict(future)\nforecast.tail()\n\nprint(', '.join(forecast.columns))\n\nm.plot(forecast)\n\nm.plot_components(forecast)",
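The holdout in the cell above (`train_df = df[:-predictions]`) is a plain tail split: the last `predictions` observations are reserved for evaluation. A minimal standalone sketch of that idea, independent of Prophet:

```python
def train_test_split_series(values, horizon):
    """Hold out the last `horizon` observations of an ordered series."""
    if horizon <= 0 or horizon >= len(values):
        raise ValueError("horizon must be in (0, len(values))")
    return values[:-horizon], values[-horizon:]

series = list(range(100))  # stand-in for the daily post counts
train, test = train_test_split_series(series, horizon=30)
```

Because the data is a time series, the split must respect order; a random shuffle (as in cross-validation for i.i.d. data) would leak future information into the training set.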
"Evaluating the quality of the Prophet forecast",
"cmp_df = forecast.set_index('ds')[['yhat', 'yhat_lower', 'yhat_upper']].join(df.set_index('ds'))\n\nimport numpy as np\ncmp_df['e'] = cmp_df['y'] - cmp_df['yhat']\ncmp_df['p'] = 100*cmp_df['e']/cmp_df['y']\nnp.mean(abs(cmp_df[-predictions:]['p'])), np.mean(abs(cmp_df[-predictions:]['e']))",
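The cell above reports the mean absolute percentage error (the `p` column) and the mean absolute error (the `e` column) over the holdout. The same metrics as small standalone functions:

```python
def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error; y_true must contain no zeros."""
    return 100.0 * sum(abs(t - p) / abs(t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)
```

MAPE is scale-free and easy to communicate, but it is undefined at zero actuals and penalizes over-forecasts more than under-forecasts, so it is worth reporting MAE alongside it, as the notebook does.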
"Forecasting with a Box-Cox transform",
"def invboxcox(y, lmbda):\n if lmbda == 0:\n return(np.exp(y))\n else:\n return(np.exp(np.log(lmbda * y + 1) / lmbda))\n\ntrain_df2 = train_df.copy().fillna(14)\ntrain_df2 = train_df2.set_index('ds')\ntrain_df2['y'], lmbda_prophet = stats.boxcox(train_df2['y'])\n\ntrain_df2.reset_index(inplace=True)\n\nm2 = Prophet()\nm2.fit(train_df2)\nfuture2 = m2.make_future_dataframe(periods=30)\n\nforecast2 = m2.predict(future2)\nforecast2['yhat'] = invboxcox(forecast2.yhat, lmbda_prophet)\nforecast2['yhat_lower'] = invboxcox(forecast2.yhat_lower, lmbda_prophet)\nforecast2['yhat_upper'] = invboxcox(forecast2.yhat_upper, lmbda_prophet)\n\ncmp_df2 = forecast2.set_index('ds')[['yhat', 'yhat_lower', 'yhat_upper']].join(df.set_index('ds'))\n\ncmp_df2['e'] = cmp_df2['y'] - cmp_df2['yhat']\ncmp_df2['p'] = 100*cmp_df2['e']/cmp_df2['y']\nnp.mean(abs(cmp_df2[-predictions:]['p'])), np.mean(abs(cmp_df2[-predictions:]['e']))",
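The `invboxcox` above undoes scipy's one-parameter Box-Cox transform, y = (x^λ − 1)/λ for λ ≠ 0 and y = ln x for λ = 0. A small self-check of the roundtrip using only the `math` module (the forward transform is re-implemented here rather than imported from scipy):

```python
import math

def boxcox(x, lmbda):
    """Forward one-parameter Box-Cox transform for positive x."""
    if lmbda == 0:
        return math.log(x)
    return (x ** lmbda - 1.0) / lmbda

def invboxcox(y, lmbda):
    """Inverse transform, same formula as in the notebook cell."""
    if lmbda == 0:
        return math.exp(y)
    return math.exp(math.log(lmbda * y + 1) / lmbda)
```

For λ ≠ 0, note that λy + 1 = x^λ, so exp(ln(λy + 1)/λ) = x; the λ = 0 branch is the plain log/exp pair.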
"Visualizing the results",
"def show_forecast(cmp_df, num_predictions, num_values):\n upper_bound = go.Scatter(\n name='Upper Bound',\n x=cmp_df.tail(num_predictions).index,\n y=cmp_df.tail(num_predictions).yhat_upper,\n mode='lines',\n marker=dict(color=\"444\"),\n line=dict(width=0),\n fillcolor='rgba(68, 68, 68, 0.3)',\n fill='tonexty')\n\n forecast = go.Scatter(\n name='Prediction',\n x=cmp_df.tail(num_predictions).index,\n y=cmp_df.tail(num_predictions).yhat,\n mode='lines',\n line=dict(color='rgb(31, 119, 180)'),\n )\n\n lower_bound = go.Scatter(\n name='Lower Bound',\n x=cmp_df.tail(num_predictions).index,\n y=cmp_df.tail(num_predictions).yhat_lower,\n marker=dict(color=\"444\"),\n line=dict(width=0),\n mode='lines')\n\n fact = go.Scatter(\n name='Fact',\n x=cmp_df.tail(num_values).index,\n y=cmp_df.tail(num_values).y,\n marker=dict(color=\"red\"),\n mode='lines',\n )\n\n # Trace order can be important\n # with continuous error bars\n data = [lower_bound, upper_bound, forecast, fact]\n\n layout = go.Layout(\n yaxis=dict(title='Posts'),\n title='Published posts on Habrahabr',\n showlegend = False)\n\n fig = go.Figure(data=data, layout=layout)\n iplot(fig, show_link=False)\n\nshow_forecast(cmp_df, predictions, 200)",
"Comparison with an ARIMA model",
"train_df = train_df.fillna(14).set_index('ds')\n\nplt.figure(figsize=(15,10))\nsm.tsa.seasonal_decompose(train_df['y'].values, freq=7).plot();\nprint(\"Dickey-Fuller test: p=%f\" % sm.tsa.stattools.adfuller(train_df['y'])[1])\n\ntrain_df.index = pd.to_datetime(train_df.index)\n\ntrain_df['y_box'], lmbda = stats.boxcox([1 if x == 0 else x for x in train_df['y']])\nplt.figure(figsize=(15,7))\ntrain_df.y.plot()\nplt.ylabel(u'Posts on Habr')\nprint(\"Optimal Box-Cox transformation parameter: %f\" % lmbda)\nprint(\"Dickey-Fuller test: p=%f\" % sm.tsa.stattools.adfuller(train_df['y'])[1])\n\ntrain_df['y_box_diff'] = train_df.y_box - train_df.y_box.shift(7)\nplt.figure(figsize=(15,10))\nsm.tsa.seasonal_decompose(train_df.y_box_diff[12:].values, freq=7).plot();\nprint(\"Dickey-Fuller test: p=%f\" % sm.tsa.stattools.adfuller(train_df.y_box_diff[8:])[1])\n\nplt.figure(figsize=(15,8))\nax = plt.subplot(211)\nsm.graphics.tsa.plot_acf(train_df.y_box_diff[13:].values.squeeze(), lags=48, ax=ax)\nax = plt.subplot(212)\nsm.graphics.tsa.plot_pacf(train_df.y_box_diff[13:].values.squeeze(), lags=48, ax=ax)",
"Initial approximations: Q = 1, q = 4, P = 5, p = 3",
"ps = range(0, 4)\nd=1\nqs = range(0, 5)\nPs = range(0, 7)\nD=1\nQs = range(0, 2)\n\nfrom itertools import product\n\nparameters = product(ps, qs, Ps, Qs)\nparameters_list = list(parameters)\nlen(parameters_list)\n\n%%time\nresults = []\nbest_aic = float(\"inf\")\n\n\n\nfor param in parameters_list:\n print(param)\n #try/except is needed because the model fails to fit on some parameter sets\n try:\n %time model=sm.tsa.statespace.SARIMAX(train_df.y_box, order=(param[0], d, param[1]), seasonal_order=(param[2], D, param[3], 7)).fit(disp=-1)\n #print the parameters the model fails to fit on and move to the next set\n except ValueError:\n print('wrong parameters:', param)\n continue\n aic = model.aic\n #save the best model, its AIC and parameters\n if aic < best_aic:\n best_model = model\n best_aic = aic\n best_param = param\n results.append([param, model.aic])\n \nwarnings.filterwarnings('default')\n\nresult_table = pd.DataFrame(results)\nresult_table.columns = ['parameters', 'aic']\nprint(result_table.sort_values(by = 'aic', ascending=True).head())\n\nprint(best_model.summary())\n\nplt.figure(figsize=(15,8))\nplt.subplot(211)\nbest_model.resid[13:].plot()\nplt.ylabel(u'Residuals')\n\nax = plt.subplot(212)\nsm.graphics.tsa.plot_acf(best_model.resid[13:].values.squeeze(), lags=48, ax=ax)\n\nprint(\"Student's t-test: p=%f\" % stats.ttest_1samp(best_model.resid[13:], 0)[1])\nprint(\"Dickey-Fuller test: p=%f\" % sm.tsa.stattools.adfuller(best_model.resid[13:])[1])\n\ntrain_df['arima_model'] = invboxcox(best_model.fittedvalues, lmbda)\nplt.figure(figsize=(15,7))\ntrain_df.y.tail(200).plot()\ntrain_df.arima_model[13:].tail(200).plot(color='r')\nplt.ylabel('Posts on Habr');\n\narima_df = train_df2.set_index('ds')[['y']]\n\ndate_list = [pd.datetime.strptime(\"2016-10-01\", \"%Y-%m-%d\") + \n pd.Timedelta(x) for x in range(0, predictions+1)]\nfuture = pd.DataFrame(index=date_list, columns= arima_df.columns)\narima_df = pd.concat([arima_df, 
future])\narima_df['forecast'] = invboxcox(best_model.predict(start=train_df.shape[0], end=train_df.shape[0]+predictions-1), lmbda)\nplt.figure(figsize=(15,7))\narima_df.y.tail(200).plot()\narima_df.forecast.tail(200).plot(color='r')\nplt.ylabel('Habr posts');\n\ncmp_df.head()\n\ncmp_df = cmp_df.join(arima_df[['forecast']])\n\nimport numpy as np\ncmp_df['e_arima'] = cmp_df['y'] - cmp_df['forecast']\ncmp_df['p_arima'] = 100*cmp_df['e_arima']/cmp_df['y']\n\nnum_values = 200\n\nforecast = go.Scatter(\n name='Prophet',\n x=cmp_df.tail(predictions).index,\n y=cmp_df.tail(predictions).yhat,\n mode='lines',\n line=dict(color='rgb(31, 119, 180)'),\n)\n\n\nfact = go.Scatter(\n name='Fact',\n x=cmp_df.tail(num_values).index,\n y=cmp_df.tail(num_values).y,\n marker=dict(color=\"red\"),\n mode='lines',\n)\n\narima = go.Scatter(\n name='ARIMA',\n x=cmp_df.tail(predictions).index,\n y=cmp_df.tail(predictions).forecast,\n mode='lines'\n)\n\n# Trace order can be important\n# with continuous error bars\ndata = [forecast, fact, arima]\n\nlayout = go.Layout(\n yaxis=dict(title='Posts'),\n title='Published posts on Habrahabr',\n showlegend = True)\n\nfig = go.Figure(data=data, layout=layout)\niplot(fig, show_link=False)"
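The exhaustive SARIMA grid search above is built on `itertools.product` over the four order ranges; a minimal standalone sketch of just the enumeration step:

```python
from itertools import product

# same ranges as in the grid-search cell
ps, qs = range(0, 4), range(0, 5)
Ps, Qs = range(0, 7), range(0, 2)

# every (p, q, P, Q) combination, enumerated exactly once,
# with the rightmost range varying fastest
parameters_list = list(product(ps, qs, Ps, Qs))
```

Listing the grid up front makes it easy to report progress (`len(parameters_list)` fits in one line) and to parallelize fitting later if needed.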
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
turbomanage/training-data-analyst
|
quests/endtoendml/labs/6_deploy.ipynb
|
apache-2.0
|
[
"<h1> Deploying and predicting with model </h1>\n\n<h2>Learning Objectives</h2>\n<ol>\n <li>Create the model using ai-platform CLI commands</li>\n<li>Deploy the ML model to production</li>\n <li>Perform predictions with the model</li>\n</ol>\n\nTODO: Complete the lab notebook #TODO sections. You can refer to the solutions/ notebook for reference.",
"# change these to try this notebook out\nBUCKET = 'cloud-training-demos-ml'\nPROJECT = 'cloud-training-demos'\nREGION = 'us-central1'\n\nimport os\nos.environ['BUCKET'] = BUCKET\nos.environ['PROJECT'] = PROJECT\nos.environ['REGION'] = REGION\nos.environ['TFVERSION'] = '1.13' \n\n%%bash\nif ! gsutil ls | grep -q gs://${BUCKET}/babyweight/trained_model; then\n gsutil mb -l ${REGION} gs://${BUCKET}\n # copy canonical model if you didn't do previous notebook\n gsutil -m cp -R gs://cloud-training-demos/babyweight/trained_model gs://${BUCKET}/babyweight\nfi",
"<h2> Deploy trained model </h2>\n<p>\nDeploying the trained model to act as a REST web service is a simple gcloud call.",
"%%bash\ngsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/\n\n%%bash\nMODEL_NAME=\"babyweight\"\nMODEL_VERSION=\"ml_on_gcp\"\nMODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/trained_model/export/exporter/ | tail -1)\necho \"Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes\"\n\n# Optional: Delete the version of the model if it already exists:\n#gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}\n#gcloud ai-platform models delete ${MODEL_NAME}\n\n# TODO: Create the model \ngcloud ai-platform models create \n\n# TODO: Create the model version \ngcloud ai-platform versions create",
"<h2> Use model to predict (online prediction) </h2>\n<p>\nSend a JSON request to the endpoint of the service to make it predict a baby's weight. The order of the responses matches the order of the instances.",
"from oauth2client.client import GoogleCredentials\nimport requests\nimport json\n\nMODEL_NAME = 'babyweight'\nMODEL_VERSION = 'ml_on_gcp'\n\ntoken = GoogleCredentials.get_application_default().get_access_token().access_token\napi = 'https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict' \\\n .format(PROJECT, MODEL_NAME, MODEL_VERSION)\nheaders = {'Authorization': 'Bearer ' + token }\ndata = {\n 'instances': [\n {\n 'key': 'b1',\n 'is_male': 'True',\n 'mother_age': 26.0,\n 'plurality': 'Single(1)',\n 'gestation_weeks': 39\n },\n {\n 'key': 'g1',\n 'is_male': 'False',\n 'mother_age': 29.0,\n 'plurality': 'Single(1)',\n 'gestation_weeks': 38\n },\n {\n 'key': 'b2',\n 'is_male': 'True',\n 'mother_age': 26.0,\n 'plurality': 'Triplets(3)',\n 'gestation_weeks': 39\n },\n {\n 'key': 'u1',\n 'is_male': 'Unknown',\n 'mother_age': 29.0,\n 'plurality': 'Multiple(2+)',\n 'gestation_weeks': 38\n },\n ]\n}\nresponse = requests.post(api, json=data, headers=headers)\nprint(response.content)",
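Reading the raw `response.content` by eye gets tedious; a hedged helper for pulling out the per-instance predictions (the response shape shown is an assumption based on typical AI Platform online-prediction responses; adjust it to what your deployed model actually returns):

```python
import json

def extract_predictions(response_body):
    """Parse an online-prediction response body and return the predictions list."""
    payload = json.loads(response_body)
    if 'error' in payload:
        raise RuntimeError(payload['error'])
    return payload['predictions']

# illustrative body, matching the assumed shape
sample = json.dumps({'predictions': [
    {'key': ['b1'], 'babyweight': [7.66]},
    {'key': ['g1'], 'babyweight': [7.22]},
]})
```

Raising on the `error` key early makes failed deployments obvious instead of surfacing later as a confusing `KeyError` on `'predictions'`.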
"The predictions for the four instances were: 7.66, 7.22, 6.32 and 6.19 pounds respectively when I ran it (your results might be different).\n<h2> Use model to predict (batch prediction) </h2>\n<p>\nBatch prediction is commonly used when you have thousands to millions of predictions.\nCreate a file with one instance per line and submit using gcloud.",
"%%writefile inputs.json\n{\"key\": \"b1\", \"is_male\": \"True\", \"mother_age\": 26.0, \"plurality\": \"Single(1)\", \"gestation_weeks\": 39}\n{\"key\": \"g1\", \"is_male\": \"False\", \"mother_age\": 26.0, \"plurality\": \"Single(1)\", \"gestation_weeks\": 39}\n\n%%bash\nINPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json\nOUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs\ngsutil cp inputs.json $INPUT\ngsutil -m rm -rf $OUTPUT \n\ngcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \\\n --data-format=TEXT --region ${REGION} \\\n --input-paths=$INPUT \\\n --output-path=$OUTPUT \\\n --model=babyweight --version=ml_on_gcp",
"Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dwhswenson/openpathsampling
|
examples/tests/test_pyemma.ipynb
|
mit
|
[
"PyEmma Featurizer Support",
"from __future__ import print_function\n\nimport openpathsampling as paths\nimport numpy as np\n\n# NBVAL_IGNORE_OUTPUT\nimport pyemma.coordinates as coor\n\n# NBVAL_IGNORE_OUTPUT\nref_storage = paths.Storage('engine_store_test.nc', mode='r')\n\n# NBVAL_IGNORE_OUTPUT\nstorage = paths.Storage('delete.nc', 'w')\nstorage.trajectories.save(ref_storage.trajectories[0])",
"Import a PyEmma Coordinates Module\nUsing PyEmma featurizers (or other complex code in general) requires a small trick to make them storable. Since storing code only works if it does not depend on the surrounding context (scope), we need to wrap the construction of our featurizer in a function that receives everything it needs from the global scope as parameters.",
"# NBVAL_IGNORE_OUTPUT\ndef pyemma_generator(f):\n f.add_inverse_distances(f.pairs(f.select_Backbone()))\n\n# NBVAL_IGNORE_OUTPUT\ncv = paths.collectivevariable.PyEMMAFeaturizerCV(\n 'pyemma', \n pyemma_generator, \n topology=ref_storage.snapshots[0].topology\n).with_diskcache()",
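The reason the featurizer setup must be a module-level function is that such functions can be serialized by reference (their qualified name), while lambdas and closures over local state cannot. A minimal sketch of that distinction, independent of PyEmma:

```python
import pickle

def pyemma_generator(f):
    # Module-level: pickled by qualified name, so it is storable.
    f.add_inverse_distances(f.pairs(f.select_Backbone()))

# Round-trips by reference: the restored object is the very same function.
restored = pickle.loads(pickle.dumps(pyemma_generator))
assert restored is pyemma_generator

# A lambda has no importable name, so the same trick fails for it.
try:
    pickle.dumps(lambda f: None)
except (pickle.PicklingError, AttributeError):
    pass
else:
    raise AssertionError("expected pickling a lambda to fail")
```

This is why the notebook passes the topology in as an explicit parameter instead of letting the generator capture it from an enclosing scope.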
"Now use this featurizer generating function to build a collective variable out of it. All we need for that is a name as usual, the generating function, the list of parameters - here only the topology and at best a test snapshot, a template.",
"cv(ref_storage.trajectories[0]);",
"Let's save it to storage",
"# NBVAL_IGNORE_OUTPUT\nprint(storage.save(cv))",
"and apply the featurizer to a trajectory",
"# NBVAL_IGNORE_OUTPUT\ncv(storage.trajectories[0]);",
"Sync to make sure the cache is written to the netCDF file.",
"# NBVAL_IGNORE_OUTPUT\ncv(storage.snapshots.all());\n\n# NBVAL_IGNORE_OUTPUT\npy_cv = storage.cvs['pyemma']\n\nstore = storage.stores['cv%d' % storage.idx(py_cv)]\nnc_var = store.variables['value']\n\nassert(nc_var.shape[1] == 15)\nprint(nc_var.shape[1])\n\nassert(nc_var.var_type == 'numpy.float32')\nprint(nc_var.var_type)\n\n# NBVAL_IGNORE_OUTPUT\nprint(storage.variables['attributes_json'][:])\n\n# NBVAL_IGNORE_OUTPUT\npy_cv_idx = storage.idx(py_cv)\nprint(py_cv_idx)\npy_emma_feat = storage.vars['attributes_json'][py_cv_idx]\n\n# NBVAL_IGNORE_OUTPUT\nerg = py_emma_feat(storage.snapshots);\n\n# NBVAL_IGNORE_OUTPUT\nprint(erg[:,2:4])\n\nstorage.close()\nref_storage.close()\n\n# NBVAL_IGNORE_OUTPUT\nstorage = paths.Storage('delete.nc', 'r')\n\ncv = storage.cvs[0]",
"Make sure that we get the same result",
"assert np.allclose(erg, cv(storage.snapshots))\n\nstorage.close()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
machinelearningdeveloper/lc101-kc
|
October 27, 2016/Covered in class.ipynb
|
unlicense
|
[
"Book Exercises\nSpins with compass",
"spins = input(\"How many times did you spin? (Enter a negative number for counter-clockwise spins) \")\n\n# Need to ensure the direction of the spin is always correct.\n# This works because of the definition of the modulo operator and what happens with\n# negative numbers. Keep this in mind for the assignment for chapter 3.\ndegrees = (float(spins) * 360) % 360\n\nprint(\"You are facing\", degrees, \"degrees relative to north\")",
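The modulo trick in the cell can be isolated into a small function; Python's `%` always returns a result in `[0, 360)` even for negative operands, which is exactly what makes counter-clockwise (negative) spins come out right:

```python
def compass_degrees(spins):
    """Heading in degrees clockwise from north after `spins` full turns.

    Negative spins mean counter-clockwise; Python's % keeps the result
    in [0, 360) even for negative inputs.
    """
    return (spins * 360) % 360
```

Languages where `%` follows the sign of the dividend (such as C) would need an extra adjustment here; in Python the single expression suffices.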
"Fun with Turtles",
"import turtle \nwn = turtle.Screen() # creates a graphics window\nalex = turtle.Turtle() # create a turtle named alex\nalex.speed(1)\nalex.shape('turtle')\n\nfor i in [0,1,2,3,4,5]:\n alex.forward(150) # tell alex to move forward by 150 units\n alex.left(85) # turn by 85 degrees\n alex.forward(75)\n\n## This won't run as expected with the notebook. Wait a moment for the window to be created.\n\nwn.exitonclick()\n\n",
"Looking at documentation for turtles\nThe documentation for the module inside of Python can be found here:\nhttps://docs.python.org/3.6/library/turtle.html",
"# Copied from the documentation example. Using 'import *' is bad practice: DO NOT DO IT!\nfrom turtle import *\ncolor('red', 'yellow')\nbegin_fill()\nwhile True:\n forward(200)\n left(170)\n if abs(pos()) < 1:\n break\nend_fill()\ndone()",
"Using Range",
"for number in range(6, 0, -1):\n print(\"I have\", number, \"cookies. I'm going to eat one.\")\n\nprint('I ate all my cookies')\n",
"Lots of turtles",
"import turtle\nwn = turtle.Screen()\nwn.bgcolor(\"lightgreen\")\ntess = turtle.Turtle()\ntess.color(\"blue\")\ntess.shape(\"turtle\")\n\njim = turtle.Turtle()\njim.color(\"green\")\njim.shape(\"turtle\")\n\ncarl = turtle.Turtle()\ncarl.color(\"red\")\ncarl.shape(\"turtle\")\n\ntess.up()\ncarl.up()\njim.up()\n# Keep in mind for today's studio\nfor size in range(5, 60, 2): # start with size = 5 and grow by 2\n tess.stamp() # leave an impression on the canvas\n carl.stamp()\n jim.stamp()\n \n carl.forward(size + 10)\n jim.forward(size)\n tess.forward(size) # move tess along\n \n carl.right(90)\n jim.left(24)\n tess.right(24) # and turn her\n\nwn.exitonclick()\n",
"Math",
"from math import sqrt\n\nprint(sqrt(24))\nprint(sqrt(25))\n\nprint(sqrt(-2))",
"Random\nRemember how it's not actually random. Here is an example:",
"import random\n\nrandom.seed(5)\n\nprint(random.randint(0,10))\nprint(random.randint(0,10))\nprint(random.randint(0,10), '\\n')\n\nrandom.seed(5)\n\nprint(random.randint(0,10))\nprint(random.randint(0,10))\nprint(random.randint(0,10), '\\n')\n\nrandom.seed(5)\n\nprint(random.randint(0,10))\nprint(random.randint(0,10))\nprint(random.randint(0,10), '\\n')\n\nrandom.seed(5)\n\nprint(random.randint(0,10))\nprint(random.randint(0,10))\nprint(random.randint(0,10), '\\n')",
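The repetition in the output above can be made explicit with `random.Random` instances, which carry their own seeded state (the same idea as calling `random.seed` on the module-level generator): a given seed always reproduces the same sequence.

```python
import random

def draws(seed, n=3):
    # An independent generator seeded deterministically.
    rng = random.Random(seed)
    return [rng.randint(0, 10) for _ in range(n)]

# Identical seed -> identical sequence, every time.
assert draws(5) == draws(5)
```

Per-instance generators also avoid the subtle bugs that come from several pieces of code sharing and reseeding the single module-level generator.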
"Chapter 4 Exercises\nBottles of beer",
"bottles_of_beer = 99\n\nfor bottle_number in range(bottles_of_beer, 0, -1):\n print(bottle_number, \"Bottles of Beer on the Wall\")\n print(\"Take one down, pass it around\")",
"Cool sample by: Murial",
"import turtle\nimport random\nwn = turtle.Screen()\nanaise = turtle.Turtle()\nhour = 1\nlines = 1\nangle = 1\n\nanaise.speed(0)\n# Set the color mode so RGB tuples in the 0-255 range work.\nwn.colormode(255)\n\n#change starting point of line randomly\nwhile lines < 200:\n anaise.goto(random.randrange(50), random.randrange(50))\n anaise.down()\n angle = (random.randrange(360))\n anaise.color(random.randrange(255),random.randrange(255),random.randrange(255))\n anaise.pensize(random.randrange(11))\n anaise.right(angle)\n anaise.forward(random.randrange(100))\n anaise.up()\n \n #Count the number of times the loop occurs\n lines = lines + 1\n \nwn.exitonclick()",
"Draw the regular polygon\nWrite a program that asks the user for the number of sides, the length of the side, the color, and the fill color of a regular polygon. The program should draw the polygon and then fill it in."
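One hedged way to approach the exercise (all names are illustrative): a regular polygon closes exactly when the turtle turns its exterior angle, 360 divided by the number of sides, after each side. The geometric core can be tested without a display; the turtle part needs one, so it is left as comments:

```python
def polygon_turn(num_sides):
    """Exterior (turn) angle of a regular polygon in degrees.

    The turtle turns this much after each side; all the turns
    together sum to a full 360 degrees, which closes the shape.
    """
    if num_sides < 3:
        raise ValueError("a polygon needs at least 3 sides")
    return 360 / num_sides

# Sketch of the drawing loop (requires a graphics window):
# import turtle
# t = turtle.Turtle()
# t.color(pen_color, fill_color)   # values gathered with input()
# t.begin_fill()
# for _ in range(num_sides):
#     t.forward(side_length)
#     t.left(polygon_turn(num_sides))
# t.end_fill()
```

The fill requirement maps directly onto the `begin_fill()`/`end_fill()` pair shown in the documentation example earlier in this notebook.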
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/awi/cmip6/models/sandbox-2/aerosol.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: AWI\nSource ID: SANDBOX-2\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:37\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'awi', 'sandbox-2', 'aerosol')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Meteorological Forcings\n5. Key Properties --> Resolution\n6. Key Properties --> Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --> Absorption\n12. Optical Radiative Properties --> Mixtures\n13. Optical Radiative Properties --> Impact Of H2o\n14. Optical Radiative Properties --> Radiative Scheme\n15. Optical Radiative Properties --> Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of aerosol model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of aerosol model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrognostic variables in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of tracers in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre aerosol calculations generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nTimestepping framework of the aerosol model\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for aerosol advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for aerosol physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the aerosol model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE Type: STRING Cardinality: 0.1\nThree dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Variables 2D\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTwo dimensional forcing variables, e.g. land-sea mask definition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Frequency\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nFrequency with which meteorological forcings are applied (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Resolution\nResolution in the aerosol model grid\n5.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Transport\nAerosol transport\n7.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of transport in atmospheric aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for aerosol transport modeling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n",
"7.3. Mass Conservation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod used to ensure mass conservation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.4. Convention\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTransport by convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of emissions in atmospheric aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prescribed Climatology\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nSpecify the climatology type for aerosol emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n",
"8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and specified via an \"other method\"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Other Method Characteristics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCharacteristics of the \"other method\" used for aerosol emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of concentrations in atmospheric aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Prescribed Fields Mmr\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed as mass mixing ratios.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Prescribed Fields Aod\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of optical and radiative properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Optical Radiative Properties --> Absorption\nAbsorption properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.2. Dust\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Organics\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12. Optical Radiative Properties --> Mixtures\n**\n12.1. External\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there external mixing with respect to chemical composition?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Internal\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.3. Mixing Rule\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixing rule",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Optical Radiative Properties --> Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes H2O impact size?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.2. Internal Mixture\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes H2O impact internal mixture?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Optical Radiative Properties --> Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Shortwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of shortwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Longwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of longwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Optical Radiative Properties --> Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of aerosol-cloud interactions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Twomey\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the Twomey effect included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.3. Twomey Minimum Ccn\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Drizzle\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the scheme affect drizzle?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.5. Cloud Lifetime\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the scheme affect cloud lifetime?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Longwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of longwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmospheric aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the Aerosol model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n",
"16.3. Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther model components coupled to the Aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.4. Gas Phase Precursors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of gas phase aerosol precursors.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.5. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.6. Bulk Scheme Species\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of species covered by the bulk scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kubeflow/code-intelligence
|
Issue_Embeddings/notebooks/01_AcquireData.ipynb
|
mit
|
[
"Running This Notebook\nThis notebook should be run using the github/mdtok container on DockerHub. The Dockerfile that defines this container is located at the root of this repository and is named cpu.Dockerfile.\nThis will ensure that you are able to run this notebook properly, as many of the dependencies in this project are rapidly changing. To run this notebook using this container, the commands are:\nGet the container: docker pull github/mdtok\nRun the container: docker run -it --net=host -v <host_dir>:/ds github/mdtok bash",
"from mdparse.parser import transform_pre_rules, compose\nimport pandas as pd\nfrom tqdm import tqdm_notebook\nfrom fastai.text.transform import defaults",
"Source of Data\nThe GHArchive project ingests large amounts of data from GitHub repositories. This data is stored in BigQuery for public consumption. \nFor this project, we gathered over 18 million GitHub issues by executing this query. This query attempts to remove duplicate issues where the content of the issue is roughly the same.\nThis query results in over 18 Million GitHub issues. The results of this query are split into 100 csv files for free download on the following Google Cloud Storage Bucket:\nhttps://storage.googleapis.com/issue_label_bot/language_model_data/0000000000{00-99}.csv.gz, each file contains approximately 180,000 issues and is 55MB compressed.\nPreview Data\nDownload Sample\nThe below dataframe illustrates what the format of the raw data looks like:",
"df = pd.read_csv(f'https://storage.googleapis.com/issue_label_bot/language_model_data/000000000000.csv.gz').sample(5)\n\ndf.head(1)",
"Illustrate Markdown Parsing Using mdparse\nmdparse is a library that parses markdown text and annotates the text with fields with meta-data for deep learning. Below is an illustration of mdparse at work. The parsed and annotated text can be seen in the clean_body field:\nThe changes are often subtle, but can make a big difference with regard to feature extraction for language modeling.",
"pd.set_option('max_colwidth', 1000)\n\ndf['clean_body'] = ''\nfor i, b in tqdm_notebook(enumerate(df.body), total=len(df)):\n try:\n df['clean_body'].iloc[i] = compose(transform_pre_rules+defaults.text_pre_rules)(b)\n except:\n print(f'error at: {i}')\n break\n \ndf[['body', 'clean_body']]",
"Download And Pre-Process Data\nWe download the data from GCP and pre-process this data before saving to disk.",
"from fastai.text.transform import ProcessPoolExecutor, partition_by_cores\nimport numpy as np\nfrom fastai.core import parallel\nfrom itertools import chain\n\ntransforms = transform_pre_rules + defaults.text_pre_rules\n\ndef process_dict(dfdict, _):\n \"\"\"process the data, but allow failure.\"\"\"\n t = compose(transforms)\n title = dfdict['title']\n body = dfdict['body']\n try:\n text = 'xxxfldtitle '+ t(title) + ' xxxfldbody ' + t(body)\n except:\n return None\n return {'url': dfdict['url'], 'text':text}\n\n\ndef download_data(i, _):\n \"\"\"Since the data is in 100 chunks already, just do the processing by chunk.\"\"\"\n fn = f'https://storage.googleapis.com/issue_label_bot/language_model_data/{str(i).zfill(12)}.csv.gz'\n dicts = [process_dict(d, 0) for d in pd.read_csv(fn).to_dict(orient='rows')]\n df = pd.DataFrame([d for d in dicts if d])\n df.to_csv(f'/ds/IssuesLanguageModel/data/1_processed_csv/processed_part{str(i).zfill(4)}.csv', index=False)\n return df",
"Note: The below procedure took over 30 hours on a p3.8xlarge instance on AWS with 32 Cores and 64GB of Memory. You may have to change the number of workers based on your memory and compute constraints.",
"dfs = parallel(download_data, list(range(100)), max_workers=31)\n\ndfs_rows = sum([x.shape[0] for x in dfs])\nprint(f'number of rows in pre-processed data: {dfs_rows:,}')\n\ndel dfs",
"Cached pre-processed data\nSince ~19M GitHub issues take a long time to pre-process, the pre-processed files are available here:\nhttps://storage.googleapis.com/issue_label_bot/pre_processed_data/1_processed_csv/processed_part00{00-99}.csv\nPartition Data Into Train/Validation Set\nSet aside random 10 files (out of 100) as the Validation set",
"from pathlib import Path\nfrom random import shuffle\n\n# shuffle the files\np = Path('/ds/IssuesLanguageModel/data/1_processed_csv/')\nfiles = p.ls()\nshuffle(files)\n\n# show a preview of files\nfiles[:5]\n\nvalid_df = pd.concat([pd.read_csv(f) for f in files[:10]]).dropna().drop_duplicates()\ntrain_df = pd.concat([pd.read_csv(f) for f in files[10:]]).dropna().drop_duplicates()\n\nprint(f'rows in train_df:, {train_df.shape[0]:,}')\nprint(f'rows in valid_df:, {valid_df.shape[0]:,}')\n\nvalid_df.to_hdf('/ds/IssuesLanguageModel/data/2_partitioned_df/valid_df.hdf')\ntrain_df.to_hdf('/ds/IssuesLanguageModel/data/2_partitioned_df/train_df.hdf')",
"Location of Train/Validaiton DataFrames\nYou can download the above saved dataframes (in hdf format) from Google Cloud Storage:\ntrain_df.hdf (9GB): \nhttps://storage.googleapis.com/issue_label_bot/pre_processed_data/2_partitioned_df/train_df.hdf\nvalid_df.hdf (1GB)\nhttps://storage.googleapis.com/issue_label_bot/pre_processed_data/2_partitioned_df/valid_df.hdf"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
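The notebook above builds its cleaning step by composing a list of text-transform rules (`transform_pre_rules + defaults.text_pre_rules`) and applying the result to each issue. A minimal sketch of that pattern, with hypothetical stand-in rules rather than mdparse's or fastai's real ones:

```python
def compose(funcs):
    """Chain text-transform functions left to right, like mdparse's compose."""
    def inner(text):
        for f in funcs:
            text = f(text)
        return text
    return inner

# Hypothetical stand-ins for transform_pre_rules + fastai's text_pre_rules
rules = [
    lambda t: t.replace("\r\n", "\n"),   # normalize line endings
    lambda t: " ".join(t.split()),       # collapse runs of whitespace
    str.lower,                           # lowercase everything
]

clean = compose(rules)
print(clean("Fix  Crash\r\nin   Login"))  # → fix crash in login
```

The composed function can then be mapped over a dataframe column, with a try/except around each call so one malformed issue body does not abort the whole chunk, as the notebook's `process_dict` does.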
IsacLira/data-science-cookbook
|
2017/06-linear-regression/resp_rlm_otacilio_bezerra.ipynb
|
mit
|
[
"Regressão Linear Multivariada - Trabalho\nEstudo de caso: Qualidade de Vinhos\nNesta trabalho, treinaremos um modelo de regressão linear usando descendência de gradiente estocástico no conjunto de dados da Qualidade do Vinho. O exemplo pressupõe que uma cópia CSV do conjunto de dados está no diretório de trabalho atual com o nome do arquivo winequality-white.csv.\nO conjunto de dados de qualidade do vinho envolve a previsão da qualidade dos vinhos brancos em uma escala, com medidas químicas de cada vinho. É um problema de classificação multiclasse, mas também pode ser enquadrado como um problema de regressão. O número de observações para cada classe não é equilibrado. Existem 4.898 observações com 11 variáveis de entrada e 1 variável de saída. Os nomes das variáveis são os seguintes:\n\nFixed acidity.\nVolatile acidity.\nCitric acid.\nResidual sugar.\nChlorides.\nFree sulfur dioxide. \nTotal sulfur dioxide. \nDensity.\npH.\nSulphates.\nAlcohol.\nQuality (score between 0 and 10).\n\nO desempenho de referencia de predição do valor médio é um RMSE de aproximadamente 0.148 pontos de qualidade.\nUtilize o exemplo apresentado no tutorial e altere-o de forma a carregar os dados e analisar a acurácia de sua solução. \nDefinição das Bibliotecas e Funções Principais",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.preprocessing import MinMaxScaler\n\ndef RMSE(errors):\n return np.sqrt(1/errors.shape[1] * np.sum(errors**2))\n\ndef predict(X, coef, addOnes=False):\n if(addOnes): X = np.append(np.ones([X.shape[0], 1]), X, axis=1)\n return np.dot(X, coef).reshape(1, X.shape[0])\n\ndef stochasticGD(X, y, alfa=0.00001, maxEpoch=50):\n X = np.append(np.ones([X.shape[0], 1]), X, axis=1)\n coef = np.random.randn(X.shape[1], 1)\n errorHist = []\n \n for epoch in range(maxEpoch):\n error = predict(X, coef) - y\n errorHist.append(RMSE(error))\n \n for i in range(X.shape[0]):\n coef[0] -= alfa * error[0,i]\n for j in range(len(coef)-1):\n coef[j+1] -= alfa * error[0,i] * X[i,j]\n \n print(\"Epoch: {} | RMSE: {}\".format(epoch, errorHist[-1]))\n print(\"Coefficients: \\n\", coef.T)\n print(\"\\n###\")\n \n return coef, errorHist",
"Carregando o conjunto de dados e utilizando o Gradiente Descendente Estocástico",
"data = pd.read_csv(\"winequality-white.csv\", delimiter=\";\")\n\nX = MinMaxScaler().fit_transform(data.values[:,:-1])\ny = data.values[:,-1]\n\n[coef, errorHist] = stochasticGD(X, y)\n\nprint(\"Gradiente Descendente Estocástico\\nRMSE: {}\".format(RMSE(y - predict(X, coef, True))))\nprint(\"Coeficientes:\\n\", coef.T)",
"Plotagem do Custo por Época",
"plt.plot(errorHist)\nplt.show()",
"Estimativa dos Coeficientes pelo Método dos Mínimos Quadrados (OLS)",
"X = np.append(np.ones([X.shape[0], 1]), X, axis=1)\nbeta = np.dot(np.dot(np.linalg.pinv(np.dot(X.T,X)), X.T), y)\n\nprint(\"Métodos dos Mínimos Quadrados\\nRMSE: {}\".format(RMSE(y - predict(X,beta))))\nprint(\"Coeficientes:\\n\", beta)",
"Comentários\nObs.: Os dados desse dataset foram disponibilizados pela Universidade do Minho (Portugal) :P\nPrimeiramente, é interessante notar que os dados de saída ($Y$), o atributo \"Quality\", possue apenas valores discretos e bastante baixos. Em contra-partida, os atributos de entrada ($X$) se apresentam em várias escalas, e podem ser bem maiores que os valores de saída. Por esse motivo, para manter a estabilidade do Stochastic Gradient Descent, é necessário realizar algum tipo de Feature Scaling para normalizar os dados de entrada, gerando assim um treinamento mais estável.\nNo meu código, utilizei a classe MinMaxScaler do próprio Scikit-Learn para realizar essa normalização de forma rápida. O Min-Max Scaling consiste em, para cada atributo, subtrair todos os valores pelo menor valor e dividir isso pela diferença entre o maior e menor valor. Isso garante, então, que todos os dados serão dispostos no intervalo fechado [0, 1].\nNo meu código, também, utilizei a notação matricial das operações entre os coeficientes ($\\beta$) e os dados de entrada ($X$). Isso permite uma computação mais rápida, com menos linhas de códigos, e ainda mantém todas as características originais do problema. Uma outra estivativa de coeficientes, utilizando o Método dos Mínimos Quadrados, também foi apresentada e mostrou resultados similares aos do Gradiente Descendente Estocástico."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
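The per-sample update rule used in the notebook's `stochasticGD` (coefficient minus learning rate times error times input) can be checked on a toy problem. This is an illustrative sketch, not the notebook's exact code: names (`sgd_linear`, `lr`) and the noiseless test line are made up for the example.

```python
import numpy as np

def sgd_linear(X, y, lr=0.01, epochs=200):
    """Fit y ≈ b0 + X @ b with per-sample gradient steps, as in the notebook."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend intercept column
    coef = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for i in range(Xb.shape[0]):
            err = Xb[i] @ coef - y[i]
            coef -= lr * err * Xb[i]               # gradient step on 0.5 * err**2
    return coef

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))
y = 2.0 + 3.0 * X[:, 0]          # noiseless line: intercept 2, slope 3
coef = sgd_linear(X, y)
print(coef)                       # ≈ [2, 3]
```

On noiseless data the recovered coefficients should match the true intercept and slope closely, which is a quick sanity check before running the same update rule on the wine data.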
M0nica/datalogues
|
content/posts/ImportingDataIntoPandas.ipynb
|
mit
|
[
"Pandas is a Python Data Analysis Library. It allows you to play around with data and perform powerful data analysis. \nIn this example I will show you how to read data from CSV and Excel files in Pandas. You can then save the read output as in a Pandas dataframe. The sample data used in the below exercise was generated by https://mockaroo.com/.",
"import pandas as pd\n\ncsv_data_df = pd.read_csv('data/MOCK_DATA.csv')",
"Preview the first 5 lines of the data with .head() to ensure that it loaded.",
"csv_data_df.head()",
"You will need to pip install xlrd if you haven't already. In order to import data from Excel.",
"import xlrd\nexcel_data_df = pd.read_excel('data/MOCK_DATA.xlsx')\n\nexcel_data_df.head()",
"Image Courtesy of jballeis (Own work) CC BY-SA 3.0, via Wikimedia Commons"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
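To complement the read examples in the notebook above, the same dataframe can be written back out and re-read, closing the round trip. This is a sketch with made-up data in place of the Mockaroo CSV, using an in-memory buffer so no file path is assumed:

```python
import io
import pandas as pd

# Small frame standing in for the Mockaroo sample data (hypothetical values)
df = pd.DataFrame({"id": [1, 2, 3],
                   "first_name": ["Ada", "Grace", "Alan"]})

# Round-trip through CSV; index=False keeps the row index out of the file
buf = io.StringIO()
df.to_csv(buf, index=False)
buf.seek(0)
back = pd.read_csv(buf)
print(back.head())
```

With a real file you would pass a path string instead of the `StringIO` buffer; `pd.read_excel` / `df.to_excel` work the same way for Excel files.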
manoharan-lab/structural-color
|
color_mixing_tutorial.ipynb
|
gpl-3.0
|
[
"Tutorial for color mixing with bulk Monte Carlo simulations in the structural-color package\nCopyright 2016, Vinothan N. Manoharan, Victoria Hwang, Annie Stephenson\nThis file is part of the structural-color python package.\nThis package is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.\nThis package is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with this package. If not, see http://www.gnu.org/licenses/.\nIntroduction to color mixing with bulk Monte Carlo simulations\nOne of the advantages of the bulk montecarlo model is that we can sample phase functions and scattering lengths for spheres that contain different particle assemblies. This means we can simulate the reflectance of bulk films made of mixtures of spheres with different colors, allowing us to simulate color mixing using the bulk Monte Carlo model. \nBelow is an example that calculates a reflectance spectrum from a bulk film made of a mixture of two types of spheres of different colors. \nLoading and using the package and module\nYou'll need the following imports",
"%matplotlib inline\nimport numpy as np\nimport time\nimport structcol as sc\nimport structcol.refractive_index as ri\nfrom structcol import montecarlo as mc\nfrom structcol import detector as det\nfrom structcol import phase_func_sphere as pfs\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom scipy.misc import factorial\nimport os",
"Start by running Monte Carlo code for a single sphere\nThis is essentially the same as running MC for a sphere as described in montecarlo_tutorial.ipynb, only we return a few extra parameters from calc_refl_trans() and use them to calculate the phase function, scattering coefficient, and absorption coefficient for the bulk Monte Carlo simulation.\nSet parameters\nWe have to set a few extra parameters for the bulk simulation",
"# Properties of the source\nwavelengths = sc.Quantity(np.arange(400., 801.,10),'nm') # wavelengths at which to calculate reflectance\n\n# Geometric properties of the sample\nparticle_radii = sc.Quantity([130, 160],'nm') # radii of the two species of particles\nvolume_fraction_bulk = sc.Quantity(0.63,'') # volume fraction of the spheres in the bulk film\nvolume_fraction_particles = sc.Quantity(0.55, '') # volume fraction of the particles in the sphere boundary\nsphere_boundary_diameter = sc.Quantity('10 um') # diameter of sphere boundary in bulk film\nbulk_thickness = sc.Quantity('50 um') # thickness of the bulk film\nboundary = 'sphere' # geometry of sample\nboundary_bulk = 'film' # geometry of the bulk sample\n\n# Refractive indices\nn_particle = ri.n('vacuum', wavelengths) # refractive index of particle\nn_matrix = ri.n('polystyrene', wavelengths) + 2e-5*1j # refractive index of matrix\nn_matrix_bulk = ri.n('vacuum', wavelengths) # refractive index of the bulk matrix\nn_medium = ri.n('vacuum', wavelengths) # refractive index of medium outside the bulk sample.\n\n# Monte Carlo parameters\nntrajectories = 2000 # number of trajectories to run with a spherical boundary\nnevents = 300 # number of scattering events for each trajectory in a spherical boundary\nntrajectories_bulk = 2000 # number of trajectories to run in the bulk film\nnevents_bulk = 300 # number of events to run in the bulk film\n\n# Properties that should not need to be changed\nz_low = sc.Quantity('0.0 um') # sets trajectories starting point\nsns.set_style('white') # sets white plotting background",
"Run Monte Carlo for the two colors of spheres\nRun Monte Carlo simulations for a sphere boundary, for the two colors of spheres. This will give two sets of scattering parameters for each wavelength.",
"p_bulk = np.zeros((particle_radii.size, wavelengths.size, 200))\nreflectance_sphere = np.zeros(wavelengths.size)\nmu_scat_bulk = sc.Quantity(np.zeros((particle_radii.size, wavelengths.size)),'1/um')\nmu_abs_bulk = sc.Quantity(np.zeros((particle_radii.size, wavelengths.size)),'1/um')\n\nfor j in range(particle_radii.size):\n # print radius to keep track of where we are in calculation\n print('particle radius: ' + str(particle_radii[j]))\n for i in range(wavelengths.size):\n\n # caculate the effective index of the sample\n n_sample = ri.n_eff(n_particle[i], n_matrix[i], volume_fraction_particles)\n\n # Calculate the phase function and scattering and absorption coefficients from the single scattering model\n # (this absorption coefficient is of the scatterer, not of an absorber added to the system)\n p, mu_scat, mu_abs = mc.calc_scat(particle_radii[j], n_particle[i], n_sample,\n volume_fraction_particles, wavelengths[i])\n\n # Initialize the trajectories\n r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_matrix_bulk[i], n_sample, \n boundary, sample_diameter = sphere_boundary_diameter)\n r0 = sc.Quantity(r0, 'um')\n k0 = sc.Quantity(k0, '')\n W0 = sc.Quantity(W0, '')\n\n # Create trajectories object\n trajectories = mc.Trajectory(r0, k0, W0)\n\n # Generate a matrix of all the randomly sampled angles first \n sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)\n\n # Create step size distribution\n step = mc.sample_step(nevents, ntrajectories, mu_scat)\n\n # Run photons\n trajectories.absorb(mu_abs, step) \n trajectories.scatter(sintheta, costheta, sinphi, cosphi) \n trajectories.move(step)\n \n # Calculate reflection and transmition \n (refl_indices, \n trans_indices, \n _, _, _, \n refl_per_traj, trans_per_traj,\n _,_,_,_,\n reflectance_sphere[i], \n _,_, norm_refl, norm_trans) = det.calc_refl_trans(trajectories, sphere_boundary_diameter,\n n_matrix_bulk[i], n_sample, boundary, \n run_fresnel_traj = False, \n return_extra = 
True)\n \n ### Calculate phase function and lscat ###\n # use output of calc_refl_trans to calculate phase function, mu_scat, and mu_abs for the bulk\n p_bulk[j,i,:], mu_scat_bulk[j,i], mu_abs_bulk[j,i] = pfs.calc_scat_bulk(refl_per_traj, trans_per_traj, \n trans_indices, \n norm_refl, norm_trans, \n volume_fraction_bulk, \n sphere_boundary_diameter,\n n_matrix_bulk[i],\n wavelengths[i],\n plot=False, phi_dependent=False)",
"Sample distribution of particle radii\nGiven the fraction (probability distribution) of each color of sphere in the mixture, sample the particle radii for each event and trajectory. The marginalized distribution should be the same as the the given probability distribution of the sphere sizes.",
"# sample\nprob = np.array([0.5,0.5]) # fraction of each sphere color type\nsphere_type_sampled = pfs.sample_concentration(prob, ntrajectories_bulk, nevents_bulk)\n\n# plot\nsns.distplot(np.ndarray.flatten(sphere_type_sampled), kde = False)\nplt.xlim([1,2])\nplt.ylabel('number sampled')\nplt.xlabel('sphere type number')",
"Calculate reflectance of bulk film with spheres of two different colors\nThe only difference from a normal bulk reflectance calculation (see bulk_montecarlo_tutorial.ipynb) is that we use the function pfs.sample_angles_step_poly() instead of sample_angles() and sample_step()\nNote that for mixtures of different sphere types, absorption only works in the bulk matrix, not in the spheres themselves. This is because sampling the different absorption lengths for different sphere types has not yet been implemented.",
"reflectance_bulk_mix = np.zeros(wavelengths.size)\n\nfor i in range(wavelengths.size):\n \n # print the wavelength keep track of where we are in calculation \n print('wavelength: ' + str(wavelengths[i]))\n\n # Initialize the trajectories\n r0, k0, W0 = mc.initialize(nevents_bulk, ntrajectories_bulk, n_medium[i], n_matrix_bulk[i], boundary_bulk)\n r0 = sc.Quantity(r0, 'um')\n W0 = sc.Quantity(W0, '')\n k0 = sc.Quantity(k0, '')\n \n # Sample angles and calculate step size based on sampled radii\n (sintheta, costheta, sinphi, cosphi, \n step, _, _) = pfs.sample_angles_step_poly(nevents_bulk, ntrajectories_bulk,\n p_bulk[:,i,:], \n sphere_type_sampled, \n mu_scat_bulk[:,i])\n \n\n # Create trajectories object\n trajectories = mc.Trajectory(r0, k0, W0)\n\n # Run photons\n trajectories.absorb(mu_abs_bulk[0,i], step) # Note: we assume that all scattering events \n # have the same amount of absorption \n trajectories.scatter(sintheta, costheta, sinphi, cosphi) \n trajectories.move(step)\n\n # calculate reflectance\n reflectance_bulk_mix[i], transmittance = det.calc_refl_trans(trajectories, bulk_thickness,\n n_medium[i], n_matrix_bulk[i], boundary_bulk)\n\nplt.figure()\nplt.plot(wavelengths, reflectance_bulk_mix, linewidth = 3)\nplt.ylim([0,1])\nplt.xlim([400,800])\nplt.xlabel('Wavelength (nm)')\nplt.ylabel('Reflectance')\nplt.title('Bulk Reflectance')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
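The bulk Monte Carlo model above samples scattering angles from a tabulated phase function. The generic mechanism behind that step is inverse-CDF sampling; this sketch illustrates it with a hypothetical Rayleigh-like phase function and is not structcol's actual implementation (`sample_angles` here is an illustrative name, not the library's function):

```python
import numpy as np

def sample_angles(p, theta, n, rng):
    """Draw n scattering angles from a tabulated phase function p(theta)
    by inverse-CDF sampling over the sin(theta)-weighted distribution."""
    weights = p * np.sin(theta)       # solid-angle weighting on the sphere
    cdf = np.cumsum(weights)
    cdf /= cdf[-1]                    # normalize so cdf[-1] == 1
    u = rng.random(n)                 # uniform draws in [0, 1)
    return theta[np.searchsorted(cdf, u)]

theta = np.linspace(1e-3, np.pi, 200)
p = 1.0 + np.cos(theta) ** 2          # hypothetical Rayleigh-like shape
rng = np.random.default_rng(1)
angles = sample_angles(p, theta, 10_000, rng)
print(angles.mean())                  # symmetric shape, so mean near pi/2
```

For a mixture of sphere types, the model additionally samples which tabulated phase function to use at each event (the role of `sample_concentration` in the notebook) before drawing the angle.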
whitead/numerical_stats
|
unit_14/lectures/lecture_2.ipynb
|
gpl-3.0
|
[
"Writing Modules and Functions\nUnit 14, Lecture 2\nNumerical Methods and Statistics\n\nMay 1, 2018\nWriting Good, Reliable Documented Functions\nWe're going to focus now on what goes into writing a good Python function. If you want your function to be reusable, you need to store it in a textfile that ends in .py. We can do this using the %%writefile magic. Let's see an example:",
"%%writefile test.py\n\ndef hello_world():\n print('Hello World')\n\nimport test\n\ntest.hello_world()",
"If you look in the file system, you'll see we have a file called test.py. If it's in the same directory as you, you can get everything from the test file using import. Here's some examples of it's somewhere else:\n\nIf test.py is in the parent directory of yours: import ..test\nIf test.py is in a subdirectory called sub: import sub.test. To do that though you need to have an empty file called __init__.py inside of the sub folder\n\nModules\nThis file we've created is called a module, just like the math or numpy module. We can have multiple functions inside the module as well as variables.",
"%%writefile test.py\n\npi = 3.0\n\ndef square(x):\n return x*x\n\n\ndef hello_world():\n print('Hello World')\n\nimport test\nprint('pi is exactly {}'.format(test.pi))",
"Uh-oh! It is using an outdated of test.py. To get python to reload it, we can restart the kernel or use the reload command",
"from importlib import reload\nreload(test)\nprint('pi is exactly {}'.format(test.pi))",
"Documenting\nYou can add helpful documentation at the module (top of file) and function level",
"%%writefile test.py\n'''This module contains nonsense'''\n\npi = 3.0\n\ndef square(x):\n '''Want to square a number? This function will help'''\n return x*x\n\n\ndef hello_world():\n print('Hello World')\n\nreload(test)\nhelp(test)",
"Writing a good function\nThe reason for creating a module like test.py is to write a function once and for all so you don't need to copy-pasta. Let's try this for confidence intervals of data. Here are the steps:\n\nDocument what your function should do (plan)\nGet basic functionality working in a notebook (develop)\nMove function to a file and import (deploy)\nWrite some cells in a notebook to test basic cases until you have everything working (test)\nFinally polish off your code by testing bad inputs and trying to break it (more testing)\n\nExample: Writing a function to compute confidence intervals\nLet's see this in action for computing confidence intervals\n1. Plan\nI'll be writing out the documentation. I'll use a docstring format called Napoleon. This is more complex than what we've seen before. We specify the function, how it works, examples, what it takes and what it returns. It's important to write your documentation FIRST, so you know what to write",
"def conf_interval(data, interval_type='double', confidence=0.95):\n '''This function takes in the data and computes a confidence interval\n \n Examples\n --------\n\n data = [4,3,2,5]\n center, width = conf_interval(data)\n print('The mean is {} +/- {}'.format(center, width))\n \n Parameters\n ----------\n data : list\n The list of data points\n interval_type : str\n What kind of confidence interval. Can be double, upper, lower.\n confidence : float\n The confidence of the interval\n Returns\n -------\n center, width\n Center is the mean of the data. Width is the width of the confidence interval. \n If a lower or upper is specified, width is the upper or lower value.\n '''",
"2. Develop\nLet's try first of all to compute just a double-sided confidence interval",
"import scipy.stats as ss\nimport numpy as np\n\ndata = [4,3,5,3,6, 7]\ninterval_type = 'double'\nconfidence = 0.95\n\ncenter = np.mean(data)\ns = np.std(data, ddof=1)\nppf = 1 - (1 - confidence) / 2\nt = ss.t.ppf(ppf, len(data))\nwidth = s / np.sqrt(len(data)) * t\n\nprint(center, width, ppf)",
"Now let's try adding some logic for the interval_type of confidence interval",
"interval_type = 'lower'\nif interval_type == 'lower':\n ppf = confidence\n t = ss.t.ppf(ppf, len(data))\n top = s / np.sqrt(len(data)) * t\n print(center, top)",
"The lower confidence interval should run from neg-infinity to a value above the mean. We need to adjust the code.",
"interval_type = 'lower'\nif interval_type == 'lower':\n ppf = confidence\n t = ss.t.ppf(ppf, len(data))\n top = s / np.sqrt(len(data)) * t\n print(center, center + top)\n\ninterval_type = 'upper'\nif interval_type == 'upper':\n ppf = 1 - confidence\n t = ss.t.ppf(ppf, len(data))\n top = s / np.sqrt(len(data)) * t\n print(center, center + top)",
"We can see there is quite a bit of code-repeat. Let's try to put the whole thing together without repeats",
"import scipy.stats as ss\nimport numpy as np\n\ndata = [4,3,5,3,6, 7]\ninterval_type = 'lower'\nconfidence = 0.95\n\ncenter = np.mean(data)\ns = np.std(data, ddof=1)\nif interval_type == 'lower':\n ppf = confidence\nelif interval_type == 'upper':\n ppf = 1 - confidence\nelse:\n ppf = 1 - (1 - confidence) / 2\nt = ss.t.ppf(ppf, len(data))\nwidth = s / np.sqrt(len(data)) * t\n\nif interval_type == 'lower' or interval_type == 'upper':\n width = width + center\n\nprint(center, width, ppf)",
"3. Deploy\nLet's put everything together now into a file",
"%%writefile utilities.py\n\nimport scipy.stats as ss\nimport numpy as np\n\ndef conf_interval(data, interval_type='double', confidence=0.95):\n '''This function takes in the data and computes a confidence interval\n \n Examples\n --------\n\n data = [4,3,2,5]\n center, width = conf_interval(data)\n print('The mean is {} +/- {}'.format(center, width))\n \n Parameters\n ----------\n data : list\n The list of data points\n interval_type : str\n What kind of confidence interval. Can be double, upper, lower.\n confidence : float\n The confidence of the interval\n Returns\n -------\n center, width\n Center is the mean of the data. Width is the width of the confidence interval. \n If a lower or upper is specified, width is the upper or lower value.\n '''\n\n center = np.mean(data)\n s = np.std(data, ddof=1)\n if interval_type == 'lower':\n ppf = confidence\n elif interval_type == 'upper':\n ppf = 1 - confidence\n else:\n ppf = 1 - (1 - confidence) / 2\n t = ss.t.ppf(ppf, len(data))\n width = s / np.sqrt(len(data)) * t\n \n if interval_type == 'lower' or interval_type == 'upper':\n width = center + width\n return center, width\n\nimport utilities\nreload(utilities)",
"I wrote some example code with the documentation. Let's see if it works",
"data = [4,3,2,5]\ncenter, width = utilities.conf_interval(data)\nprint('The mean is {} +/- {}'.format(center, width))",
"4. Test\nLet's now test the code for a few different cases",
"#see if it recovers the correct mean\ndata = ss.norm.rvs(size=1000, loc=12.4)\nprint(utilities.conf_interval(data))\n\n#see if it can handle upper/lower\nprint(utilities.conf_interval(data, 'upper'))\n\nprint(utilities.conf_interval(data, 'lower'))\n\n#Check different confidence values\nprint(utilities.conf_interval(data, confidence=0.75))",
"5. Break it",
"utilities.conf_interval(data, confidence=95)",
"This is a pretty usual mistake. We should probably check that confidence is a valid probability.",
"utilities.conf_interval([3], confidence=0.5)",
"Uh-oh, only one value was given. We should probably warn if there are not enough values.",
"%%writefile utilities.py\n\nimport scipy.stats as ss\nimport numpy as np\n\ndef conf_interval(data, interval_type='double', confidence=0.95):\n '''This function takes in the data and computes a confidence interval\n \n Examples\n --------\n\n data = [4,3,2,5]\n center, width = conf_interval(data)\n print('The mean is {} +/- {}'.format(center, width))\n \n Parameters\n ----------\n data : list\n The list of data points\n interval_type : str\n What kind of confidence interval. Can be double, upper, lower.\n confidence : float\n The confidence of the interval\n Returns\n -------\n center, width\n Center is the mean of the data. Width is the width of the confidence interval. \n If a lower or upper is specified, width is the upper or lower value.\n '''\n \n if(len(data) < 3):\n print('Not enough data given. Must have at least 3 values')\n\n center = np.mean(data)\n s = np.std(data, ddof=1)\n if interval_type == 'lower':\n ppf = confidence\n elif interval_type == 'upper':\n ppf = 1 - confidence\n else:\n ppf = 1 - (1 - confidence) / 2\n t = ss.t.ppf(ppf, len(data))\n width = s / np.sqrt(len(data)) * t\n \n if interval_type == 'lower' or interval_type == 'upper':\n width = center + width\n return center, width\n\nreload(utilities)\nutilities.conf_interval([3])",
"Ah, but notice it didn't actually stop the program!\nExceptions\nWhat we need is to do one of those error messages you see a lot. We can do that by raising an exception",
"raise RuntimeError('This is a problem')\n\nraise ValueError('Your value is bad and you should feel bad')\n\n%%writefile utilities.py\n\nimport scipy.stats as ss\nimport numpy as np\n\ndef conf_interval(data, interval_type='double', confidence=0.95):\n '''This function takes in the data and computes a confidence interval\n \n Examples\n --------\n\n data = [4,3,2,5]\n center, width = conf_interval(data)\n print('The mean is {} +/- {}'.format(center, width))\n \n Parameters\n ----------\n data : list\n The list of data points\n interval_type : str\n What kind of confidence interval. Can be double, upper, lower.\n confidence : float\n The confidence of the interval\n Returns\n -------\n center, width\n Center is the mean of the data. Width is the width of the confidence interval. \n If a lower or upper is specified, width is the upper or lower value.\n '''\n \n if(len(data) < 3):\n raise ValueError('Not enough data given. Must have at least 3 values')\n\n center = np.mean(data)\n s = np.std(data, ddof=1)\n if interval_type == 'lower':\n ppf = confidence\n elif interval_type == 'upper':\n ppf = 1 - confidence\n else:\n ppf = 1 - (1 - confidence) / 2\n t = ss.t.ppf(ppf, len(data))\n width = s / np.sqrt(len(data)) * t\n \n if interval_type == 'lower' or interval_type == 'upper':\n width = center + width\n return center, width\n\nreload(utilities)\nutilities.conf_interval([3])\n\n%%writefile utilities.py\n\nimport scipy.stats as ss\nimport numpy as np\n\ndef conf_interval(data, interval_type='double', confidence=0.95):\n '''This function takes in the data and computes a confidence interval\n \n Examples\n --------\n\n data = [4,3,2,5]\n center, width = conf_interval(data)\n print('The mean is {} +/- {}'.format(center, width))\n \n Parameters\n ----------\n data : list\n The list of data points\n interval_type : str\n What kind of confidence interval. 
Can be double, upper, lower.\n confidence : float\n The confidence of the interval\n Returns\n -------\n center, width\n Center is the mean of the data. Width is the width of the confidence interval. \n If a lower or upper is specified, width is the upper or lower value.\n '''\n \n if(len(data) < 3):\n raise ValueError('Not enough data given. Must have at least 3 values')\n if(interval_type not in ['upper', 'lower', 'double']):\n raise ValueError('I do not know how to make a {} confidence interval'.format(interval_type))\n if(0 > confidence or confidence > 1):\n raise ValueError('Confidence must be between 0 and 1')\n \n center = np.mean(data)\n s = np.std(data, ddof=1)\n if interval_type == 'lower':\n ppf = confidence\n elif interval_type == 'upper':\n ppf = 1 - confidence\n else:\n ppf = 1 - (1 - confidence) / 2\n t = ss.t.ppf(ppf, len(data))\n width = s / np.sqrt(len(data)) * t\n \n if interval_type == 'lower' or interval_type == 'upper':\n width = center + width\n return center, width\n\nreload(utilities)\nutilities.conf_interval([3])\n\nutilities.conf_interval([3,4,32], confidence=95)",
"Packaging up your files\nNow we'll learn how to put all our files together into a package that we can always use.\nYou need to arrange your files and folders in a special way. Let's say I'm putting all my functions together into a package called che116. I need to arrange it like this:\nche116-package/ <-- the top directory\n setup.py <-- the file which gives info about the package\n che116/ <-- a folder where the code is stored\n __init__.py <-- a completely empty file. The name is important\n stats.py <-- where I would put some functions related to stats\n\nHere's the contents of the three files we need to make. NOTE: You need to create the folders above before you can run this. Change the stuff after %%writefile to match where you want it.",
"%%writefile unit_15/che116-package/setup.py\n\nfrom setuptools import setup\n\nsetup(name = 'che116', #the name for install purposes\n author = 'Andrew White', #for your own info\n description = 'Some stuff I wrote for CHE 116', #displayed when install/update\n version='1.0',\n packages=['che116']) #This name should match the directory where you put your code\n\n%%writefile unit_15/che116-package/che116/__init__.py\n'''You can put some comments in here if you want. They should describe the package.'''\n\n%%writefile unit_15/che116-package/che116/stats.py\n\n\nimport scipy.stats as ss\nimport numpy as np\n\ndef conf_interval(data, interval_type='double', confidence=0.95):\n '''This function takes in the data and computes a confidence interval\n \n Examples\n --------\n\n data = [4,3,2,5]\n center, width = conf_interval(data)\n print('The mean is {} +/- {}'.format(center, width))\n \n Parameters\n ----------\n data : list\n The list of data points\n interval_type : str\n What kind of confidence interval. Can be double, upper, lower.\n confidence : float\n The confidence of the interval\n Returns\n -------\n center, width\n Center is the mean of the data. Width is the width of the confidence interval. \n If a lower or upper is specified, width is the upper or lower value.\n '''\n \n if(len(data) < 3):\n raise ValueError('Not enough data given. Must have at least 3 values')\n if(interval_type not in ['upper', 'lower', 'double']):\n raise ValueError('I do not know how to make a {} confidence interval'.format(interval_type))\n if(0 > confidence or confidence > 1):\n raise ValueError('Confidence must be between 0 and 1')\n \n center = np.mean(data)\n s = np.std(data, ddof=1)\n if interval_type == 'lower':\n ppf = confidence\n elif interval_type == 'upper':\n ppf = 1 - confidence\n else:\n ppf = 1 - (1 - confidence) / 2\n t = ss.t.ppf(ppf, len(data))\n width = s / np.sqrt(len(data)) * t\n \n if interval_type == 'lower' or interval_type == 'upper':\n width = center + width\n return center, width",
"Installing your package\nOnce you're done, run pip install -e [path to your folder], where the path is the directory where you put the setup.py file. The -e means editable: if you edit any of the above files, you do not need to reinstall",
"%system pip install -e unit_15/che116-package\n\n#YOU MUST RESTART KERNEL FIRST TIME THROUGH\n#after install + restart, you'll always have your package available\nimport che116\nimport che116.stats as cs\n\ncs.conf_interval([4,3,4])\n\nhelp(che116)\n\nhelp(che116.stats)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
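The `conf_interval` function packaged above can be sanity-checked against its own docstring example; a minimal standalone sketch of the two-sided case (it mirrors the stats.py cell, including its use of `len(data)` degrees of freedom):

```python
import numpy as np
import scipy.stats as ss

def conf_interval(data, confidence=0.95):
    # two-sided confidence interval for the mean, as in the stats.py cell above
    center = np.mean(data)
    s = np.std(data, ddof=1)                          # sample standard deviation
    t = ss.t.ppf(1 - (1 - confidence) / 2, len(data)) # t critical value
    width = s / np.sqrt(len(data)) * t
    return center, width

center, width = conf_interval([4, 3, 2, 5])
print('The mean is {:.2f} +/- {:.2f}'.format(center, width))
```

Running this reproduces what the docstring example would print after installing the package.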
miguelesteras/IF-Microscopy-Assistant
|
plots/owlstone_assignment_miguelesteras.ipynb
|
mit
|
[
"Owlstone exemplar data analysis assignment",
"\"\"\"\nCreated on Thu Aug 24 19:36:27 2017\n@author: miguelesteras\n\"\"\"\n# import libraries \nimport numpy as np\nfrom scipy import stats\nimport pandas as pd\nfrom detect_peaks import *\nfrom matplotlib import pyplot as plt\nfrom __future__ import division, print_function\n\n# Load csv files as dataframes and convert them to np.arrays\nmatrix_data = pd.read_csv('/Users/miguelesteras/Desktop/OwlStone/test_matrix.csv')\nperipheral_data = pd.read_csv('/Users/miguelesteras/Desktop/OwlStone/test_peripheral_dat.csv')",
"<br>\nTask 1: Inspect the peripheral data and comment on the stability of the conditions over time\nDescriptive Statistics",
"peripheral_data.iloc[:,1:].describe()",
"<br>\nPeripheral Data Visualization (Considering no time dependency)\nThe peripheral data is standardized to z-scores along each variable so it can be plotted in a single graph. The z-score returns nan values for vectors with std=0, hence nan values are converted to '0' before plotting.\n<br><br>",
"peripheral_np = np.transpose(peripheral_data.as_matrix())\nperipheral_z = np.nan_to_num(np.apply_along_axis(stats.zscore, 1, peripheral_np[1:,:]), copy=False)\nperipheral_plot = peripheral_z.tolist()\nlabels = list(peripheral_data.columns.values[1:])\n\n# plot boxplot and scatterplot\nfig, axes = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))\nplt.xticks(peripheral_np[0], labels, rotation='vertical')\naxes.boxplot(peripheral_plot,0,'') \naxes.set_title('Box Plot + Scatter Plot')\nfor i in range(0,len(peripheral_plot)):\n y = peripheral_plot[i]\n # Add some random \"jitter\" to the x-axis\n x = np.random.normal(i+1, 0.04, size=len(y))\n axes.plot(x, y, 'r.', alpha=0.2)\n \naxes.set_xticklabels(labels)\naxes.set_xlabel('peripheral variables')\naxes.set_ylabel('z-score')\n\nplt.show()",
"<br>\nTime series peripheral data visualization",
"# visualization in grid format\nperipheral_plot2 = peripheral_np[1:,:].tolist()\nsize = np.ceil(np.sqrt(len(peripheral_plot2)))\nfig2, axes2 = plt.subplots(nrows=int(size),\n ncols=int(size), figsize=(40, 30))\nax = axes2.flatten()\nfor j in range(0,len(peripheral_plot2)):\n ax[j].plot(peripheral_plot2[j], \n linestyle=':', marker='o', color='r')\n ax[j].set_ylabel(labels[j], fontsize=14)\n \nplt.show()\n\n# Comparison of some variables (example) \nfig, axes = plt.subplots(nrows=1, ncols=1, figsize=(8, 6))\nselection = [6,9,11,12]\ntitle = []\nfor i in selection:\n axes.plot(peripheral_plot[i], linestyle=':', marker='o', alpha=0.2)\naxes.set_ylabel('z-score')\n\nprint (labels[6],' & ',labels[9],' & ',labels[11],' & ',labels[12])\nplt.show()",
"<br>\nConclusions from Peripheral Data Visualization (based on time series, assuming a temporal order)\nSome peripheral measures vary greatly. Some correlate (see example above), some seem to be time-independent and move between a few values (e.g. mosfet_temperature and inlet_flow), and others might be time-dependent (e.g. dispersion voltage). A better understanding of the nature of the measurements might reveal causal effects in the data. These observations might explain the temporal pattern observed in the matrix data (see the study below).\n<br>\nTask 2: Exploratory data analysis on the test_matrix data set\nDescriptive Statistics",
"matrix_data.describe()",
"<br>\nExtract FWHM\nThe function used to detect peaks is a modification of a function written by Marcos Duarte, and it is modeled after the MATLAB findpeaks. Parameters such as mph and mpd can be tuned to filter out some false-positive peaks based on peak intensity relative to the background, distance between peaks, and other criteria.\nOnce peaks are detected, the FWHM is computed by walking left and right from the peak, looking for the first value below the half-maximum height 'HM' (this height is calculated as the halfway point between the peak intensity and the minimum value in the series; there are alternatives to this, e.g. using the 25% quantile). The two values around the HM are interpolated to calculate the exact value of x on each side. The distance/width in x between these two points is the FWHM.",
"matrix_np = np.transpose(matrix_data.as_matrix())\nmatrix_plot = matrix_np[1::].tolist()\n\n# detect peaks and calculate FWHM. mph = minimum peak height. mpd = minimum horizontal distance\npeaks = []\nfwhm = []\nnumPeaks = np.zeros(len(matrix_plot))\nmaxPeak = np.zeros(len(matrix_plot))\nCV = matrix_np[0]\nmpd = 20\nk=0\nfor faims in matrix_plot:\n mph = 2*np.std(faims)+np.median(faims)\n ind = detect_peaks(faims, CV, mph=mph, mpd=mpd, threshold=0, edge='rising',\n kpsh=False, valley=False, show=False, ax=None)\n peaks.append(ind.tolist())\n numPeaks[k] = len(ind)\n maxPeak[k] = max(faims)\n k=k+1\n width = []\n for peak in ind:\n width.append(find_FWHM(np.asarray(faims), peak))\n fwhm.append(width) # python list variable storing FWHM for all series. Each value correspond (same index) variable peaks\n\n\n# plot example\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 8))\naxes[0].plot(stats.zscore(numPeaks), 'ro', label='no.Peaks')\naxes[0].plot(stats.zscore(maxPeak), label='max Peak Value')\naxes[0].set_ylabel('no. of peaks & max.value', fontsize=16)\naxes[0].legend(loc='best', framealpha=.5, numpoints=1, fontsize=14)\naxes[0].set_title('number of peaks in sequence vs maximum intensity in sequence (z-scored normalized)')\n\nmph = 2*np.std(matrix_plot[21])+np.median(matrix_plot[21])\nind = detect_peaks(matrix_plot[21], CV, mph=mph, mpd=mpd, threshold=0, edge='rising',\n kpsh=False, valley=False, show=True, ax=axes[1],title='Sample 21')\n\n\n# smooth signal\nwindow='hanning'\nwindow_len=11 \nmatrix_smooth = []\nfor x in matrix_plot:\n s=np.r_[x[window_len-1:0:-1],x,x[-1:-window_len:-1]] \n w=eval('np.'+window+'(window_len)') \n y=np.convolve(w/w.sum(),s,mode='valid')\n matrix_smooth.append(y.tolist())\n\n# plot example\nfig, axes = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))\ny = matrix_smooth[38] \nCV2 = np.linspace(min(CV),max(CV),num=len(y))\nmph = 2*np.std(y)+np.median(y)\nind = detect_peaks(y, CV2, mph=mph, mpd=mpd, threshold=0, edge='rising',\n kpsh=False, valley=False, show=True, ax=axes, title=\"Sample 38 'smooth'\")\n\nfig, axes = plt.subplots(nrows=1, ncols=1, figsize=(10, 6))\nmph = 2*np.std(matrix_plot[38])+np.median(matrix_plot[38])\nind = detect_peaks(matrix_plot[38], CV, mph=mph, mpd=mpd, threshold=0, edge='rising',\n kpsh=False, valley=False, show=True, ax=axes,title='Sample 38')\n\n\n# detect peaks and calculate FWHM from smoothed signal. \npeaksSM = []\nfwhmSM = []\nnumPeaksSM = np.zeros(len(matrix_smooth))\nmaxPeakSM = np.zeros(len(matrix_smooth))\nCV = matrix_np[0]\nmpd = 20\nk=0\nfor faims in matrix_smooth:\n CV2 = np.linspace(min(CV),max(CV),num=len(faims))\n mph = 2*np.std(faims)+np.median(faims)\n ind = detect_peaks(faims, CV2, mph=mph, mpd=mpd, threshold=0, edge='rising',\n kpsh=False, valley=False, show=False, ax=None)\n peaksSM.append(ind.tolist())\n numPeaksSM[k] = len(ind)\n maxPeakSM[k] = max(faims)\n k=k+1\n width = []\n for peak in ind:\n width.append(find_FWHM(np.asarray(faims), peak))\n fwhmSM.append(width) # python list variable storing FWHM for all series.",
"<br>\nTask 3: Testing peak detection and FWHM algorithm\nThe peak detection might be improved by avoiding detection of small peaks that are part of a big one, filtering peaks based on the gradients of the neighbourhood (left and right). For series with a non-constant baseline (e.g. sample 38), a moving median might be used to calculate the peak filter threshold. \nThe FWHM algorithm can be made more robust. For series with a non-constant baseline (as is the case in some of the examples here), a moving minimum or moving 25% quantile might be better suited to calculate the HM. This method is not robust when two peaks are too close to each other. One possible resolution in this scenario could be fitting a mixture of gaussians (e.g. 2 gaussians for two peaks) to the query distribution, and using the best-fit gaussians to calculate the FWHM.\nThe algorithm could be tested against labeled data to quantify sensitivity and specificity. \n<br>\nSummary\nMaximum peak intensities drop dramatically after the 15th iteration (from 331 to 14). After that, noise greatly hampers the detection of true-positive peaks. This might be explained by the nature of the sample (no true positives are present in that fraction) or by some of the peripheral factors.\nSmoothing the signal before peak detection can help reduce the number of false positives in cases of low-intensity peaks within noisy data. \nPeak detection might be improved by training a model for peak classification (e.g. random forest), for which some labeled data (ground truth) is needed, as well as a suitable 'peak descriptor', a vector of variables that describes and discriminates positives from negatives. Some of these variables might be fwhm, height, CV, distance to neighbours,... \n<br>\n<br>\nFunctions",
"\ndef detect_peaks(x, x2, mph=None, mpd=1, threshold=0, edge='rising',\n kpsh=False, valley=False, show=False, ax=None, title='Peak Detection'):\n\n \"\"\"Detect peaks in data based on their amplitude and other features.\n functions 'detect_peaks' & '_plot' have been modified from code by Marcos Duarte\n https://github.com/demotu/BMC\n\n Parameters\n ----------\n x : 1D array_like data.\n mph : detect peaks that are greater than minimum peak height.\n mpd : detect peaks that are at least separated by minimum peak distance (in number of data).\n threshold : detect peaks (valleys) that are greater (smaller) than `threshold` in relation to their immediate neighbors.\n edge : for a flat peak, keep only the rising edge ('rising'), only the falling edge ('falling'), both edges ('both'), or don't detect a flat peak (None).\n kpsh : keep peaks with same height even if they are closer than `mpd`.\n valley : if True (1), detect valleys (local minima) instead of peaks.\n show : if True (1), plot data in matplotlib figure.\n ax : a matplotlib.axes.Axes instance, optional (default = None).\n\n Returns\n -------\n ind : 1D array_like indices of the peaks in `x`.\n\n References\n ----------\n [1] http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/DetectPeaks.ipynb\n\n \"\"\"\n\n x = np.atleast_1d(x).astype('float64')\n if x.size < 3:\n return np.array([], dtype=int)\n if valley:\n x = -x\n # find indices of all peaks\n dx = x[1:] - x[:-1]\n # handle NaN's\n indnan = np.where(np.isnan(x))[0]\n if indnan.size:\n x[indnan] = np.inf\n dx[np.where(np.isnan(dx))[0]] = np.inf\n ine, ire, ife = np.array([[], [], []], dtype=int)\n if not edge:\n ine = np.where((np.hstack((dx, 0)) < 0) & (np.hstack((0, dx)) > 0))[0]\n else:\n if edge.lower() in ['rising', 'both']:\n ire = np.where((np.hstack((dx, 0)) <= 0) & (np.hstack((0, dx)) > 0))[0]\n if edge.lower() in ['falling', 'both']:\n ife = np.where((np.hstack((dx, 0)) < 0) & (np.hstack((0, dx)) >= 0))[0]\n ind = np.unique(np.hstack((ine, ire, ife)))\n # handle NaN's\n if ind.size and indnan.size:\n # NaN's and values close to NaN's cannot be peaks\n ind = ind[np.in1d(ind, np.unique(np.hstack((indnan, indnan-1, indnan+1))), invert=True)]\n # first and last values of x cannot be peaks\n if ind.size and ind[0] == 0:\n ind = ind[1:]\n if ind.size and ind[-1] == x.size-1:\n ind = ind[:-1]\n # remove peaks < minimum peak height\n if ind.size and mph is not None:\n ind = ind[x[ind] >= mph]\n # remove peaks - neighbors < threshold\n if ind.size and threshold > 0:\n dx = np.min(np.vstack([x[ind]-x[ind-1], x[ind]-x[ind+1]]), axis=0)\n ind = np.delete(ind, np.where(dx < threshold)[0])\n # detect small peaks closer than minimum peak distance\n if ind.size and mpd > 1:\n ind = ind[np.argsort(x[ind])][::-1] # sort ind by peak height\n idel = np.zeros(ind.size, dtype=bool)\n for i in range(ind.size):\n if not idel[i]:\n # keep peaks with the same height if kpsh is True\n idel = idel | (ind >= ind[i] - mpd) & (ind <= ind[i] + mpd) \\\n & (x[ind[i]] > x[ind] if kpsh else True)\n idel[i] = 0 # Keep current peak\n # remove the small peaks and sort back the indices by their occurrence\n ind = np.sort(ind[~idel])\n\n if show:\n if indnan.size:\n x[indnan] = np.nan\n if valley:\n x = -x\n _plot(x, x2, mph, mpd, threshold, edge, valley, ax, ind, title)\n\n return ind\n\n\n\n\ndef _plot(x, x2, mph, mpd, threshold, edge, valley, ax, ind, title):\n \"\"\"Plot results of the detect_peaks function, see its help.\"\"\"\n try:\n import matplotlib.pyplot as plt\n except ImportError:\n print('matplotlib is not available.')\n else:\n if ax is None:\n _, ax = plt.subplots(1, 1, figsize=(8, 4))\n\n ax.plot(x2, x, 'b', lw=1)\n if ind.size:\n label = 'valley' if valley else 'peak'\n label = label + 's' if ind.size > 1 else label\n ax.plot(x2[ind], x[ind], '+', mfc=None, mec='r', mew=2, ms=8,\n label='%d %s' % (ind.size, label))\n ax.legend(loc='best', framealpha=.5, numpoints=1)\n ymin, ymax = x[np.isfinite(x)].min(), x[np.isfinite(x)].max()\n yrange = ymax - ymin if ymax > ymin else 1\n ax.set_ylim(ymin - 0.1*yrange, ymax + 0.1*yrange)\n ax.set_xlabel('CV/Line', fontsize=14)\n ax.set_ylabel('FAIMS', fontsize=14)\n ax.set_title(title, y=1, fontsize=14)\n plt.show()\n\n\n\n\ndef find_FWHM(vec, peakIdx):\n halfmax = min(vec)+((vec[peakIdx]-min(vec))/2) # half maximum height of peak\n ind1 = peakIdx\n ind2 = peakIdx \n# walk right and left from peak until halfmax value is passed. \n# If value is not in sequence the function returns a float nan value. \n while vec[ind1]>halfmax: \n if ind1 == 0:\n width = float('NaN')\n return width\n else:\n ind1=ind1-1\n while vec[ind2]>halfmax:\n if ind2 == len(vec)-1:\n width = float('NaN')\n return width\n ind2=ind2+1 \n\n # Interpolate the exact value of x for y=halfmax, left and right from the peak\n gradient1 = vec[ind1+1]-vec[ind1]\n gradient2 = vec[ind2]-vec[ind2-1]\n interpo1= ind1 + (halfmax -vec[ind1])/gradient1\n interpo2= ind2 + (halfmax -vec[ind2])/gradient2\n #calculate the width and return value\n width = interpo2-interpo1\n return width"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
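The walk-and-interpolate FWHM scheme described in the notebook above can be checked on a synthetic peak where the answer is known exactly; a minimal sketch (as in `find_FWHM` above, the baseline is the series minimum and the width is returned in sample indices, not CV units):

```python
import numpy as np

def find_fwhm(vec, peak_idx):
    """FWHM: walk left/right from the peak to the half-maximum height,
    then linearly interpolate the exact crossing on each side."""
    halfmax = vec.min() + (vec[peak_idx] - vec.min()) / 2.0
    i1 = i2 = peak_idx
    while vec[i1] > halfmax:            # walk left
        if i1 == 0:
            return float('nan')
        i1 -= 1
    while vec[i2] > halfmax:            # walk right
        if i2 == len(vec) - 1:
            return float('nan')
        i2 += 1
    # interpolate the half-max crossing position on each side
    x1 = i1 + (halfmax - vec[i1]) / (vec[i1 + 1] - vec[i1])
    x2 = i2 + (halfmax - vec[i2]) / (vec[i2] - vec[i2 - 1])
    return x2 - x1

# unit-height triangular peak centred on index 50; half max at indices 40 and 60
vec = np.maximum(0.0, 1.0 - np.abs(np.arange(101) - 50) / 20.0)
width = find_fwhm(vec, peak_idx=50)
```

For this triangle the crossings land exactly on samples, so the interpolation step is a no-op and the width is 20 samples.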
mne-tools/mne-tools.github.io
|
0.13/_downloads/plot_cluster_stats_evoked.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Permutation F-test on sensor data with 1D cluster level\nTests whether the evoked response is significantly different\nbetween conditions. The multiple comparisons problem is addressed\nwith a cluster-level permutation test.",
"# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import io\nfrom mne.stats import permutation_cluster_test\nfrom mne.datasets import sample\n\nprint(__doc__)",
"Set parameters",
"data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id = 1\ntmin = -0.2\ntmax = 0.5\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\nchannel = 'MEG 1332' # include only this channel in analysis\ninclude = [channel]",
"Read epochs for the channel of interest",
"picks = mne.pick_types(raw.info, meg=False, eog=True, include=include,\n exclude='bads')\nevent_id = 1\nreject = dict(grad=4000e-13, eog=150e-6)\nepochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=reject)\ncondition1 = epochs1.get_data() # as 3D matrix\n\nevent_id = 2\nepochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=reject)\ncondition2 = epochs2.get_data() # as 3D matrix\n\ncondition1 = condition1[:, 0, :] # take only one channel to get a 2D array\ncondition2 = condition2[:, 0, :] # take only one channel to get a 2D array",
"Compute statistic",
"threshold = 6.0\nT_obs, clusters, cluster_p_values, H0 = \\\n permutation_cluster_test([condition1, condition2], n_permutations=1000,\n threshold=threshold, tail=1, n_jobs=1)",
"Plot",
"times = epochs1.times\nplt.close('all')\nplt.subplot(211)\nplt.title('Channel : ' + channel)\nplt.plot(times, condition1.mean(axis=0) - condition2.mean(axis=0),\n label=\"ERF Contrast (Event 1 - Event 2)\")\nplt.ylabel(\"MEG (T / m)\")\nplt.legend()\nplt.subplot(212)\nfor i_c, c in enumerate(clusters):\n c = c[0]\n if cluster_p_values[i_c] <= 0.05:\n h = plt.axvspan(times[c.start], times[c.stop - 1],\n color='r', alpha=0.3)\n else:\n plt.axvspan(times[c.start], times[c.stop - 1], color=(0.3, 0.3, 0.3),\n alpha=0.3)\nhf = plt.plot(times, T_obs, 'g')\nplt.legend((h, ), ('cluster p-value < 0.05', ))\nplt.xlabel(\"time (ms)\")\nplt.ylabel(\"f-values\")\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
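The label-shuffling idea behind `permutation_cluster_test` in the notebook above can be illustrated with a toy NumPy version (omitting the cluster-level step; the data here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# two toy 1-D "conditions": condition b is shifted upward
a = rng.normal(0.0, 1.0, size=40)
b = rng.normal(1.5, 1.0, size=40)

observed = b.mean() - a.mean()

# build the null distribution by shuffling the condition labels
pooled = np.concatenate([a, b])
n_perm = 2000
null = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(pooled)
    null[i] = perm[len(a):].mean() - perm[:len(a)].mean()

# one-sided p-value, counting the observed statistic itself
p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
print('observed difference: %.2f, p = %.4f' % (observed, p_value))
```

MNE's version applies the same logic to a statistic computed per cluster of adjacent time points, which is what controls for multiple comparisons.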
lmoresi/UoM-VIEPS-Intro-to-Python
|
Notebooks/Numpy+Scipy/5 - Scipy Interpolate.ipynb
|
mit
|
[
"scipy.interpolate\nThis module provides general interpolation capability for data in 1, 2, and higher dimensions. This list of features is from the documentation:\n\n\nA class representing an interpolant (interp1d) in 1-D, offering several interpolation methods.\n\n\nConvenience function griddata offering a simple interface to interpolation in N dimensions (N = 1, 2, 3, 4, ...). Object-oriented interface for the underlying routines is also available.\n\n\nFunctions for 1- and 2-dimensional (smoothed) cubic-spline interpolation, based on the FORTRAN library FITPACK. There are both procedural and object-oriented interfaces for the FITPACK library.\n\n\nInterpolation using Radial Basis Functions.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"1D data",
"from scipy.interpolate import interp1d\n\nx = np.linspace(0, 10, num=11, endpoint=True)\ny = np.cos(-x**2/9.0)\nf = interp1d(x, y, kind='linear') # default if kind=None\nf2 = interp1d(x, y, kind='cubic')\nf3 = interp1d(x, y, kind='nearest')\n\nxnew = np.linspace(0, 10, num=41, endpoint=True)\nplt.plot(x, y, 'o', xnew, f(xnew), '-', xnew, f2(xnew), '--', xnew, f3(xnew), '.-')\nplt.legend(['data', 'linear', 'cubic', 'nearest'], loc='best')\nplt.show()",
"nD data\nThere are fewer approaches to n-dimensional data, the evaluation for arbitrary dimensions is always for points on an n dimensional grid.",
"from scipy.interpolate import griddata\n\ndef func(x, y):\n return x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2\n\n# A regular grid array of x,y coordinates\n\ngrid_x, grid_y = np.mgrid[0:1:100j, 0:1:200j] # see np.info(np.mgrid) for an explanation of the 200j !!\n\nnp.info(np.mgrid)\n\n# A random sampling within the same area\n\npoints = np.random.rand(1000, 2)\nvalues = func(points[:,0], points[:,1])\n\n# Resample from the values at these points onto the regular mesh\n\ngrid_z0 = griddata(points, values, (grid_x, grid_y), method='nearest')\ngrid_z1 = griddata(points, values, (grid_x, grid_y), method='linear')\ngrid_z2 = griddata(points, values, (grid_x, grid_y), method='cubic')\n\nplt.subplot(221)\nplt.imshow(func(grid_x, grid_y).T, extent=(0,1,0,1), origin='lower', cmap='jet')\nplt.plot(points[:,0], points[:,1], 'k.', ms=1)\nplt.title('Original')\nplt.subplot(222)\nplt.imshow(grid_z0.T, extent=(0,1,0,1), origin='lower', cmap='jet')\nplt.title('Nearest')\nplt.subplot(223)\nplt.imshow(grid_z1.T, extent=(0,1,0,1), origin='lower', cmap='jet')\nplt.title('Linear')\nplt.subplot(224)\nplt.imshow(grid_z2.T, extent=(0,1,0,1), origin='lower', cmap='jet')\nplt.title('Cubic')\nplt.gcf().set_size_inches(6, 6)\nplt.show()",
"Splines\nWhich have the added benefit of giving smooth derivative information",
"from scipy.interpolate import splrep, splev\n\nx = np.arange(0, 2*np.pi+np.pi/4, 2*np.pi/8)\ny = np.sin(x)\ntck = splrep(x, y, s=0)\nxnew = np.arange(0, 2*np.pi, np.pi/50)\nynew = splev(xnew, tck, der=0)\nyder = splev(xnew, tck, der=1)\n\nplt.figure()\nplt.plot(x, y, 'x', xnew, ynew, xnew, np.sin(xnew), x, y, 'b')\nplt.legend(['Linear', 'Cubic Spline', 'True'])\nplt.axis([-0.05, 6.33, -1.05, 1.05])\nplt.title('Cubic-spline interpolation')\nplt.show()\n\nplt.figure()\nplt.plot(xnew, yder, xnew, np.cos(xnew),'--')\nplt.legend(['Cubic Spline', 'True'])\nplt.axis([-0.05, 6.33, -1.05, 1.05])\nplt.title('Derivative estimation from spline')\nplt.show()",
"2D splines are also available",
"from scipy.interpolate import bisplrep, bisplev\n\n# Gridded function (at low resolution ... doesn't need to be gridded data here)\n\nx, y = np.mgrid[-1:1:20j, -1:1:20j]\nz = (x+y) * np.exp(-6.0*(x*x+y*y))\n\nplt.figure()\nplt.pcolor(x, y, z, cmap='jet')\nplt.colorbar()\nplt.title(\"Sparsely sampled function.\")\nplt.show()\n\nxnew, ynew = np.mgrid[-1:1:70j, -1:1:70j]\n\n## Create the spline-representation object tck\n\ntck = bisplrep(x, y, z, s=0)\nznew = bisplev(xnew[:,0], ynew[0,:], tck)\n\nplt.figure()\nplt.pcolor(xnew, ynew, znew, cmap='jet')\nplt.colorbar()\nplt.title(\"Interpolated function.\")\nplt.show()",
"See also\n\nRadial basis function interpolation for scattered data in n dimensions (slow for large numbers of points): from scipy.interpolate import Rbf\nscipy.ndimage for fast interpolation operations on image-like arrays\nB-splines on regular arrays are found in the scipy.signal module"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
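The "See also" list above mentions radial basis function interpolation for scattered data; a brief sketch using `RBFInterpolator` (available in SciPy 1.7+; the sample function and points here are arbitrary):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# scattered 2-D sample sites of a smooth test function
pts = rng.uniform(0.0, 1.0, size=(200, 2))
vals = np.sin(2 * np.pi * pts[:, 0]) * pts[:, 1]

# thin-plate-spline RBF interpolant (the default kernel)
rbf = RBFInterpolator(pts, vals)

query = np.array([[0.25, 0.75], [0.75, 0.5]])
est = rbf(query)   # analytic values at these points: 0.75 and -0.5
```

Unlike `griddata`, the RBF interpolant is a smooth global function that can also be evaluated off any grid, at the cost of scaling poorly with the number of sample points.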
wanderer2/pymc3
|
docs/source/notebooks/GLM-logistic.ipynb
|
apache-2.0
|
[
"GLM: Logistic Regression\n\n\nThis is a reproduction, with a few slight alterations, of Bayesian Log Reg by J. Benjamin Cook\n\n\nAuthor: Peadar Coyle and J. Benjamin Cook\n\nHow likely am I to make more than $50,000 US Dollars?\nThis notebook also explores model selection techniques - I use DIC and WAIC to select the best model. \nThe convenience functions are all taken from Jon Sedar's work.\nThis example also includes some exploration of the features, so it serves as a good example of Exploratory Data Analysis and how that can guide the model creation/model selection process.",
"%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport pymc3 as pm\nimport matplotlib.pyplot as plt\nimport seaborn\nimport warnings\nwarnings.filterwarnings('ignore')\nfrom collections import OrderedDict\nfrom time import time\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom scipy.optimize import fmin_powell\nfrom scipy import integrate\n\nimport theano as thno\nimport theano.tensor as T \n\n\ndef run_models(df, upper_order=5):\n ''' \n Convenience function:\n Fit a range of pymc3 models of increasing polynomial complexity. \n Suggest limit to max order 5 since calculation time is exponential.\n '''\n \n models, traces = OrderedDict(), OrderedDict()\n\n for k in range(1,upper_order+1):\n\n nm = 'k{}'.format(k)\n fml = create_poly_modelspec(k)\n\n with pm.Model() as models[nm]:\n\n print('\\nRunning: {}'.format(nm))\n pm.glm.glm(fml, df, family=pm.glm.families.Normal())\n\n start_MAP = pm.find_MAP(fmin=fmin_powell, disp=False)\n traces[nm] = pm.sample(2000, start=start_MAP, step=pm.NUTS(), progressbar=True) \n \n return models, traces\n\ndef plot_traces(traces, retain=1000):\n ''' \n Convenience function:\n Plot traces with overlaid means and values\n '''\n \n ax = pm.traceplot(traces[-retain:], figsize=(12,len(traces.varnames)*1.5),\n lines={k: v['mean'] for k, v in pm.df_summary(traces[-retain:]).iterrows()})\n\n for i, mn in enumerate(pm.df_summary(traces[-retain:])['mean']):\n ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data'\n ,xytext=(5,10), textcoords='offset points', rotation=90\n ,va='bottom', fontsize='large', color='#AA0022')\n \ndef create_poly_modelspec(k=1):\n ''' \n Convenience function:\n Create a polynomial modelspec string for patsy\n '''\n return ('income ~ educ + hours + age ' + ' '.join(['+ np.power(age,{})'.format(j) \n for j in range(2,k+1)])).strip()",
"The Adult Data Set is commonly used to benchmark machine learning algorithms. The goal is to use demographic features, or variables, to predict whether an individual makes more than \\$50,000 per year. The data set is almost 20 years old, and therefore not perfect for determining the probability that I will make more than \\$50K, but it is a nice, simple dataset that can be used to showcase a few benefits of using Bayesian logistic regression over its frequentist counterpart.\nMy motivation for reproducing this piece of work was to learn how to use odds ratios in Bayesian regression.",
"data = pd.read_csv(\"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data\", header=None, names=['age', 'workclass', 'fnlwgt', \n 'education-categorical', 'educ', \n 'marital-status', 'occupation',\n 'relationship', 'race', 'sex', \n 'captial-gain', 'capital-loss', \n 'hours', 'native-country', \n 'income'])\n\ndata",
"Scrubbing and cleaning\nWe need to remove any null entries in Income. We also want to restrict this study to the United States.",
"data = data[~pd.isnull(data['income'])]\n\n\ndata = data[data['native-country']==\" United-States\"]\n\nincome = 1 * (data['income'] == \" >50K\")\nage2 = np.square(data['age'])\n\ndata = data[['age', 'educ', 'hours']]\ndata['age2'] = age2\ndata['income'] = income\n\nincome.value_counts()",
"Exploring the data\nLet us get a feel for the parameters. \n* We see that age is a tailed distribution. Certainly not Gaussian!\n* We don't see much of a correlation between many of the features, with the exception of Age and Age2. \n* Hours worked has some interesting behaviour. How would one describe this distribution?",
"\ng = seaborn.pairplot(data)\n\n# Compute the correlation matrix\ncorr = data.corr()\n\n# Generate a mask for the upper triangle\nmask = np.zeros_like(corr, dtype=np.bool)\nmask[np.triu_indices_from(mask)] = True\n\n# Set up the matplotlib figure\nf, ax = plt.subplots(figsize=(11, 9))\n\n# Generate a custom diverging colormap\ncmap = seaborn.diverging_palette(220, 10, as_cmap=True)\n\n# Draw the heatmap with the mask and correct aspect ratio\nseaborn.heatmap(corr, mask=mask, cmap=cmap, vmax=.3,\n linewidths=.5, cbar_kws={\"shrink\": .5}, ax=ax)",
"We see here not many strong correlations. The highest is 0.30 according to this plot. We see a weak correlation between hours and income \n(which is logical), and a slightly stronger correlation between education and income (which is the kind of question we are answering).\nThe model\nWe will use a simple model, which assumes that the probability of making more than $50K \nis a function of age, years of education and hours worked per week. We will use PyMC3 \nto do inference. \nIn Bayesian statistics, we treat everything as a random variable and we want to know the posterior probability distribution of the parameters\n(in this case the regression coefficients).\nThe posterior is given by Bayes' rule: $$p(\\theta | D) = \\frac{p(D|\\theta)p(\\theta)}{p(D)}$$\nBecause the denominator is a notoriously difficult integral, $p(D) = \\int p(D | \\theta) p(\\theta) d \\theta $ we would prefer to skip computing it. Fortunately, if we draw samples from the parameter space, with probability proportional to the height of the posterior at any given point, we end up with an empirical distribution that converges to the posterior as the number of samples approaches infinity. \nWhat this means in practice is that we only need to worry about the numerator. \nGetting back to logistic regression, we need to specify a prior and a likelihood in order to draw samples from the posterior. We could use sociological knowledge about the effects of age and education on income, but instead, let's use the default prior specification for GLM coefficients that PyMC3 gives us, which is $p(\\theta)=N(0,10^{12}I)$. This is a very vague prior that will let the data speak for themselves.\nThe likelihood is the product of n Bernoulli trials, $\\prod^{n}_{i=1} p_{i}^{y_{i}} (1 - p_{i})^{1-y_{i}}$,\nwhere $p_i = \\frac{1}{1 + e^{-z_i}}$, \n$z_{i} = \\beta_{0} + \\beta_{1}(age)_{i} + \\beta_2(age)^{2}_{i} + \\beta_{3}(educ)_{i} + \\beta_{4}(hours)_{i}$ and $y_{i} = 1$ if income is greater than 50K and $y_{i} = 0$ otherwise. \nWith the math out of the way we can get back to the data. Here I use PyMC3 to draw samples from the posterior. The sampling algorithm used is NUTS, which is a form of Hamiltonian Monte Carlo, in which parameters are tuned automatically. Notice that we get to borrow the syntax for specifying GLMs from R, which is very convenient! I use a convenience function from above to plot the trace information from the last 1000 samples.",
"with pm.Model() as logistic_model:\n pm.glm.glm('income ~ age + age2 + educ + hours', data, family=pm.glm.families.Binomial())\n trace_logistic_model = pm.sample(2000, pm.NUTS(), progressbar=True)\n\n\nplot_traces(trace_logistic_model, retain=1000)",
"Some results\nOne of the major benefits that makes Bayesian data analysis worth the extra computational effort in many circumstances is that we can be explicit about our uncertainty. Maximum likelihood returns a number, but how certain can we be that we found the right number? Instead, Bayesian inference returns a distribution over parameter values.\nI'll use seaborn to look at the distribution of some of these factors.",
"plt.figure(figsize=(9,7))\ntrace = trace_logistic_model[1000:]\nseaborn.jointplot(trace['age'], trace['educ'], kind=\"hex\", color=\"#4CB391\")\nplt.xlabel(\"beta_age\")\nplt.ylabel(\"beta_educ\")\nplt.show()",
"So how do age and education affect the probability of making more than \\$50K? To answer this question, we can show how the probability of making more than \\$50K changes with age for a few different education levels. Here, we assume that the number of hours worked per week is fixed at 50. PyMC3 gives us a convenient way to plot the posterior predictive distribution. We need to give the function a linear model and a set of points to evaluate. We will pass in three different linear models: one with educ == 12 (finished high school), one with educ == 16 (finished undergrad) and one with educ == 19 (three years of grad school).",
"# Linear model with hours == 50 and educ == 12\nlm = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + \n samples['age']*x + \n samples['age2']*np.square(x) + \n samples['educ']*12 + \n samples['hours']*50)))\n\n# Linear model with hours == 50 and educ == 16\nlm2 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + \n samples['age']*x + \n samples['age2']*np.square(x) + \n samples['educ']*16 + \n samples['hours']*50)))\n\n# Linear model with hours == 50 and educ == 19\nlm3 = lambda x, samples: 1 / (1 + np.exp(-(samples['Intercept'] + \n samples['age']*x + \n samples['age2']*np.square(x) + \n samples['educ']*19 + \n samples['hours']*50)))",
"Each curve shows how the probability of earning more than $ 50K$ changes with age. The red curve represents 19 years of education, the green curve represents 16 years of education and the blue curve represents 12 years of education. For all three education levels, the probability of making more than $50K increases with age until approximately age 60, when the probability begins to drop off. Notice that each curve is a little blurry. This is because we are actually plotting 100 different curves for each level of education. Each curve is a draw from our posterior distribution. Because the curves are somewhat translucent, we can interpret dark, narrow portions of a curve as places where we have low uncertainty and light, spread out portions of the curve as places where we have somewhat higher uncertainty about our coefficient values.",
"# Plot the posterior predictive distributions of P(income > $50K) vs. age\npm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm, samples=100, color=\"blue\", alpha=.15)\npm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm2, samples=100, color=\"green\", alpha=.15)\npm.glm.plot_posterior_predictive(trace, eval=np.linspace(25, 75, 1000), lm=lm3, samples=100, color=\"red\", alpha=.15)\nimport matplotlib.lines as mlines\nblue_line = mlines.Line2D(['lm'], [], color='b', label='High School Education')\ngreen_line = mlines.Line2D(['lm2'], [], color='g', label='Bachelors')\nred_line = mlines.Line2D(['lm3'], [], color='r', label='Grad School')\nplt.legend(handles=[blue_line, green_line, red_line], loc='lower right')\nplt.ylabel(\"P(Income > $50K)\")\nplt.xlabel(\"Age\")\nplt.show()\n\nb = trace['educ']\nplt.hist(np.exp(b), bins=20, normed=True)\nplt.xlabel(\"Odds Ratio\")\nplt.show()",
"Finally, we can find a credible interval (remember kids - credible intervals are Bayesian and confidence intervals are frequentist) for this quantity. This may be the best part about Bayesian statistics: we get to interpret credibility intervals the way we've always wanted to interpret them. We are 95% confident that the odds ratio lies within our interval!",
"lb, ub = np.percentile(b, 2.5), np.percentile(b, 97.5)\n\nprint(\"P(%.3f < O.R. < %.3f) = 0.95\"%(np.exp(3*lb),np.exp(3*ub)))",
"Model selection\nThe Deviance Information Criterion (DIC) is a fairly unsophisticated method for comparing the deviance of likelhood across the the sample traces of a model run. However, this simplicity apparently yields quite good results in a variety of cases. We'll run the model with a few changes to see what effect higher order terms have on this model.\nOne question that was immediately asked was what effect does age have on the model, and why should it be age^2 versus age? We'll use the DIC to answer this question.",
"models_lin, traces_lin = run_models(data, 4)\n\ndfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin'])\ndfdic.index.name = 'model'\n\nfor nm in dfdic.index:\n dfdic.loc[nm, 'lin'] = pm.stats.dic(traces_lin[nm],models_lin[nm])\n\n\ndfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='dic')\n\ng = seaborn.factorplot(x='model', y='dic', col='poly', hue='poly', data=dfdic, kind='bar', size=6)",
"There isn't a lot of difference between these models in terms of DIC. So our choice is fine in the model above, and there isn't much to be gained for going up to age^3 for example.\nNext we look at WAIC. Which is another model selection technique.",
"dfdic = pd.DataFrame(index=['k1','k2','k3','k4'], columns=['lin'])\ndfdic.index.name = 'model'\n\nfor nm in dfdic.index:\n dfdic.loc[nm, 'lin'] = pm.stats.waic(traces_lin[nm],models_lin[nm])\n\n\ndfdic = pd.melt(dfdic.reset_index(), id_vars=['model'], var_name='poly', value_name='waic')\n\ng = seaborn.factorplot(x='model', y='waic', col='poly', hue='poly', data=dfdic, kind='bar', size=6)",
"The WAIC confirms our decision to use age^2."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hglanz/phys202-2015-work
|
assignments/assignment06/ProjectEuler17.ipynb
|
mit
|
[
"Project Euler: Problem 17\nhttps://projecteuler.net/problem=17\nIf the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.\nIf all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?\nNOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of \"and\" when writing out numbers is in compliance with British usage.\nFirst write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above",
"import math as math\n\ndef ones_to_words(n):\n onesdict = {0: \"\",\n 1: \"one\",\n 2: \"two\",\n 3: \"three\",\n 4: \"four\",\n 5: \"five\",\n 6: \"six\",\n 7: \"seven\",\n 8: \"eight\",\n 9: \"nine\",\n }\n return onesdict[n]\n\ndef teens_to_words(n):\n teendict = {10: \"ten\",\n 11: \"eleven\",\n 12: \"twelve\",\n 13: \"thirteen\",\n 14: \"fourteen\",\n 15: \"fifteen\",\n 16: \"sixteen\",\n 17: \"seventeen\",\n 18: \"eighteen\",\n 19: \"nineteen\",\n }\n return teendict[n]\n\ndef tens_to_words(n):\n tensdict = {2: \"twenty\",\n 3: \"thirty\",\n 4: \"forty\",\n 5: \"fifty\",\n 6: \"sixty\",\n 7: \"seventy\",\n 8: \"eighty\",\n 9: \"ninety\",\n }\n return tensdict[n]\n\ndef number_to_words(n):\n \"\"\"Given a number n between 1-1000 inclusive return a list of words for the number.\"\"\"\n cent = n // 100\n tens = int(n % 100) // 10\n ones = int(n % 10)\n \n words = \"\"\n \n if cent > 0:\n # hundreds\n if cent == 10:\n words += \"one thousand\"\n else:\n words += (ones_to_words(cent) + \" hundred \")\n \n # tens and ones\n if tens == 0:\n if ones == 0:\n return words\n else:\n words += \"and \" + ones_to_words(ones)\n elif tens == 1:\n words += \"and \" + teens_to_words(10 * tens + ones)\n else:\n words += \"and \" + tens_to_words(tens) + \"-\" + ones_to_words(ones)\n else:\n # tens and ones\n if tens == 0:\n words += ones_to_words(ones)\n elif tens == 1:\n words += teens_to_words(10 * tens + ones)\n else:\n words += tens_to_words(tens) + \"-\" + ones_to_words(ones)\n \n return words\n #raise NotImplementedError()",
"Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.",
"assert number_to_words(4) == \"four\"\nassert number_to_words(58) == \"fifty-eight\"\nassert number_to_words(409) == \"four hundred and nine\"\nassert number_to_words(1000) == \"one thousand\"\nassert number_to_words(712) == \"seven hundred and twelve\"\n#raise NotImplementedError()\n\nassert True # use this for grading the number_to_words tests.",
"Now define a count_letters(n) that returns the number of letters used to write out the words for all of the the numbers 1 to n inclusive.",
"def count_letters(n):\n \"\"\"Count the number of letters used to write out the words for 1-n inclusive.\"\"\"\n x = number_to_words(n)\n x = x.replace(\"-\", \" \")\n return sum([len(y) for y in x.split(\" \")])\n #raise NotImplementedError()",
"Now write a set of assert tests for your count_letters function that verifies that it is working as expected.",
"assert count_letters(4) == 4\nassert count_letters(58) == 10\nassert count_letters(409) == 18\nassert count_letters(1000) == 11\nassert count_letters(712) == 21\n#raise NotImplementedError()\n\nassert True # use this for grading the count_letters tests.",
"Finally used your count_letters function to solve the original question.",
"total_letters = 0\nfor i in range(1, 1001):\n total_letters += count_letters(i)\n \nprint(total_letters)\n \n#raise NotImplementedError()\n\nassert True # use this for gradig the answer to the original question."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gtrichards/QuasarSelection
|
SpIESHighzQuasars2.ipynb
|
mit
|
[
"Final SpIES High-z Quasar Selection\nNotebook performing selection of $3.5<z<5$ quasars from SDSS+SpIES data.\nLargely the same as SpIESHighzQuasars notebook except using the algoirthm(s) from\nSpIESHighzCandidateSelection2. See notes below for creating a version of the\ntest set that includes i-band mag and extinctu. (This wasn't easy.)\nFirst load the training data, then instantiate and train the algorithm; see https://github.com/gtrichards/QuasarSelection/blob/master/SpIESHighzCandidateSelection2.ipynb",
"%matplotlib inline\nfrom astropy.table import Table\nimport numpy as np\nimport matplotlib.pyplot as plt\ndata = Table.read('GTR-ADM-QSO-ir-testhighz_findbw_lup_2016_starclean.fits')\n\n# X is in the format need for all of the sklearn tools, it just has the colors\n# X = np.vstack([ data['ug'], data['gr'], data['ri'], data['iz'], data['zs1'], data['s1s2'], data['imag'], data['extinctu']]).T\n# Don't use imag and extinctu since they don't contribute much to the accuracy and they add a lot to the data volume.\nX = np.vstack([ data['ug'], data['gr'], data['ri'], data['iz'], data['zs1'], data['s1s2'] ]).T\ny = np.array(data['labels'])\n\n# For algorithms that need scaled data:\nfrom sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nscaler.fit(X) # Use the full training set now\nXStrain = scaler.transform(X)\n\n# SVM\nfrom sklearn.svm import SVC\nsvm = SVC(random_state=42)\nsvm.fit(XStrain,y)\n\n# Bagging\nfrom sklearn.ensemble import BaggingClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nbag = BaggingClassifier(KNeighborsClassifier(n_neighbors=7), max_samples=0.5, max_features=1.0, random_state=42)\nbag.fit(XStrain, y)",
"Second, load the test data\nTest Data\nTest set data set was made as follows (see 18 April 2016 README entry):\nmaketest_2016.py\n\nOutput is:\nclassifiers_out = open('GTR-ADM-QSO-ir_classifiers_good_test_2016.dat','w') \nothers_out= open('GTR-ADM-QSO-ir_others_good_test_2016.dat','w')\nczr_out = open('GTR-ADM-QSO-ir_photoz_in7_good_test_2016.dat','w')\n\nReally need the first two files combined (so that we have both RA/Dec and colors in one place).\nBut couldn't merge them with TOPCAT or STILTS. So had to break them into 3 pieces (with TOPCAT),\nthen used combine_test_files_STILTS.py to merge them together (just changing the input/output file names by hand). \nActually ran this on dirac so that I'd have more memory than on quasar. Copied the output files back to quasar and merged them together with TOPCAT.\nSo<br>\nGTR-ADM-QSO-ir_others_good_test_2016a.dat + GTR-ADM-QSO-ir_classifiers_good_test_2016a.dat<br>\ngives<br>\nGTR-ADM-QSO-ir_good_test_2016a.dat<br>\n(and so on for \"b\" and \"c\").\nThen<br>\nGTR-ADM-QSO-ir_good_test_2016a.dat + GTR-ADM-QSO-ir_good_test_2016b.dat + GTR-ADM-QSO-ir_good_test_2016c.dat<br>\ngives<br>\nGTR-ADM-QSO-ir_good_test_2016.dat<br>\nand similarly for the fits output file.\n\nSince I wanted to use the imag and extinctu, then I also had to make a version of the test file with combine_test_files_STILTSn.py (on quasar). This was fairly involved because of memory issues. The new output file is GTR-ADM-QSO-ir_good_test_2016n.dat. In the end, I ended up not using that and this is more of an exploration of SVM and bagging as alternatives to RF.\nNow read in the test file and convert it to an appropriate array format for sklearn.",
"#data2 = Table.read('GTR-ADM-QSO-ir_good_test_2016n.fits')\ndata2 = Table.read('GTR-ADM-QSO-ir_good_test_2016.fits')\n\nprint data2.keys()",
"I had some problems with GTR-ADM-QSO-ir_good_test_2016n.fits because it thought that there were blank entries among the attributes. There actually weren't (as far as I could tell), but I found that I could use filled to fix the problem. However, that just caused problems later!",
"# Not sure why I need to do this because there don't appear to be any unfilled columns\n# but the code segment below won't run without it.\n# Only need to do for the file with imag and extinctu\n# data2 = data2.filled()",
"Taking too long to do all the objects, so just do Stripe 82, which is all that we really care about anyway.",
"ramask = ( ( (data2['ra']>=300.0) & (data2['ra']<=360.0) ) | ( (data2['ra']>=0.0) & (data2['ra']<=60.0) ) )\ndecmask = ((data2['dec']>=-1.5) & (data2['dec']<=1.5))\n\ndataS82 = data2[ramask & decmask]\n\nprint len(dataS82)\n\n#Xtest = np.vstack([dataS82['ug'], dataS82['gr'], dataS82['ri'], dataS82['iz'], dataS82['zs1'], dataS82[]'s1s2'], dataS82['i'], data2['extinctu']]).T\nXtest = np.vstack([dataS82['ug'], dataS82['gr'], dataS82['ri'], dataS82['iz'], dataS82['zs1'], dataS82['s1s2'] ]).T\n\nXStest = scaler.transform(Xtest)",
"Quasar Candidates\nFinally, do the classification and output the test file, including the predicted labels.",
"from dask import compute, delayed\n\ndef processSVM(Xin):\n return svm.predict(Xin)\n\n# Create dask objects\n# Reshape is necessary because the format of x as drawm from Xtest \n# is not what sklearn wants.\ndobjsSVM = [delayed(processSVM)(x.reshape(1,-1)) for x in XStest]\n\nimport dask.threaded\nypredSVM = compute(*dobjsSVM, get=dask.threaded.get)\n\nypredSVM = np.array(ypredSVM).reshape(1,-1)[0]\n\nfrom dask import compute, delayed\n\ndef processBAG(Xin):\n return bag.predict(Xin)\n\n# Create dask objects\n# Reshape is necessary because the format of x as drawm from Xtest \n# is not what sklearn wants.\ndobjsBAG = [delayed(processBAG)(x.reshape(1,-1)) for x in XStest]\n\nimport dask.threaded\nypredBAG = compute(*dobjsBAG, get=dask.threaded.get)\n\nypredBAG = np.array(ypredBAG).reshape(1,-1)[0]",
"Now write results to output file. Didn't do bagging b/c takes too long. See SpIESHighzQuasarsS82all.py which I ran on dirac.",
"dataS82['ypredSVM'] = ypredSVM\ndataS82['ypredBAG'] = ypredBAG\n#dataS82.write('GTR-ADM-QSO-ir_good_test_2016_Stripe82svm.fits', format='fits')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AllenDowney/ThinkBayes2
|
soln/chap08.ipynb
|
mit
|
[
"Poisson Processes\nThink Bayes, Second Edition\nCopyright 2020 Allen B. Downey\nLicense: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)",
"# If we're running on Colab, install empiricaldist\n# https://pypi.org/project/empiricaldist/\n\nimport sys\nIN_COLAB = 'google.colab' in sys.modules\n\nif IN_COLAB:\n !pip install empiricaldist\n\n# Get utils.py\n\nfrom os.path import basename, exists\n\ndef download(url):\n filename = basename(url)\n if not exists(filename):\n from urllib.request import urlretrieve\n local, _ = urlretrieve(url, filename)\n print('Downloaded ' + local)\n \ndownload('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')\n\nfrom utils import set_pyplot_params\nset_pyplot_params()",
"This chapter introduces the Poisson process, which is a model used to describe events that occur at random intervals.\nAs an example of a Poisson process, we'll model goal-scoring in soccer, which is American English for the game everyone else calls \"football\".\nWe'll use goals scored in a game to estimate the parameter of a Poisson process; then we'll use the posterior distribution to make predictions.\nAnd we'll solve The World Cup Problem.\nThe World Cup Problem\nIn the 2018 FIFA World Cup final, France defeated Croatia 4 goals to 2. Based on this outcome:\n\n\nHow confident should we be that France is the better team?\n\n\nIf the same teams played again, what is the chance France would win again?\n\n\nTo answer these questions, we have to make some modeling decisions.\n\n\nFirst, I'll assume that for any team against another team there is some unknown goal-scoring rate, measured in goals per game, which I'll denote with the Python variable lam or the Greek letter $\\lambda$, pronounced \"lambda\".\n\n\nSecond, I'll assume that a goal is equally likely during any minute of a game. 
So, in a 90 minute game, the probability of scoring during any minute is $\\lambda/90$.\n\n\nThird, I'll assume that a team never scores twice during the same minute.\n\n\nOf course, none of these assumptions is completely true in the real world, but I think they are reasonable simplifications.\nAs George Box said, \"All models are wrong; some are useful.\"\n(https://en.wikipedia.org/wiki/All_models_are_wrong).\nIn this case, the model is useful because if these assumptions are \ntrue, at least roughly, the number of goals scored in a game follows a Poisson distribution, at least roughly.\nThe Poisson Distribution\nIf the number of goals scored in a game follows a Poisson distribution with a goal-scoring rate, $\\lambda$, the probability of scoring $k$ goals is\n$$\\lambda^k \\exp(-\\lambda) ~/~ k!$$\nfor any non-negative value of $k$.\nSciPy provides a poisson object that represents a Poisson distribution.\nWe can create one with $\\lambda=1.4$ like this:",
"from scipy.stats import poisson\n\nlam = 1.4\ndist = poisson(lam)\ntype(dist)",
"The result is an object that represents a \"frozen\" random variable and provides pmf, which evaluates the probability mass function of the Poisson distribution.",
"k = 4\ndist.pmf(k)",
"This result implies that if the average goal-scoring rate is 1.4 goals per game, the probability of scoring 4 goals in a game is about 4%.\nWe'll use the following function to make a Pmf that represents a Poisson distribution.",
"from empiricaldist import Pmf\n\ndef make_poisson_pmf(lam, qs):\n \"\"\"Make a Pmf of a Poisson distribution.\"\"\"\n ps = poisson(lam).pmf(qs)\n pmf = Pmf(ps, qs)\n pmf.normalize()\n return pmf",
"make_poisson_pmf takes as parameters the goal-scoring rate, lam, and an array of quantities, qs, where it should evaluate the Poisson PMF. It returns a Pmf object.\nFor example, here's the distribution of goals scored for lam=1.4, computed for values of k from 0 to 9.",
"import numpy as np\n\nlam = 1.4\ngoals = np.arange(10)\npmf_goals = make_poisson_pmf(lam, goals)",
"And here's what it looks like.",
"from utils import decorate\n\ndef decorate_goals(title=''):\n decorate(xlabel='Number of goals',\n ylabel='PMF',\n title=title)\n\npmf_goals.bar(label=r'Poisson distribution with $\\lambda=1.4$')\n\ndecorate_goals('Distribution of goals scored')",
"The most likely outcomes are 0, 1, and 2; higher values are possible but increasingly unlikely.\nValues above 7 are negligible.\nThis distribution shows that if we know the goal scoring rate, we can predict the number of goals.\nNow let's turn it around: given a number of goals, what can we say about the goal-scoring rate?\nTo answer that, we need to think about the prior distribution of lam, which represents the range of possible values and their probabilities before we see the score.\nThe Gamma Distribution\nIf you have ever seen a soccer game, you have some information about lam. In most games, teams score a few goals each. In rare cases, a team might score more than 5 goals, but they almost never score more than 10.\nUsing data from previous World Cups, I estimate that each team scores about 1.4 goals per game, on average. So I'll set the mean of lam to be 1.4.\nFor a good team against a bad one, we expect lam to be higher; for a bad team against a good one, we expect it to be lower.\nTo model the distribution of goal-scoring rates, I'll use a gamma distribution, which I chose because:\n\n\nThe goal scoring rate is continuous and non-negative, and the gamma distribution is appropriate for this kind of quantity.\n\n\nThe gamma distribution has only one parameter, alpha, which is the mean. So it's easy to construct a gamma distribution with the mean we want.\n\n\nAs we'll see, the shape of the gamma distribution is a reasonable choice, given what we know about soccer.\n\n\nAnd there's one more reason, which I will reveal in <<_ConjugatePriors>>.\nSciPy provides gamma, which creates an object that represents a gamma distribution.\nAnd the gamma object provides provides pdf, which evaluates the probability density function (PDF) of the gamma distribution.\nHere's how we use it.",
"from scipy.stats import gamma\n\nalpha = 1.4\nqs = np.linspace(0, 10, 101)\nps = gamma(alpha).pdf(qs)",
"The parameter, alpha, is the mean of the distribution.\nThe qs are possible values of lam between 0 and 10.\nThe ps are probability densities, which we can think of as unnormalized probabilities.\nTo normalize them, we can put them in a Pmf and call normalize:",
"from empiricaldist import Pmf\n\nprior = Pmf(ps, qs)\nprior.normalize()",
"The result is a discrete approximation of a gamma distribution.\nHere's what it looks like.",
"def decorate_rate(title=''):\n decorate(xlabel='Goal scoring rate (lam)',\n ylabel='PMF',\n title=title)\n\nprior.plot(ls='--', label='prior', color='C5')\ndecorate_rate(r'Prior distribution of $\\lambda$')",
"This distribution represents our prior knowledge about goal scoring: lam is usually less than 2, occasionally as high as 6, and seldom higher than that. \nAnd we can confirm that the mean is about 1.4.",
"prior.mean()",
"As usual, reasonable people could disagree about the details of the prior, but this is good enough to get started. Let's do an update.\nThe Update\nSuppose you are given the goal-scoring rate, $\\lambda$, and asked to compute the probability of scoring a number of goals, $k$. That is precisely the question we answered by computing the Poisson PMF.\nFor example, if $\\lambda$ is 1.4, the probability of scoring 4 goals in a game is:",
"lam = 1.4\nk = 4\npoisson(lam).pmf(4)",
"Now suppose we are have an array of possible values for $\\lambda$; we can compute the likelihood of the data for each hypothetical value of lam, like this:",
"lams = prior.qs\nk = 4\nlikelihood = poisson(lams).pmf(k)",
"And that's all we need to do the update.\nTo get the posterior distribution, we multiply the prior by the likelihoods we just computed and normalize the result.\nThe following function encapsulates these steps.",
"def update_poisson(pmf, data):\n \"\"\"Update Pmf with a Poisson likelihood.\"\"\"\n k = data\n lams = pmf.qs\n likelihood = poisson(lams).pmf(k)\n pmf *= likelihood\n pmf.normalize()",
"The first parameter is the prior; the second is the number of goals.\nIn the example, France scored 4 goals, so I'll make a copy of the prior and update it with the data.",
"france = prior.copy()\nupdate_poisson(france, 4)",
"Here's what the posterior distribution looks like, along with the prior.",
"prior.plot(ls='--', label='prior', color='C5')\nfrance.plot(label='France posterior', color='C3')\n\ndecorate_rate('Posterior distribution for France')",
"The data, k=4, makes us think higher values of lam are more likely and lower values are less likely. So the posterior distribution is shifted to the right.\nLet's do the same for Croatia:",
"croatia = prior.copy()\nupdate_poisson(croatia, 2)",
"And here are the results.",
"prior.plot(ls='--', label='prior', color='C5')\ncroatia.plot(label='Croatia posterior', color='C0')\n\ndecorate_rate('Posterior distribution for Croatia')",
"Here are the posterior means for these distributions.",
"print(croatia.mean(), france.mean())",
"The mean of the prior distribution is about 1.4.\nAfter Croatia scores 2 goals, their posterior mean is 1.7, which is near the midpoint of the prior and the data.\nLikewise after France scores 4 goals, their posterior mean is 2.7.\nThese results are typical of a Bayesian update: the location of the posterior distribution is a compromise between the prior and the data.\nProbability of Superiority\nNow that we have a posterior distribution for each team, we can answer the first question: How confident should we be that France is the better team?\nIn the model, \"better\" means having a higher goal-scoring rate against the opponent. We can use the posterior distributions to compute the probability that a random value drawn from France's distribution exceeds a value drawn from Croatia's.\nOne way to do that is to enumerate all pairs of values from the two distributions, adding up the total probability that one value exceeds the other.",
"def prob_gt(pmf1, pmf2):\n \"\"\"Compute the probability of superiority.\"\"\"\n total = 0\n for q1, p1 in pmf1.items():\n for q2, p2 in pmf2.items():\n if q1 > q2:\n total += p1 * p2\n return total",
"This is similar to the method we use in <<_Addends>> to compute the distribution of a sum.\nHere's how we use it:",
"prob_gt(france, croatia)",
"Pmf provides a function that does the same thing.",
"Pmf.prob_gt(france, croatia)",
"The results are slightly different because Pmf.prob_gt uses array operators rather than for loops.\nEither way, the result is close to 75%. So, on the basis of one game, we have moderate confidence that France is actually the better team.\nOf course, we should remember that this result is based on the assumption that the goal-scoring rate is constant.\nIn reality, if a team is down by one goal, they might play more aggressively toward the end of the game, making them more likely to score, but also more likely to give up an additional goal.\nAs always, the results are only as good as the model.\nPredicting the Rematch\nNow we can take on the second question: If the same teams played again, what is the chance Croatia would win?\nTo answer this question, we'll generate the \"posterior predictive distribution\", which is the number of goals we expect a team to score.\nIf we knew the goal scoring rate, lam, the distribution of goals would be a Poisson distribution with parameter lam.\nSince we don't know lam, the distribution of goals is a mixture of a Poisson distributions with different values of lam.\nFirst I'll generate a sequence of Pmf objects, one for each value of lam.",
"pmf_seq = [make_poisson_pmf(lam, goals) \n for lam in prior.qs]",
"The following figure shows what these distributions look like for a few values of lam.",
"import matplotlib.pyplot as plt\n\nfor i, index in enumerate([10, 20, 30, 40]):\n plt.subplot(2, 2, i+1)\n lam = prior.qs[index]\n pmf = pmf_seq[index]\n pmf.bar(label=f'$\\lambda$ = {lam}', color='C3')\n decorate_goals()",
"The predictive distribution is a mixture of these Pmf objects, weighted with the posterior probabilities.\nWe can use make_mixture from <<_GeneralMixtures>> to compute this mixture.",
"from utils import make_mixture\n\npred_france = make_mixture(france, pmf_seq)",
"Here's the predictive distribution for the number of goals France would score in a rematch.",
"pred_france.bar(color='C3', label='France')\ndecorate_goals('Posterior predictive distribution')",
"This distribution represents two sources of uncertainty: we don't know the actual value of lam, and even if we did, we would not know the number of goals in the next game.\nHere's the predictive distribution for Croatia.",
"pred_croatia = make_mixture(croatia, pmf_seq)\n\npred_croatia.bar(color='C0', label='Croatia')\ndecorate_goals('Posterior predictive distribution')",
"We can use these distributions to compute the probability that France wins, loses, or ties the rematch.",
"win = Pmf.prob_gt(pred_france, pred_croatia)\nwin\n\nlose = Pmf.prob_lt(pred_france, pred_croatia)\nlose\n\ntie = Pmf.prob_eq(pred_france, pred_croatia)\ntie",
"Assuming that France wins half of the ties, their chance of winning the rematch is about 65%.",
"win + tie/2",
"This is a bit lower than their probability of superiority, which is 75%. And that makes sense, because we are less certain about the outcome of a single game than we are about the goal-scoring rates.\nEven if France is the better team, they might lose the game.\nThe Exponential Distribution\nAs an exercise at the end of this notebook, you'll have a chance to work on the following variation on the World Cup Problem:\n\nIn the 2014 FIFA World Cup, Germany played Brazil in a semifinal match. Germany scored after 11 minutes and again at the 23 minute mark. At that point in the match, how many goals would you expect Germany to score after 90 minutes? What was the probability that they would score 5 more goals (as, in fact, they did)?\n\nIn this version, notice that the data is not the number of goals in a fixed period of time, but the time between goals.\nTo compute the likelihood of data like this, we can take advantage of the theory of Poisson processes again. If each team has a constant goal-scoring rate, we expect the time between goals to follow an exponential distribution.\nIf the goal-scoring rate is $\\lambda$, the probability of seeing an interval between goals of $t$ is proportional to the PDF of the exponential distribution:\n$$\\lambda \\exp(-\\lambda t)$$\nBecause $t$ is a continuous quantity, the value of this expression is not a probability; it is a probability density. However, it is proportional to the probability of the data, so we can use it as a likelihood in a Bayesian update.\nSciPy provides expon, which creates an object that represents an exponential distribution.\nHowever, it does not take lam as a parameter in the way you might expect, which makes it awkward to work with.\nSince the PDF of the exponential distribution is so easy to evaluate, I'll use my own function.",
"def expo_pdf(t, lam):\n \"\"\"Compute the PDF of the exponential distribution.\"\"\"\n return lam * np.exp(-lam * t)",
"To see what the exponential distribution looks like, let's assume again that lam is 1.4; we can compute the distribution of $t$ like this:",
"lam = 1.4\nqs = np.linspace(0, 4, 101)\nps = expo_pdf(qs, lam)\npmf_time = Pmf(ps, qs)\npmf_time.normalize()",
"And here's what it looks like:",
"def decorate_time(title=''):\n decorate(xlabel='Time between goals (games)',\n ylabel='PMF',\n title=title)\n\npmf_time.plot(label='exponential with $\\lambda$ = 1.4')\n\ndecorate_time('Distribution of time between goals')",
"It is counterintuitive, but true, that the most likely time to score a goal is immediately. After that, the probability of each successive interval is a little lower.\nWith a goal-scoring rate of 1.4, it is possible that a team will take more than one game to score a goal, but it is unlikely that they will take more than two games.\nSummary\nThis chapter introduces three new distributions, so it can be hard to keep them straight.\nLet's review:\n\n\nIf a system satisfies the assumptions of a Poisson model, the number of events in a period of time follows a Poisson distribution, which is a discrete distribution with integer quantities from 0 to infinity. In practice, we can usually ignore low-probability quantities above a finite limit.\n\n\nAlso under the Poisson model, the interval between events follows an exponential distribution, which is a continuous distribution with quantities from 0 to infinity. Because it is continuous, it is described by a probability density function (PDF) rather than a probability mass function (PMF). But when we use an exponential distribution to compute the likelihood of the data, we can treat densities as unnormalized probabilities.\n\n\nThe Poisson and exponential distributions are parameterized by an event rate, denoted $\\lambda$ or lam.\n\n\nFor the prior distribution of $\\lambda$, I used a gamma distribution, which is a continuous distribution with quantities from 0 to infinity, but I approximated it with a discrete, bounded PMF. 
The gamma distribution has one parameter, denoted $\\alpha$ or alpha, which is also its mean.\n\n\nI chose the gamma distribution because the shape is consistent with our background knowledge about goal-scoring rates.\nThere are other distributions we could have used; however, we will see in <<_ConjugatePriors>> that the gamma distribution can be a particularly good choice.\nBut we have a few things to do before we get there, starting with these exercises.\nExercises\nExercise: Let's finish the exercise we started:\n\nIn the 2014 FIFA World Cup, Germany played Brazil in a semifinal match. Germany scored after 11 minutes and again at the 23 minute mark. At that point in the match, how many goals would you expect Germany to score after 90 minutes? What was the probability that they would score 5 more goals (as, in fact, they did)?\n\nHere are the steps I recommend:\n\n\nStarting with the same gamma prior we used in the previous problem, compute the likelihood of scoring a goal after 11 minutes for each possible value of lam. Don't forget to convert all times into games rather than minutes.\n\n\nCompute the posterior distribution of lam for Germany after the first goal.\n\n\nCompute the likelihood of scoring another goal after 12 more minutes and do another update. Plot the prior, posterior after one goal, and posterior after two goals.\n\n\nCompute the posterior predictive distribution of goals Germany might score during the remaining time in the game, 90-23 minutes. Note: You will have to think about how to generate predicted goals for a fraction of a game.\n\n\nCompute the probability of scoring 5 or more goals during the remaining time.",
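Before turning to the exercise, the relationships summarized above can be checked by simulation (a quick sketch, not part of the book): drawing exponential gaps between events with rate lam and counting events per unit interval should give Poisson-distributed counts with mean lam.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 1.4          # goal-scoring rate, goals per game
n_games = 100_000

# Exponential gaps with rate lam; cumulative sums give the event times
gaps = rng.exponential(scale=1/lam, size=(n_games, 20))
times = gaps.cumsum(axis=1)

# Number of events that land inside one game's worth of time
counts = (times < 1).sum(axis=1)

print(counts.mean())  # close to lam = 1.4 (Poisson mean)
print(gaps.mean())    # close to 1/lam, about 0.71 games between goals
```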
"# Solution\n\n# Here's a function that updates the distribution of lam\n# with the given time between goals\n\ndef update_expo(pmf, data):\n \"\"\"Update based on an observed interval\n \n pmf: prior PMF\n data: time between goals in minutes\n \"\"\"\n t = data / 90\n lams = pmf.qs\n likelihood = expo_pdf(t, lams)\n pmf *= likelihood\n pmf.normalize()\n\n# Solution\n\n# Here are the updates for the first and second goals\n\ngermany = prior.copy()\nupdate_expo(germany, 11)\n\ngermany2 = germany.copy()\nupdate_expo(germany2, 12)\n\n# Solution\n\n# Here are the mean values of `lam` after each update\n\ngermany.mean(), germany2.mean()\n\n# Solution\n\n# Here's what the posterior distributions look like\n\nprior.plot(ls='--', label='prior', color='C5')\ngermany.plot(color='C3', label='Posterior after 1 goal')\ngermany2.plot(color='C16', label='Posterior after 2 goals')\n\ndecorate_rate('Prior and posterior distributions')\n\n# Solution\n\n# Here's the predictive distribution for each possible value of `lam`\n\nt = (90-23) / 90\n\npmf_seq = [make_poisson_pmf(lam*t, goals) \n for lam in germany2.qs]\n\n# Solution\n\n# And here's the mixture of predictive distributions,\n# weighted by the probabilities in the posterior distribution.\n\npred_germany2 = make_mixture(germany2, pmf_seq)\n\n# Solution\n\n# Here's what the predictive distribution looks like\n\npred_germany2.bar(color='C1', label='germany')\ndecorate_goals('Posterior predictive distribution')\n\n# Solution\n\n# Here's the probability of scoring exactly 5 more goals\n\npred_germany2[5]\n\n# Solution\n\n# And the probability of 5 or more\n\npred_germany2.prob_ge(5)",
"Exercise: Returning to the first version of the World Cup Problem. Suppose France and Croatia play a rematch. What is the probability that France scores first?\nHint: Compute the posterior predictive distribution for the time until the first goal by making a mixture of exponential distributions. You can use the following function to make a PMF that approximates an exponential distribution.",
"def make_expo_pmf(lam, high):\n \"\"\"Make a PMF of an exponential distribution.\n \n lam: event rate\n high: upper bound on the interval `t`\n \n returns: Pmf of the interval between events\n \"\"\"\n qs = np.linspace(0, high, 101)\n ps = expo_pdf(qs, lam)\n pmf = Pmf(ps, qs)\n pmf.normalize()\n return pmf\n\n# Solution\n\n# Here are the predictive distributions for the \n# time until the first goal\n\npmf_seq = [make_expo_pmf(lam, high=4) for lam in prior.qs]\n\n# Solution\n\n# And here are the mixtures based on the two posterior distributions\n\npred_france = make_mixture(france, pmf_seq)\npred_croatia = make_mixture(croatia, pmf_seq)\n\n# Solution\n\n# Here's what the posterior predictive distributions look like\n\npred_france.plot(label='France', color='C3')\npred_croatia.plot(label='Croatia', color='C0')\n\ndecorate_time('Posterior predictive distribution')\n\n# Solution\n\n# And here's the probability France scores first\n\nPmf.prob_lt(pred_france, pred_croatia)",
"Exercise: In the 2010-11 National Hockey League (NHL) Finals, my beloved Boston\nBruins played a best-of-seven championship series against the despised\nVancouver Canucks. Boston lost the first two games 0-1 and 2-3, then\nwon the next two games 8-1 and 4-0. At this point in the series, what\nis the probability that Boston will win the next game, and what is\ntheir probability of winning the championship?\nTo choose a prior distribution, I got some statistics from\nhttp://www.nhl.com, specifically the average goals per game\nfor each team in the 2010-11 season. The distribution is well modeled by a gamma distribution with mean 2.8.\nIn what ways do you think the outcome of these games might violate the assumptions of the Poisson model? How would these violations affect your predictions?",
"# Solution\n\n# When a team is winning or losing by an insurmountable margin,\n# they might remove their best players from the game, which\n# would affect their goal-scoring rate, violating the assumption\n# that the goal scoring rate is constant.\n\n# In this example, Boston won the third game 8-1, but scoring\n# eight goals in a game might not reflect their true long-term\n# goal-scoring rate.\n\n# As a result, the analysis below might overestimate the chance\n# that Boston wins.\n\n# As it turned out, they did not.\n\n# Solution\n\nfrom scipy.stats import gamma\n\nalpha = 2.8\nqs = np.linspace(0, 15, 101)\nps = gamma.pdf(qs, alpha)\nprior_hockey = Pmf(ps, qs)\nprior_hockey.normalize()\n\n# Solution\n\nprior_hockey.plot(ls='--', color='C5')\ndecorate_rate('Prior distribution for hockey')\nprior_hockey.mean()\n\n# Solution\n\nbruins = prior_hockey.copy()\nfor data in [0, 2, 8, 4]:\n update_poisson(bruins, data)\n \nbruins.mean()\n\n# Solution\n\ncanucks = prior_hockey.copy()\nfor data in [1, 3, 1, 0]:\n update_poisson(canucks, data)\n \ncanucks.mean()\n\n# Solution\n\ncanucks.plot(label='Canucks')\nbruins.plot(label='Bruins')\n\ndecorate_rate('Posterior distributions')\n\n# Solution\n\ngoals = np.arange(15)\npmf_seq = [make_poisson_pmf(lam, goals) for lam in bruins.qs]\n\n# Solution\n\npred_bruins = make_mixture(bruins, pmf_seq)\n\npred_bruins.bar(label='Bruins', color='C1')\ndecorate_goals('Posterior predictive distribution')\n\n# Solution\n\npred_canucks = make_mixture(canucks, pmf_seq)\n\npred_canucks.bar(label='Canucks')\ndecorate_goals('Posterior predictive distribution')\n\n# Solution\n\nwin = Pmf.prob_gt(pred_bruins, pred_canucks)\nlose = Pmf.prob_lt(pred_bruins, pred_canucks)\ntie = Pmf.prob_eq(pred_bruins, pred_canucks)\n\nwin, lose, tie\n\n# Solution\n\n# Assuming the Bruins win half of the ties,\n# their chance of winning the next game is...\n\np = win + tie/2\np\n\n# Solution\n\n# Their chance of winning the series is their\n# chance of winning k=2 or 
k=3 of the remaining\n# n=3 games.\n\nfrom scipy.stats import binom\n\nn = 3\na = binom.pmf([2,3], n, p)\na.sum()"
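The final step above — converting a per-game win probability into a series probability — can be reproduced with the standard library alone (a sketch; `math.comb` stands in for `scipy.stats.binom`):

```python
from math import comb

def prob_win_series(p, needed, remaining):
    """Probability of winning at least `needed` of `remaining`
    independent games, each won with probability `p`."""
    return sum(comb(remaining, k) * p**k * (1 - p)**(remaining - k)
               for k in range(needed, remaining + 1))

# Sanity check: with a fair coin, winning 2 or more of 3 is exactly 1/2.
print(prob_win_series(0.5, 2, 3))   # 0.5
```

With the Bruins' per-game probability `p` from the cell above, `prob_win_series(p, 2, 3)` gives the same series probability as the `binom.pmf` computation.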
] |
dbrattli/RxPY
|
notebooks/Getting Started.ipynb
|
apache-2.0
|
[
"Getting Started with RxPY\nReactiveX, or Rx for short, is an API for programming with observable event streams. RxPY is a port of ReactiveX to Python. Learning Rx with Python is particularly interesting since Python removes much of the clutter that comes with statically typed languages. RxPY works with both Python 2 and Python 3 but all examples in this tutorial uses Python 3.4.\nRx is about processing streams of events. With Rx you:\n\nTell what you want to process (Observable)\nHow you want to process it (A composition of operators)\nWhat you want to do with the result (Observer)\n\nIt's important to understand that with Rx you describe what you want to do with events if and when they arrive. It's all a declarative composition of operators that will do some processing the events when they arrive. If nothing happens, then nothing is processed.\nThus the pattern is that you subscribe to an Observable using an Observer:\npython\nsubscription = Observable.subscribe(observer)\nNOTE: Observables are not active in themselves. They need to be subscribed to make something happen. Simply having an Observable lying around doesn't make anything happen.\nInstall\nUse pip to install RxPY:",
"%%bash\npip install rx",
"Importing the Rx module",
"import rx\nfrom rx import Observable, Observer",
"Generating a sequence\nThere are many ways to generate a sequence of events. The easiest way to get started is to use the from_iterable() operator that is also called just from_. Other operators you may use to generate a sequence such as just, generate, create and range.",
"class MyObserver(Observer):\n def on_next(self, x):\n print(\"Got: %s\" % x)\n \n def on_error(self, e):\n print(\"Got error: %s\" % e)\n \n def on_completed(self):\n print(\"Sequence completed\")\n\nxs = Observable.from_iterable(range(10))\nd = xs.subscribe(MyObserver())\n\nxs = Observable.from_(range(10))\nd = xs.subscribe(print)",
"NOTE: The subscribe method takes an observer, or one to three callbacks for handing on_next(), on_error(), and on_completed(). This is why we can use print directly as the observer in the example above, since it becomes the on_next() handler for an anonymous observer. \nFiltering a sequence",
"xs = Observable.from_(range(10))\nd = xs.filter(\n lambda x: x % 2\n ).subscribe(print)",
"Transforming a sequence",
"xs = Observable.from_(range(10))\nd = xs.map(\n lambda x: x * 2\n ).subscribe(print)",
"NOTE: You can also take an index as the second parameter to the mapper function:",
"xs = Observable.from_(range(10, 20, 2))\nd = xs.map(\n lambda x, i: \"%s: %s\" % (i, x * 2)\n ).subscribe(print)",
"Merge\nMerging two observable sequences into a single observable sequence using the merge operator:",
"xs = Observable.range(1, 5)\nys = Observable.from_(\"abcde\")\nzs = xs.merge(ys).subscribe(print)",
"The Spacetime of Rx\nIn the examples above all the events happen at the same moment in time. The events are only separated by ordering. This confuses many newcomers to Rx since the result of the merge operation above may have several valid results such as:\na1b2c3d4e5\n1a2b3c4d5e\nab12cd34e5\nabcde12345\n\nThe only garantie you have is that 1 will be before 2 in xs, but 1 in xs can be before or after a in ys. It's up the the sort stability of the scheduler to decide which event should go first. For real time data streams this will not be a problem since the events will be separated by actual time. To make sure you get the results you \"expect\", it's always a good idea to add some time between the events when playing with Rx.\nMarbles and Marble Diagrams\nAs we saw in the previous section it's nice to add some time when playing with Rx and RxPY. A great way to explore RxPY is to use the marbles test module that enables us to play with marble diagrams. The marbles module adds two new extension methods to Observable. The methods are from_marbles() and to_marbles().\nExamples:\n1. res = rx.Observable.from_marbles(\"1-2-3-|\")\n2. res = rx.Observable.from_marbles(\"1-2-3-x\", rx.Scheduler.timeout)\nThe marble string consists of some special characters:\n- = Timespan of 100 ms\n x = on_error()\n | = on_completed()\nAll other characters are treated as an on_next() event at the given moment they are found on the string. If you need to represent multi character values, then you can group then with brackets such as \"1-(42)-3\". \nLets try it out:",
"from rx.testing import marbles\n\nxs = Observable.from_marbles(\"a-b-c-|\")\nxs.to_blocking().to_marbles()",
"It's now easy to also add errors into the even stream by inserting x into the marble string:",
"xs = Observable.from_marbles(\"1-2-3-x-5\")\nys = Observable.from_marbles(\"1-2-3-4-5\")\nxs.merge(ys).to_blocking().to_marbles()",
"Subjects and Streams\nA simple way to create an observable stream is to use a subject. It's probably called a subject after the Subject-Observer pattern described in the Design Patterns book by the gang of four (GOF).\nAnyway, a Subject is both an Observable and an Observer, so you can both subscribe to it and on_next it with events. This makes it an obvious candidate if need to publish values into an observable stream for processing:",
"from rx.subjects import Subject\n\nstream = Subject()\nstream.on_next(41)\n\nd = stream.subscribe(lambda x: print(\"Got: %s\" % x))\n\nstream.on_next(42)\n\nd.dispose()\nstream.on_next(43)",
"That's all for now"
] |
Olsthoorn/TransientGroundwaterFlow
|
exercises_notebooks/ReversibleStorage.ipynb
|
gpl-3.0
|
[
"Reversible groundwater storage",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport pdb",
"Introduction\nIn the remainder of this syllabus, we will restrict ourselves to reversible groundwater storage phenomena only, i.e. phenomena in which the porous medium is not changed.\nIn groundwater flow systems, three separate forms of storage may be distinguished:\n\n\nPhreatic storage, which occurs in unconfined aquifers, i.e. aquifers with a free water table. It is due to filling and emptying of pores at the top of the saturated zone.\n\n\nElastic storage, which is due to combined compressibility of the water, the grains and the porous matrix (soil skeleton).\n\n\nSometimes the interface between fresh water and another fluid (be it saline water, oil or gas) can provide a third type of storage. This works by displacement of the interface, generally between the fresh water and the saline water. When displacing an interface, the total volume of water in the subsurface remains the same, however, the amount of usable fresh water may increase (or decrease) at the cost of saline water, and therefore, one may consider this storage of fresh water.\n\n\nSpecific yield\nThe water table in an unconfined or phreatic layer is the elevation where the pressure equals the atmospheric pressure. It is in no way a sharp boundary between water and air, like it is between the water table of surface water and the air above it. The boundary between the saturated and unsaturated zone is not sharp, there is water throughout most of the unsaturated zone, that is, above the plane where the pressure equals the atmospheric one. In this lecture we study the this zone and its implications for the specific yield of an aquifer. What happens when we say that the water table sinks or rises?\nPhreatic storage is due to the filling and emptying of pores above the saturated zone, i.e. above the water table. 
Because it is related to changes of the water table, it is limited to phreatic (unconfined) aquifers.\nThe storage coefficient for an unconfined aquifer is called specific yield and is denoted by the symbol $S_{y}$. It is dimensionless, as follows from its definition \n$$S_{y}=\\frac{\\partial V_{w}}{\\partial h}$$\nwhere $\\partial V_{w}$ is the change of volume of water from a column of aquifer per unit of surface area and $\\partial h$ is the change of the water table elevation.\n$S_{y}$, therefore, is the amount of water released from storage per square meter of aquifer per m drawdown of the water table.\nHydrogeologists, and groundwater engineers alike, often treat specific yield as a constant. In reality, the draining and filling of pores is more complex, and this should be kept in mind in order to judge differences of $S_{y}$ values under different circumstances, even with the same aquifer material. This will be explained further down.\nThere is no such thing as a sharp boundary between the saturated and the unsaturated porous medium above and below the water table. In fact, the water content is continuous across the water table.\nThe water table is, by definition, the elevation where the pressure equals atmospheric pressure.\nBecause we relate all pressures to atmospheric, we may say the water table is the elevation where the water pressure is zero (relative to the pressure of the atmosphere).\nThe soil itself may be considered to consist of a dense network of connected tortuous pores of widely varying diameter that may be fully or partially filled with water. Due to adhesive forces, pores may even be fully filled above the water table.\nIn pores above the water table the pressure is negative (i.e. below atmospheric).\nIf grains can be wetted (attract water), as is generally the case, water will be sucked, against gravity, into the pores above the water table over a certain height. 
This height mainly depends on the diameter of the pores.\nUnsaturated zone and capillary zone\nFor the purpose of better understanding the unsaturated zone, one often envisions it as a network of small interconnected pores. The simplest picture is that of a single vertical pore, a straw, that ends in the saturated zone. We know from experience that, if the straw is small enough, the water in it will rise above the water table. We have learned to say that this is due to adhesion between the water and the wall of the pore.\n\nLooking at this straw with the water risen in it until it has reached equilibrium, the relation describing it is easy enough. The weight of the water that has been sucked up must equal the adhesive force around the circumference of the straw. Hence:\n$$ \\rho g h_c \\pi r^2 = 2 \\pi r \\gamma \\cos \\alpha $$\nwhere $\\rho$ [kg/m3] is the density of the fluid, $g$ [N/kg] is gravity, $r$ [m] the radius of the straw, $\\gamma$ [N/m] the adhesion force and $\\alpha$ [radians, i.e. 
L/L] the angle that the water surface makes with the surface of the straw.\nHence\n$$ h_c = \\frac {2 \\gamma \\cos \\alpha} { \\rho \\, g \\, r } $$\nA first exercise is to gain a feeling for how large this suction can be.\nTo get an idea, we know that surface tension $\\tau$ [N/m] works in the free water surface in the straw, and we know its value from our school handbooks or from looking it up on the Internet:\n$$\\tau = 75 \\times 10^{-3} \\,\\, N/m$$\nIf the angle $\\alpha$ is small, as it often is with wettable surfaces like sand (for clean glass it is even $\\alpha \\approx 0$), then $\\cos \\alpha \\approx 1$, so that $\\gamma\\approx \\tau$.\nWith this we have\n$$ h_c \\approx \\frac {2 \\tau } { \\rho \\, g \\, r} $$\nThen we should have an idea how large the radii of the pores of a porous medium are.\nIf we have a matrix consisting of spheres of diameter $d$ and a porosity $\\epsilon$, we have for the volume of the grains in one m3:\n$$ V_g = \\frac 1 6 \\pi d^3 n = 1 - \\epsilon $$\nwith $n$ the number of grains in one m3. The surface area of this mass equals\n$$ A = \\pi d^2 n $$\nNoting that the grains and the pores share the same surface area, we have the following ratios\n$$ \\frac {V_g} A = \\frac {1-\\epsilon} A = \\frac {d_g} 6 $$\n$$ \\frac {V_p} A = \\frac \\epsilon A = \\frac {d_p} 6 $$ \nallowing us to conclude that \n$$ d_p = \\frac \\epsilon {1 - \\epsilon} d_g $$\nwhile $d_g$ can be obtained from sand-sieving. With an often found value of $\\epsilon \\approx 35\\%$ we get\n$$ r_p \\approx 0.5 r_g = 0.25 d_g $$\nLet's say we have the following grain diameters:",
"dg = np.array([0.002, 0.063, 0.2, 0.630, 2.0 ]) * 1e-3 # mm",
"values that bound the following bin names \"silt\", \"fine sand\", \"medium sand\", \"coarse sand\"\nThen we could compute the capillary rise for straws with these pores like",
"g = 9.81 # N/kg (gravity)\nrho = 1000. # kg/m3 (water)\ntau = 75e-3 # N/m\npor = 0.35\n\nrp = dg * por / (1 - por)\nhc = 2 * tau / (rho * g * rp)\n\nprint(\"\\n\\nResults of computing capillary rise given grain diameters (porosity = {}):\\n\".format(por))\nprint( (16*\" \" + \"{:>8s} | {:>8s} | {:>8s} | {:>8s} | {:>8s} | {:>8s}\").\n format(\"clay \",\"silt \",\"fine sand\",\"med. sand\",\"crs sand\", \"gravel\"))\nprint(\"Grain diam.[mm]: \", end=\"\"); print((\"{:11.3g}\" * len(dg)).format(*(dg * 1000)))\nprint(\"Grain diam. [m]: \", end=\"\"); print((\"{:11.3g}\" * len(dg)).format(*dg))\nprint(\"Pore radius [m]: \", end=\"\"); print((\"{:11.3g}\" * len(rp)).format(*rp))\nprint(\"Cap. rise [m]: \", end=\"\"); print((\"{:11.3g}\" * len(rp)).format(*hc))\nprint()",
"This shows that in the range sand, the capillary rise is expected to vary from about 1.5 cm for coarse sand of grain diameter of 2 mm to about 0.5 m for fine sand of grain size diameter of 0.06 mm.\nAquifer material, i.e. sediment, has different pore sizes, therefore, it could be presented as a bungle of straws with different sizes in which the water rises to different heights in accordance with the radius of each straw.\n\nDue to the presence of many pore sizes, will the moisture content of the sand decline with distance from the water table. The thickness of the capillary zone is that which corresponds to $h_c$ of the widest pores. Above this elevation, more and more pores will be dry and the lower will be the water content, as shown in the figure to the right.\nA soil property is the so-called \"air entry pressure\". This is the air pressure that has to be imposed at one end of a soil sample with ambulant air pressure at the down side, before the air is blown through the sample. What is the relation between this \"air entry pressure\", pore width and the eight of the full-capillary zone?\nMoisture content and sediment particle distribution (sand sieve curves)\nReal aquifer material has a grain size distribution that shows the mass fraction versus grain diameter, as made visible by sieving the sand with a set of seives with different stanard opening size ane weighing the amount of sand that remains on each sieve. Such curves often look like a cumulative normal probbility density function, such as the one below, which is readily generated by sampling for instance a million times the normal probability density function, collect the samples in a set of bins and draw a histogram for them, or rather a comulative histogam like so:",
"x = plt.hist(np.random.randn(int(1e6)), bins=25, cumulative=True, normed=True)\nplt.show()",
"If this were a sieve curve, then the values on the x-axis would be the grain diameter on log scale. So then the -2 would meen 0.01 mm, 0 would mean 1 mm and 2 100 mm. Then, in fact the distribution is not normal, but log-normal (it is normal when the horizontal axis is plotted on log-scale). The vertical axis of sieve curse is the mass fraction that has a grain size smaller than d. So the $y$-axis starts at zero and runs to 1.0.\nThe normal probability density function is defined as\n$$ p = \\frac 1 {\\sqrt {2 \\pi} } \\exp \\left( - \\frac { (x -\\mu)^2 } {2 \\sigma^2 } \\right) $$\nOn log scale, $x = \\log d$ and $\\mu = \\log d_{50}$ while $\\sigma$ can be expressed as a factor as it yields a constant distance on log scale. $\\sigma = \\log \\alpha$\n$$p = \\frac 1 {\\sqrt{ 2 \\pi } } \\exp \\left(- \\frac {\\log^2 \\left( \\frac d {d_{50}} \\right)} {2 \\log^2 \\alpha} \\right)$$\nThe spread of a sieve curve is normally quantified by its so-called coefficient of uniformity, $U= \\frac {d_{60}} {d_{10}}$. For a normal distribution we could rather use $\\sigma$ or $2 \\sigma$ which is the distance between the points where the cumative probability density functions has the value of 84% and 16%. (Remember that 68% of the samples from a probability density function lie between $\\mu - \\sigma$ and $\\mu + \\sigma$, that is, 16% lie below $\\mu-\\sigma$ and 16% above $\\mu + \\sigma$). Hence we approximate $U$ by\n$$ U \\approx \\frac {d_{84}} {d_{16}} $$\nso that\n$$ \\log U = \\log (d_{84}) - \\log (d_{16}) = 2 \\sigma $$\nor\n$$ \\sigma \\approx \\log \\sqrt U $$\nTo generate a sieve curve with mean diameter $d_{50}$ and spread $U$, we sample values of $x$ from the normal probability density function with mean $\\mu = \\log d_{50}$ and $\\sigma = \\log \\sqrt U$, and translate the sample back to real grain diameters by\n$$ d = 10^{x} $$\nExample: Generate a sieve curve with $d_{50} = 0.2\\, mm$ and $U = \\sqrt 2$",
"d50 = 0.2 # mm\nU = 2.0\n\n# convert to linear scale to sample from the normal probability density function\nmu = np.log10(d50)\nsigma = np.log10(np.sqrt(U))\nx = np.random.randn(int(1e4)) * sigma + mu # sample a million points\n\n# convert back to the real-world scale\nd = 10.**x\n\n# show histogram of normalized comumulative distribution = sieve curve\nax = plt.figure().add_subplot(111)\nax.hist(d, bins=100, normed=True, cumulative=True)\n\n# get the x-axis and covert to log scale, set labels\nax.set(xscale='log', xlim=(1e-1, 1.), xlabel='d [mm]', ylabel='mass fraction [-]')\nax.set_title(r\"Generated sand sieve curve with $d_{50} = 0.2\\,$ mm and $u = \\sqrt{2}$\")\nax.grid(True)\n\n# Plot the mean and the sigma's as derived above\nax.plot(10**mu, 0.5, 'ro')\nax.plot([10**(mu-sigma), 10**mu], [0.16, 0.16], 'ro-') # sigma at 16%\nax.plot([10**mu, 10**(mu+sigma)], [0.84, 0.84], 'ro-') # sigma at 84%\nax.plot([10**mu, 10**mu], [0., 1.], 'r-') # vertical line through d50\nplt.show()",
"This this instrument in place we can now investigate the moisture distribution of an arbitrary sand of which the grain distribution can be characterized by a normal probability density function with mean grain size $d_{50}$ and uniformity coefficient $U$.\nThe grain size distribution is easily converted to pore radius distribution by noting that the volume of pores is $1-\\epsilon$ times the volume of grains, where $\\epsilon$ is the porosity. This is a constant factor, that only shifts the distribution horizontally on its log axis.\n$$ r = \\frac d 2 \\,\\,\\frac { \\epsilon} { 1 - \\epsilon }$$\nUsing our relation between pore radius $r$ and capillary rise $h_c$, we can link to associate with each pore radius.",
"g = 9.81 # N/kg, gravity\nrho = 1000. # density of water, kg/m3\npor = 0.35 # porosity\ntau = 75.e-3 # N/m water surface tension\n\nr = np.sort((d/2) * por/(1 - por)) # also sort\nhc = 2 * tau / (rho * g * r/1000.) # hc in m by converting r to m\n\nax1 = plt.figure().add_subplot(111)\nax1.hist(hc, bins=100, normed=True, cumulative=True)\nax1.set(xlabel=\"hc [m]\", ylabel=\"pore fraction [-]\", title=\"volume fraction versus hc\")\nplt.show()\n\nfr = np.cumsum(r)/np.sum(r) # fraction of pores with radius smaller than r\n\nax1=plt.figure().add_subplot(111)\nax1.set(xscale='linear',xlabel='pore fraction',ylabel='capillary rise', title='distribution of filled pores')\nax1.plot(fr,hc,'r')\nax1.grid(True)\nplt.show()",
"Now that were are able to generate a capillary rise curve from the mean diameter and the uniformity coefficient of a sand sieve curve, we may investigate different situations. For example curves with more spread.\nLet's use the same r, but multiply it to make the sand finer or coarser",
"phi_units = np.arange(-11, 10)\n\nsize = [\"v. large\", \"large\", \"medium\", \"small\",\n \"large\", \"small\",\n \"v. coarse\", \"coarse\", \"medium\", \"fine\", \"v. fine\",\n \"v. coarse\", \"coarse\", \"medium\", \"fine\", \"v. fine\",\n \"v. coarse\", \"coarse\", \"medium\", \"fine\", \"v. fine\"]\ncla1 = 4 * [\"boulders\"] + 2 * [\"cobbles\"] + 5 * [\"pebbles\"] + 5 * [\"sand\"] + 5 * [\"silt\"]\ncla2 = 11 * [\"gravel\"] + 5 * [\"sand\"] + 5 * [\"mud\"]\n\nprint((\"{:11s}\" * 6).format(\"d mm\", \"d mm\", \"size\", \"class\", \"class\", \" hc [m]\"))\nfor pu, sz, c1, c2 in zip(phi_units, size, cla1, cla2):\n rp = 2**(-pu) * por/(1-por) /2.\n h = 2 *tau / (rho * g * rp/1000)\n if pu<0:\n print(\"{:<7d}\".format(2**(-pu)), end=\"\")\n else:\n print(\"1/{:<5d}\".format(2**pu), end=\"\")\n print((\"{:>10.3g} \" + 3 * \" {:<10s}\" + \"{:10.3g}\").format(2.**(-pu), sz, c1, c2, h))\n\n\nD50 = [0.002, 0.02, 0.2, ]\nmult = [0.1, 0.333, 1.0, 3.33, 10.]\nu = [1.2, ]\nax2 = plt.figure().add_subplot(111)\nax2.set(xlabel='pore fraction [-]', ylabel='hc [m]', title='cap. rise versus pore fraction')\nfor m in mult:\n hc = 2 * tau / (rho * g * (m * r/1000))\n ax2.plot(fr, hc, label=\"d50 = {:.3g} mm\".format(d50 * m))\nax2.legend(loc='best')\nplt.show()",
"Below some sieve curves are given \nDonhuai Sun, Bloemendal, J, Rea, DK, and Ruixia Su (2002) Grain-size distribution function of polymodal sediments in hydraulic and Aeolian environments, and numerical partitioning of the sedimentary components, in Sedimentary Geology 152(3-4):263-277 · October 2002, DOI: 10.1016/S0037-0738(02)00082-9.\n\nFigure: Particle distribution of some loesses\nBelow we define a function called sieveCurve, that generates and shows sieve curves from any number of constituing particle distributions that are specified each by means of its mean, standard deviation and fraction of this distribution to the total mass of the sand. Se we can take the constituing grain size contributions of the Loess examples in the figure and create from that the overall sieve curve, that we can show and use to compute the moisture distribution above the water table (i.e. above the plane where p=atmospheric pressure). The computed water distributions will only be valid when there is no vertical flow in the unsaturated zone.",
"def sieveCurve(sediment, sed_props, n=100000):\n \"\"\"Returns sieve curve data from input.\n parameters:\n -----------\n sediment: [str,: name of the sediment, float:porosity]\n sed_props: [[f, d50, u, clr], [f, d50, u, clr], ...]\n f is the relative mass contribution,\n d50 the mean grain diameter,\n u the uniformity\n taken as ratio of the diameters between +- sigma around d50.\n The distributions are assumed normal with mu=log10(d50) and sigma=log(sqrt(u))\n clr the color of the line\n n : int, total number of random samples\n \n \"\"\"\n D = np.array([])\n \n name, por = sediment\n ax1.set_title(name) # name of the sediment\n ax1.set_xlabel('d [mm]')\n ax1.set_ylabel(' mass fraction []')\n ax2.set_ylabel(' dm/d(log(d))')\n for props in sed_props:\n f, d50, u, clr = props\n m = round(f * n)\n mu = np.log10(d50)\n sigma = np.log10(np.sqrt(u))\n print(\"f={:10.3f}, d50={:10.3f}, u={:10.3f}, m={:10.3f}, mu={:10.3f}, sigma={:10.3f}\".format(f, d50, u, m, mu, sigma))\n \n # sampling the normal distribution\n x = np.sort(np.random.randn(int(m)) * sigma + mu)\n d = 10.0**x # convert to log normal\n h = np.arange(1, m+1) / m # because we measure and sample mass not grains\n \n ax1.plot(d, h, clr, label=name) # cumulative distribtuion\n ax1.plot(d50, 0.5, 'ro') # show its center\n ax1.plot([10**(mu-sigma), d50], [0.16, 0.16], 'ro-') # show sigma at 16%\n ax1.plot([d50, 10**(mu+sigma)], [0.84, 0.84], 'ro-') # show sigma at 84%\n ax1.plot([d50, d50], [0., 1.], 'r-') # vertical line through d50\n \n # sampling the cumulative distribution for the derivative\n xs = np.linspace(mu-3*sigma, mu+3*sigma, 200) # 200 sampling points\n dx = np.diff(xs)\n xm = 0.5 * (xs[:-1] + xs[1:])\n dm = 10**xm # d at between two sampling locations\n\n hi = np.interp(xs, x, h) # get interpolated points of cum. distr.\n dh = np.diff(hi) # increment of cum. distr.\n p = dh/dx # derivative is prob. 
density on linear scale\n ax2.plot(dm, f * dh/dx, 'g')\n D = np.hstack((D, d))\n \n # pdf combined\n D = np.sort(D)\n X = np.log10(D) # back to linear axis\n N = len(D)\n H = np.arange(1., N+1) / N\n ax1.plot(D, H, 'k-', linewidth=3, label=name) # combined cumulative pdf\n \n Xs = np.linspace(X[0], X[-1], 201)\n Hi = np.interp(Xs, X, H) # sample H\n Xm = 0.5 * (Xs[1:] + Xs[:-1]) # sample locations for derivative\n Dm = 10**Xm # to log scale\n ax2.plot(Dm, np.diff(Hi)/np.diff(Xs), 'k-', label=name)\n \n # plot hc versus volume fraction\n r = D/2 * por / (1 - por) / 1e6 # pore radius in m\n hc = 2 * tau / (rho * g * r)\n ax3.set_title(name)\n ax3.plot(H * por, hc, clr, label=name)\n \n return D\n \n \ntitles = [[\"(a) loess from Xian, southern Loess Plateau\", 0.45],\n [\"(b) sand from the Mu-Us Desert\", 0.45],\n [\"(c) late Tertiary aeolian red clay from Xifeng, central Loess Plateau\", 0.45],\n [\"(d) locally derived loess from Zhengzhou, on the south bank of the Yellow River\", 0.45]]\n\nsediments = [[[0.55, 4., 10., 'r'], [0.45, 10., 4., 'r']],\n [[0.06, 6., 3., 'b'], [0.94, 150., 8., 'b']],\n [[0.80, 6., 10., 'g'], [0.20, 35., 2., 'g']],\n [[0.25, 5., 4., 'm'], [0.75, 50., 4., 'm']]]\n\nfig = plt.figure()\nax1 = fig.add_subplot(111)\nax2 = ax1.twinx()\nax1.set(xscale='log', ylim=(0.0, 1.0), xlim=(1.e-2, 1.e4))\nax3 = plt.figure().add_subplot(111)\nax3.set(xlabel='pore volume', ylabel='elevation above water table [m]', ylim=(0., 3.))\nax3.grid(True)\n\nd = sieveCurve(titles[3], sediments[3])\n\n#ax1.legend(loc='best')\n#ax3.legend(loc='best')\nplt.show()\n\nnames = [[\"(a) Clayey loess\", 0.50],\n [\"(b) Loess\", 0.45],\n [\"(c) Fine sand\", 0.38],\n [\"(d) Coarse sand with gravel\", 0.32]]\nsediments = [[[0.30, 2., 10., 'r'], [0.70, 12., 4., 'r']],\n [[0.75, 12., 12., 'b'], [0.25, 64., 4., 'b']],\n [[1.00, 150., 10., 'g']],\n [[0.70, 500., 4., 'm'], [0.30, 2000., 2., 'm']]]\n\nfig = plt.figure()\nax1 = fig.add_subplot(111)\nax2 = ax1.twinx()\nax1.set(xscale='log', ylim=(0.0, 1.0), xlim=(1.e-2, 1.e4))\n\nax3 = plt.figure().add_subplot(111)\nax3.set(xlabel='pore volume', ylabel='elevation above water table [m]', ylim=(0., 3.))\nax3.grid(True)\n\nfor name, sediment in zip(names, sediments):\n sieveCurve(name, sediment)\n \nax1.legend(loc='best', fontsize='small')\nax3.legend(loc='best', fontsize='small')\n\nplt.show()",
"Specific retention and specific yield\nThe figure above shows that the moisture content above the water table differs not only with elevation, but also with the size of the grains. The fine soils, even if they tend to have a higher porosity than the coarser sediments, they als retain more water and, therefore, less water is released when the water table is lowered. While the release of water is important from the perspective of groundwater behavior, it is the specific yield, the retained amount of water is important as the resource for vegetation.\nThe water content above the water due to capillary rise determines how wet the topsoil is in cases whith a shallow groundwater dept, i.e. water table depth. This should be immediately clear from the picture below. That picture also shows that the specific yield diminishes when the water table gets shallower. This is because the specific yield is the hatched amount of water in the figure. When the water table is shallow, part of the hatched water falls above ground surface and, therefore, does not exist. The smaller hatched area shows the release of water under shallow water table condition when the water table is lowered.\n\nFigure: Specific yield and shallow water table\nThe specific retention is the amount of water retained in the soil when the water table is dropped. One measure is to define the specific retention as the moisture content in the soil when suction is 1 m (100 cm). Looking at the graph above one sees that the specific retention for gravel is almost zero, for find sand it would be around 70 L/m3, for loess 300 L/m3 and for clay 450 L/m3. Dividing by the porosity in L/m3 yields the moisture content of the samples. Specific yield is just porosity minus specific retention. It is the amount of water released from a m3 of soil due to lowering the head by 1 m. 
It can also be expressed as a percentage as is done in the figure below.\n\nThis picture form Bear(1973) shows the relation of porosity with grain size (compare with the computed moisture cuvers above) and the specific yield and specific retention. The finer the sediment, the larger the specific retention, and, therfore, the smaller the specific yield. Less sorted sediment yields a higher specific retention than well-sorted sediment, which is due to a larger percentage of fine pores that hold water better."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
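The capillary-rise step in the sieve-curve notebook above converts a grain diameter to an effective pore radius, r = (d/2) * por / (1 - por), and then to a rise height hc = 2*tau/(rho*g*r). A minimal sketch of that arithmetic, with assumed water properties (surface tension tau ~ 0.073 N/m, rho = 1000 kg/m^3, g = 9.81 m/s^2) and illustrative grain sizes that are not taken from the notebook's data:

```python
# Assumed water properties (hypothetical; the notebook defines tau, rho, g elsewhere)
TAU = 0.073   # surface tension [N/m]
RHO = 1000.0  # density [kg/m^3]
G = 9.81      # gravitational acceleration [m/s^2]

def capillary_rise(d_um, porosity):
    """Capillary rise [m] for a grain diameter d_um in micrometers,
    mirroring r = d/2 * por / (1 - por) / 1e6 from the notebook."""
    r = (d_um / 2.0) * porosity / (1.0 - porosity) / 1e6  # pore radius [m]
    return 2.0 * TAU / (RHO * G * r)

# Fine loess grains rise meters; fine sand only about a third of a meter
h_loess = capillary_rise(4.0, 0.45)    # ~9 m
h_sand = capillary_rise(150.0, 0.38)   # ~0.3 m
print(h_loess, h_sand)
```

This reproduces the qualitative picture in the hc-versus-pore-volume plots: the finer the grains, the higher the capillary fringe.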
molgor/spystats
|
notebooks/Spatial Model Fitting using GLS-Copy2.ipynb
|
bsd-2-clause
|
[
"Spatial Model fitting in GLS\nIn this exercise we will fit a linear model using a Spatial structure as covariance matrix. \nWe will use GLS to get better estimators.\nAs always we will need to load the necessary libraries.",
"# Load Biospytial modules and etc.\n%matplotlib inline\nimport sys\nsys.path.append('/apps')\nsys.path.append('..')\nsys.path.append('../spystats')\nimport django\ndjango.setup()\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n## Use the ggplot style\nplt.style.use('ggplot')\n\nimport tools",
"Use this to automate the process. Be carefull it can overwrite current results\nrun ../HEC_runs/fit_fia_logbiomass_logspp_GLS.py /RawDataCSV/idiv_share/plotsClimateData_11092017.csv /apps/external_plugins/spystats/HEC_runs/results/logbiomas_logsppn_res.csv -85 -80 30 35\nImporting data\nWe will use the FIA dataset and for exemplary purposes we will take a subsample of this data. \nAlso important.\nThe empirical variogram has been calculated for the entire data set using the residuals of an OLS model. \nWe will use some auxiliary functions defined in the fit_fia_logbiomass_logspp_GLS.\nYou can inspect the functions using the ?? symbol.",
"from HEC_runs.fit_fia_logbiomass_logspp_GLS import prepareDataFrame,loadVariogramFromData,buildSpatialStructure, calculateGLS, initAnalysis, fitGLSRobust\n\nsection = initAnalysis(\"/RawDataCSV/idiv_share/FIA_Plots_Biomass_11092017.csv\",\n \"/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv\",\n -130,-60,30,40)\n\nimport rpy2 \n\nimport rpy2.robjects as ro\nfrom rpy2.robjects import r, pandas2ri\n\n\npandas2ri.activate()\n\nr_section = pandas2ri.pandas2ri(section)\n\nM = r.lm('logBiomass~logSppN', data=r_section)\n\nprint(r.summary(M).rx2('coefficients'))\n\nr.library('nlme')\n\n#section = initAnalysis(\"/RawDataCSV/idiv_share/plotsClimateData_11092017.csv\",\n# \"/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv\",\n# -85,-80,30,35)\n\n# IN HEC\n#section = initAnalysis(\"/home/hpc/28/escamill/csv_data/idiv/FIA_Plots_Biomass_11092017.csv\",\"/home/hpc/28/escamill/spystats/HEC_runs/results/variogram/data_envelope.csv\",-85,-80,30,35)\n\nsection.shape",
"Now we will obtain the data from the calculated empirical variogram.",
"gvg,tt = loadVariogramFromData(\"/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv\",section)\n\ngvg.plot(refresh=False,with_envelope=True)\n\ncorrm = gvg.calculateCovarianceMatrix()\n\nC = r.corSymm(corrm)\n\nmod4 = r.gls('logBiomass ~ logSppN', data=r_section,correlation = C)\n\nresum,gvgn,resultspd,results = fitGLSRobust(section,gvg,num_iterations=1,distance_threshold=1000000)\n\nresum.as_text",
"restricted w/ all data spatial correlation parameters\nLog-Likelihood: -16607\nAIC: 3.322e+04\nrestricted w/ restricted spatial correlation parameters\nLog-Likelihood: -16502.\nAIC: 3.301e+04",
"plt.plot(resultspd.rsq)\nplt.title(\"GLS feedback algorithm\")\nplt.xlabel(\"Number of iterations\")\nplt.ylabel(\"R-sq fitness estimator\")\n\nresultspd.columns\n\na = map(lambda x : x.to_dict(), resultspd['params'])\n\nparamsd = pd.DataFrame(a)\n\nparamsd\n\nplt.plot(paramsd.Intercept.loc[1:])\nplt.get_yaxis().get_major_formatter().set_useOffset(False)\n\nfig = plt.figure(figsize=(10,10))\nplt.plot(paramsd.logSppN.iloc[1:])\n\nvariogram_data_path = \"/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv\"\nthrs_dist = 100000\nemp_var_log_log = pd.read_csv(variogram_data_path)",
"Instantiating the variogram object",
"gvg = tools.Variogram(section,'logBiomass',using_distance_threshold=thrs_dist)\ngvg.envelope = emp_var_log_log\ngvg.empirical = emp_var_log_log.variogram\ngvg.lags = emp_var_log_log.lags\n#emp_var_log_log = emp_var_log_log.dropna()\n#vdata = gvg.envelope.dropna()",
"Instantiating theoretical variogram model",
"matern_model = tools.MaternVariogram(sill=0.34,range_a=100000,nugget=0.33,kappa=4)\nwhittle_model = tools.WhittleVariogram(sill=0.34,range_a=100000,nugget=0.0,alpha=3)\nexp_model = tools.ExponentialVariogram(sill=0.34,range_a=100000,nugget=0.33)\ngaussian_model = tools.GaussianVariogram(sill=0.34,range_a=100000,nugget=0.33)\nspherical_model = tools.SphericalVariogram(sill=0.34,range_a=100000,nugget=0.33)\n\ngvg.model = whittle_model\n#gvg.model = matern_model\n#models = map(lambda model : gvg.fitVariogramModel(model),[matern_model,whittle_model,exp_model,gaussian_model,spherical_model])\n\ngvg.fitVariogramModel(whittle_model)\n\nimport numpy as np\nxx = np.linspace(0,1000000,1000)\n\ngvg.plot(refresh=False,with_envelope=True)\nplt.plot(xx,whittle_model.f(xx),lw=2.0,c='k')\nplt.title(\"Empirical Variogram with fitted Whittle Model\")\n\ndef randomSelection(n,p):\n idxs = np.random.choice(n,p,replace=False)\n random_sample = new_data.iloc[idxs]\n return random_sample\n#################\nn = len(new_data)\np = 3000 # The amount of samples taken (let's do it without replacement)\n\nrandom_sample = randomSelection(n,100)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
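The fitGLSRobust call above alternates between fitting a variogram model and re-estimating the regression coefficients by generalised least squares. The GLS step itself is beta = (X' Sigma^-1 X)^-1 X' Sigma^-1 y, conveniently computed by Cholesky whitening. A self-contained numpy sketch on synthetic data (the exponential correlation, its range, and the noise level are hypothetical stand-ins for the variogram-derived covariance; this is not the spystats API):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic regression y = 1.0 + 2.0 * x + spatially correlated noise
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

coords = rng.uniform(0.0, 100.0, size=n)          # 1-D "plot" locations
dist = np.abs(coords[:, None] - coords[None, :])
Sigma = np.exp(-dist / 20.0) + 1e-6 * np.eye(n)   # exponential correlation + small nugget

L = np.linalg.cholesky(Sigma)
y = X @ np.array([1.0, 2.0]) + 0.2 * (L @ rng.normal(size=n))

# GLS via whitening: solve the ordinary LS problem on L^-1 X, L^-1 y
Xw = np.linalg.solve(L, X)
yw = np.linalg.solve(L, y)
beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
print(beta)  # should recover approximately [1.0, 2.0]
```

Whitening with the Cholesky factor is numerically preferable to forming Sigma^-1 explicitly, which matters when the covariance matrix is large and nearly singular, as it is for densely sampled plots.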
I2Cvb/prostate
|
notebook/time-warping-normalisation-t2w.ipynb
|
mit
|
[
"Normalisation of T2W-MRI using Fisher-Rao metric and functional data analysis\nWe can put all the needed libraries there",
"import numpy as np\n\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nimport fdasrsf as fs\n\nfrom scipy import interpolate",
"Unormalized data\nWe need first to investigate the unormalized data. To do so, we can construct the PDFs of the T2W image.\nLoad the data",
"data_t2w_norm = np.load('../data/t2w/data_raw_norm.npy')\ndata_t2w_norm = (data_t2w_norm + 1.) / 2.\n\npatient_sizes = np.load('../data/t2w/patient_sizes.npy')\nlabel = np.load('../data/t2w/label.npy')\n\nprint '-----> Data loaded'",
"Function to normalised the data",
"# Define the function to compute the Normalised Mean Intensity\ndef nmi(data):\n # get the minimum \n #min_data = np.min(data)\n min_data = -1.\n print 'mini: {}'.format(min_data)\n\n # get the maximum\n #max_data = np.max(data)\n max_data = 1.\n print 'maxi: {}'.format(max_data)\n\n # find the mean\n mean_data = np.mean(data)\n print 'mean: {}'.format(mean_data)\n\n # return the nmi\n return mean_data / (max_data - min_data)",
"Compute the histogram for the raw T2W-MRI",
"# To make the future plots\nfig, axes = plt.subplots(nrows=2, ncols=2, figsize=(20, 15))\n\nnsampling=1061\n\nglobal_hist_t2w = np.zeros((nsampling, len(patient_sizes)))\nglobal_hist_t2w_cap = np.zeros((nsampling, len(patient_sizes)))\nnmi_raw = []\n\nfor pt in xrange(len(patient_sizes)):\n \n # Find the index of the current patients\n if (pt == 0):\n start_idx = 0\n end_idx = patient_sizes[pt]\n else:\n start_idx = np.sum(patient_sizes[0 : pt])\n end_idx = np.sum(patient_sizes[0 : pt + 1])\n\n ##### RAW DATA #####\n # Compute the histogram for the whole data\n nb_bins = nsampling\n hist, bin_edges = np.histogram(data_t2w_norm[start_idx : end_idx], bins=nb_bins, range=(0., 1.), density=True)\n hist = np.divide(hist, np.sum(hist))\n axes[0, 0].plot(bin_edges[0 : -1], hist, label='Patient '+str(pt))\n \n # Append the histogram to the global list of histogram\n global_hist_t2w[:, pt] = hist\n\n # Compute the histogram for the cancer data\n nb_bins = nsampling\n sub_data = data_t2w_norm[start_idx : end_idx]\n cap_data = sub_data[np.nonzero(label[start_idx : end_idx] == 1)[0]]\n hist, bin_edges = np.histogram(cap_data, bins=nb_bins, range=(0., 1.), density=True)\n hist = np.divide(hist, np.sum(hist))\n axes[0, 1].plot(bin_edges[0 : -1], hist)\n \n # Append the histogram to the global list of histogram\n global_hist_t2w_cap[:, pt] = hist\n \n time = bin_edges[0 : -1]\n \n# Align all the curve using FDASRSF\n# Define the variance as in the original code for each curve\n#from sklearn.decomposition import PCA\n#pca = PCA(n_components=.99)\n#pca.fit(global_hist_t2w)\n#print pca.noise_variance_\nvar = []\nfor c in global_hist_t2w.T:\n var.append((.1 * np.fabs(c).max()) ** 2)\n# var.append(pca.noise_variance_)\nout = fs.srsf_align(global_hist_t2w, time, showplot=False, smoothdata=True, \n# method='mean', fit_variance=False, var=np.array(var))\n method='mean', fit_variance=True, method_fit='pca')\n#print global_hist_t2w.shape\n#print time.shape\n#out = 
fs.align_fPCA(global_hist_t2w, time, num_comp=1, showplot=False,\n# smoothdata=True, fit_variance=False, var=np.array(var))\n# smoothdata=True, fit_variance=True, method_fit='pca')\naxes[1, 0].plot(time, out.fn)\nplt.show()",
"Normalise the data using the inverse function",
"# To make the future plots\nfig, axes = plt.subplots(nrows=2, ncols=2, figsize=(20, 15))\n\n# Make a copy of the original data\ndata_norm_fda = data_t2w_norm.copy()\n\n\n# Try to normalise the data\nfor pt in xrange(len(patient_sizes)):\n \n # Find the index of the current patients\n if (pt == 0):\n start_idx = 0\n end_idx = patient_sizes[pt]\n else:\n start_idx = np.sum(patient_sizes[0 : pt])\n end_idx = np.sum(patient_sizes[0 : pt + 1])\n \n # Let's normalise the data using the interpolation function\n time = time / time[-1]\n f = interpolate.interp1d(time, fs.invertGamma(out.gam[:, pt]), kind='cubic')\n data_norm_fda[start_idx:end_idx] = f(data_t2w_norm[start_idx:end_idx])\n #data_norm_fda[start_idx:end_idx] = np.interp(data_t2w_norm[start_idx:end_idx],\n # time,\n # fs.invertGamma(out.gam[:, pt]))\n \n # Compute the histogram for the whole data\n nb_bins = 200\n hist, bin_edges = np.histogram(data_norm_fda[start_idx : end_idx], bins=nb_bins, range=(0., 1.), density=True)\n hist = np.divide(hist, np.sum(hist))\n axes[1, 0].plot(bin_edges[0 : -1], hist, label='Patient '+str(pt))\n \n # Compute the histogram for the cancer data\n nb_bins = 200\n sub_data = data_norm_fda[start_idx : end_idx]\n cap_data = sub_data[np.nonzero(label[start_idx : end_idx] == 1)[0]]\n hist, bin_edges = np.histogram(cap_data, bins=nb_bins, range=(0., 1.), density=True)\n hist = np.divide(hist, np.sum(hist))\n axes[1, 1].plot(bin_edges[0 : -1], hist)\n \n #print np.count_nonzero(np.isnan(hist))\n \n # Compute the histogram for the whole data\n nb_bins = nsampling\n hist, bin_edges = np.histogram(data_t2w_norm[start_idx : end_idx], bins=nb_bins, range=(0., 1.), density=True)\n hist = np.divide(hist, np.sum(hist))\n axes[0, 0].plot(bin_edges[0 : -1], hist, label='Patient '+str(pt))\n \n # Append the histogram to the global list of histogram\n global_hist_t2w[:, pt] = hist\n\n # Compute the histogram for the cancer data\n nb_bins = nsampling\n sub_data = data_t2w_norm[start_idx : end_idx]\n 
cap_data = sub_data[np.nonzero(label[start_idx : end_idx] == 1)[0]]\n hist, bin_edges = np.histogram(cap_data, bins=nb_bins, range=(0., 1.), density=True)\n hist = np.divide(hist, np.sum(hist))\n axes[0, 1].plot(bin_edges[0 : -1], hist)\n \n # Append the histogram to the global list of histogram\n global_hist_t2w_cap[:, pt] = hist\n ",
"Save the data",
"# Normalise the data between -1 and 1\ndata_norm_fda = (data_norm_fda * 2.) - 1.\nnp.save('../data/t2w/data_fdasrsf_norm.npy', data_norm_fda)\n\nnp.unique(np.isinf(data_norm_fda))",
"Just to plot some data for the poster SPIE MI 2015",
"import seaborn as sns\nsns.set_style(\"whitegrid\")\nsns.set_context(\"poster\")\n\n# Plot each transformation curve\nplt.figure(figsize=(15, 15))\nfor gamma in out.gam.T:\n time = np.linspace(0., 1., len(gamma))\n plt.plot(fs.invertGamma(gamma), time)\n \nplt.xlabel('Non-normalized intensities')\nplt.ylabel('Normalised intensities')\n\nplt.savefig('aaa.png')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
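In the normalisation cell above, each patient's intensities are pushed through the inverse warping function with scipy's interp1d. The mechanics are independent of fdasrsf: a monotone warp gamma on [0, 1] is sampled on a grid and intensities are mapped through it by interpolation. A small sketch using a hypothetical square-root warp in place of fs.invertGamma(out.gam[:, pt]):

```python
import numpy as np

# Grid on [0, 1] and a hypothetical monotone warp (stands in for invertGamma)
t = np.linspace(0.0, 1.0, 101)
gamma = np.sqrt(t)  # brightens dark intensities

# Intensities already rescaled to [0, 1], as in the notebook
intensities = np.array([0.0, 0.04, 0.25, 0.64, 1.0])

# np.interp is the piecewise-linear analogue of interp1d(t, gamma)
warped = np.interp(intensities, t, gamma)
print(warped)  # the square roots of the inputs
```

Because gamma is monotone, the mapping preserves the ordering of intensities; only the histogram shape changes, which is exactly what the alignment step requires.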
WNoxchi/Kaukasos
|
FADL1/quick_dogscats.ipynb
|
mit
|
[
"1 Quick Dogs v Cats",
"# git pull https://github.com/fastai.git\n# cd fastai\n# conda env create -f environment.yml\n# source activate fastai\n# python -m ipykernel install --user --name fastai --display-name \"Python 3 (FastAI)\"\n\n# ## if working from another dir:\n# ln -s <path to fastai/fastai> fastai\n\n# %mkdir -p data/dogscats\n\nfrom fastai.conv_learner import * # conv_learner p.much imports everything else\nPATH = 'data/dogscats/'\nsz=224; bs=16\n\n# how do we want to tsfm our data: in a way that's suitable to the resnet50 model\n# assuming the photos are side-on photos, and zooming in up to 10% ea. time\ntfms = tfms_from_model(resnet50, sz, aug_tfms=transforms_side_on, max_zoom=1.1)\n# we want to get some data in from paths (assuming there's a folder for ea. class inside \n# train/ and valid/ -- to submit to Kaggle need specfy test/ folder\ndata = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=bs)\n# create a model from a pretrained resnet50 model using data\nlearn = ConvLearner.pretrained(resnet50, data)\n# call fit; be default has all but last few layers frozen\n# here we do 3 cycles of len 1\n%time learn.fit(1e-2, 3, cycle_len=1)",
"We didn't specify precompute=True because it only makes it a little faster for this first set. So we skipped it. It's a shortcut that caches some of the intermediate steps that don't have to be recalculated each time. NOTE: when we use precomputed activations: data augmentation will not work.\nSo if you specfy data augmentation, such as w/ aug_tfms=..., and also precompute=True fastai will not actually do any data augmentation.\nNOTE: if you're using big / deep model like ResNet50 or ResNeXt101 on a dataset v.similar to ImageNet (like this cats/dogs datset; ie: side-on photos of standard objects of a similar size to ImageNet: 200~500 pxls), you should probably run learn.bn_freeze(True) after learn.unfreeze().\nWhat this is doing is causing the BatchNorm moving averages to stop updating.\n(Not currently supported by any library besides fastai, and is v.important)\nNeeded to restart kernel & set batch-size to 16 to fit it in Gfx Memory\nrerun lines: tfms = ...; data = ...; learn = ...;\nWe run one more epoch, training the entire network:",
"# we can unfreeze and this will train the entire network\nlearn.unfreeze()\n# bn_freeze stops updates of \nlearn.bn_freeze(True) # important for large models --> stops updates of BN moving avgs\n # use for models w/ ~ > 34 layers, & larger imgs & reg.s of intrst & norml imgs\n%time learn.fit([1e-5, 1e-4, 1e-2], 1, cycle_len=1)\n\n# We use Test Time Augmentation to ensure we get the best predictions we can\n%time log_preds, y = learn.TTA()\n\n# Finally, this gives us about 99.6% accuracy\nmetrics.log_loss(y, np.exp(log_preds)), accuracy(log_preds, y)",
"That's great, I think that's tested on a validation set? (https://youtu.be/_VpaKaMyjqI?t=2772 : Yes.) Now to run predictions on the actual test data set.\nThese were all essentially the minimum set of steps when you try a new dataset. This assumes we already know what learning rate to use (use the learning-rate finder to decide on that), and knowing the directory layout, & etc.\nNOTE: Don't forget to save your weights! (I think) specifying a new learner and data object will load the pretrained model from scratch.",
"learn.save('ResNet50_01')",
"Up above, TTA() has is_test set to False by default, so it won't use the test set. Iguess it uses the valid set? To use the test set, specify is_test=True\nNOTE: running TTA() on the test set wasn't working ('NoneType' TypeError) because the test_name par to from_paths(..) is by default set to None. Also the test dir in this dataset (from fast.ai) is called test1/, so has to be specfd as such or renamed.",
"# Reloading saved weights after re-openning notebook;\n# Also specifying test dataset for final predictions:\ntfms = tfms_from_model(resnet50, sz, aug_tfms=transforms_side_on, max_zoom=1.1)\ndata = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=bs, test_name='test1')\nlearn = ConvLearner.pretrained(resnet50, data)\nlearn.load('ResNet50_01')\n\n%time log_preds, y = learn.TTA(is_test=True)\n\ntest_preds = np.exp(log_preds)\n\nlen(test_preds)\n\n# id,label\n# 1,0.5\n# 2,0.5\n\ndata.test_ds.get_x(0)\n\ntest_preds[:10]",
"One way to create the submission csv file:",
"# See: http://forums.fast.ai/t/dog-breed-identification-challenge/7464/101\nids = [i.split('.jpg')[0].split('/')[-1] for i in data.test_dl.dataset.fnames]\n\npreds = [np.argmax(pred) for pred in test_preds]\n\npreds[:10]d\n\nsubmission = pd.DataFrame()\nsubmission['id'] = [i for i in ids]\nsubmission['label'] = [p for p in preds]\n\nsubmission.head()",
"Another way to create the submission csv file:",
"df = pd.DataFrame(test_preds) # NOTE: pretty sure this has to be [np.argmax(pred) for pred in test_preds]\ndf.columns = data.classes\n\n# '6:' skips the 'test1/' part of the fname, ':-4' skips the '.jpg' part\ndf.insert(0, 'id', [o[6:-4] for o in data.test_ds.fnames])\n\ndf.head()",
"NOTE: THIS WILL NOT WORK FOR THIS DATASET. Only because I'm lazy. The format for this datset's submissions is: \n\nid,label\n<id_num>,<0 or 1>\n...,...\n\nBut this 2nd method is generally how you'll go about it for multi-class data; and even this datset if you make sure the format is correct.\nSaving the CSV File & Submitting",
"os.mkdir(PATH + '/results/')\n\n%ls $PATH\n\nsubmission.to_csv(PATH+'results/' + 'submission_dogscats_quick_00.csv')\n\nSUBM = f'{PATH}subm/'\nos.makedirs(SUBM, exist_ok=True)\n# df.to_csv(f'{PATH}subm.gz', compression='gzip', index=False)\nsubmission.to_csv(f'{SUBM}subm.gz', compression='gzip', index=False)\n\nFileLink(f'{SUBM}subm.gz')",
"Misc; looking at stuff",
"submission.head()\n\ndata.classes\n\n# 1st 10 filenames in test dataset: (I fucking love fastai)\ndata.test_ds.fnames[:10]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
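The id-extraction and submission-building cells above boil down to a few string and pandas operations. A self-contained sketch with hypothetical filenames and predictions (the real notebook reads these from data.test_dl.dataset.fnames and learn.TTA()):

```python
import numpy as np
import pandas as pd

# Hypothetical test filenames and per-class probabilities (two classes)
fnames = ["test1/12.jpg", "test1/7.jpg", "test1/42.jpg"]
test_preds = np.array([[0.9, 0.1],
                       [0.2, 0.8],
                       [0.6, 0.4]])

# id: strip the directory and the '.jpg' extension, as in the notebook
ids = [f.split(".jpg")[0].split("/")[-1] for f in fnames]
# label: index of the most probable class
labels = [int(np.argmax(p)) for p in test_preds]

submission = pd.DataFrame({"id": ids, "label": labels})
csv_text = submission.to_csv(index=False)  # returns the CSV as a string
print(csv_text)
```

For an actual Kaggle upload you would write the frame to disk (optionally gzipped) instead of keeping it as a string, as the notebook does with to_csv on a file path.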
jpzhangvincent/MobileAppMarketAnalysis
|
notebooks/Web scraping(Explore with BeautifulSoup).ipynb
|
mit
|
[
"from urllib2 import Request, urlopen, HTTPError\nfrom urlparse import urlunparse, urlparse\nimport json \nimport pandas as pd\nfrom matplotlib import pyplot as plt\nimport requests\n\nfrom bs4 import BeautifulSoup\nimport urllib",
"<b>Step 1</b> Get the links of different app categories on iTunes.",
"r = urllib.urlopen('https://itunes.apple.com/us/genre/ios-books/id6018?mt=8').read()\nsoup = BeautifulSoup(r)\nprint type(soup)\n\nall_categories = soup.find_all(\"div\", class_=\"nav\")\ncategory_url = all_categories[0].find_all(class_ = \"top-level-genre\")\ncategories_url = pd.DataFrame()\nfor itm in category_url:\n category = itm.get_text()\n url = itm.attrs['href']\n d = {'category':[category], 'url':[url]}\n df = pd.DataFrame(d)\n categories_url = categories_url.append(df, ignore_index = True)\nprint categories_url\n\ncategories_url['url'][0]",
"<b>Step2</b>\nGet the links for all popular apps of different catigories on iTunes.",
"def extract_apps(url):\n r = urllib.urlopen(url).read()\n soup = BeautifulSoup(r)\n apps = soup.find_all(\"div\", class_=\"column\")\n apps_link = apps[0].find_all('a')\n column_first = pd.DataFrame()\n for itm in apps_link:\n app_name = itm.get_text()\n url = itm.attrs['href']\n d = {'category':[app_name], 'url':[url]}\n df = pd.DataFrame(d)\n column_first = column_first.append(df, ignore_index = True)\n apps_link2 = apps[1].find_all('a')\n column_second = pd.DataFrame()\n for itm in apps_link2:\n app_name = itm.get_text()\n url = itm.attrs['href']\n d = {'category':[app_name], 'url':[url]}\n df = pd.DataFrame(d)\n column_second = column_second.append(df, ignore_index = True)\n apps_link3 = apps[2].find_all('a')\n column_last = pd.DataFrame()\n for itm in apps_link3:\n app_name = itm.get_text()\n url = itm.attrs['href']\n d = {'category':[app_name], 'url':[url]}\n df = pd.DataFrame(d)\n column_last = column_last.append(df, ignore_index = True)\n Final_app_link = pd.DataFrame()\n Final_app_link = Final_app_link.append(column_first, ignore_index = True)\n Final_app_link = Final_app_link.append(column_second, ignore_index = True)\n Final_app_link = Final_app_link.append(column_last, ignore_index = True)\n return Final_app_link\n\napp_url = pd.DataFrame()\nfor itm in categories_url['url']:\n apps = extract_apps(itm)\n app_url = app_url.append(apps, ignore_index = True)\n\napp_url['url'][0]",
"<b>Step3</b> Extract the information for all popular apps.",
"def get_content(url):\n r = urllib.urlopen(url).read()\n soup = BeautifulSoup(r)\n des = soup.find_all('div', id = \"content\")\n apps = soup.find_all(\"div\", class_=\"lockup product application\")\n rate = soup.find_all(\"div\", class_=\"extra-list customer-ratings\")\n dic = []\n global app_name, descript, link, price, category, current_rate, current_count, total_count, total_rate, seller,mul_dev,mul_lang,new_ver_des\n for itm in des:\n try:\n descript = itm.find_all('div',{'class':\"product-review\"})[0].get_text().strip().split('\\n')[2].encode('utf-8')\n except:\n descript = ''\n try:\n new_ver_des = itm.find_all('div',{'class':\"product-review\"})[1].get_text().strip().split('\\n')[2].encode('utf-8')\n except:\n new_ver_des = ''\n try:\n app_name = itm.find_all('div',{'class':\"left\" })[0].get_text().split('\\n')[1]\n except:\n app_name = ''\n for itm in apps:\n category = itm.find_all('span',{'itemprop':\"applicationCategory\" })[0].get_text()\n price = itm.find_all('div',{'class':\"price\" })[0].get_text() \n link = itm.a[\"href\"]\n seller = itm.find_all(\"span\", itemprop=\"name\")[0].get_text()\n try:\n device = itm.find_all(\"span\", itemprop=\"operatingSystem\")[0].get_text()\n if 'and' in device.lower():\n mul_dev = 'Y'\n else:\n mul_dev = \"N\"\n except:\n mul_dev = \"N\"\n try:\n lang = itm.find_all(\"li\",class_ = \"language\")[0].get_text().split(',')\n if len(lang) >1:\n mul_lang = \"Y\"\n else:\n mul_lang = \"N\"\n except:\n mul_lang = \"N\"\n for itm in rate:\n try:\n current_rate = itm.find_all('span',{'itemprop':\"ratingValue\"})[0].get_text()\n except:\n current_rate = ''\n try:\n current_count = itm.find_all('span',{'itemprop':\"reviewCount\"})[0].get_text()\n except:\n current_count = ''\n try:\n total_count = itm.find_all('span',{'class':\"rating-count\"})[1].get_text()\n except:\n try:\n total_count = itm.find_all('span',{'class':\"rating-count\"})[0].get_text()\n except:\n total_count = ''\n try:\n total_rate = itm.find_all('div', 
class_=\"rating\",itemprop = False)[0]['aria-label'].split(',')[0]\n except:\n total_rate = ''\n for i in range(3):\n try:\n globals()['user_{0}'.format(i)] = soup.find_all(\"div\", class_=\"customer-reviews\")[0].find_all(\"span\", class_='user-info')[i].get_text().strip( ).split(' ')[-1]\n except:\n globals()['user_{0}'.format(i)] = ''\n try:\n globals()['star_{0}'.format(i)] = soup.find_all(\"div\", class_=\"customer-reviews\")[0].find_all(\"div\", class_=\"rating\")[i]['aria-label']\n except:\n globals()['star_{0}'.format(i)] = ''\n try:\n globals()['comm_{0}'.format(i)] = soup.find_all(\"div\", class_=\"customer-reviews\")[0].find_all(\"p\", class_=\"content\")[i].get_text()\n except:\n globals()['comm_{0}'.format(i)] = ''\n \n dic.append({'app':app_name,'link':link, 'price':price,'category':category,'current rating':current_rate, \n 'current reviews':current_count,'overall rating':total_rate,'overall reviews':total_count,\n 'description':descript,'seller':seller,'multiple languages':mul_lang,\n 'multiple devices':mul_dev,'new version description':new_ver_des,'user 1':user_0,\n 'rate 1':star_0,'comment 1':comm_0,'user 2':user_1,'rate 2':star_1,'comment 2':comm_1,\n 'user 3':user_2,'rate 3':star_2,'comment 3':comm_2})\n dic = pd.DataFrame(dic)\n return dic\n\nfull_content = pd.DataFrame()\nfor itm in app_url['url']:\n content = get_content(itm)\n full_content = full_content.append(content, ignore_index = True)\n\nfull_content\n\nfull_content.to_csv('app.csv',encoding='utf-8',index=True)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
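The scraping notebook above leans on BeautifulSoup's find_all(class_=...) to pull category links out of the genre page. The same extraction can be sketched with only the standard library's html.parser, on a static snippet standing in for the fetched page (the markup below is hypothetical, mirroring the structure the notebook relies on):

```python
from html.parser import HTMLParser

class GenreLinkParser(HTMLParser):
    """Collect (text, href) pairs for <a class="top-level-genre"> anchors."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "top-level-genre" in (attrs.get("class") or ""):
            self._href = attrs.get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:      # only collect text inside a matching <a>
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

page = """
<div class="nav">
  <a class="top-level-genre" href="https://itunes.apple.com/us/genre/ios-books/id6018?mt=8">Books</a>
  <a class="top-level-genre" href="https://itunes.apple.com/us/genre/ios-games/id6014?mt=8">Games</a>
</div>
"""

parser = GenreLinkParser()
parser.feed(page)
print(parser.links)
```

BeautifulSoup remains the more convenient tool for real pages; this stdlib version just makes the extraction logic explicit and keeps the sketch runnable without a network call or extra dependencies.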
InsightSoftwareConsortium/SimpleITK-Notebooks
|
Python/65_Registration_FFD.ipynb
|
apache-2.0
|
[
"Non-Rigid Registration: Free Form Deformation\nThis notebook illustrates the use of the Free Form Deformation (FFD) based non-rigid registration algorithm in SimpleITK.\nThe data we work with is a 4D (3D+time) thoracic-abdominal CT, the Point-validated Pixel-based Breathing Thorax Model (POPI) model. This data consists of a set of temporal CT volumes, a set of masks segmenting each of the CTs to air/body/lung, and a set of corresponding points across the CT volumes. \nThe POPI model is provided by the Léon Bérard Cancer Center & CREATIS Laboratory, Lyon, France. The relevant publication is:\nJ. Vandemeulebroucke, D. Sarrut, P. Clarysse, \"The POPI-model, a point-validated pixel-based breathing thorax model\",\nProc. XVth International Conference on the Use of Computers in Radiation Therapy (ICCR), Toronto, Canada, 2007.\nThe POPI data, and additional 4D CT data sets with reference points are available from the CREATIS Laboratory <a href=\"http://www.creatis.insa-lyon.fr/rio/popi-model?action=show&redirect=popi\">here</a>.",
"import SimpleITK as sitk\nimport registration_utilities as ru\nimport registration_callbacks as rc\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfrom ipywidgets import interact, fixed\n\n# utility method that either downloads data from the Girder repository or\n# if already downloaded returns the file name for reading from disk (cached data)\n%run update_path_to_download_script\nfrom downloaddata import fetch_data as fdata",
"Utilities\nLoad utilities that are specific to the POPI data, functions for loading ground truth data, display and the labels for masks.",
"%run popi_utilities_setup.py",
"Loading Data\nLoad all of the images, masks and point data into corresponding lists. If the data is not available locally it will be downloaded from the original remote repository. \nTake a look at the images. According to the documentation on the POPI site, volume number one corresponds to end inspiration (maximal air volume).",
"images = []\nmasks = []\npoints = []\nfor i in range(0, 10):\n image_file_name = f\"POPI/meta/{i}0-P.mhd\"\n mask_file_name = f\"POPI/masks/{i}0-air-body-lungs.mhd\"\n points_file_name = f\"POPI/landmarks/{i}0-Landmarks.pts\"\n images.append(\n sitk.ReadImage(fdata(image_file_name), sitk.sitkFloat32)\n ) # read and cast to format required for registration\n masks.append(sitk.ReadImage(fdata(mask_file_name)))\n points.append(read_POPI_points(fdata(points_file_name)))\n\ninteract(\n display_coronal_with_overlay,\n temporal_slice=(0, len(images) - 1),\n coronal_slice=(0, images[0].GetSize()[1] - 1),\n images=fixed(images),\n masks=fixed(masks),\n label=fixed(lung_label),\n window_min=fixed(-1024),\n window_max=fixed(976),\n);",
"Getting to know your data\nWhile the POPI site states that image number 1 is end inspiration, and visual inspection seems to suggest this is correct, we should probably take a look at the lung volumes to ensure that what we expect is indeed what is happening.\nWhich image is end inspiration and which end expiration?",
"label_shape_statistics_filter = sitk.LabelShapeStatisticsImageFilter()\n\nfor i, mask in enumerate(masks):\n label_shape_statistics_filter.Execute(mask)\n print(\n f\"Lung volume in image {i} is {0.000001*label_shape_statistics_filter.GetPhysicalSize(lung_label)} liters.\"\n )",
"Free Form Deformation\nThis function will align the fixed and moving images using a FFD. If given a mask, the similarity metric will be evaluated using points sampled inside the mask. If given fixed and moving points the similarity metric value and the target registration errors will be displayed during registration. \nAs this notebook performs intra-modal registration, we use the MeanSquares similarity metric (simple to compute and appropriate for the task).",
"def bspline_intra_modal_registration(\n fixed_image,\n moving_image,\n fixed_image_mask=None,\n fixed_points=None,\n moving_points=None,\n):\n\n registration_method = sitk.ImageRegistrationMethod()\n\n # Determine the number of BSpline control points using the physical spacing we want for the control grid.\n grid_physical_spacing = [50.0, 50.0, 50.0] # A control point every 50mm\n image_physical_size = [\n size * spacing\n for size, spacing in zip(fixed_image.GetSize(), fixed_image.GetSpacing())\n ]\n mesh_size = [\n int(image_size / grid_spacing + 0.5)\n for image_size, grid_spacing in zip(image_physical_size, grid_physical_spacing)\n ]\n\n initial_transform = sitk.BSplineTransformInitializer(\n image1=fixed_image, transformDomainMeshSize=mesh_size, order=3\n )\n registration_method.SetInitialTransform(initial_transform)\n\n registration_method.SetMetricAsMeanSquares()\n # Settings for metric sampling, usage of a mask is optional. When given a mask the sample points will be\n # generated inside that region. 
Also, this implicitly speeds things up as the mask is smaller than the\n # whole image.\n registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)\n registration_method.SetMetricSamplingPercentage(0.01)\n if fixed_image_mask:\n registration_method.SetMetricFixedMask(fixed_image_mask)\n\n # Multi-resolution framework.\n registration_method.SetShrinkFactorsPerLevel(shrinkFactors=[4, 2, 1])\n registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2, 1, 0])\n registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()\n\n registration_method.SetInterpolator(sitk.sitkLinear)\n registration_method.SetOptimizerAsLBFGSB(\n gradientConvergenceTolerance=1e-5, numberOfIterations=100\n )\n\n # If corresponding points in the fixed and moving image are given then we display the similarity metric\n # and the TRE during the registration.\n if fixed_points and moving_points:\n registration_method.AddCommand(\n sitk.sitkStartEvent, rc.metric_and_reference_start_plot\n )\n registration_method.AddCommand(\n sitk.sitkEndEvent, rc.metric_and_reference_end_plot\n )\n registration_method.AddCommand(\n sitk.sitkIterationEvent,\n lambda: rc.metric_and_reference_plot_values(\n registration_method, fixed_points, moving_points\n ),\n )\n\n return registration_method.Execute(fixed_image, moving_image)",
"Perform Registration\nThe following cell allows you to select the images used for registration, runs the registration, and afterwards computes statistics comparing the target registration errors before and after registration and displays a histogram of the TREs.\nTo time the registration, uncomment the timeit magic. \n<b>Note</b>: this creates a separate scope for the cell. Variables set inside the cell, specifically tx, will become local variables and thus their value is not available in other cells.",
"#%%timeit -r1 -n1\n\n# Select the fixed and moving images, valid entries are in [0,9].\nfixed_image_index = 0\nmoving_image_index = 7\n\n\ntx = bspline_intra_modal_registration(\n fixed_image=images[fixed_image_index],\n moving_image=images[moving_image_index],\n fixed_image_mask=(masks[fixed_image_index] == lung_label),\n fixed_points=points[fixed_image_index],\n moving_points=points[moving_image_index],\n)\n(\n initial_errors_mean,\n initial_errors_std,\n _,\n initial_errors_max,\n initial_errors,\n) = ru.registration_errors(\n sitk.Euler3DTransform(), points[fixed_image_index], points[moving_image_index]\n)\n(\n final_errors_mean,\n final_errors_std,\n _,\n final_errors_max,\n final_errors,\n) = ru.registration_errors(tx, points[fixed_image_index], points[moving_image_index])\n\nplt.hist(initial_errors, bins=20, alpha=0.5, label=\"before registration\", color=\"blue\")\nplt.hist(final_errors, bins=20, alpha=0.5, label=\"after registration\", color=\"green\")\nplt.legend()\nplt.title(\"TRE histogram\")\nprint(\n f\"Initial alignment errors in millimeters, mean(std): {initial_errors_mean:.2f}({initial_errors_std:.2f}), max: {initial_errors_max:.2f}\"\n)\nprint(\n f\"Final alignment errors in millimeters, mean(std): {final_errors_mean:.2f}({final_errors_std:.2f}), max: {final_errors_max:.2f}\"\n)",
"Another option for evaluating the registration is to use segmentation. In this case, we transfer the segmentation from one image to the other and compare the overlaps, both visually, and quantitatively.\n<b>Note</b>: A more detailed version of the approach described here can be found in the Segmentation Evaluation notebook.",
"# Transfer the segmentation via the estimated transformation. Use Nearest Neighbor interpolation to retain the labels.\ntransformed_labels = sitk.Resample(\n masks[moving_image_index],\n images[fixed_image_index],\n tx,\n sitk.sitkNearestNeighbor,\n 0.0,\n masks[moving_image_index].GetPixelID(),\n)\n\nsegmentations_before_and_after = [masks[moving_image_index], transformed_labels]\ninteract(\n display_coronal_with_label_maps_overlay,\n coronal_slice=(0, images[0].GetSize()[1] - 1),\n mask_index=(0, len(segmentations_before_and_after) - 1),\n image=fixed(images[fixed_image_index]),\n masks=fixed(segmentations_before_and_after),\n label=fixed(lung_label),\n window_min=fixed(-1024),\n window_max=fixed(976),\n)\n\n# Compute the Dice coefficient and Hausdorff distance between the segmentations before, and after registration.\nground_truth = masks[fixed_image_index] == lung_label\nbefore_registration = masks[moving_image_index] == lung_label\nafter_registration = transformed_labels == lung_label\n\nlabel_overlap_measures_filter = sitk.LabelOverlapMeasuresImageFilter()\nlabel_overlap_measures_filter.Execute(ground_truth, before_registration)\nprint(\n f\"Dice coefficient before registration: {label_overlap_measures_filter.GetDiceCoefficient():.2f}\"\n)\nlabel_overlap_measures_filter.Execute(ground_truth, after_registration)\nprint(\n f\"Dice coefficient after registration: {label_overlap_measures_filter.GetDiceCoefficient():.2f}\"\n)\n\nhausdorff_distance_image_filter = sitk.HausdorffDistanceImageFilter()\nhausdorff_distance_image_filter.Execute(ground_truth, before_registration)\nprint(\n f\"Hausdorff distance before registration: {hausdorff_distance_image_filter.GetHausdorffDistance():.2f}\"\n)\nhausdorff_distance_image_filter.Execute(ground_truth, after_registration)\nprint(\n f\"Hausdorff distance after registration: {hausdorff_distance_image_filter.GetHausdorffDistance():.2f}\"\n)",
"Multi-resolution control point grid\nIn the example above we used the standard image registration framework. This implies the same transformation model at all image resolutions. For global transformations (e.g. rigid, affine...) the number of transformation parameters has no relationship to the changing resolution. For the BSpline transformation we can potentially use a coarser control grid at the higher, lower-frequency levels of the image pyramid, increasing the number of control points as we go down the pyramid. With the standard framework we use the same number of control points for all pyramid levels.\nTo use a multi-resolution control point grid we have a specific initializer for the BSpline transformation, SetInitialTransformAsBSpline.\nThe following code solves the same registration task as above, just with a multi-resolution control point grid.",
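The effect of the `scaleFactors` refinement on the control grid can be sketched with plain arithmetic. The image size and spacing below are hypothetical (not taken from the notebook's data); the mesh-size formulas mirror the ones used in the registration functions above.

```python
# Sketch of how the BSpline control-point mesh grows across pyramid levels
# when SetInitialTransformAsBSpline is used with scaleFactors=[1, 2, 4].
def mesh_sizes_per_level(image_physical_size, grid_spacing, scale_factors):
    # Finest-level mesh: one control point every grid_spacing mm,
    # then start from roughly 1/4 of that and let the scale factors refine it.
    full_mesh = [int(sz / grid_spacing + 0.5) for sz in image_physical_size]
    start_mesh = [int(m / 4 + 0.5) for m in full_mesh]
    return [[m * f for m in start_mesh] for f in scale_factors]

# Hypothetical 400x400x300mm volume, a control point every 50mm.
levels = mesh_sizes_per_level([400.0, 400.0, 300.0], 50.0, [1, 2, 4])
print(levels)  # coarse-to-fine mesh sizes, one entry per pyramid level
```

The coarsest level then uses a small mesh, with the control grid doubled at each subsequent level.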
"def bspline_intra_modal_registration2(\n fixed_image,\n moving_image,\n fixed_image_mask=None,\n fixed_points=None,\n moving_points=None,\n):\n\n registration_method = sitk.ImageRegistrationMethod()\n\n # Determine the number of BSpline control points using the physical spacing we\n # want for the finest resolution control grid.\n grid_physical_spacing = [50.0, 50.0, 50.0] # A control point every 50mm\n image_physical_size = [\n size * spacing\n for size, spacing in zip(fixed_image.GetSize(), fixed_image.GetSpacing())\n ]\n mesh_size = [\n int(image_size / grid_spacing + 0.5)\n for image_size, grid_spacing in zip(image_physical_size, grid_physical_spacing)\n ]\n\n # The starting mesh size will be 1/4 of the original, it will be refined by\n # the multi-resolution framework.\n mesh_size = [int(sz / 4 + 0.5) for sz in mesh_size]\n\n initial_transform = sitk.BSplineTransformInitializer(\n image1=fixed_image, transformDomainMeshSize=mesh_size, order=3\n )\n # Instead of the standard SetInitialTransform we use the BSpline specific method which also\n # accepts the scaleFactors parameter to refine the BSpline mesh. In this case we start with\n # the given mesh_size at the highest pyramid level then we double it in the next lower level and\n # in the full resolution image we use a mesh that is four times the original size.\n registration_method.SetInitialTransformAsBSpline(\n initial_transform, inPlace=True, scaleFactors=[1, 2, 4]\n )\n registration_method.SetMetricAsMeanSquares()\n # Settings for metric sampling, usage of a mask is optional. When given a mask the sample points will be\n # generated inside that region. 
Also, this implicitly speeds things up as the mask is smaller than the\n # whole image.\n registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)\n registration_method.SetMetricSamplingPercentage(0.01)\n if fixed_image_mask:\n registration_method.SetMetricFixedMask(fixed_image_mask)\n\n # Multi-resolution framework.\n registration_method.SetShrinkFactorsPerLevel(shrinkFactors=[4, 2, 1])\n registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2, 1, 0])\n registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()\n\n registration_method.SetInterpolator(sitk.sitkLinear)\n # Use the LBFGS2 instead of LBFGS. The latter cannot adapt to the changing control grid resolution.\n registration_method.SetOptimizerAsLBFGS2(\n solutionAccuracy=1e-2, numberOfIterations=100, deltaConvergenceTolerance=0.01\n )\n\n # If corresponding points in the fixed and moving image are given then we display the similarity metric\n # and the TRE during the registration.\n if fixed_points and moving_points:\n registration_method.AddCommand(\n sitk.sitkStartEvent, rc.metric_and_reference_start_plot\n )\n registration_method.AddCommand(\n sitk.sitkEndEvent, rc.metric_and_reference_end_plot\n )\n registration_method.AddCommand(\n sitk.sitkIterationEvent,\n lambda: rc.metric_and_reference_plot_values(\n registration_method, fixed_points, moving_points\n ),\n )\n\n return registration_method.Execute(fixed_image, moving_image)\n\n#%%timeit -r1 -n1\n\n# Select the fixed and moving images, valid entries are in [0,9].\nfixed_image_index = 0\nmoving_image_index = 7\n\n\ntx = bspline_intra_modal_registration2(\n fixed_image=images[fixed_image_index],\n moving_image=images[moving_image_index],\n fixed_image_mask=(masks[fixed_image_index] == lung_label),\n fixed_points=points[fixed_image_index],\n moving_points=points[moving_image_index],\n)\n(\n initial_errors_mean,\n initial_errors_std,\n _,\n initial_errors_max,\n initial_errors,\n) = ru.registration_errors(\n 
sitk.Euler3DTransform(), points[fixed_image_index], points[moving_image_index]\n)\n(\n final_errors_mean,\n final_errors_std,\n _,\n final_errors_max,\n final_errors,\n) = ru.registration_errors(tx, points[fixed_image_index], points[moving_image_index])\n\nplt.hist(initial_errors, bins=20, alpha=0.5, label=\"before registration\", color=\"blue\")\nplt.hist(final_errors, bins=20, alpha=0.5, label=\"after registration\", color=\"green\")\nplt.legend()\nplt.title(\"TRE histogram\")\nprint(\n f\"Initial alignment errors in millimeters, mean(std): {initial_errors_mean:.2f}({initial_errors_std:.2f}), max: {initial_errors_max:.2f}\"\n)\nprint(\n f\"Final alignment errors in millimeters, mean(std): {final_errors_mean:.2f}({final_errors_std:.2f}), max: {final_errors_max:.2f}\"\n)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
josh-gree/maths-with-python
|
07-sympy.ipynb
|
mit
|
[
"In standard mathematics we routinely write down abstract variables or concepts and manipulate them without ever assigning specific values to them. An example would be the quadratic equation\n\\begin{equation}\n a x^2 + b x + c = 0\n\\end{equation}\nand its roots $x_{\\pm}$: we can write down the solutions of the equation and discuss the existence, within the real numbers, of the roots, without specifying the particular values of the parameters $a, b$ and $c$.\nIn a standard computer programming language, we can write functions that encapsulate the solutions of the equation, but calling those functions requires us to specify values of the parameters. In general, the value of a variable must be given before the variable can be used.\nHowever, there do exist Computer Algebra Systems that can perform manipulations in the \"standard\" mathematical form. Through the university you will have access to Wolfram Mathematica and Maple, which are commercial packages providing a huge range of mathematical tools. There are also freely available packages, such as SageMath and sympy. These are not always easy to use, as all CAS have their own formal languages that rarely perfectly match your expectations.\nHere we will briefly look at sympy, which is a pure Python CAS. sympy is not suitable for complex calculations, as it's far slower than the alternatives. However, it does interface very cleanly with Python, so can be used inside Python code, especially to avoid entering lengthy expressions.\nsympy\nSetting up\nSetting up sympy is straightforward:",
"import sympy\nsympy.init_printing()",
"The standard import command is used. The init_printing command looks at your system to find the clearest way of displaying the output; this isn't necessary, but is helpful for understanding the results.\nTo do anything in sympy we have to explicitly tell it if something is a variable, and what name it has. There are two commands that do this. To declare a single variable, use",
"x = sympy.Symbol('x')",
"To declare multiple variables at once, use",
"y, z0 = sympy.symbols(('y', 'z_0'))",
"Note that the \"name\" of the variable does not need to match the symbol with which it is displayed. We have used this with z0 above:",
"z0",
"Once we have variables, we can define new variables by operating on old ones:",
"a = x + y\nb = y * z0\nprint(\"a={}. b={}.\".format(a, b))\n\na",
"In addition to variables, we can also define general functions. There is only one option for this:",
"f = sympy.Function('f')",
"In-built functions\nWe have seen already that mathematical functions can be found in different packages. For example, the $\\sin$ function appears in math as math.sin, acting on a single number. It also appears in numpy as numpy.sin, where it can act on vectors and arrays in one go. sympy re-implements many mathematical functions, for example as sympy.sin, which can act on abstract (sympy) variables.\nWhenever using sympy we should use sympy functions, as these can be manipulated and simplified. For example:",
"c = sympy.sin(x)**2 + sympy.cos(x)**2\n\nc\n\nc.simplify()",
"Note the steps taken here. c is an object, something that sympy has created. Once created it can be manipulated and simplified, using the methods on the object. It is useful to use tab completion to look at the available commands. For example,",
"d = sympy.cosh(x)**2 - sympy.sinh(x)**2",
"Now type d. and then tab, to inspect all the available methods. As before, we could do",
"d.simplify()",
"but there are many other options.\nSolving equations\nLet us go back to our quadratic equation and check the solution. To define an equation we use the sympy.Eq function:",
"a, b, c, x = sympy.symbols(('a', 'b', 'c', 'x'))\nquadratic_equation = sympy.Eq(a*x**2+b*x+c, 0)\nsympy.solve(quadratic_equation)",
"What happened here? sympy is not smart enough to know that we wanted to solve for x! Instead, it solved for the first variable it encountered. Let us try again:",
"sympy.solve(quadratic_equation, x)",
"This is our expectation: multiple solutions, returned as a list. We can access and manipulate these results:",
"roots = sympy.solve(quadratic_equation, x)\nxplus, xminus = sympy.symbols(('x_{+}', 'x_{-}'))\nxplus = roots[0]\nxminus = roots[1]",
"We can substitute in specific values for the parameters to find solutions:",
"xplus_solution = xplus.subs([(a,1), (b,2), (c,3)])\nxplus_solution",
"We have a list of substitutions. Each substitution is given by a tuple, containing the variable to be replaced, and the expression replacing it. We do not have to substitute in numbers, as here, but could use other variables:",
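One caveat not shown above: with a list, subs applies the substitutions sequentially, so an expression introduced by an earlier replacement can itself be replaced by a later one. sympy's `simultaneous` flag avoids this. A minimal sketch:

```python
import sympy

x, y = sympy.symbols('x y')
expr = x + 2*y

# Sequential (the default): x -> y first turns the expression into 3*y,
# and the later y -> 1 then gives 3.
sequential = expr.subs([(x, y), (y, 1)])

# Simultaneous: both replacements act on the original expression,
# so x -> y and y -> 1 give y + 2.
simultaneous = expr.subs([(x, y), (y, 1)], simultaneous=True)
```

For substitutions whose replacement expressions do not mention the other variables being replaced, the two behaviours coincide.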
"xminus_solution = xminus.subs([(b,a), (c,a+z0)])\nxminus_solution\n\nxminus_solution.simplify()",
"We can use similar syntax to solve systems of equations, such as\n\\begin{align}\n x + 2 y &= 0, \\\\ xy &= z_0.\n\\end{align}",
"eq1 = sympy.Eq(x+2*y, 0)\neq2 = sympy.Eq(x*y, z0)\nsympy.solve([eq1, eq2], [x, y])",
"Differentiation and integration\nDifferentiation\nThere is a standard function for differentiation, diff:",
"expression = x**2*sympy.sin(sympy.log(x))\nsympy.diff(expression, x)",
"A parameter can control how many times to differentiate:",
"sympy.diff(expression, x, 3)",
"Partial differentiation with respect to multiple variables can also be performed by increasing the number of arguments:",
"expression2 = x*sympy.cos(y**2 + x)\nsympy.diff(expression2, x, 2, y, 3)",
"There is also a function representing an unevaluated derivative:",
"sympy.Derivative(expression2, x, 2, y, 3)",
"These can be useful for display, building up a calculation in stages, simplification, or when the derivative cannot be evaluated. It can be explicitly evaluated using the doit function:",
"sympy.Derivative(expression2, x, 2, y, 3).doit()",
"Integration\nIntegration uses the integrate function. This can calculate either definite or indefinite integrals, but will not include the integration constant.",
"integrand=sympy.log(x)**2\nsympy.integrate(integrand, x)\n\nsympy.integrate(integrand, (x, 1, 10))",
"The definite integral is specified by passing a tuple, with the variable to be integrated (here x) and the lower and upper limits (which can be expressions).\nNote that sympy includes an \"infinity\" object oo (two o's), which can be used in the limits of integration:",
"sympy.integrate(sympy.exp(-x), (x, 0, sympy.oo))",
"Multiple integration for higher dimensional integrals can be performed:",
"sympy.integrate(sympy.exp(-(x+y))*sympy.cos(x)*sympy.sin(y), x, y)\n\nsympy.integrate(sympy.exp(-(x+y))*sympy.cos(x)*sympy.sin(y), \n (x, 0, sympy.pi), (y, 0, sympy.pi))",
"Again, there is an unevaluated integral:",
"sympy.Integral(integrand, x)\n\nsympy.Integral(integrand, (x, 1, 10))",
"Again, the doit method will explicitly evaluate the result where possible.\nDifferential equations\nDefining and solving differential equations uses the pattern from the previous sections. We'll use the same example problem as in the scipy case, \n\\begin{equation}\n \\frac{\\text{d} y}{\\text{d} t} = e^{-t} - y, \\qquad y(0) = 1.\n\\end{equation}\nFirst we define that $y$ is a function, currently unknown, and $t$ is a variable.",
"y = sympy.Function('y')\nt = sympy.Symbol('t')",
"y is a general function, and can be a function of anything at this point (any number of variables with any name). To use it consistently, we must refer to it explicitly as a function of $t$ everywhere. For example,",
"y(t)",
"We then define the differential equation. sympy.Eq defines the equation, and diff differentiates:",
"ode = sympy.Eq(y(t).diff(t), sympy.exp(-t) - y(t))\node",
"Here we have used diff as a method applied to the function. As sympy can't differentiate $y(t)$ (as it doesn't have an explicit value), it leaves it unevaluated.\nWe can now use the dsolve function to get the solution to the ODE. The syntax is very similar to the solve function used above:",
"sympy.dsolve(ode, y(t))",
"This is simple enough to solve, but we'll use symbolic methods to find the constant, by setting $t = 0$ and $y(t) = y(0) = 1$.",
"general_solution = sympy.dsolve(ode, y(t))\nvalue = general_solution.subs([(t,0), (y(0), 1)])\nvalue",
"We then find the specific solution of the ODE.",
"ode_solution = general_solution.subs([(value.rhs,value.lhs)])\node_solution",
"Plotting\nsympy provides an interface to matplotlib so that expressions can be directly plotted. For example,",
"%matplotlib inline\nfrom matplotlib import rcParams\nrcParams['figure.figsize']=(12,9)\n\nsympy.plot(sympy.sin(x));",
"We can explicitly set limits, for example",
"sympy.plot(sympy.exp(-x)*sympy.sin(x**2), (x, 0, 1));",
"We can plot the solution to the differential equation computed above:",
"sympy.plot(ode_solution.rhs, xlim=(0, 1), ylim=(0.7, 1.05));",
"This can be visually compared to the previous result. However, we would often like a more precise comparison, which requires numerically evaluating the solution to the ODE at specific points.\nlambdify\nAt the end of a symbolic calculation using sympy we will have a result that is often long and complex, and that is needed in another part of the code. We could type the appropriate expression in by hand, but this is tedious and error prone. A better way is to make the computer do it.\nThe example we use here is the solution to the ODE above. We have solved it symbolically, and the result is straightforward. We can also solve it numerically using scipy. We want to compare the two.\nFirst, let us compute the scipy numerical result:",
"from numpy import exp\nfrom scipy.integrate import odeint\nimport numpy\n\ndef dydt(y, t):\n \"\"\"\n Defining the ODE dy/dt = e^{-t} - y.\n \n Parameters\n ----------\n \n y : real\n The value of y at time t (the current numerical approximation)\n t : real\n The current time t\n \n Returns\n -------\n \n dydt : real\n The RHS function defining the ODE.\n \"\"\"\n \n return exp(-t) - y\n\nt_scipy = numpy.linspace(0.0, 1.0)\ny0 = [1.0]\n\ny_scipy = odeint(dydt, y0, t_scipy)",
"We want to evaluate our sympy solution at the same points as our scipy solution, in order to do a direct comparison. In order to do that, we want to construct a function that computes our sympy solution, without typing it in. That is what lambdify is for: it creates a function from a sympy expression.\nFirst let us get the expression explicitly:",
"ode_expression = ode_solution.rhs\node_expression",
"Then we construct the function using lambdify:",
"from sympy.utilities.lambdify import lambdify\n\node_function = lambdify((t,), ode_expression, modules='numpy')",
"The first argument to lambdify is a tuple containing the arguments of the function to be created. In this case that's just t, the time(s) at which we want to evaluate the expression. The second argument to lambdify is the expression that we want converted into a function. The third argument, which is optional, tells lambdify that where possible it should use numpy functions. This means that when we call the function with numpy arrays, it will calculate using numpy array expressions, doing the whole calculation in a single call.\nWe now have a function that we can directly call:",
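The same mechanism works for expressions of several variables. A small standalone illustration (independent of the ODE example above):

```python
import numpy
import sympy

a, b = sympy.symbols('a b')
expression = a**2 + sympy.sin(b)

# A function of two arguments; with modules='numpy' the evaluation
# is vectorized over numpy arrays.
f = sympy.lambdify((a, b), expression, modules='numpy')

values = f(numpy.array([1.0, 2.0]), numpy.array([0.0, 0.0]))
```

Each entry of the argument tuple becomes one positional argument of the generated function, in the given order.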
"print(\"sympy solution at t=0: {}\".format(ode_function(0.0)))\nprint(\"sympy solution at t=0.5: {}\".format(ode_function(0.5)))",
"And we can directly apply this function to the times at which the scipy solution is constructed, for comparison:",
"y_sympy = ode_function(t_scipy)",
"Now we can use matplotlib to plot both on the same figure:",
"from matplotlib import pyplot\npyplot.plot(t_scipy, y_scipy[:,0], 'b-', label='scipy')\npyplot.plot(t_scipy, y_sympy, 'k--', label='sympy')\npyplot.xlabel(r'$t$')\npyplot.ylabel(r'$y$')\npyplot.legend(loc='upper right')\npyplot.show()",
"We see good visual agreement everywhere. But how accurate is it?\nNow that we have numpy arrays explicitly containing the solutions, we can manipulate these to see the differences between solutions:",
"pyplot.semilogy(t_scipy, numpy.abs(y_scipy[:,0]-y_sympy))\npyplot.xlabel(r'$t$')\npyplot.ylabel('Difference in solutions');",
"The accuracy is around $10^{-8}$ everywhere - by modifying the accuracy of the scipy solver this can be made more accurate (if needed) or less (if the calculation takes too long and high accuracy is not required).\nFurther reading\nsympy has detailed documentation and a useful tutorial.\nExercise: systematic ODE solving\nWe are interested in the solution of\n\\begin{equation}\n \\frac{\\text{d} y}{\\text{d} t} = e^{-t} - y^n, \\qquad y(0) = 1,\n\\end{equation}\nwhere $n > 1$ is an integer. The \"minor\" change from the above examples means that sympy can only give the solution as a power series.\nExercise 1\nCompute the general solution as a power series for $n = 2$.\nExercise 2\nInvestigate the help for the dsolve function to straightforwardly impose the initial condition $y(0) = 1$ using the ics argument. Using this, compute the specific solutions that satisfy the ODE for $n = 2, \\dots, 10$.\nExercise 3\nUsing the removeO command, plot each of these solutions for $t \\in [0, 1]$."
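As a pointer for Exercise 2, the `ics` argument takes a dictionary mapping conditions to values. A minimal sketch on the linear ODE solved earlier in this notebook (not on the exercise's equation, so it does not give away the answer):

```python
import sympy

t = sympy.Symbol('t')
y = sympy.Function('y')
ode = sympy.Eq(y(t).diff(t), sympy.exp(-t) - y(t))

# ics fixes the integration constant directly, replacing the manual
# substitution step used earlier in the notebook.
solution = sympy.dsolve(ode, y(t), ics={y(0): 1})
```

The result is the specific solution $y(t) = (t + 1) e^{-t}$ in one call.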
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
quantopian/research_public
|
notebooks/lectures/Maximum_Likelihood_Estimation/questions/notebook.ipynb
|
apache-2.0
|
[
"Exercises: Maximum Likelihood Estimation\nBy Christopher van Hoecke, Max Margenot, and Delaney Mackenzie\nLecture Link:\nhttps://www.quantopian.com/lectures/maximum-likelihood-estimation\nIMPORTANT NOTE:\nThis lecture corresponds to the Maximum Likelihood Estimation lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult.\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public\n\nNotebook released under the Creative Commons Attribution 4.0 License.\n\nKey concepts\nNormal Distribution MLE Estimators: \n$$\n\\hat\\mu = \\frac{1}{T}\\sum_{t=1}^{T} x_t \\qquad \\hat\\sigma = \\sqrt{\\frac{1}{T}\\sum_{t=1}^{T}{(x_t - \\hat\\mu)^2}}\n$$\nExponential Distribution MLE Estimators: \n$$\\hat\\lambda = \\frac{\\sum_{t=1}^{T} x_t}{T}$$",
"# Useful Libraries\nimport pandas as pd\nimport math\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy\nimport scipy.stats as stats",
"Exercise 1: Normal Distribution\n\nGiven the equations above, write down functions to calculate the MLE estimators $\\hat{\\mu}$ and $\\hat{\\sigma}$ of the normal distribution. \nGiven the sample normally distributed set, find the maximum likelihood $\\hat{\\mu}$ and $\\hat{\\sigma}$.\nFit the data to a normal distribution using SciPy. Compare SciPy's calculated parameters with your calculated values of $\\hat{\\mu}$ and $\\hat{\\sigma}$.\nPlot a normal distribution PDF with your estimated parameters",
"# Normal mean and standard deviation MLE estimators\ndef normal_mu(X):\n    # Get the number of observations\n    T = #______# Your code goes here\n    # Sum the observations\n    s = #______# Your code goes here\n    return 1.0/T * s\n\ndef normal_sigma(X):\n    T = #______# Your code goes here\n    # Get the mu MLE\n    mu = #______# Your code goes here\n    # Sum the square of the differences\n    s = #______# Your code goes here\n    # Compute sigma^2\n    sigma_squared = #______# Your code goes here\n    return math.sqrt(sigma_squared)\n\n# Normal Distribution Sample Data\nTRUE_MEAN = 40\nTRUE_STD = 10\nX = np.random.normal(TRUE_MEAN, TRUE_STD, 10000000)\n\n# Use your functions to compute the MLE mu and sigma\nmu = #______# Your code goes here\nstd = #______# Your code goes here\n\nprint 'Maximum likelihood value of mu:', mu\nprint 'Maximum likelihood value for sigma:', std\n\n# Fit the distribution using SciPy and compare those parameters with yours \nscipy_mu, scipy_std = #______# Your code goes here\nprint 'Scipy Maximum likelihood value of mu:', scipy_mu\nprint 'Scipy Maximum likelihood value for sigma:', scipy_std\n\n# Get the PDF, fill it with your calculated parameters, and plot it along x\nx = np.linspace(0, 80, 80)\n\nplt.hist(X, bins=x, normed='true')\nplt.plot(pdf(x, loc=mu, scale=std), color='red')\nplt.xlabel('Value')\nplt.ylabel('Observed Frequency')\nplt.legend(['Fitted Distribution PDF', 'Observed Data', ]);",
"Exercise 2: Exponential Distribution\n\nGiven the equations above, write down functions to calculate the MLE estimator $\\hat{\\lambda}$ of the exponential distribution\nGiven the sample exponentially distributed set, find the maximum likelihood\nFit the data to an exponential distribution using SciPy. Compare SciPy's calculated parameter with your calculated values of $\\hat{\\lambda}$\nPlot an exponential distribution PDF with your estimated parameter",
"def exp_lambda(X):\n    T = #______# Your code goes here\n    s = #______# Your code goes here\n    return s/T\n\n# Exponential distribution sample data\nTRUE_LAMBDA = 5\nX = np.random.exponential(TRUE_LAMBDA, 1000)\n\n# Use your functions to compute the MLE lambda\nlam = #______# Your code goes here\nprint \"Lambda estimate: \", lam\n\n# Fit the distribution using SciPy and compare that parameter with yours \n_, l = #______# Your code goes here\nprint 'Scipy lambda estimate: ', l\n\n# Get the PDF, fill it with your calculated parameter, and plot it along x\nx = range(0, 80)\n\nplt.hist(X, bins=x, normed='true')\nplt.plot(pdf(x, scale=l), color = 'red')\nplt.xlabel('Value')\nplt.ylabel('Observed Frequency')\nplt.legend(['Fitted Distribution PDF', 'Observed Data', ]);",
"Exercise 3 : Fitting Data Using MLE\n\nUsing the MLE estimators laid out in the lecture, fit the returns for SPY from 2014 to 2015 to a normal distribution. \nCheck for normality using the Jarque-Bera test",
"prices = get_pricing('SPY', \n fields='price', \n start_date='2016-01-04', \n end_date='2016-01-05', \n frequency = 'minute')\nreturns = prices.pct_change()[1:]\n\nmu = #______# Your code goes here\nstd = #______# Your code goes here\n\nx = np.linspace(#______# Your code goes here)\nh = plt.hist(#______# Your code goes here)\nl = plt.plot(#______# Your code goes here)\nplt.show(h, l);",
"Recall that this fit only makes sense if we have normally distributed data.",
"alpha = 0.05\nstat, pval = #______# Your code goes here\nprint pval\n\nif pval > alpha: \n    print 'Fail to reject our null hypothesis'\nif pval < alpha: \n    print 'Reject our null hypothesis'",
"Congratulations on completing the Maximum Likelihood Estimation exercises!\nAs you learn more about writing trading models and the Quantopian platform, enter the daily Quantopian Contest. Your strategy will be evaluated for a cash prize every day.\nStart by going through the Writing a Contest Algorithm tutorial.\nThis presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mdbenito/ModelSelection
|
examples/MSE-Q2231975.ipynb
|
gpl-3.0
|
[
"Bayesian model selection and linear regression\nThis notebook uses Bayesian selection for linear regression with basis functions in order to (partially) answer question #2231975 in Math StackExchange. The necessary code can be found in bitbucket.\nThe idea is to use a fixed set of simple functions to interpolate a given (small) dataset. Non-parametric regression will yield almost perfect results but it seemed not to be an option for the OP, so this is one possibility.\nWe begin with the usual boilerplate importing the necessary modules. Note the manipulation of the imports path in order to access the code in the local repository.",
"import sys\nsys.path.append(\"../src/\")\nfrom Hypotheses import *\nfrom ModelSelection import LinearRegression\nfrom Plots import updateMAPFitPlot, updateProbabilitiesPlot\nimport numpy as np\nfrom sklearn import preprocessing\nimport matplotlib.pyplot as pl\n%matplotlib notebook",
"We now load data and normalize it to have zero mean and variance 1. This is required to avoid numerical issues: for large values of the target values, some probabilities in the computations become zero because of the exponential function ($e^{-t}$ becomes almost zero for relatively small values of $t$).",
"data = np.loadtxt('data-2231875.txt', delimiter=',', skiprows=1)\ndata[:,1] = preprocessing.scale(data[:,1])\npl.title(\"The (normalized) dataset\")\n_ = pl.plot(data[:,0], data[:,1])\n#pl.savefig('data.svg')",
"Next we prepare a set of hypothesis spaces to be tested against each other. Because it's easy and already implemented in the repo, we take two polynomial and two trigonometric families of basis functions.",
"var = data[:, 1].std() # Should be approx. 1 after scaling\nsigma = 0.1 # Observation noise sigma \nhc = HypothesisCollection()\nhc.append(PolynomialHypothesis(M=5, variance=var, noiseVariance=sigma**2))\nhc.append(PolynomialHypothesis(M=6, variance=var, noiseVariance=sigma**2))\nhc.append(TrigonometricHypothesis(halfM=4, variance=var, noiseVariance=sigma**2))\nhc.append(TrigonometricHypothesis(halfM=6, variance=var, noiseVariance=sigma**2))\n\nlr = LinearRegression(hc, sigma)",
"We now perform Bayesian updates to our belief in each hypothesis space. Each data point is fed to the LinearRegression object, which then performs:\n1. Estimation of the weights for each hypothesis.\n2. Computation of the posterior probability of each hypothesis, given the data.",
"%%time\nymin, ymax = min(data[:,1]), max(data[:,1])\n# Looping is ugly, but it is what it is! :P\nfor x, y in data:\n lr.update(x, y)\n\n# MAP values for the weights w_j\nwmap = [param.mean for param in lr.parameter]\n\nfig, (ax1, ax2) = pl.subplots(2)\nupdateMAPFitPlot(ax1, lr.XHist, lr.hypotheses, wmap, 0.005) \nax1.plot(lr.XHist, lr.THist, 'k+', ms=4, alpha=0.5) # plot the data points\nax1.set_title(\"Data and MAP fits\")\nupdateProbabilitiesPlot(ax2, lr)\nax2.set_title(\"Incremental model probability\")\nfig.subplots_adjust(hspace=0.5)\n#pl.savefig('mapfits.svg')\n_ = pl.show()",
"The winner among the hypotheses proposed is clearly the Trigonometric hypothesis ($H_3$) with $M=12$ basis functions:\n$$\\phi_j (x) = \\cos (\\pi j x)\\ \\text{ for }\\ j = 2 k,$$\n$$\\phi_j (x) = \\sin (\\pi j x)\\ \\text{ for }\\ j = 2 k+1,$$\nwhere $k \\in \\{0, \\ldots, M/2 - 1\\}$. Our best candidate is then\n$$f(x) = \\sum_{j=0}^{11} w_j \\phi_j (x).$$\nThe specific values of the weights $w_j$ are taken from the a posteriori distribution computed (Gaussian, since we started with a Gaussian prior). Their MAP values are:",
"prob_hypotheses = np.array(lr.probHyp)\nwinner = np.argmax(prob_hypotheses[:,-1])\nwmap[winner].round(2).flatten()",
"Note how the model comparison rejects the hypothesis Trig7 after seeing about half the dataset and leans in favor of Trig11, which becomes a better fit. This might come at a cost later, though, because Trig11 is a wildly oscillating function beyond the interval considered, whereas Trig7 is a bit more tame. More data would be needed to decide and, besides, you really don't want to extrapolate with your regression ;)",
"xx = np.linspace(-1,2,200)\nfor h, w, l in zip(lr.hypotheses[2:], wmap[2:], ['Trig7', 'Trig11']):\n pl.plot(xx, [np.dot(h.evaluate(x).flatten(), w) for x in xx], label=l)\npl.title(\"Complexity in competing hypotheses\")\n_ = pl.legend()\n#pl.savefig('complexity.svg')",
"It is important to note that all this doesn't mean that either hypothesis is good (nor that extrapolating beyond the range of the dataset would be wise, no matter how good the fit is), only that one is better than the other. At this point we would need to try more hypothesis spaces, perhaps including functions with compact support at multiple scales and locations. And of course more data."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
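To make the basis expansion from the notebook above concrete, here is a minimal, self-contained sketch of evaluating $f(x) = \sum_j w_j \phi_j(x)$ for the winning trigonometric hypothesis (the helper names are illustrative, not the repository's own API):

```python
import numpy as np

def trig_basis(x, M=12):
    # phi_j(x) = cos(pi*j*x) for even j, sin(pi*j*x) for odd j
    return np.array([np.cos(np.pi * j * x) if j % 2 == 0
                     else np.sin(np.pi * j * x)
                     for j in range(M)])

def predict(x, w):
    # f(x) = sum_j w_j * phi_j(x)
    return float(np.dot(trig_basis(x, len(w)), w))

# With only the constant weight set, f(x) = cos(0) = 1 everywhere.
w = np.zeros(12)
w[0] = 1.0
print(predict(0.3, w))  # -> 1.0
```

In the notebook itself, the weight vector would come from the MAP values of the posterior rather than being set by hand.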
Sz593/coursera_ml_notes
|
Jupyter Notebooks/ex1/ex1 - Linear Regression.ipynb
|
mit
|
[
"import os\nimport sys\n\nimport numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n%matplotlib inline",
"1 Simple Octave/MATLAB Function\nAs a quick warm-up, create a function to return a 5x5 identity matrix.",
"A = np.eye(5)\nprint(A)",
"2 Linear Regression with One Variable\nIn this part of this exercise, you will implement linear regression with one variable to predict profits for a food truck. Suppose you are the CEO of a restaurant franchise and are considering different cities for opening a new outlet. The chain already has trucks in various cities and you have data for profits and populations from the cities.\nYou would like to use this data to help you select which city to expand to next. The file ex1data1.txt contains the dataset for our linear regression problem. The first column is the population of a city and the second column is the profit of a food truck in that city. A negative value for profit indicates a loss.\n2.1 Plotting the Data\nBefore starting on any task, it is often useful to understand the data by visualizing it. For this dataset, you can use a scatter plot to visualize the data, since it has only two properties to plot (profit and population). (Many other problems that you will encounter in real life are multi-dimensional and can't be plotted on a 2-d plot.)",
"datafile = 'ex1\\\\ex1data1.txt'\ndf = pd.read_csv(datafile, header=None, names=['Population', 'Profit'])\n\ndef plot_data(x, y):\n plt.figure(figsize=(10, 6))\n plt.plot(x, y, '.', label='Training Data')\n plt.xlabel(\"Population of City in 10,000s\", fontsize=16)\n plt.ylabel(\"Profit in $10,000s\", fontsize=16)\n\nimport os\nimport sys\nimport datetime as dt\n\nfp_list_master = ['C:', 'Users', 'szahn', 'Dropbox', 'Statistics & Machine Learning', 'coursera_ml_notes']\nfp = os.sep.join(fp_list_master)\nfp_fig = fp + os.sep + 'LaTeX Notes' + os.sep + 'Figures'\nprint(os.path.isdir(fp), os.path.isdir(fp_fig))\n\nplot_data(df['Population'], df['Profit'])\n#plt.savefig(fp_fig + os.sep + 'linreg_hw_2_1_plot_data.pdf')",
"2.2 Gradient Descent\nIn this part, you will fit the linear regression parameters $\\theta$ to our dataset using gradient descent.\n2.2.1 Update Equations\nThe objective of linear regression is to minimize the cost function\n$$\nJ\\left( \\theta \\right) = \\frac{1}{2m} \\sum_{i=1}^m \\left( h_\\theta \\left( x^{\\left( i\\right)} \\right) - y^{\\left( i \\right)} \\right)^2\n$$\nwhere $h_\\theta\\left( x \\right)$ is the hypothesis given by the linear model\n$$\nh_\\theta\\left( x \\right) = \\theta^\\intercal x = \\theta_0 + \\theta_1 x_1\n$$\nRecall that the parameters of your model are the $\\theta_j$ values. These are the values you will adjust to minimize cost $J(\\theta)$. One way to do this is to use the batch gradient descent algorithm. In batch gradient descent, each iteration performs the update\n$$\n\\theta_j := \\theta_j - \\alpha\\frac{1}{m}\\sum_{i=1}^m \\left( h_\\theta\\left( x^{\\left( i\\right)} \\right) - y^{\\left(i\\right)}\\right) x_j^{\\left(i\\right)} \\;\\;\\;\\;\\;\\;\\;\\;\\;\\; \\text{simultaneously update } \\theta_j \\text{ for all } j \\text{.}\n$$\nWith each step of gradient descent, your parameters $\\theta_j$ come closer to the optimal values that will achieve the lowest cost $J(\\theta)$.\n2.2.2 Implementation\nIn the following lines, we add another dimension to our data to accommodate the $\\theta_0$ intercept term.",
"# set the number of training examples\nm = len(df['Population'])\n\n# create an array from the dataframe (missing column for x_0 values)\nX = df['Population'].values\n\n# add in the first column of the array for x_0 values\nX = X[:, np.newaxis]\nX = np.insert(X, 0, 1, axis=1)\n\ny = df['Profit'].values\ny = y[:, np.newaxis]",
"Let's make the (totally random) guess that $\\theta_0$ = 0 and $\\theta_1$ = 0. In that case, we have the following output from the hypothesis function.",
"theta_values = np.array([[0.], [0]])\nprint(theta_values.shape)\nprint(X.shape, end='\\n\\n')\n\n_ = np.dot(X, theta_values)\nprint(_.shape)",
"2.2.3 Computing the Cost $J(\\theta)$\nNow, we can define our actual hypothesis function for linear regression with a single variable.",
"# define the hypothesis\ndef h(theta, X):\n \"\"\"Takes the dot product of the matrix X and the vector theta,\n yielding a predicted result.\n \"\"\"\n return np.dot(X, theta)\n\ndef compute_cost(X, y, theta):\n \"\"\"Takes the design matrix X and output vector y, and computes the cost of\n the parameters stored in the vector theta.\n \n The dimensions must be as follows:\n - X must be m x n\n - y must be m x 1\n - theta must be n x 1\n \n \"\"\"\n m = len(y)\n \n J = 1 / (2*m) * np.dot((np.dot(X, theta) - y).T, (np.dot(X, theta) - y))\n return J\n\n# define column vector theta = [[0], [0]]\ntheta = np.zeros((2, 1))\n\n# compute the cost function for our existing X and y, with our new theta vector\n# verify that the cost for our theta of zeros is 32.07\ncompute_cost(X, y, theta)",
"Gradient Descent\nNow we'll actually implement the gradient descent algorithm. Keep in mind that the cost $J(\\theta)$ is parameterized by the vector $\\theta$, not $X$ and $y$. That is, we minimize $J(\\theta)$ by changing $\\theta$. We initialize the parameters to 0 and the learning rate alpha to 0.01.",
"def gradient_descent(X, y, theta, alpha, num_iters):\n \"\"\"\n \n \n \"\"\"\n m = len(y)\n J_history = []\n theta_history = []\n \n for i in range(num_iters):\n J_history.append(float(compute_cost(X, y, theta)))\n theta_history.append(theta)\n theta = theta - (alpha / m) * np.dot(X.T, (np.dot(X, theta) - y))\n \n return theta, J_history, theta_history\n\n# set up some initial parameters for gradient descent\ntheta_initial = np.zeros((2, 1))\niterations = 1500\nalpha = 0.01\n\ntheta_final, J_hist, theta_hist = gradient_descent(X, y, \n theta_initial, \n alpha, iterations)",
"After running the batch gradient descent algorithm, we can plot the convergence of $J(\\theta)$ over the number of iterations.",
"def plot_cost_convergence(J_history):\n abscissa = list(range(len(J_history)))\n ordinate = J_history\n\n plt.figure(figsize=(10, 6))\n plt.plot(abscissa, ordinate, '.')\n plt.title('Convergence of the Cost Function', fontsize=24)\n plt.xticks(fontsize=16)\n plt.yticks(fontsize=16)\n plt.xlabel('Iteration Number', fontsize=18)\n plt.ylabel('Cost Function', fontsize=18)\n plt.xlim(min(abscissa) - max(abscissa) * 0.05, 1.05 * max(abscissa))\n\nplot_cost_convergence(J_hist)\nplt.ylim(4.3, 6.9)\n#plt.savefig(fp_fig + os.sep + 'linreg_hw_2_4_viz_j_of_theta.pdf')\n\nplot_data(df['Population'], df['Profit'])\n\nx_min = min(df.Population)\nx_max = max(df.Population)\nabscissa = np.linspace(x_min, x_max, 50)\nhypot = lambda x: theta_final[0] + theta_final[1] * x\nordinate = [hypot(x) for x in abscissa]\nplt.plot(abscissa, ordinate, label='Hypothesis h(x) = {:.2f} + {:.2f}x'.format(\n float(theta_final[0]), float(theta_final[1])), color='indianred')\n\nplt.legend(loc=4, frameon=True, fontsize=16)\n# plt.savefig(fp_fig + os.sep + 'linreg_hw_2_3_plot_lin_reg.pdf')",
"2.4 Visualizing $J(\\theta)$",
"from mpl_toolkits.mplot3d import axes3d, Axes3D\nfrom matplotlib import cm\n\nfig = plt.figure(figsize=(12, 12))\nax = fig.gca(projection='3d')\n\ntheta_0_vals = np.linspace(-10, 10, 100)\ntheta_1_vals = np.linspace(-1, 4, 100)\n\ntheta1, theta2, cost = [], [], []\n\nfor t0 in theta_0_vals:\n for t1 in theta_1_vals:\n theta1.append(t0)\n theta2.append(t1)\n theta_array = np.array([[t0], [t1]])\n cost.append(compute_cost(X, y, theta_array))\n\nscat = ax.scatter(theta1, theta2, cost, \n c=np.abs(cost), cmap=plt.get_cmap('rainbow'))\nplt.xlabel(r'$\\theta_0$', fontsize=24)\nplt.ylabel(r'$\\theta_1$', fontsize=24)\nplt.title(r'Cost Function by $\\theta_0$ and $\\theta_1$', fontsize=24)\n\ntheta_0_hist = [x[0] for x in theta_hist]\ntheta_1_hist = [x[1] for x in theta_hist]\ntheta_hist_end = len(theta_0_hist) - 1\n\nfig = plt.figure(figsize=(12, 12))\nax = fig.gca(projection='3d')\n\ntheta_0_vals = np.linspace(-10, 10, 100)\ntheta_1_vals = np.linspace(-1, 4, 100)\n\ntheta1, theta2, cost = [], [], []\n\nfor t0 in theta_0_vals:\n for t1 in theta_1_vals:\n theta1.append(t0)\n theta2.append(t1)\n theta_array = np.array([[t0], [t1]])\n cost.append(compute_cost(X, y, theta_array))\n\nscat = ax.scatter(theta1, theta2, cost, \n c=np.abs(cost), cmap=plt.get_cmap('rainbow'))\n\nplt.plot(theta_0_hist, theta_1_hist, J_hist, 'r',\n label='Cost Minimization Path')\nplt.plot(theta_0_hist[0], theta_1_hist[0], J_hist[0], 'ro',\n label='Cost Minimization Start')\nplt.plot(theta_0_hist[theta_hist_end],\n theta_1_hist[theta_hist_end],\n J_hist[theta_hist_end], 'co', label='Cost Minimization Finish')\n\nplt.xlabel(r'$\\theta_0$', fontsize=24)\nplt.ylabel(r'$\\theta_1$', fontsize=24)\nplt.title(r'Cost Function Minimization', fontsize=24)\nplt.legend(fontsize=12)\n\nplt.savefig(fp_fig + os.sep + 'linreg_hw_2_4_plot_surface_plot.pdf')",
"3 Linear Regression with Multiple Variables"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
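As a quick sanity check of the batch gradient-descent update used in the notebook above, the following self-contained sketch (synthetic data, not the exercise's food-truck dataset) recovers parameters that are known in advance:

```python
import numpy as np

# Synthetic data generated exactly by y = 2 + 3x, so gradient descent
# on the squared-error cost should drive theta toward [2, 3].
x = np.linspace(0, 1, 50)
X = np.column_stack([np.ones_like(x), x])  # prepend the x_0 = 1 column
y = (2 + 3 * x)[:, np.newaxis]

theta = np.zeros((2, 1))
alpha, m = 0.5, len(y)
for _ in range(5000):
    # theta := theta - (alpha/m) * X^T (X theta - y), the vectorized update
    theta = theta - (alpha / m) * X.T.dot(X.dot(theta) - y)

print(theta.ravel())  # approximately [2. 3.]
```

The larger learning rate (0.5 instead of the exercise's 0.01) is safe here only because the synthetic feature is already on a [0, 1] scale.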
MatteusDeloge/opengrid
|
notebooks/Demo_Houseprint.ipynb
|
apache-2.0
|
[
"General Imports\n!! IMPORTANT !!\nIf you did NOT install opengrid with pip, \nmake sure the path to the opengrid folder is added to your PYTHONPATH",
"import os\nimport inspect\nimport sys\nimport pandas as pd\nimport charts\n\nfrom opengrid.library import houseprint\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = 16,8",
"Houseprint",
"hp = houseprint.Houseprint()\n# for testing:\n# hp = houseprint.Houseprint(spreadsheet='unit and integration test houseprint')\n\nhp\n\nhp.sites[:5]\n\nhp.get_devices()[:4]\n\nhp.get_sensors('water')[:3]",
"A Houseprint object can be saved as a pickle. It loses its tmpo session however (connections cannot be pickled)",
"hp.save('new_houseprint.pkl')\n\nhp = houseprint.load_houseprint_from_file('new_houseprint.pkl')",
"TMPO\nThe houseprint, sites, devices and sensors all have a get_data method. In order to get these working for the fluksosensors, the houseprint creates a tmpo session.",
"hp.init_tmpo()\nhp._tmpos.debug = False\nhp.sync_tmpos()",
"Lookup sites, devices, sensors based on key\nThese methods return a single object",
"hp.find_site(1)\n\nhp.find_device('FL03001556')\n\nsensor = hp.find_sensor('d5a747b86224834f745f4c9775d70241')\n\nprint(sensor.site)\nprint(sensor.unit)",
"Lookup sites, devices, sensors based on search criteria\nThese methods return a list with objects satisfying the criteria",
"hp.search_sites(inhabitants=5)\n\nhp.search_sensors(type='electricity', direction='Import')",
"Get Data",
"head = pd.Timestamp('20151102')\ntail = pd.Timestamp('20151103')\ndf = hp.get_data(sensortype='water', head=head,tail=tail, diff=True, resample='min', unit='l/min')\ncharts.plot(df, stock=True, show='inline')",
"Site",
"site = hp.find_site(1)\nsite\n\nprint(site.size)\nprint(site.inhabitants)\nprint(site.postcode)\nprint(site.construction_year)\nprint(site.k_level)\nprint(site.e_level)\nprint(site.epc_cert)\n\nsite.devices\n\nsite.get_sensors('electricity')\n\nhead = pd.Timestamp('20150617')\ntail = pd.Timestamp('20150628')\ndf=site.get_data(sensortype='electricity', head=head,tail=tail, diff=True, unit='kW')\ncharts.plot(df, stock=True, show='inline')",
"Device",
"device = hp.find_device('FL03001552')\ndevice\n\ndevice.key\n\ndevice.get_sensors('gas')\n\nhead = pd.Timestamp('20151101')\ntail = pd.Timestamp('20151104')\ndf = hp.get_data(sensortype='gas', head=head,tail=tail, diff=True, unit='kW')\ncharts.plot(df, stock=True, show='inline')",
"Sensor",
"sensor = hp.find_sensor('53b1eb0479c83dee927fff10b0cb0fe6')\nsensor\n\nsensor.key\n\nsensor.type\n\nsensor.description\n\nsensor.system\n\nsensor.unit\n\nhead = pd.Timestamp('20150617')\ntail = pd.Timestamp('20150618')\ndf=sensor.get_data(head,tail,diff=True, unit='W')\ncharts.plot(df, stock=True, show='inline')",
"Getting data for a selection of sensors",
"sensors = hp.search_sensors(type='electricity', system='solar')\nprint(sensors)\ndf = hp.get_data(sensors=sensors, head=head, tail=tail, diff=True, unit='W')\ncharts.plot(df, stock=True, show='inline')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
banyh/ShareIPythonNotebook
|
Gensim - Basics.ipynb
|
gpl-3.0
|
[
"Required modules",
"import numpy as np\nfrom scipy import spatial",
"What is the fastest way to compute cosine similarity in Python?\n\nMethod 1: use spatial.distance.cosine\nMethod 2: use np.dot, dividing by the vector norms ourselves\nMethod 3: brute force with plain Python arithmetic\nMethod 4: HM's method",
"def sim1(n):\n v1 = np.random.randint(0, 100, n)\n v2 = np.random.randint(0, 100, n)\n return 1 - spatial.distance.cosine(v1, v2)\n\ndef sim2(n):\n v1 = np.random.randint(0, 100, n)\n v2 = np.random.randint(0, 100, n)\n return np.dot(v1, v2) / np.linalg.norm(v1) / np.linalg.norm(v2)\n\nimport math\ndef sim3(n):\n v1 = np.random.randint(0, 100, n)\n v2 = np.random.randint(0, 100, n)\n return sum(v1 * v2) / math.sqrt(sum(v1 ** 2)) / math.sqrt(sum(v2 ** 2))\n\nfrom itertools import izip\ndef dot_product(v1, v2):\n return sum(map(lambda x: x[0] * x[1], izip(v1, v2)))\n\ndef sim4(n):\n v1 = np.random.randint(0, 100, n)\n v2 = np.random.randint(0, 100, n)\n prod = dot_product(v1, v2)\n len1 = math.sqrt(dot_product(v1, v1))\n len2 = math.sqrt(dot_product(v2, v2))\n return prod / (len1 * len2)\n\n%timeit sim1(400)\n\n%timeit sim2(400)\n\n%timeit sim3(400)\n\n%timeit sim4(400)",
"Conclusion\nHM's method is the slowest; the combination of np.dot and np.linalg.norm is the fastest.\nDate and time formats",
"from datetime import datetime as dt\n\nstart = dt.now()\nstart.date(), start.time(), start\n\ndt.now() - start",
"logging: consolidating logs from all modules\nThe logging module gives all modules a consistent interface for recording execution logs. A basic logging system has two roles:\n* Logger: multiple loggers form a tree structure and forward their records to the root logger for aggregation\n* Handler: receives records from the root logger and writes them out\nBy default, a StreamHandler is used, which writes logs to standard output. If the filename parameter is given to basicConfig, a FileHandler is created instead, writing to a file.",
"import logging\nfmtstr = '%(asctime)s [%(levelname)s][%(name)s] %(message)s'\ndatefmtstr = '%Y/%m/%d %H:%M:%S'\nif len(logging.getLogger().handlers) >= 1:\n    logging.getLogger().handlers[0].setFormatter(logging.Formatter(fmtstr, datefmtstr))\nelse:\n    logging.basicConfig(format=fmtstr, datefmt=datefmtstr)\n\n# calling logging.warning directly uses the root logger\nlogging.warning(\"please set %d in %s\", 100, \"length\")",
"When calling from within a particular module, use:",
"# add child loggers under the root logger\naaa_logger = logging.getLogger('aaa')\nbbb_logger = aaa_logger.getChild('bbb')\nccc_logger = bbb_logger.getChild('ccc')\n\naaa_logger.warn(\"hello\")\n\nbbb_logger.warn(\"hello\")\n\n# in a logger tree, the logger's name becomes aaa.bbb.ccc\nccc_logger.warn(\"hello\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
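For reference, the winning variant from the timing comparison above (np.dot combined with np.linalg.norm) can be packaged as a small standalone function — a sketch, not part of any library:

```python
import numpy as np

def cosine_similarity(v1, v2):
    # Fastest pure-NumPy variant from the benchmark above:
    # dot product divided by the product of the two vector norms.
    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])
print(cosine_similarity(a, b))  # -> 0.7071... (cos 45 degrees)
```

Note that, unlike scipy.spatial.distance.cosine, this returns the similarity directly rather than the distance (1 - similarity).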
encima/Comp_Thinking_In_Python
|
Session_6/Modules, Imports and Packages.ipynb
|
mit
|
[
"from notebook.services.config import ConfigManager\nfrom IPython.paths import locate_profile\ncm = ConfigManager(profile_dir=locate_profile(get_ipython().profile))\ncm.update('livereveal', {\n 'theme': 'solarized',\n 'transition': 'zoom',\n 'start_slideshow_at': 'selected',\n})",
"Modules, Imports and Packages\nDr. Chris Gwilliams\ngwilliamsc@cardiff.ac.uk\nPython Modules\nWe have seen that there are many things one can do using Python, but this barely touches the surface. \nPython uses modules (a.k.a. libraries) to extend the basic functionality and we have 3 ways of doing this:\nStandard Modules\nThese are modules built into the Python Standard Library, similar to the built-in functions (type, len) that we have been using.\nThe majority of them are listed here\nExercise\nFollow the link in the previous slide and find the documentation for the random module.\nExternal Modules\nThese are libraries, written by developers (like you), that extend the functionality of Python. They do things like:\n- Web Scraping\n- Network Visualisation\n- Neural Networks\n- Gaming\nWe will cover these a bit later in the course.\nLocal Modules\nThese are .py files within your file system and we will look at these later on in the session.\nDO NOT EVER SAVE A SCRIPT WITH THE SAME NAME AS A MODULE YOU USE\nimport statements\nA Python script is, typically, made up of three things at the high level:\n\nimport - modules you can use within your code\nExecutable code - the code you have written\nComments - ignored by the interpreter",
"import random\n\ndir(random)",
"Exercise\nUsing the dir method (and the documentation), import the random module and generate a random number between 42 and 749.",
"import random #import section\n\nprint(random.randrange(42,749)) #code section\n\nfrom random import randrange\n\nprint(randrange(10,300))",
"Modules and Packages\nA Python module is just a .py file, which you can import directly.\nimport config (relates to config.py somewhere on your system)\nA package is a collection of Python modules that you can import all of, or just import the modules you want. For example:\nimport random (all modules in random package)\nfrom random import randint (importing module from packages)\nImport dos and don'ts\nYou can (and will) import many modules in one script, PEP asks that you follow this structure:\npython\nimport standard_library_modules\nimport external_library_modules\nimport local_modules\nMore info on this and other styles can be found here\nYou will also see that some people will import multiple modules in one line:\nimport os, sys, csv, math, random\nDo not do this, it makes your code hard to read and modularise\nHowever, it is good to import multiple modules from the same package in one line:\nfrom random import randrange, randint\nWriting Your Own Local Modules\nExercise\nCreate a file and call it city.py.\nPut some variables in there that describe a city of your choice (size, population, country etc)\nNow, create a file and call it main.py.\nImport your city.py file and print out the city information with formatted strings. \nhttps://gitlab.cs.cf.ac.uk/scm7cg/cm6111_python_modules/tree/master",
"import city\n\nprint(city.name)\nprint(\"This city has {0} people\".format(city.pop))\n\n",
"Installing packages\nEver used aptitude or yum on Linux? These are package managers that allow you to extend the functionality of the system you are using. Python has these, in the form of pip and easy_install.\nExercise\nWhich one of these should you use? (Cite your sources)\n| pip | easy_install |\n|----------------------------------------|--------------------------------------------------------------------------------|\n| actively maintained | partially maintained |\n| part of core python distribution | support for version control |\n| packages downloaded and then installed | packages downloaded and installed asynchronously |\n| allows uninstall | does not provide uninstall functionality |\n| automated installing of requirements | if an install fails, it may not fail cleanly and leave your environment broken |\npip\nThe recommended tool for installing Python packages. \nStands for Pip Installs Packages. Packages can be found on PyPi\nUsage:\npip install <package-name>\nNote: Some packages will require administrator privileges to be installed.\nExercise\n\nInstall a package called blessings\nFind the documentation on PyPi\nWrite a script that uses blessings \nMake the script print Sup, World in bold\nList 3 commands you can run with pip\n\nVirtual Environments (virtual env)\nPicture this: You are given a project to work on in a team. You install some packages, write some code and push it to git. Your team-mates say they cannot get it to run. \nWhy can they not get it to run?\n\nModules not installed\nModules won't install\nDifferent operating system\nDifferent Python version\n\nAny number of these and more.\nVirtualEnv\npip installs packages globally by default. This means your Python code is always affected by the current state of your system. If you upgrade a package to the latest version that breaks what you are working on, it will also break every other project that uses that system.\nVirtualEnv aims to address this.
This package creates an isolated Python environment in a directory with your name of choice.\nFrom here you can specify Python versions, install packages and run code.\nvirtualenv <environment_name>\nThat is how you get started. Do not type this yet!\nWhat does virtualenv <env> do?\nInstalls an isolated Python environment in a directory named after your env variable. All scripts are put into the bin folder, like so:\n\nExercise\nCreate a virtual environment called 'comp_thinking'\nActivating your virtual environment\nUnix: source bin/activate\nWindows: \\Scripts\\activate\nThis adds the scripts in bin to your PATH, so they are executed when you run pip or python.\nYou can also call the scripts directly:\nbin/python <script>.py\nExercise\nActivate your environment!\nDeactivating an environment\nGuess...\ndeactivate\nIt is that simple. If you do not want to use the environment again, then you can simply delete the folder.\nExercise\nYou guessed it, deactivate your environment!\nExercise\n\nCreate a new virtual environment, call it test\nActivate it (or just use the scripts)\nInstall the terminaltables package\nWrite a script to read details about the user (favourite movie/game, age, height etc)\nUse the documentation to print an ASCII table of these data\n\nCommand Line Arguments\nSo, we can output with print and we can input with input. But...input relies on user interaction as the script runs. Here, we can use arguments in the command line to act as our input.\npython <script>.py argument some_other_argument\nNOTE: This is only a brief intro to using arguments and we will come back to these as the course progresses.",
"import sys #system package to read command line args\n\n\nprint(len(sys.argv))\nprint(sys.argv)",
"Exercise\nRewrite the movie exercise you wrote in the last session to use command line arguments\n\nNo homework, do coursework"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
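To complement the sys.argv introduction in the notebook above, here is a minimal sketch of consuming command line arguments (the script and function names are illustrative, not from the course materials):

```python
import sys

def format_args(argv):
    # argv[0] is the script name; everything after it is user input.
    return ["{0}. {1}".format(i, arg)
            for i, arg in enumerate(argv[1:], start=1)]

if __name__ == "__main__":
    # e.g. python movies.py "The Matrix" "Blade Runner"
    for line in format_args(sys.argv):
        print(line)
```

Unlike input(), nothing here blocks waiting for the user: the arguments are fixed the moment the script is launched.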
MLWave/kepler-mapper
|
docs/notebooks/TOR-XGB-TDA.ipynb
|
mit
|
[
"Detecting Encrypted TOR Traffic with Boosting and Topological Data Analysis\nHJ van Veen - MLWave\nWe establish strong baselines for both supervised and unsupervised detection of encrypted TOR traffic.\n**Note:** This article uses the 5-second lag dataset. For better comparison we will use the 15-second lag dataset in the near future.\nIntroduction\nGradient Boosted Decision Trees (GBDT) is a very powerful learning algorithm for supervised learning on tabular data <a href=\"#References\">[1]</a>. Modern implementations include XGBoost <a href=\"#References\">[2]</a>, Catboost <a href=\"#References\">[3]</a>, LightGBM <a href=\"#References\">[4]</a> and scikit-learn's GradientBoostingClassifier <a href=\"#References\">[5]</a>. Of these, especially XGBoost has seen tremendous successes in machine learning competitions <a href=\"#References\">[6]</a>, starting with its introduction during the Higgs Boson Detection challenge in 2014 <a href=\"#References\">[7]</a>. The success of XGBoost can be explained on multiple dimensions: It is a robust implementation of the original algorithms, it is very fast -- allowing data scientists to quickly find better parameters <a href=\"#References\">[8]</a>, it does not suffer much from overfitting, is scale-invariant, and it has an active community providing constant improvements, such as early stopping <a href=\"#References\">[9]</a> and GPU support <a href=\"#References\">[10]</a>.\nAnomaly detection algorithms automatically find samples that are different from regular samples. Many methods exist. We use the Isolation Forest in combination with nearest neighbor distances. The Isolation Forest works by randomly splitting up the data <a href=\"#References\">[11]</a>. Outliers, on average, are easier to isolate through splitting. Nearest neighbor distance looks at the summed distances for a sample and its five nearest neighbors.
Outliers, on average, have a larger distance between their nearest neighbors than regular samples <a href=\"#References\">[12]</a>.\nTopological Data Analysis (TDA) is concerned with the meaning, shape, and connectedness of data <a href=\"#References\">[13]</a>. Benefits of TDA include: Unsupervised data exploration / automatic hypothesis generation, ability to deal with noise and missing values, invariance, and the generation of meaningful compressed summaries. TDA has shown efficient applications in a number of diverse fields: healthcare <a href=\"#References\">[14]</a>, computational biology <a href=\"#References\">[15]</a>, control theory <a href=\"#References\">[16]</a>, community detection <a href=\"#References\">[17]</a>, machine learning <a href=\"#References\">[18]</a>, sports analysis <a href=\"#References\">[19]</a>, and information security <a href=\"#References\">[20]</a>. One tool from TDA is the $MAPPER$ algorithm. $MAPPER$ turns data and data projections into a graph by covering it with overlapping intervals and clustering <a href=\"#References\">[21]</a>. To guide exploration, the nodes of the graph may be colored with a function of interest <a href=\"#References\">[22]</a>. There are an increasing number of implementations of $MAPPER$. We use the open source implementation KeplerMapper from scikit-TDA <a href=\"#References\">[23]</a>.\nThe TOR network allows users to communicate and host content while preserving privacy and anonymity <a href=\"#References\">[24]</a>. As such, it can be used by dissidents and other people who prefer not to be tracked by commercial companies or governments. But these strong privacy and anonymity features are also attractive to criminals.
A 2016 study in 'Survival - Global Politics and Strategy' found at least 57% of TOR websites are involved in illicit behavior, ranging from the trade in illegal arms, counterfeit ID documents, pornography, and drugs, money laundering & credit card fraud, and the sharing of violent material, such as bomb making tutorials and terrorist propaganda <a href=\"#References\">[25]</a>.\nNetwork Intrusion Detection Systems are a first line of defense for governments and companies <a href=\"#References\">[26]</a>. An undetected hacker will try to elevate their privileges, moving from the weakest link to more hardened system-critical network nodes <a href=\"#References\">[27]</a>. If the hacker's goal is to get access to sensitive data (for instance, for resale, industrial espionage, or extortion purposes) then any stolen data needs to be exfiltrated. Similarly, cryptolockers often need to communicate with a command & control server outside the network. Depending on the level of sophistication of the malware or hackers, exfiltration may be open and visible, run encrypted through the TOR network in an effort to hide the destination, or use advanced DNS tunneling techniques.\nMotivation\n\nCurrent Network Intrusion Detection Systems, much like the old spam detectors, rely mostly on rules, signatures, and anomaly detection. Labeled data is scarce. Writing rules is a very costly task requiring domain expertise. Signatures may fail to catch new types of attacks until they are updated. Anomalous/unusual behavior is not necessarily suspicious/adversarial behavior.\nMachine Learning for Information Security suffers a lot from poor false positive rates. False positives lead to alarm fatigue and can swamp an intelligence analyst with irrelevant work.\nDespite the possibility of false positives, it is often better to be safe than sorry. Suspicious network behavior, such as outgoing connections to the TOR network, requires immediate attention.
A network node can be shut down remotely, after which a security engineer can investigate the machine. The best practice of multi-layered security makes this possible <a href=\"#References\">[28]</a>: Instead of a single firewall to rule them all, hackers can be detected in various stages of their network intrusion, up to the final step of data exfiltration.\n\nData\nWe use a dataset written for the paper \"Characterization of Tor Traffic Using Time Based Features\" (Lashkari et al.) <a href=\"#References\">[29]</a>, graciously provided by the Canadian Institute for Cybersecurity <a href=\"#References\">[30]</a>. This dataset combines older research on nonTOR network traffic with more recently captured TOR traffic (both were created on the same network) <a href=\"#References\">[31]</a>. The data includes features that are more specific to the network used, such as the source and destination IP/Port, and a range of time-based features with a 5-second lag.\n|Feature|Type|Description|Time-based|\n|---|---|---|---|\n|'Source IP'|Object|Source IP4 Address. String with dots.|No|\n|' Source Port'|Float|Source Port sending packets.|No|\n|' Destination IP'|Object|Destination IP4 Address.|No|\n|' Destination Port'|Float|Destination Port receiving packets.|No|\n|' Protocol'|Float|Integer [5-17] denoting protocol used.|No|\n|' Flow Duration'|Float|Length of connection in seconds|Yes|\n|' Flow Bytes/s'|Float|Bytes per second sent|Yes|\n|' Flow Packets/s'|Object|Packets per second sent.
Contains \"infinity\" strings.|Yes|\n|' Flow IAT Mean'|Float|Flow Inter Arrival Time.|Yes|\n|' Flow IAT Std'|Float||Yes|\n|' Flow IAT Max'|Float||Yes|\n|' Flow IAT Min'|Float||Yes|\n|'Fwd IAT Mean'|Float|Forward Inter Arrival Time.|Yes|\n|' Fwd IAT Std'|Float||Yes|\n|' Fwd IAT Max'|Float||Yes|\n|' Fwd IAT Min'|Float||Yes|\n|'Bwd IAT Mean'|Float|Backwards Inter Arrival Time.|Yes|\n|' Bwd IAT Std'|Float||Yes|\n|' Bwd IAT Max'|Float||Yes|\n|' Bwd IAT Min'|Float||Yes|\n|'Active Mean'|Float|Average amount of time in seconds before connection went idle.|Yes|\n|' Active Std'|Float||Yes|\n|' Active Max'|Float||Yes|\n|' Active Min'|Float||Yes|\n|'Idle Mean'|Float|Average amount of time in seconds before connection became active.|Yes|\n|' Idle Std'|Float|Zero variance feature.|Yes|\n|' Idle Max'|Float||Yes|\n|' Idle Min'|Float||Yes|\n|'label'|Object|Either \"nonTOR\" or \"TOR\". ~17% TOR signal.|-|\nExperimental setup\n\nSupervised ML. We establish a strong baseline with XGBoost on the full data and on a subset (only time-based features, which generalize better to new domains). We follow the dataset standard of creating a 20% holdout validation set, and use 5-fold stratified cross-validation for parameter tuning <a href=\"#References\">[32]</a>. For tuning we use random search on sane parameter ranges, as random search is easy to implement and given enough time, will equal or beat more sophisticated methods <a href=\"#References\">[33]</a>. We do not use feature selection, but opt to let our learning algorithm deal with those. Missing values are also handled by XGBoost and not manually imputed or hardcoded.\nUnsupervised ML. We use $MAPPER$ in combination with the Isolation Forest and the summed distances to the five nearest neighbors. We use an overlap percentage of 150% and 40 intervals per dimension for a total of 1600 hypercubes. Clustering is done with agglomerative clustering using the euclidean distance metric and 3 clusters per interval. 
For these experiments we use only the time-based features. We don't scale the data, even though, of the two lenses, only the Isolation Forest is scale-invariant.",
"import numpy as np\nimport pandas as pd\nimport xgboost\nfrom sklearn import model_selection, metrics",
"Data Prep\nThere are string values \"Infinity\" inside the data, causing mixed types. We need to label-encode the target column. We turn the IP addresses into floats by removing the dots.\nWe also create a subset of features by removing Source Port, Source IP, Destination Port, Destination IP, and Protocol. This avoids overfitting, should improve generalization to other networks, and focuses only on the time-based features, as most other researchers have done.",
"df = pd.read_csv(\"CSV/Scenario-A/merged_5s.csv\")\n\ndf.replace('Infinity', -1, inplace=True)\ndf[\"label\"] = df[\"label\"].map({\"nonTOR\": 0, \"TOR\": 1})\ndf[\"Source IP\"] = df[\"Source IP\"].apply(lambda x: float(x.replace(\".\", \"\")))\ndf[\" Destination IP\"] = df[\" Destination IP\"].apply(lambda x: float(x.replace(\".\", \"\")))\n\nfeatures_all = [c for c in df.columns if c not in \n ['label']]\n\nfeatures = [c for c in df.columns if c not in \n ['Source IP',\n ' Source Port',\n ' Destination IP',\n ' Destination Port',\n ' Protocol',\n 'label']]\nfeatures\n\nX = np.array(df[features])\nX_all = np.array(df[features_all])\ny = np.array(df.label)\nprint(X.shape, np.mean(y))",
"Local evaluation setup\nWe create a stratified holdout set of 20%. Any modeling choices (such as parameter tuning) are guided by 5-fold stratified cross-validation on the remaining dataset.",
"splitter = model_selection.StratifiedShuffleSplit(\n n_splits=1,\n test_size=0.2,\n random_state=0)\n\nfor train_index, test_index in splitter.split(X, y):\n X_train, X_holdout = X[train_index], X[test_index]\n X_train_all, X_holdout_all = X_all[train_index], X_all[test_index]\n y_train, y_holdout = y[train_index], y[test_index]\n \nprint(X_train.shape, X_holdout.shape)",
"5-fold non-tuned XGBoost",
"model = xgboost.XGBClassifier(seed=0)\nprint(model)\n\nskf = model_selection.StratifiedKFold(\n n_splits=5,\n shuffle=True,\n random_state=0)\n\nfor i, (train_index, test_index) in enumerate(skf.split(X_train, y_train)):\n X_train_fold, X_test_fold = X_train[train_index], X_train[test_index]\n y_train_fold, y_test_fold = y_train[train_index], y_train[test_index]\n model.fit(X_train_fold, y_train_fold)\n probas = model.predict_proba(X_test_fold)[:,1]\n preds = (probas > 0.5).astype(int)\n \n print(\"-\"*60)\n print(\"Fold: %d (%s/%s)\" %(i, X_train_fold.shape, X_test_fold.shape))\n print(metrics.classification_report(y_test_fold, preds, target_names=[\"nonTOR\", \"TOR\"]))\n print(\"Confusion Matrix: \\n%s\\n\"%metrics.confusion_matrix(y_test_fold, preds))\n print(\"Log loss : %f\" % (metrics.log_loss(y_test_fold, probas)))\n print(\"AUC : %f\" % (metrics.roc_auc_score(y_test_fold, probas)))\n print(\"Accuracy : %f\" % (metrics.accuracy_score(y_test_fold, preds)))\n print(\"Precision: %f\" % (metrics.precision_score(y_test_fold, preds)))\n print(\"Recall : %f\" % (metrics.recall_score(y_test_fold, preds)))\n print(\"F1-score : %f\" % (metrics.f1_score(y_test_fold, preds)))",
"Hyperparameter tuning\nWe found the parameters below by running a random search on the first fold for ~50 iterations (minimizing log loss). We use an AWS distributed closed-source auto-tuning library called \"Cher\" with the following parameter ranges:\n\"XGBClassifier\": {\n    \"max_depth\": (2,12),\n    \"n_estimators\": (20, 2500),\n    \"objective\": [\"binary:logistic\"],\n    \"missing\": np.nan,\n    \"gamma\": [0, 0, 0, 0, 0, 0.01, 0.1, 0.2, 0.3, 0.5, 1., 10., 100.],\n    \"learning_rate\": [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.15, 0.2, 0.1, 0.1],\n    \"min_child_weight\": [1, 1, 1, 1, 2, 3, 4, 5, 1, 6, 7, 8, 9, 10, 11, 15, 30, 60, 100, 1, 1, 1],\n    \"max_delta_step\": [0, 0, 0, 0, 0, 1, 2, 5, 8],\n    \"nthread\": -1,\n    \"subsample\": [i/100. for i in range(20,100)],\n    \"colsample_bytree\": [i/100. for i in range(20,100)],\n    \"colsample_bylevel\": [i/100. for i in range(20,100)],\n    \"reg_alpha\": [0, 0, 0, 0, 0, 0.00000001, 0.00000005, 0.0000005, 0.000005],\n    \"reg_lambda\": [1, 1, 1, 1, 2, 3, 4, 5, 1],\n    \"scale_pos_weight\": 1,\n    \"base_score\": 0.5,\n    \"seed\": (0,999999)\n}",
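"Since the tuning library used above is closed-source, the random-search idea can be sketched with the standard library alone. This is a minimal illustration, not the actual tuner; the parameter ranges below are a simplified, hypothetical subset of the space above:\n```python\nimport random\n\n# Simplified subset of the search space above (illustrative only)\nparam_space = {\n    \"max_depth\": lambda: random.randint(2, 12),\n    \"n_estimators\": lambda: random.randint(20, 2500),\n    \"learning_rate\": lambda: random.choice([0.01, 0.05, 0.1, 0.2]),\n    \"subsample\": lambda: random.uniform(0.2, 1.0),\n}\n\ndef sample_params(n_iter, seed=0):\n    \"\"\"Draw n_iter random parameter settings from the space.\"\"\"\n    random.seed(seed)  # the lambdas use the module-level RNG\n    return [{name: draw() for name, draw in param_space.items()}\n            for _ in range(n_iter)]\n\n# Each candidate would be trained and scored with 5-fold CV;\n# the setting with the lowest mean log loss wins.\ncandidates = sample_params(50)\nprint(len(candidates))  # 50\n```\nSeeding makes the search reproducible, which is what allows the winning parameter set to be hardcoded in the next cell.",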
"model = xgboost.XGBClassifier(base_score=0.5, colsample_bylevel=0.68, colsample_bytree=0.84,\n gamma=0.1, learning_rate=0.1, max_delta_step=0, max_depth=11,\n min_child_weight=1, missing=None, n_estimators=1122, nthread=-1,\n objective='binary:logistic', reg_alpha=0.0, reg_lambda=4,\n scale_pos_weight=1, seed=189548, silent=True, subsample=0.98)",
"5-fold tuned XGBoost",
"print(model)\nfor i, (train_index, test_index) in enumerate(skf.split(X_train, y_train)):\n X_train_fold, X_test_fold = X_train[train_index], X_train[test_index]\n y_train_fold, y_test_fold = y_train[train_index], y_train[test_index]\n model.fit(X_train_fold, y_train_fold)\n probas = model.predict_proba(X_test_fold)[:,1]\n preds = (probas > 0.5).astype(int)\n\n print(\"-\"*60)\n print(\"Fold: %d (%s/%s)\" %(i, X_train_fold.shape, X_test_fold.shape))\n print(metrics.classification_report(y_test_fold, preds, target_names=[\"nonTOR\", \"TOR\"]))\n print(\"Confusion Matrix: \\n%s\\n\"%metrics.confusion_matrix(y_test_fold, preds))\n print(\"Log loss : %f\" % (metrics.log_loss(y_test_fold, probas)))\n print(\"AUC : %f\" % (metrics.roc_auc_score(y_test_fold, probas)))\n print(\"Accuracy : %f\" % (metrics.accuracy_score(y_test_fold, preds)))\n print(\"Precision: %f\" % (metrics.precision_score(y_test_fold, preds)))\n print(\"Recall : %f\" % (metrics.recall_score(y_test_fold, preds)))\n print(\"F1-score : %f\" % (metrics.f1_score(y_test_fold, preds)))",
"Holdout set evaluation",
"model.fit(X_train, y_train) \nprobas = model.predict_proba(X_holdout)[:,1]\npreds = (probas > 0.5).astype(int)\n\nprint(metrics.classification_report(y_holdout, preds, target_names=[\"nonTOR\", \"TOR\"]))\nprint(\"Confusion Matrix: \\n%s\\n\"%metrics.confusion_matrix(y_holdout, preds))\nprint(\"Log loss : %f\" % (metrics.log_loss(y_holdout, probas)))\nprint(\"AUC : %f\" % (metrics.roc_auc_score(y_holdout, probas)))\nprint(\"Accuracy : %f\" % (metrics.accuracy_score(y_holdout, preds)))\nprint(\"Precision: %f\" % (metrics.precision_score(y_holdout, preds)))\nprint(\"Recall : %f\" % (metrics.recall_score(y_holdout, preds)))\nprint(\"F1-score : %f\" % (metrics.f1_score(y_holdout, preds)))",
"Results\n|Model|Precision|Recall|F1-Score|\n|---|---|---|---|\n|Logistic Regression (Singh et al., 2018) <a href=\"#References\">[34]</a>|0.87|0.87|0.87|\n|SVM (Singh et al., 2018)|0.9|0.9|0.9|\n|Naïve Bayes (Singh et al., 2018)|0.91|0.6|0.7|\n|C4.5 Decision Tree + Feature Selection (Lashkari et al., 2017) <a href=\"#References\">[29]</a>|0.948|0.934|-|\n|Deep Learning (Singh et al., 2018)|0.95|0.95|0.95|\n|Random Forest (Singh et al., 2018)|0.96|0.96|0.96|\n|XGBoost + Tuning|0.974|0.977|0.976|\nHoldout evaluation with all the available features\nUsing all the features results in near-perfect performance, suggesting \"leaky\" features (these features are not to be used for predictive modeling, but are there for completeness). Nevertheless, we show that using all features also results in a strong baseline over previous research.",
"model.fit(X_train_all, y_train) \nprobas = model.predict_proba(X_holdout_all)[:,1]\npreds = (probas > 0.5).astype(int)\n\nprint(metrics.classification_report(y_holdout, preds, target_names=[\"nonTOR\", \"TOR\"]))\nprint(\"Confusion Matrix: \\n%s\\n\"%metrics.confusion_matrix(y_holdout, preds))\nprint(\"Log loss : %f\" % (metrics.log_loss(y_holdout, probas)))\nprint(\"AUC : %f\" % (metrics.roc_auc_score(y_holdout, probas)))\nprint(\"Accuracy : %f\" % (metrics.accuracy_score(y_holdout, preds)))\nprint(\"Precision: %f\" % (metrics.precision_score(y_holdout, preds)))\nprint(\"Recall : %f\" % (metrics.recall_score(y_holdout, preds)))",
"Results\n|Model|Precision|Recall|Accuracy|\n|---|---|---|---|\n|ANN (Hodo et al., 2017) <a href=\"#References\">[35]</a>|0.983|0.937|0.991|\n|SVM (Hodo et al., 2017)|0.79|0.67|0.94|\n|ANN + Feature Selection (Hodo et al., 2017)|0.998|0.988|0.998|\n|SVM + Feature Selection (Hodo et al., 2017)|0.8|0.984|0.881|\n|XGBoost + Tuning|0.999|1.|0.999|\nTopological Data Analysis",
"import kmapper as km\nimport pandas as pd\nimport numpy as np\nfrom sklearn import ensemble, cluster\n\ndf = pd.read_csv(\"CSV/Scenario-A/merged_5s.csv\")\ndf.replace('Infinity', -1, inplace=True)\ndf[\" Flow Bytes/s\"] = df[\" Flow Bytes/s\"].apply(lambda x: float(x))\ndf[\" Flow Packets/s\"] = df[\" Flow Packets/s\"].apply(lambda x: float(x))\ndf[\"label\"] = df[\"label\"].map({\"nonTOR\": 0, \"TOR\": 1})\ndf[\"Source IP\"] = df[\"Source IP\"].apply(lambda x: float(x.replace(\".\", \"\")))\ndf[\" Destination IP\"] = df[\" Destination IP\"].apply(lambda x: float(x.replace(\".\", \"\")))\ndf.fillna(-2, inplace=True)\n\nfeatures = [c for c in df.columns if c not in \n ['Source IP',\n ' Source Port',\n ' Destination IP',\n ' Destination Port',\n ' Protocol',\n 'label']]\n\nX = np.array(df[features])\ny = np.array(df.label)\n\nprojector = ensemble.IsolationForest(random_state=0, n_jobs=-1)\nprojector.fit(X)\nlens1 = projector.decision_function(X)\n\nmapper = km.KeplerMapper(verbose=3)\nlens2 = mapper.fit_transform(X, projection=\"knn_distance_5\")\n\nlens = np.c_[lens1, lens2]\n\nG = mapper.map(\n lens,\n X,\n nr_cubes=40,\n overlap_perc=1.5,\n clusterer=cluster.AgglomerativeClustering(3))\n\n_ = mapper.visualize(\n G,\n custom_tooltips=y,\n color_function=y,\n path_html=\"tor-tda.html\",\n inverse_X=X,\n inverse_X_names=list(df[features].columns),\n projected_X=lens,\n projected_X_names=[\"IsolationForest\", \"KNN-distance 5\"],\n title=\"Detecting encrypted Tor Traffic with Isolation Forest and Nearest Neighbor Distance\"\n)",
"Image of output\n\nLink to output\n<a href=\"https://mlwave.github.io/tda/tor-tda.html\">TDA Tor Graph</a>\nDiscussion\nBoth deep learning and unsupervised TDA may benefit from more data and rawer features. One strength of deep learning is its ability to automatically generate useful features. A properly tuned and architected RNN/LSTM or ConvNet on more data will likely beat or equal gradient boosting <a href=\"#References\">[36]</a>. Likewise for TDA: TDA is very good at extracting structure from raw time-series data. Using the preprocessed 5-second lag features turns the problem into more of a classification problem than a temporal/forecasting problem.\nThe XGBoost baseline can be further improved: other authors showed feature selection to be effective at discarding noise. Stacked generalization can improve many pure classification problems, at the cost of increased complexity and latency. Likewise, with feature expansion through feature interactions the score can be improved a little <a href=\"#References\">[37]</a>.\nThe graph created with $MAPPER$ shows a concentration of anomalous samples that are predominantly nonTor traffic. This confirms our earlier note that anomalous behavior is not necessarily suspicious behavior. The separation could be better, but it is already possible to identify different types of Tor traffic and see how they differ (both an above-average and a below-average Flow Duration can signal Tor traffic).\nThe large max_depth=11 found by XGBoost on this relatively small dataset signals either that the problem is very complex (and needs large complexity to be solved well), or that memorization of patterns is important for good performance on this dataset (larger max_depth values find more feature interactions and are better at memorization).\nThanks\nThanks to Dr. Satnam Singh and Balamurali A R for the inspiring article. 
Thanks to my colleagues at Nubank InfoSec, especially <a href=\"https://github.com/jonasabreu\">Jonas Abreu</a>, for helpful discussions and consulting on domain expertise. Thanks to the Canadian Institute for Cybersecurity (dr. Lashkari et al.) for creating and providing the data used <a href=\"#References\">[38]</a>, and writing the original paper with great clarity.\nReferences\n[1] Freund, Schapire (1999). <br>A short introduction to boosting\n<br>\n[2] Chen, Tianqi and Guestrin, Carlos (2016) <br>XGBoost: A Scalable Tree Boosting System\n<br>\n[3] Liudmila Prokhorenkova, Gleb Gusev, Aleksandr Vorobev, Anna Veronika Dorogush, Andrey Gulin (2017) <br>CatBoost: unbiased boosting with categorical features\n<br>\n[4] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. (2017) <br>LightGBM: A Highly Efficient Gradient Boosting Decision Tree.\n<br>\n[5] Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V. and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E. (2011) <br>Scikit-learn: Machine Learning in Python\n<br>\n[6] Community (2014-). <br>Awesome XGBoost\n<br>\n[7] CERN and Kaggle (2014) <br>Higgs Boson Machine Learning Challenge\n<br>\n[8] Tianqi Chen on Quora (2015) <br>What makes xgboost run much faster than many other implementations of gradient boosting?\n<br>\n[9] Zygmunt Zając (2015) <br>Early stopping\n<br>\n[10] Rory Mitchell, Andrey Adinets, Thejaswi Rao, Eibe Frank (2018) <br>XGBoost: Scalable GPU Accelerated Learning\n<br>\n[11] Fei Tony Liu, Kai Ming Ting, Zhi-Hua Zhou (2008) <br>Isolation Forest\n<br>\n[12] Sridhar Ramaswamy, Rajeev Rastogi, and Kyuseok Shim. 
(2000) <br>Efficient algorithms for mining outliers from large data sets\n<br>\n[13] Gunnar Carlsson (2008) <br>Topology and Data\n<br>\n[14] Devi Ramanan (2015) <br>Identification of Type 2 Diabetes Subgroups through Topological Data Analysis of Patient Similarity\n<br>\n[15] Pablo G. Cámara (2017) <br>Topological methods for genomics: present and future directions\n<br>\n[16] Wei Guo, Ashis Gopal Banerjee (2017) <br>Identification of Key Features Using Topological Data Analysis for Accurate Prediction of Manufacturing System Outputs\n<br>\n[17] Mustafa Hajij, Bei Wang, Paul Rosen (2018) <br>MOG: Mapper on Graphs for Relationship Preserving Clustering\n<br>\n[18] Anthony Bak (2015) <br>Topology and Machine Learning\n<br>\n[19] Muthu Alagappan (2012) <br>From 5 to 13: Redefining the Positions in Basketball\n<br>\n[20] Marc Coudriau, Abdelkader Lahmadi, Jérôme François (2016) <br>Topological analysis and visualisation of network monitoring data: Darknet case study\n<br>\n[21] Gurjeet Singh, Facundo Mémoli, and Gunnar Carlsson (2007) <br>Topological Methods for the Analysis of High Dimensional\nData Sets and 3D Object Recognition\n<br>\n[22] P. Y. Lum, G. Singh, A. Lehman, T. Ishkanov, M. Vejdemo-Johansson, M. Alagappan, J. Carlsson & G. Carlsson (2009) <br>Extracting insights from the shape of complex data using topology\n<br>\n[23] Hendrik Jacob van Veen, and Nathaniel Saul (2017) <br>KeplerMapper\n<br>\n[24] Karsten Loesing and Steven J. 
Murdoch and Roger Dingledine (2010) <br>A Case Study on Measuring Statistical Data in the Tor Anonymity Network\n<br>\n[25] Daniel Moore, Thomas Rid (2016) <br>Cryptopolitik and the Darknet\n<br>\n[26] Stephen Northcutt, Judy Novak (2002) <br><a href=\"http://justpain.com/eBooks/Security/Network%20Intrusion%20Detection%20(New%20Riders).pdf\">Network Intrusion Detection, Third Edition</a>\n<br>\n[27] Justin Grana, David Wolpert, Joshua Neil, Dongping Xie, Tanmoy Bhattacharya, Russell Bent (2016) <br>A Likelihood Ratio Detector for Identifying Within-Perimeter Computer Network Attacks.\n<br>\n[28] Simon Denman (2012) <br>Why multi-layered security is still the best defence\n<br>\n[29] Arash Habibi Lashkari, Gerard Draper Gil, Mohammad Saiful Islam Mamun, Ali A. Ghorbani (2017) <br>Characterization of Tor Traffic using Time based Features\n<br>\n[30] Canadian Institute for Cybersecurity (Retrieved: 2018) <br>Canadian Institute for Cybersecurity\n<br>\n[31] Draper-Gil, G., Lashkari, A. H., Mamun, M. S. I., and Ghorbani, A. A. (2016). <br>Characterization of encrypted and vpn traffic using time-related features\n<br>\n[32] Trevor Hastie, Robert Tibshirani, Jerome H. Friedman (2001) <br>The Elements of Statistical Learning\n<br>\n[33] James Bergstra, Yoshua Bengio (2012) <br>Random Search for Hyper-Parameter Optimization\n<br>\n[34] Satnam Singh, Balamurali A R (2018) <br>Using Deep Learning for Information Security\n<br>\n[35] Elike Hodo, Xavier Bellekens, Ephraim Iorkyase, Andrew Hamilton, Christos Tachtatzis, Robert Atkinson (2017) <br>Machine Learning Approach for Detection of nonTor Traffic\n<br>\n[36] Gábor Melis, Chris Dyer, Phil Blunsom (2017) <br>On the State of the Art of Evaluation in Neural Language Models\n<br>\n[37] Marios Michailidis (2017) <br>Investigating machine learning methods in recommender systems\n<br>\n[38] Canadian Institute for Cybersecurity (2016) <br>Tor-nonTor dataset (ISCXTor2016)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mathause/regionmask
|
docs/notebooks/mask_3D.ipynb
|
mit
|
[
"%matplotlib inline\n%config InlineBackend.figure_format = \"retina\"\n\nfrom matplotlib import rcParams\n\nrcParams[\"savefig.dpi\"] = 200\nrcParams[\"font.size\"] = 8\n\nimport warnings\n\nwarnings.filterwarnings(\"ignore\")",
"Create 3D boolean masks\nIn this tutorial we will show how to create 3D boolean masks for arbitrary latitude and longitude grids. It uses the same algorithm as the 2D mask to determine if a gridpoint is in a region. However, it returns an xarray.DataArray with shape region x lat x lon: gridpoints that fall in a region are True, all other gridpoints are False.\n3D masks are convenient as they can be used to directly calculate weighted regional means (over all regions) using xarray v0.15.1 or later. Further, the mask includes the region names and abbreviations as non-dimension coordinates.\nImport regionmask and check the version:",
"import regionmask\n\nregionmask.__version__",
"Load xarray and numpy:",
"import xarray as xr\nimport numpy as np\n\n# don't expand data\nxr.set_options(display_style=\"text\", display_expand_data=False)",
"Creating a mask\nDefine a lon/ lat grid with a 1° grid spacing, where the points define the center of the grid:",
"lon = np.arange(-179.5, 180)\nlat = np.arange(-89.5, 90)",
"We will create a mask with the SREX regions (Seneviratne et al., 2012).",
"regionmask.defined_regions.srex",
"The function mask_3D determines which gridpoints lie within the polygon making up each region:",
"mask = regionmask.defined_regions.srex.mask_3D(lon, lat)\nmask",
"As mentioned, mask is a boolean xarray.DataArray with shape region x lat x lon. It contains region (=numbers) as dimension coordinate as well as abbrevs and names as non-dimension coordinates (see the xarray docs for the details on the terminology).\nPlotting\nPlotting individual layers\nThe first four layers look as follows:",
"import cartopy.crs as ccrs\nimport matplotlib.pyplot as plt\nfrom matplotlib import colors as mplc\n\ncmap1 = mplc.ListedColormap([\"none\", \"#9ecae1\"])\n\nfg = mask.isel(region=slice(4)).plot(\n subplot_kws=dict(projection=ccrs.PlateCarree()),\n col=\"region\",\n col_wrap=2,\n transform=ccrs.PlateCarree(),\n add_colorbar=False,\n aspect=1.5,\n cmap=cmap1,\n)\n\nfor ax in fg.axes.flatten():\n ax.coastlines()\n\nfg.fig.subplots_adjust(hspace=0, wspace=0.1);",
"Plotting flattened masks\nA 3D mask cannot be directly plotted - it needs to be flattened first. To do this regionmask offers a convenience function: regionmask.plot_3D_mask. The function takes a 3D mask as argument, all other keyword arguments are passed through to xr.plot.pcolormesh.",
"regionmask.plot_3D_mask(mask, add_colorbar=False, cmap=\"plasma\");",
"Working with a 3D mask\nmasks can be used to select data in a certain region and to calculate regional averages - let's illustrate this with a 'real' dataset:",
"airtemps = xr.tutorial.load_dataset(\"air_temperature\")",
"The example data is a temperature field over North America. Let's plot the first time step:",
"# choose a good projection for regional maps\nproj = ccrs.LambertConformal(central_longitude=-100)\n\nax = plt.subplot(111, projection=proj)\n\nairtemps.isel(time=1).air.plot.pcolormesh(ax=ax, transform=ccrs.PlateCarree())\n\nax.coastlines();",
"An xarray object can be passed to the mask_3D function:",
"mask_3D = regionmask.defined_regions.srex.mask_3D(airtemps)\nmask_3D",
"By default this creates a mask containing one layer (slice) for each region that contains at least one gridpoint. As the example data only has values over North America we get only 6 layers even though there are 26 SREX regions. To obtain all layers specify drop=False:",
"mask_full = regionmask.defined_regions.srex.mask_3D(airtemps, drop=False)\nmask_full",
"Note mask_full now has 26 layers.\nSelect a region\nAs mask_3D contains region, abbrevs, and names as (non-dimension) coordinates we can use each of those to select an individual region:",
"# 1) by the index of the region:\nr1 = mask_3D.sel(region=3)\n\n# 2) with the abbreviation\nr2 = mask_3D.isel(region=(mask_3D.abbrevs == \"WNA\"))\n\n# 3) with the long name:\nr3 = mask_3D.isel(region=(mask_3D.names == \"E. North America\"))",
"This also applies to the regionally-averaged data below. \nIt is currently not possible to use sel with a non-dimension coordinate - to directly select abbrev or name you need to create a MultiIndex:",
"mask_3D.set_index(regions=[\"region\", \"abbrevs\", \"names\"]);",
"Mask out a region\nUsing where a specific region can be 'masked out' (i.e. all data points outside of the region become NaN):",
"airtemps_cna = airtemps.where(r1)",
"Which looks as follows:",
"proj = ccrs.LambertConformal(central_longitude=-100)\n\nax = plt.subplot(111, projection=proj)\n\nairtemps_cna.isel(time=1).air.plot(ax=ax, transform=ccrs.PlateCarree())\n\nax.coastlines();",
"We could now use airtemps_cna to calculate the regional average for 'Central North America'. However, there is a more elegant way.\nCalculate weighted regional averages\nUsing the 3-dimensional mask it is possible to calculate weighted averages of all regions in one go, using the weighted method (requires xarray 0.15.1 or later). As a proxy for the grid cell area we use cos(lat).",
"weights = np.cos(np.deg2rad(airtemps.lat))\n\nts_airtemps_regional = airtemps.weighted(mask_3D * weights).mean(dim=(\"lat\", \"lon\"))",
"Let's break down what happens here. By multiplying mask_3D * weights we get a DataArray where gridpoints not in the region get a weight of 0. Gridpoints within a region get a weight proportional to the gridcell area. airtemps.weighted(mask_3D * weights) creates an xarray object which can be used for weighted operations. From this we calculate the weighted mean over the lat and lon dimensions. The resulting dataarray has the dimensions region x time:",
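"The same arithmetic can be checked on a toy example. Below is a minimal sketch with plain NumPy on a made-up 2 x 2 grid and a single invented region mask (not the SREX data), mimicking what mask_3D * weights does for one region layer:\n```python\nimport numpy as np\n\n# Toy 2 x 2 grid of temperatures and cos(lat)-style weights\ntemps = np.array([[280.0, 285.0],\n                  [290.0, 295.0]])\nweights = np.array([[1.0, 1.0],\n                    [0.5, 0.5]])\n# Boolean region mask: only the first row lies inside the region\nmask = np.array([[True, True],\n                 [False, False]])\n\n# mask * weights zeroes out gridpoints outside the region\nw = mask * weights\nregional_mean = (temps * w).sum() / w.sum()\nprint(regional_mean)  # 282.5 -- mean of 280 and 285 with equal weights\n```\nThe weighted method performs this masked, weighted sum-and-normalize for every region layer at once.",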
"ts_airtemps_regional",
"The regionally-averaged time series can be plotted:",
"ts_airtemps_regional.air.plot(col=\"region\", col_wrap=3);",
"Restrict the mask to land points\nCombining the mask of the regions with a land-sea mask we can create a land-only mask, using the land_110 region from Natural Earth.\nWe can now create the land-sea mask:",
"land_110 = regionmask.defined_regions.natural_earth_v5_0_0.land_110\n\nland_mask = land_110.mask_3D(airtemps)",
"and plot it",
"proj = ccrs.LambertConformal(central_longitude=-100)\n\nax = plt.subplot(111, projection=proj)\n\nland_mask.squeeze().plot.pcolormesh(\n ax=ax, transform=ccrs.PlateCarree(), cmap=cmap1, add_colorbar=False\n)\n\nax.coastlines();",
"To create the combined mask we multiply the two:",
"mask_lsm = mask_3D * land_mask.squeeze(drop=True)",
"Note the .squeeze(drop=True). This is required to remove the region dimension from land_mask.\nFinally, we compare the original mask with the one restricted to land points:",
"f, axes = plt.subplots(1, 2, subplot_kw=dict(projection=proj))\n\nax = axes[0]\nmask_3D.sel(region=2).plot(\n ax=ax, transform=ccrs.PlateCarree(), add_colorbar=False, cmap=cmap1\n)\nax.coastlines()\nax.set_title(\"Regional mask: all points\")\n\nax = axes[1]\nmask_lsm.sel(region=2).plot(\n ax=ax, transform=ccrs.PlateCarree(), add_colorbar=False, cmap=cmap1\n)\nax.coastlines()\nax.set_title(\"Regional mask: land only\");",
"References\n\nSpecial Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX, Seneviratne et al., 2012)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jeanbaptistepriez/predicsis-ai-faq-tuto
|
39.how_to_change_the_type_of_a_feature?/How to change a type, using PredicSis.ai python SDK.ipynb
|
gpl-3.0
|
[
"Goal\nBuilding a model with feature types other than those set in the PredicSis.ai GUI, using the Python SDK\nPrerequisites\n\nPredicSis.ai Python SDK (pip install predicsis; documentation)\n\nA predictive model available on your PredicSis.ai instance\n\n\nJupyter (see http://jupyter.org/)",
"# Load PredicSis.ai SDK\nfrom predicsis import PredicSis",
"Choose your project",
"pj = PredicSis.project('Outbound Mail Campaign')",
"Retrieve and describe the frame\nFirst, pick the default schema of the project. New models can be built from the modified default schema.",
"dflt_schm = pj.default_schema()\n\ndflt_schm.describe()",
"Change the type of a native feature (from the central table)\nPick the frame that includes the feature whose type you want to change.",
"master_frame=dflt_schm.frame('Customers')\n\nmaster_frame.describe()",
"Change the type of your feature using set_categorical() or set_numerical() methods.",
"master_frame.set_categorical('region_code')\n\nmaster_frame.describe()",
"The type is modified.\nA model has to be built again from the default schema.",
"mdl = dflt_schm.fit('model with categorical region_code')\n\nmdl.central().describe()",
"Same for features from a peripheral table",
"email = dflt_schm.frame('Email')\n\nemail.describe()\n\nemail.set_categorical('nb_of_days_since_event')\n\nemail.describe()",
"The type has been changed, so a new model has to be built from the default schema. For the change in the peripheral table to be taken into account, a number of aggregates has to be requested.",
"mdl2 = dflt_schm.fit('Model with type change in email frame',nb_aggregates=50)\n\nmdl2.central().describe()",
"The change has been taken into account: a CountDistinct has been calculated for nb_of_days_since_event (a CountDistinct is calculated over categorical features): CountDistinct(Email.nb_of_days_since_event)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
therealAJ/python-sandbox
|
data-science/learning/ud2/Part 2 Exercise Solutions/Linear Regression/Linear Regression - Project Exercise .ipynb
|
gpl-3.0
|
[
"<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nLinear Regression - Project Exercise\nCongratulations! You just got some contract work with an Ecommerce company based in New York City that sells clothing online, but also offers in-store style and clothing advice sessions. Customers come into the store, have sessions/meetings with a personal stylist, then go home and order the clothes they want on either the mobile app or the website.\nThe company is trying to decide whether to focus their efforts on their mobile app experience or their website. They've hired you on contract to help them figure it out! Let's get started!\nJust follow the steps below to analyze the customer data (it's fake, don't worry I didn't give you real credit card numbers or emails).\nImports\n Import pandas, numpy, matplotlib, and seaborn. Then set %matplotlib inline \n(You'll import sklearn as you need it.)",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n%matplotlib inline",
"Get the Data\nWe'll work with the Ecommerce Customers csv file from the company. It has Customer info, such as Email, Address, and their color Avatar. Then it also has numerical value columns:\n\nAvg. Session Length: Average length of in-store style advice sessions.\nTime on App: Average time spent on App in minutes\nTime on Website: Average time spent on Website in minutes\nLength of Membership: How many years the customer has been a member. \n\n Read in the Ecommerce Customers csv file as a DataFrame called customers.",
"customers = pd.read_csv('Ecommerce Customers')",
"Check the head of customers, and check out its info() and describe() methods.",
"customers.head()\n\ncustomers.info()\n\ncustomers.describe()",
"Exploratory Data Analysis\nLet's explore the data!\nFor the rest of the exercise we'll only be using the numerical data of the csv file.\n\nUse seaborn to create a jointplot to compare the Time on Website and Yearly Amount Spent columns. Does the correlation make sense?",
"sns.jointplot(x='Time on Website', y='Yearly Amount Spent', data=customers, kind='scatter')",
"Do the same but with the Time on App column instead.",
"sns.jointplot(x='Time on App', y='Yearly Amount Spent', data=customers)",
"Use jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership.",
"sns.jointplot(x='Time on App', y='Length of Membership', data=customers, kind='hex')",
"Let's explore these types of relationships across the entire data set. Use pairplot to recreate the plot below. (Don't worry about the colors.)",
"sns.pairplot(customers)",
"Based off this plot what looks to be the most correlated feature with Yearly Amount Spent?",
"# Length of Membership is the most strongly correlated feature with Yearly Amount Spent",
"Create a linear model plot (using seaborn's lmplot) of Yearly Amount Spent vs. Length of Membership.",
"sns.lmplot(data=customers, x='Length of Membership', y='Yearly Amount Spent' )",
"Training and Testing Data\nNow that we've explored the data a bit, let's go ahead and split the data into training and testing sets.\n Set a variable X equal to the numerical features of the customers and a variable y equal to the \"Yearly Amount Spent\" column.",
"customers.columns\n\ny = customers['Yearly Amount Spent']\n\nX = customers[['Avg. Session Length', \n 'Time on App',\n 'Time on Website',\n 'Length of Membership']]",
"Use model_selection.train_test_split from sklearn to split the data into training and testing sets. Set test_size=0.3 and random_state=101",
"from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X,\n y,\n test_size=0.3,\n random_state=101)",
"Training the Model\nNow it's time to train our model on our training data!\n Import LinearRegression from sklearn.linear_model",
"from sklearn.linear_model import LinearRegression",
"Create an instance of a LinearRegression() model named lm.",
"lm = LinearRegression()",
"Train/fit lm on the training data.",
"lm.fit(X_train,y_train)",
"Print out the coefficients of the model",
"lm.coef_",
"Predicting Test Data\nNow that we have fit our model, let's evaluate its performance by predicting off the test values!\n Use lm.predict() to predict off the X_test set of the data.",
"predictions = lm.predict(X_test)",
"Create a scatterplot of the real test values versus the predicted values.",
"plt.scatter(y_test,predictions)\nplt.xlabel('Y test (True Value)')\nplt.ylabel('Predicted Y')",
"Evaluating the Model\nLet's evaluate our model performance by calculating the residual sum of squares and the explained variance score (R^2).\n Calculate the Mean Absolute Error, Mean Squared Error, and the Root Mean Squared Error. Refer to the lecture or to Wikipedia for the formulas",
"from sklearn import metrics\n\nprint('MAE ', metrics.mean_absolute_error(y_test, predictions))\nprint('MSE ', metrics.mean_squared_error(y_test, predictions))\nprint('RMSE ', np.sqrt(metrics.mean_squared_error(y_test, predictions)))\n\nmetrics.explained_variance_score(y_test,predictions)",
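The three error metrics above can also be computed directly from their formulas; a minimal standalone sketch with numpy (the arrays are made-up values for illustration, not the model's predictions):

```python
import numpy as np

# hypothetical true values and predictions, for illustration only
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

residuals = y_true - y_pred

# MAE: average magnitude of the residuals
mae = np.mean(np.abs(residuals))

# MSE: average squared residual, penalizes large errors more heavily
mse = np.mean(residuals ** 2)

# RMSE: square root of MSE, back on the original unit scale
rmse = np.sqrt(mse)

print(mae, mse, rmse)  # mae=0.75, mse=0.875, rmse≈0.935
```

For the same inputs, these should agree with the `sklearn.metrics` functions used above.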
"Residuals\nYou should have gotten a very good model with a good fit. Let's quickly explore the residuals to make sure everything was okay with our data. \nPlot a histogram of the residuals and make sure it looks normally distributed. Use either seaborn distplot, or just plt.hist().",
"sns.distplot((y_test - predictions), bins=50)",
"Conclusion\nWe still want to figure out the answer to the original question: do we focus our efforts on mobile app or website development? Or maybe that doesn't even really matter, and Membership Time is what is really important. Let's see if we can interpret the coefficients at all to get an idea.\n Recreate the dataframe below.",
"cdf = pd.DataFrame(data=lm.coef_,index=X.columns,columns=['Coeff'])\ncdf",
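A coefficient in a linear model is the change in the prediction per one-unit change in its feature, holding the other features fixed. A standalone sketch of that fact with made-up numbers (not the fitted coefficients above):

```python
import numpy as np

# hypothetical coefficients and intercept for a 2-feature linear model
coef = np.array([25.0, 0.5])
intercept = 100.0

def predict(x):
    # linear model: intercept + dot product of features and coefficients
    return intercept + x @ coef

x = np.array([10.0, 30.0])
x_bumped = x + np.array([1.0, 0.0])  # increase the first feature by one unit

# the prediction moves by exactly the first coefficient
delta = predict(x_bumped) - predict(x)
print(delta)  # 25.0
```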
"How can you interpret these coefficients? \nDo you think the company should focus more on their mobile app or on their website?\nThe Company should focus on their mobile app\nGreat Job!\nCongrats on your contract work! The company loved the insights! Let's move on."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/ec-earth-consortium/cmip6/models/sandbox-3/landice.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: EC-EARTH-CONSORTIUM\nSource ID: SANDBOX-3\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:00\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-3', 'landice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --> Mass Balance\n7. Ice --> Mass Balance --> Basal\n8. Ice --> Mass Balance --> Frontal\n9. Ice --> Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Ice Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify how ice albedo is modelled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Atmospheric Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Oceanic Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the ocean and ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs an adaptive grid being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Base Resolution\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThe base resolution (in metres), before any adaption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Resolution Limit\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Projection\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of glaciers in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of glaciers, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Dynamic Areal Extent\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes the model include a dynamic glacial extent?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Grounding Line Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.3. Ice Sheet\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice sheets simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.4. Ice Shelf\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice shelves simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Ice --> Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Ice --> Mass Balance --> Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Ocean\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Ice --> Mass Balance --> Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Melting\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Ice --> Dynamics\n**\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Approximation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nApproximation type used in modelling ice dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Adaptive Timestep\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.4. Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ethen8181/machine-learning
|
model_selection/prob_calibration/prob_calibration.ipynb
|
mit
|
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Probability-Calibration\" data-toc-modified-id=\"Probability-Calibration-1\"><span class=\"toc-item-num\">1 </span>Probability Calibration</a></span><ul class=\"toc-item\"><li><span><a href=\"#Data-Preprocessing\" data-toc-modified-id=\"Data-Preprocessing-1.1\"><span class=\"toc-item-num\">1.1 </span>Data Preprocessing</a></span></li><li><span><a href=\"#Model-Training\" data-toc-modified-id=\"Model-Training-1.2\"><span class=\"toc-item-num\">1.2 </span>Model Training</a></span></li><li><span><a href=\"#Measuring-Calibration\" data-toc-modified-id=\"Measuring-Calibration-1.3\"><span class=\"toc-item-num\">1.3 </span>Measuring Calibration</a></span></li><li><span><a href=\"#Calibration-Model\" data-toc-modified-id=\"Calibration-Model-1.4\"><span class=\"toc-item-num\">1.4 </span>Calibration Model</a></span></li><li><span><a href=\"#Calibration-Model-Evaluation\" data-toc-modified-id=\"Calibration-Model-Evaluation-1.5\"><span class=\"toc-item-num\">1.5 </span>Calibration Model Evaluation</a></span></li><li><span><a href=\"#Final-Notes\" data-toc-modified-id=\"Final-Notes-1.6\"><span class=\"toc-item-num\">1.6 </span>Final Notes</a></span></li></ul></li><li><span><a href=\"#Reference\" data-toc-modified-id=\"Reference-2\"><span class=\"toc-item-num\">2 </span>Reference</a></span></li></ul></div>",
"import os\n\n# path : store the current path to convert back to it later\npath = os.getcwd()\nos.chdir(os.path.join('..', '..', 'notebook_format'))\n\nfrom formats import load_style\nload_style(css_style='custom2.css', plot_style=False)\n\nos.chdir(path)\n\n# 1. magic for inline plot\n# 2. magic to print version\n# 3. magic so that the notebook will reload external python modules\n# 4. magic to enable retina (high resolution) plots\n# https://gist.github.com/minrk/3301035\n%matplotlib inline\n%load_ext watermark\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format='retina'\n\nimport os\nimport time\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom xgboost import XGBClassifier\nfrom sklearn.model_selection import train_test_split\n\n# prevent scientific notations\npd.set_option('display.float_format', lambda x: '%.4f' % x)\n\n%watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn,matplotlib,xgboost",
"Probability Calibration\nWell calibrated classifiers are classifiers for which the output probability can be directly interpreted as a confidence level. A well calibrated (binary) classifier is one that classifies samples such that, among the samples to which the model gave a predicted probability value close to 0.8, approximately 80% of them actually belong to the positive class. For example, when looking up the weather forecast, we usually get a precipitation probability. e.g. If the weather forecast says there's an 80% chance of raining, then how trustworthy is this probability? In other words, if we take 100 days of data that were claimed to have an 80% chance of raining, how many rainy days were there? If the number of rainy days were around 80, then that means that particular rain forecast is indeed well calibrated.\nAs it turns out, a lot of the classifiers/models that we use on a day-to-day basis might not be calibrated right out of the box, either due to the objective function of the model, or simply because, when working with highly imbalanced datasets, the model's probability estimates can be skewed towards the majority class. Another way to put it is: After training a classifier, the output we get might just be a ranking score instead of a well calibrated probability. A ranking score evaluates how well the model scores positive examples above negative ones, whereas a calibrated probability evaluates how closely the scores generated by our model resemble actual probabilities.\nObtaining a well calibrated probability becomes important when:\n\nWe wish to use the probability threshold to inform some action. e.g. We'll reject the loan approval if the default rate is higher than 50%, or we'll defer the judgment to humans if the probability is lower than some threshold.\nIf our ranking formula is not solely based on the original model's score. 
In some cases, we may wish to use the score/probability along with some additional factors for ranking purposes. e.g. In the advertising cost per click model, we're going to rank the ads by their expected value (the probability of clicking on the ad multiplied by the ad fee for the click).\n\nData Preprocessing\nWe'll be using the credit card default dataset from UCI; we can download this dataset from Kaggle as well.",
"input_path = 'UCI_Credit_Card.csv'\ndf = pd.read_csv(input_path)\nprint(df.shape)\ndf.head()\n\nid_cols = ['ID']\ncat_cols = ['EDUCATION', 'SEX', 'MARRIAGE']\nnum_cols = [\n 'LIMIT_BAL', 'AGE',\n 'PAY_0', 'PAY_2', 'PAY_3', 'PAY_4', 'PAY_5', 'PAY_6',\n 'BILL_AMT1', 'BILL_AMT2', 'BILL_AMT3', 'BILL_AMT4', 'BILL_AMT5', 'BILL_AMT6',\n 'PAY_AMT1', 'PAY_AMT2', 'PAY_AMT3', 'PAY_AMT4', 'PAY_AMT5', 'PAY_AMT6'\n]\nlabel_col = 'default.payment.next.month'\n\ninput_cols = num_cols + cat_cols\n\nlabel_distr = np.round(np.bincount(df[label_col]) / df.shape[0], 3)\nprint('label distribution: ', label_distr)",
"We'll generate a train/validation/test three way split. The validation set created here is mainly used to calibrate our model. As per good practice, we should not be using the same dataset for both the training and calibration process.",
"test_size = 0.1\nval_size = 0.3\nrandom_state = 1234\n\ndf_train, df_test = train_test_split(\n df,\n test_size=test_size,\n random_state=random_state,\n stratify=df[label_col])\n\ndf_train, df_val = train_test_split(\n df_train,\n test_size=val_size,\n random_state=random_state,\n stratify=df_train[label_col])\n\nprint('train shape: ', df_train.shape)\nprint('validation shape: ', df_val.shape)\nprint('test shape: ', df_test.shape)\n\ndf_train.head()",
"Model Training\nWe'll train a binary classifier to predict default payment, and evaluate the model using some common evaluation metrics. In our example, we'll only focus on the widely used open-sourced boosted tree library xgboost, though the calibration process and techniques introduced in later sections are applicable to any arbitrary model.",
"# parameters chosen in an adhoc manner\nxgb_params = {\n 'learning_rate': 0.1,\n 'max_depth': 6,\n 'n_estimators': 30\n}\n\nxgb = XGBClassifier(**xgb_params)\nxgb.fit(df_train[input_cols].values, df_train[label_col].values)",
"A lot of the helper functions/classes are organized under the calibration_module, which can be found in the same folder as this notebook. link",
"from calibration_module.utils import compute_binary_score\n\n# evaluate the metrics for training and validation set\nestimators = {\n 'xgb': xgb\n}\ndf_groups = {\n 'train': df_train,\n 'val': df_val\n}\n\nestimator_metrics = []\nfor name, estimator in estimators.items():\n for df_name, df_group in df_groups.items():\n y_prob = estimator.predict_proba(df_group[input_cols].values)[:, 1]\n # compute various binary classification metrics\n metric_dict = compute_binary_score(df_group[label_col], y_prob)\n metric_dict['name'] = name + '_' + df_name\n estimator_metrics.append(metric_dict)\n\ndf_metrics = pd.DataFrame(estimator_metrics)\ndf_metrics",
"Measuring Calibration\nWe'll first discuss how we measure whether a model is well-calibrated or not. The main idea is to first discretize our model predictions into $M$ interval bins, and calculate the average fraction of positives and the average predicted probability of each bin. Here, the number of bins is configurable, and samples that have similar predicted scores will fall into the same bin.\nLet $B_m$ be the set of samples whose predicted probability falls into interval $I_m = \\big( \\frac{m - 1}{M}, \\frac{m}{M}\\big]$. The fraction of positives for $B_m$ can be computed by:\n\\begin{align}\npos(B_m) = \\frac{1}{|B_m|} \\sum_{i \\in B_m} y_i\n\\end{align}\nWhere $y_i$ is the true class label for sample $i$ (assuming in the binary classification setting 1 denotes a positive class and 0 otherwise). On the other hand, the predicted probability within bin $B_m$ is defined as:\n\\begin{align}\nprob(B_m) = \\frac{1}{|B_m|} \\sum_{i \\in B_m} \\hat{p_i}\n\\end{align}\nWhere $\\hat{p_i}$ is the predicted probability for sample $i$. Given the two terms, fraction of positives and predicted probability within each bin, we can either build a calibration curve to visualize the amount of miscalibration or directly compute a summary statistic.\nCalibration Curve, also known as a Reliability Diagram. For each bin, the mean predicted probability, $prob(B_m)$, is plotted against the fraction of positive cases for that bin, $pos(B_m)$. If the model is well-calibrated, then the points will fall near the diagonal line, and any deviation from that diagonal line in the visualization depicts some level of miscalibration with our model.\nExpected Calibration Error (ECE) is one commonly used summary statistic that measures the difference between the expected probability and fraction of positives.\n\\begin{align}\nECE = \\sqrt{ \\sum_{m=1}^M \\frac{|B_m|}{n} \\big(prob(B_m) - pos(B_m)\\big)^2 }\n\\end{align}\nWhere $n$ is the total number of samples. 
Here the expected calibration error is measured by the RMSE (Root Mean Squared Error) between $prob(B_m)$ and $pos(B_m)$. If we wish to have a metric that is less sensitive to outliers, we could also switch to MAE (Mean Absolute Error).\nWe'll now take a look at these concepts in action.",
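The binning logic behind the ECE formula can be sketched directly with numpy. This is a simplified stand-in for the module's compute_calibration_summary, using equal-width bins; function and variable names are illustrative:

```python
import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """RMSE-style ECE: weighted gap between the mean predicted
    probability and the fraction of positives within each bin."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    # use only the inner edges so bin ids fall in [0, n_bins - 1]
    bin_ids = np.digitize(y_prob, bin_edges[1:-1])
    total = 0.0
    for m in range(n_bins):
        mask = bin_ids == m
        if not mask.any():
            continue  # skip empty bins
        prob_bm = y_prob[mask].mean()  # prob(B_m)
        pos_bm = y_true[mask].mean()   # pos(B_m)
        total += mask.mean() * (prob_bm - pos_bm) ** 2
    return np.sqrt(total)

# toy check: predictions of 0.05/0.95 against labels 0/1 leave a 0.05 gap per bin
print(expected_calibration_error([0, 1, 0, 1], [0.05, 0.95, 0.05, 0.95]))  # ≈ 0.05
```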
"# extract the validation and test true label and predicted probability,\n# as we are working with binary classification in this use case, we can\n# extract the predicted probability for the positive class\nlabels_val = df_val[label_col].values\nxgb_pred_val = xgb.predict_proba(df_val[input_cols].values)[:, 1]\n\nlabels_test = df_test[label_col].values\nxgb_pred_test = xgb.predict_proba(df_test[input_cols].values)[:, 1]",
"We implement a compute_calibration_summary that builds on top of scikit-learn's calibration_curve. Instead of only plotting the calibration curve, we also return a table that contains summary statistics on the model performance, and calibration error.",
"from calibration_module.utils import compute_calibration_summary\n\n# link the label and probability into a dataframe\nscore_col = 'score'\ndf_xgb_eval_val = pd.DataFrame({\n label_col: labels_val,\n score_col: xgb_pred_val\n})\ndf_xgb_eval_test = pd.DataFrame({\n label_col: labels_test,\n score_col: xgb_pred_test\n})\n\n# key to the dictionary is for giving the result\n# a descriptive name\neval_dict = {\n 'xgb_val': df_xgb_eval_val,\n 'xgb_test': df_xgb_eval_test\n}\n\n# change default style figure and font size\nplt.rcParams['figure.figsize'] = 12, 8\nplt.rcParams['font.size'] = 12\n\nn_bins = 15\ndf_result = compute_calibration_summary(eval_dict, label_col, score_col, n_bins=n_bins)\ndf_result",
"Judging from the calibration plot, we can see there are some points that fall above and below the diagonal line.\n\nBelow the diagonal: The model has over-forecast; the probabilities are too large.\nAbove the diagonal: The model has under-forecast; the probabilities are too small.\n\nBut from the looks of it, it seems like the predicted score is pretty well calibrated as the dots fall closely to the diagonal line.\nCalibration Model\nThe calibration techniques that we'll be introducing here are all rescaling operations applied after the predictions have been made by a predictive model, i.e. this assumes we already have a model, and we would only like to perform some post-processing steps to calibrate our original model's prediction. As mentioned in the data preprocessing step, when training/learning the calibration function, we should ensure the data that is used to fit the original model and the one that is used for calibration does not overlap. e.g.\n\nWe can split the data into training/validation sets. After our base model is trained on the training\nset, the predictions on the validation set are used to fit the calibration model.\nOr do it in a cross validation way, where the data is split into $C$ folds. For each fold, one part is held aside for use as a validation set while the training is performed using the other $C-1$ folds. After repeating the process for all $C$ folds, we can compute the final probability by taking an arithmetic mean of the calibrated classifier's predictions.\nWhether we're using a train/validation split or cross validation, it boils down to using the predicted probability as the single input feature, and the holdout set's label as the target.\nTo evaluate whether we successfully calibrated our model, we can/should check various evaluation metrics. e.g. 
Our ranking metrics such as AUC should remain somewhat the same, whereas our probability-related metrics such as calibration error should improve.\n\nWe'll introduce some notation for discussing the calibration models themselves. Assuming a binary classification setting, where given a sample $x_i$ and its corresponding label $y_i$, our original model will produce a predicted probability of the positive class $\hat{p_i}$. Given that most models are not calibrated out of the box, the calibration model's goal is to post-process $\hat{p_i}$ and produce a well calibrated probability $\hat{q_i}$.\nTwo popular choices are Platt Scaling and Isotonic Regression.\nPlatt Scaling: A parametric approach. At a high level, Platt Scaling amounts to training a logistic regression on the original classifier's output with respect to the true class labels.\nIsotonic Regression: A non-parametric approach. With this approach, the idea is to fit a piecewise constant non-decreasing function, where we would merge similar scores into bins such that the bin values are monotonically non-decreasing. e.g. The first bin may have the range [0, 0.2] and probability 0.15, meaning that any instance with a score between 0 and 0.2 should be assigned a probability estimate of 0.15. More formally,\n\begin{align}\n& \underset{\mathbf{\theta, a}}{\text{min}}\n&& \sum_{m=1}^M \sum_{i=1}^n \mathbb{1} (a_m \leq \hat{p_i} < a_{m+1}) (\theta_m - y_i)^2 \ \nonumber\n& \text{subject to}\n&& 0 = a_1 \leq a_2 \leq ... \leq a_{M+1} = 1, \theta_1 \leq \theta_2 \leq ... \leq \theta_M\n\end{align}\nWhere $M$ is the number of bins, $a_1, ..., a_{M+1}$ are the interval boundaries that define the mutually exclusive bins, $B_1, ..., B_M$, and $\theta_1, ..., \theta_M$ are the corresponding calibrated scores for the $\hat{p_i}$ that fall within each bin's boundaries. 
During the optimization process, the bin boundaries and prediction values are jointly optimized.\nIn general, Platt Scaling is preferable if the calibration curve has a sigmoid shape and when there is little calibration data, whereas Isotonic Regression, being a non-parametric method, is preferable for non-sigmoid calibration curves and in situations where a lot of additional data can be used for calibration. But again, it doesn't hurt to try both approaches on our data and see which one leads to a lower calibration error on the holdout test set.\nApart from Platt Scaling and Isotonic Regression, which we'll often come across in online materials, here we'll also introduce two additional methods, namely Histogram Binning and Platt Scaling Binning.\nHistogram Binning is a stricter version of Isotonic Regression, where we directly define the bin boundaries, either by choosing intervals of equal length or intervals of equal sample size. As for the prediction value of each bin, we set it equal to $pos(B_m)$, the fraction of positive samples in that bin.\nPlatt Scaling Binning is a blend of Platt Scaling and Histogram Binning. We first fit a Platt Scaling function, $f$, then just like Histogram Binning, we bin the input samples. The main difference here is that we bin the samples by the output from Platt Scaling instead of the original predicted probability. And for each bin, the calibrated prediction is the average of the scaling function, $f$, instead of the fraction of positive samples. The motivation behind this method is that the output from our scaling function lies within a narrower range than the original label values, hence it should introduce a lower calibration error.\nOne important but not commonly mentioned thing to note while applying Platt Scaling related calibration methods is that logistic regression assumes a linear relationship between the input and the log odds of the class probability output. 
Hence in theory, it should be beneficial to first transform the class probability $p$ into a log odds scale, $z$, before passing it to Platt Scaling.\n\\begin{align}\nz = \\log \\left({p\\over 1-p}\\right)\n\\end{align}",
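To make the transformation concrete, here is a minimal sketch of Platt Scaling applied to log-odds-transformed probabilities, using scikit-learn's `LogisticRegression`. This is an illustrative sketch only; the `PlattCalibrator` used in the next cell presumably does something similar internally when `log_odds=True`, but its exact implementation may differ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_platt_log_odds(p_val, y_val, eps=1e-12):
    # clip to avoid log(0) at the probability boundaries
    p = np.clip(p_val, eps, 1 - eps)
    z = np.log(p / (1 - p)).reshape(-1, 1)  # log odds transform
    # a very large C effectively disables regularization, the classic Platt setup
    lr = LogisticRegression(C=1e10)
    lr.fit(z, y_val)
    return lr

def predict_platt_log_odds(lr, p_test, eps=1e-12):
    p = np.clip(p_test, eps, 1 - eps)
    z = np.log(p / (1 - p)).reshape(-1, 1)
    return lr.predict_proba(z)[:, 1]
```

The log-odds transform spreads out probabilities that are squashed near 0 and 1, which is the regime where the linearity assumption of logistic regression would otherwise be most strained.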
"from sklearn.calibration import IsotonicRegression\nfrom calibration_module.calibrator import (\n HistogramCalibrator,\n PlattCalibrator,\n PlattHistogramCalibrator\n)\n\nisotonic = IsotonicRegression(out_of_bounds='clip',\n y_min=xgb_pred_val.min(),\n y_max=xgb_pred_val.max())\nisotonic.fit(xgb_pred_val, labels_val)\nisotonic_probs = isotonic.predict(xgb_pred_test)\nisotonic_probs\n\nhistogram = HistogramCalibrator(n_bins=n_bins)\nhistogram.fit(xgb_pred_val, labels_val)\nhistogram_probs = histogram.predict(xgb_pred_test)\nhistogram_probs\n\nplatt = PlattCalibrator(log_odds=True)\nplatt.fit(xgb_pred_val, labels_val)\nplatt_probs = platt.predict(xgb_pred_test)\nplatt_probs\n\nplatt_histogram = PlattHistogramCalibrator(n_bins=n_bins, log_odds=True)\nplatt_histogram.fit(xgb_pred_val, labels_val)\nplatt_histogram_probs = platt_histogram.predict(xgb_pred_test)\nplatt_histogram_probs",
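As a point of reference, the core idea behind a histogram calibrator can be sketched in a few lines. This is an illustrative sketch assuming equal-sample-size bins; the actual `HistogramCalibrator` in `calibration_module` may differ in its details:

```python
import numpy as np

def histogram_binning_fit(p_val, y_val, n_bins=10):
    """Equal-sample-size histogram binning: each bin predicts its positive rate."""
    # bin boundaries at the quantiles of the validation scores
    quantiles = np.linspace(0, 1, n_bins + 1)
    boundaries = np.quantile(p_val, quantiles)
    boundaries[0], boundaries[-1] = 0.0, 1.0
    bin_ids = np.searchsorted(boundaries[1:-1], p_val)
    # calibrated value per bin = fraction of positives in that bin
    bin_means = np.array([
        y_val[bin_ids == m].mean() if np.any(bin_ids == m) else 0.0
        for m in range(n_bins)
    ])
    return boundaries, bin_means

def histogram_binning_predict(boundaries, bin_means, p_test):
    bin_ids = np.searchsorted(boundaries[1:-1], p_test)
    return bin_means[bin_ids]
```

Note how this is Isotonic Regression with the bin boundaries fixed up front instead of learned, which is exactly the "stricter" aspect described above.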
"Calibration Model Evaluation\nIn this section, we compare the calibration error of various calibration models.",
"score_col = 'score'\ndf_xgb_eval_test = pd.DataFrame({\n label_col: labels_test,\n score_col: xgb_pred_test\n})\ndf_xgb_isotonic_eval_test = pd.DataFrame({\n label_col: labels_test,\n score_col: isotonic_probs + 1e-3\n})\ndf_xgb_platt_eval_test = pd.DataFrame({\n label_col: labels_test,\n score_col: platt_probs\n})\ndf_xgb_platt_histogram_eval_test = pd.DataFrame({\n label_col: labels_test,\n score_col: platt_histogram_probs\n})\ndf_xgb_histogram_eval_test = pd.DataFrame({\n label_col: labels_test,\n score_col: histogram_probs\n})\n\n\neval_dict = {\n 'xgb': df_xgb_eval_test,\n 'xgb isotonic': df_xgb_isotonic_eval_test,\n 'xgb histogram': df_xgb_histogram_eval_test,\n 'xgb platt': df_xgb_platt_eval_test,\n 'xgb platt histogram': df_xgb_platt_histogram_eval_test\n}\n\ndf_result = compute_calibration_summary(eval_dict, label_col, score_col, n_bins=n_bins)\ndf_result.sort_values('calibration_error')",
"We also measure the calibration error of the Platt Scaling related methods without the log odds transformation. It turns out that, in this example, skipping the log odds transformation step degrades the performance by a significant amount.",
"platt = PlattCalibrator(log_odds=False)\nplatt.fit(xgb_pred_val, labels_val)\nplatt_probs = platt.predict(xgb_pred_test)\nplatt_probs\n\nplatt_histogram = PlattHistogramCalibrator(n_bins=n_bins, log_odds=False)\nplatt_histogram.fit(xgb_pred_val, labels_val)\nplatt_histogram_probs = platt_histogram.predict(xgb_pred_test)\nplatt_histogram_probs\n\ndf_xgb_platt_eval_test = pd.DataFrame({\n label_col: labels_test,\n score_col: platt_probs\n})\ndf_xgb_platt_histogram_eval_test = pd.DataFrame({\n label_col: labels_test,\n score_col: platt_histogram_probs\n})\n\neval_dict = {\n 'xgb': df_xgb_eval_test,\n 'xgb platt': df_xgb_platt_eval_test,\n 'xgb platt histogram': df_xgb_platt_histogram_eval_test\n}\n\ndf_result = compute_calibration_summary(eval_dict, label_col, score_col, n_bins=n_bins)\ndf_result.sort_values('calibration_error')",
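The `calibration_error` column we sort on above is presumably a binned expected calibration error (ECE). Under that assumption, a minimal sketch of the metric looks like this (the exact formula inside `compute_calibration_summary` may differ, e.g. in its binning scheme):

```python
import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=15):
    """Binned ECE: bin-size-weighted average of |positive rate - mean confidence|."""
    bins = np.linspace(0, 1, n_bins + 1)
    bin_ids = np.digitize(y_prob, bins[1:-1])
    ece = 0.0
    for m in range(n_bins):
        mask = bin_ids == m
        if not np.any(mask):
            continue
        avg_conf = y_prob[mask].mean()   # mean predicted probability in the bin
        avg_pos = y_true[mask].mean()    # empirical positive rate in the bin
        ece += mask.mean() * abs(avg_pos - avg_conf)
    return ece
```

A perfectly calibrated model has zero gap in every bin, hence an ECE of zero; the weighting by bin occupancy keeps sparsely populated bins from dominating the metric.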
"Final Notes\nAlthough our primary focus was on calibrating binary classification models, we can extend the concepts and notation to the multi-class setting by treating the problem as $K$ one versus all problems, where $K$ is the number of distinct classes. For $k = 1, ..., K$, we would create a binary classification problem where the label is $\\mathbb{1}(y_i = k)$, giving us $K$ calibration models, one for each class.\nOther than the techniques introduced here, there are many other methods that can be used to calibrate our model. e.g. for ease of production, some work resorts to using a piecewise linear function:\n\\begin{align}\n\\hat{q_i}=\n\\begin{cases}\n\\hat{p_i} & \\hat{p_i} < t_c \\\nt_c \\cdot \\big( 1 + \\log( \\frac{\\hat{p_i}}{t_c} ) \\big) & \\hat{p_i} \\geq t_c\n\\end{cases}\n\\end{align}\nIn this case, the calibration function is saying that for any predicted probability higher than a user-defined calibration threshold $t_c$, we will scale the prediction using the function specified above. The piecewise function can be any arbitrary function, and unlike the other estimators that we can directly plug and play, this requires us to have a much better understanding of our data's distribution.\nGiven all the rage with deep learning models lately, there are even calibration methods tailored for them. Calibration becomes an important topic there too, as modern neural networks oftentimes optimize for negative log likelihood. Upon being able to correctly classify the majority of the training samples, that measure can be further minimized by increasing the probability of the predictions, which will ultimately result in over/under confident predicted scores.\nOne caveat to note about measuring calibration error is that the number of bins does matter; play with the parameter and we might find surprising results. 
As we are measuring the calibration error of a continuous output (the probability output from the model) by grouping samples into a finite set of bins, the measure that we've obtained will only be an approximation of the true calibration error. The intuition behind this is that averaging a continuous number within a bin allows errors at different regions of a bin to cancel each other out.\nReference\n\nBlog: Probability calibration\nBlog: A Guide to Calibration Plots in Python\nBlog: How and When to Use a Calibrated Classification Model with scikit-learn\nYoutube - Model Calibration - is your model ready for the real world?\nSklearn Documentation: Probability calibration\nSklearn Documentation: Probability calibration curves\nPaper: A. Niculescu-Mizil, R. Caruana (2012) - Obtaining Calibrated Probabilities from Boosting\nPaper: T. Leathart, E. Frank, G. Holmes, B. Pfahringer (2017) - Probability Calibration Trees\nPaper: C. Guo, G.Pleiss, Y. Sun, K. Weinberger (2017) - On Calibration of Modern Neural Networks\nPaper: A. Kumar, P. Liang, T. Ma (2020) - Verified Uncertainty Calibration"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mathnathan/notebooks
|
notes/Reproduce Double Descent.ipynb
|
mit
|
[
"Introduction\nWe would like to understand the governing principles behind the double descent (DD) phenomenon. Therefore we would like to construct the simplest possible environment capable of demonstrating the DD behavior. We will use linear regression. We must first create the machine learning problem, which includes defining the data generating distribution, the model, and the loss function. The data generating distribution will be a straight line with homoskedastic noise. The model will be a polynomial of arbitrary order and the loss function will be the traditional SSE.\nThe Data Generating Distribution\nWe will work backwards by first defining a relationship between random variables and then constructing the resulting distribution. Let \n$$y=mx+\\epsilon$$\nwhere $X \\sim U[0,1]$ ($p(x)=1$) and $\\epsilon \\sim \\mathcal{N}(0,s^2)$ is the noise term. We immediately see that $y$ can be interpreted as the result of a reparameterization, thus given a particular observation $X=x$, the random variable $Y$ is also distributed normally $\\mathcal{N}(mx,s^2)$ with the resulting pdf\n$$p(y|x) = \\frac{1}{\\sqrt{2\\pi s^2}}\\exp\\bigg(-\\frac{(y-mx)^2}{2 s^2}\\bigg)$$\nIn this way, we can trivially define the joint pdf\n$$p(x,y) = p(y|x)p(x) = p(y|x) = \\frac{1}{\\sqrt{2\\pi s^2}}\\exp\\bigg(-\\frac{(y-mx)^2}{2 s^2}\\bigg)$$\n\nWe can create observations from $P(X,Y)$ via ancestral sampling, i.e. we first draw a sample $x\\sim p(x)$ and then use it to draw a sample $y \\sim p(y|x)$, resulting in $(x,y) \\sim P(X,Y)$. We create a simple interface for generating an arbitrary number of samples below.",
"import torch\n\nclass P():\n \n def __init__(self, m, s):\n \n self.m = m # Slope of line\n self.s = s # Standard deviation of injected noise\n \n def sample(self, size):\n \n x = torch.rand(size, dtype=torch.double)\n y = self.m*x + torch.randn(size, dtype=torch.double)*self.s\n return (x,y)\n\nimport matplotlib.pyplot as plt\n\np = P(2.3, 0.25)\n(x_pts,y_pts) = p.sample(100)\nplt.title(\"Samples from Data Generating Distribution\")\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\")\nplt.scatter(x_pts, y_pts)\nplt.show()",
"The Model\nThe double descent phenomena has been shown to take place with increasing capacity and training epochs. To easily explore this our model should have a convenient way to increase and decrease its capacity, i.e. roughly the degree of the polynomial. Below we create a very simple polynomial class where the order can be specified during construction of the object.",
"import numpy as np\n\nclass Polynomial():\n \n def __init__(self, order):\n \n self.order = order\n self.params = torch.tensor(2*np.random.rand(order+1)-1, dtype=torch.double, requires_grad=True)\n \n def __call__(self, xvals, take_grad=True):\n \n if take_grad: # build the Vandermonde matrix and evaluate, tracking gradients\n xvals = torch.tensor(np.vander(xvals,self.order+1), dtype=torch.double)\n y_vals = xvals.mv(self.params)\n else:\n with torch.no_grad(): # Use this for plotting\n xvals = torch.tensor(np.vander(xvals,self.order+1), dtype=torch.double)\n y_vals = xvals.mv(self.params)\n \n return y_vals",
"We will check a few plots of various randomly initialized polynomials",
"import numpy as np\nxvals = np.linspace(-2,2,100)\nfor order in range(1,5):\n poly = Polynomial(order)\n yvals = poly(xvals, take_grad=False)\n plt.plot(xvals,yvals.detach().numpy(),label='order {}'.format(order))\nplt.title(\"Randomly Initialized Polynomials\")\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\")\nplt.legend()\nplt.show()",
"The Loss Function\nWe will simply use the standard sum of squared errors loss function for regression here. However we will define our loss function as a functional to be consistent with traditional generalization theory. Therefore it will accept the model and the data as parameters.",
"def loss(model, data, take_grad=True):\n xvals, y_true = data\n y_pred = model(xvals, take_grad)\n return torch.pow(y_true-y_pred,2).mean()",
"Sanity Check\nNow we run through everything once and plot the results to help convince us that everything is working as expected. First we generate 20 points from a line with a slope of 3.4 and a standard deviation of 0.3.",
"data = P(3.4,0.3).sample(20)",
"We start with the simplest model to visualize and check, a first order polynomial",
"poly = Polynomial(1)",
"If we plot both the line and the sampled points we should be able to eyeball the error",
"def create_plot(poly,data):\n plt.title(\"Sampled Points and Polynomial\")\n plt.xlabel(\"$x$\")\n plt.ylabel(\"$y$\")\n plt.scatter(data[0],data[1],label=\"Samples\")\n xpts = np.linspace(-0.25,1.25,250)\n plt.plot(xpts,poly(xpts,take_grad=False).numpy(), c='r', label=\"Polynomial\")\n plt.legend()\n plt.xlim(-0.1,1.1)\n plt.ylim(-1,4)\n plt.show()\ncreate_plot(poly,data)\n\nloss(poly,data,take_grad=False)",
"Fit the Line to the Points\nNow we simply fit the line to the points using traditional stochastic gradient descent (SGD)",
"l = loss(poly,data)\nl.backward()\nwith torch.no_grad():\n poly.params -= 0.1*poly.params.grad",
"In the above code block we passed the entire dataset through in one batch and updated the parameters accordingly. Let us take a look at the line now to see if it improved at all.",
"create_plot(poly,data)",
"Fit to Convergence\nEverything checks out, so we now run a full example and iterate for many more epochs to convergence",
"dataset_size = 20\nepochs = 100\nlr = 1e-1\np = P(2.583,0.27)\ndata = p.sample(dataset_size)\npoly = Polynomial(1)",
"Let's create a plot and see where we are starting from",
"create_plot(poly,data)\n\nfor epoch in range(epochs):\n loss(poly,data).backward()\n with torch.no_grad():\n poly.params -= lr*poly.params.grad\n poly.params.grad.zero_()\n\ncreate_plot(poly,data)",
"Higher Order Polynomials\nLet's repeat the exact same experiment but with a quadratic rather than a straight line!",
"poly = Polynomial(2)\ncreate_plot(poly,data)\n\nfor epoch in range(epochs):\n loss(poly,data).backward()\n with torch.no_grad():\n poly.params -= lr*poly.params.grad\n poly.params.grad.zero_()\n\ncreate_plot(poly,data)",
"Double Descent\nWe now have a framework to try and reproduce the double descent phenomenon. We will start by seeing if it happens with the number of epochs. This must be done with a model that has many more parameters than necessary to ensure there is significant overfitting. Then after numerous epochs, we should begin to see the model enter the \"Interpolation Regime\". To properly observe the phenomenon we must measure both the training and the test loss as a function of the epochs.",
"from tqdm import tqdm\nepochs = int(1e4)\nlearning_rate = 1e-2\npoly = Polynomial(1000)\ncreate_plot(poly,data)\ntest_data = p.sample(100)\ntrain_err = []\ntest_err = []\noptimizer = torch.optim.Adam([poly.params], lr=learning_rate)\nfor epoch in tqdm(range(epochs)):\n train_loss = loss(poly,data)\n test_loss = loss(poly,test_data,take_grad=False)\n train_err.append(train_loss.item())\n test_err.append(test_loss.item())\n optimizer.zero_grad()\n train_loss.backward()\n optimizer.step()\n\ndef plot_loss(train, test):\n plt.title(\"Generalization Performance\")\n plt.xlabel(\"$Epoch$\")\n plt.ylabel(\"$Loss$\")\n plt.plot(train, c='b', label=\"train_err\")\n plt.plot(test, c='r', label=\"test_err\")\n plt.legend()\n plt.show()\nplot_loss(train_err, test_err)\n\nplt.title(\"Sampled Points and Polynomial\")\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\")\nplt.scatter(data[0],data[1],label=\"Samples\")\nxpts = np.linspace(-0.25,1.25,250)\nplt.plot(xpts,poly(xpts,take_grad=False), c='r', label=\"Polynomial\")\nplt.legend()\nplt.xlim(-0.1,1.1)\nplt.ylim(-1,4)\nplt.show()\n\norder = 10\nparams = np.linalg.pinv(np.polynomial.legendre.legvander(data[0],order)).dot(data[1])\nparams = params*np.sqrt(2*np.arange(0, params.shape[0], 1)+1)\n\nplt.title(\"Sampled Points and Polynomial\")\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\")\nplt.scatter(data[0],data[1],label=\"Samples\")\nxpts = np.linspace(-0.25,1.25,250)\nplt.plot(xpts,np.vander(xpts, order+1).dot(params), c='r', label=\"Polynomial\")\nplt.legend()\nplt.xlim(-0.1,1.1)\nplt.ylim(-1,4)\nplt.show()\n\nparams.shape"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jorisvandenbossche/DS-python-data-analysis
|
notebooks/case2_observations.ipynb
|
bsd-3-clause
|
[
"<p><font size=\"6\"><b>CASE - Observation data</b></font></p>\n\n\n© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nplt.style.use('seaborn-whitegrid')",
"Introduction\nObservation data of species (when and where a given species was observed) is typical in biodiversity studies. Large international initiatives support the collection of this data by volunteers, e.g. iNaturalist. Thanks to initiatives like GBIF, a lot of this data is also openly available. \nIn this example, the data originates from a study of a Chihuahuan desert ecosystem near Portal, Arizona. It is a long-term observation study in 24 different plots (each plot identified with a verbatimLocality identifier) and records, apart from the species, location and date of the observations, also the sex and the weight (if available).\nThe data consists of two data sets:\n\nobservations.csv the individual observations.\nspecies_names.csv the overview list of the species names.\n\nLet's start with the observations data!\nReading in the observations data\n<div class=\"alert alert-success\">\n\n**EXERCISE 1**\n\n- Read in the `data/observations.csv` file with Pandas and assign the resulting DataFrame to a variable with the name `observations`.\n- Make sure the 'occurrenceID' column is used as the index of the resulting DataFrame while reading in the data set.\n- Inspect the first five rows of the DataFrame and the data types of each of the data columns.\n\n<details><summary>Hints</summary>\n\n- All read functions in Pandas start with `pd.read_...`.\n- Setting a column as index can be done with an argument of the `read_csv` function. To check the documentation of a function, use the keystroke combination of SHIFT + TAB when the cursor is on the function.\n- Remember `.head()` and `.info()`?\n\n</details>",
"# %load _solutions/case2_observations1.py\n\n# %load _solutions/case2_observations2.py\n\n# %load _solutions/case2_observations3.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 2**\n\nCreate a new column with the name `eventDate` which contains datetime-aware information of each observation. To do so, combine the columns `day`, `month` and `year` into a datetime-aware data type by using the `pd.to_datetime` function from Pandas (check the help of that function to see how multiple columns with the year, month and day can be converted).\n\n<details><summary>Hints</summary>\n\n- `pd.to_datetime` can automatically combine the information from multiple columns. To select multiple columns, use a list of column names, e.g. `df[[\"my_col1\", \"my_col2\"]]`\n- To create a new column, assign the result to new name, e.g. `df[\"my_new_col\"] = df[\"my_col\"] + 1`\n\n</details>",
"# %load _solutions/case2_observations4.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 3**\n\nFor convenience when this dataset will be combined with other datasets, add a new column, `datasetName`, to the survey data set with `\"Ecological Archives E090-118-D1.\"` as value for each of the individual records (static value for the entire data set)\n\n<details><summary>Hints</summary>\n\n- When a column does not exist, a new `df[\"a_new_column\"]` can be created by assigning a value to it.\n- Pandas will automatically broadcast a single string value to each of the rows in the DataFrame.\n\n</details>",
"# %load _solutions/case2_observations5.py",
"Cleaning the verbatimSex column",
"observations[\"verbatimSex\"].unique()",
"For further analysis (and for the species concerned in this specific data set), the sex information should be either male or female. We want to create a new column, named sex, and convert the current values to the corresponding sex, taking into account the following mapping:\n* M -> male\n* F -> female\n* R -> male\n* P -> female\n* Z -> nan\n<div class=\"alert alert-success\">\n\n**EXERCISE 4**\n\n- Express the mapping of the values (e.g. `M` -> `male`) into a Python dictionary object with the variable name `sex_dict`. `Z` values correspond to _Not a Number_, which can be defined as `np.nan`. \n- Use the `sex_dict` dictionary to replace the values in the `verbatimSex` column to the new values and save the mapped values in a new column 'sex' of the DataFrame.\n- Check the conversion by printing the unique values within the new column `sex`.\n\n<details><summary>Hints</summary>\n\n- A dictionary is a Python standard library data structure, see https://docs.python.org/3/tutorial/datastructures.html#dictionaries - no Pandas magic involved when you need a key/value mapping.\n- When you need to replace values, look for the Pandas method `replace`. \n\n</details>",
"# %load _solutions/case2_observations6.py\n\n# %load _solutions/case2_observations7.py\n\n# %load _solutions/case2_observations8.py",
"Tackle missing values (NaN) and duplicate values\nSee pandas_07_missing_values.ipynb for an overview of functionality to work with missing values.\n<div class=\"alert alert-success\">\n\n**EXERCISE 5**\n\nHow many records in the data set have no information about the `species`? Use the `isna()` method to find out.\n\n<details><summary>Hints</summary>\n\n- Do NOT use `observations['species'] == np.nan`, but use the available method `isna()` to check if a value is NaN\n- The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum.\n\n</details>",
"# %load _solutions/case2_observations9.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 6**\n\nHow many duplicate records are present in the dataset? Use the method `duplicated()` to check if a row is a duplicate.\n\n<details><summary>Hints</summary>\n\n- The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum.\n\n</details>",
"# %load _solutions/case2_observations10.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 7**\n\n- Select all duplicate data by filtering the `observations` data and assign the result to a new variable `duplicate_observations`. The `duplicated()` method provides a `keep` argument to define which duplicates (if any) to mark.\n- Sort the `duplicate_observations` data on both the columns `eventDate` and `verbatimLocality` and show the first 9 records.\n\n<details><summary>Hints</summary>\n\n- Check the documentation of the `duplicated` method to find out which value the argument `keep` requires to select all duplicate data.\n- `sort_values()` can work with a single column name as well as a list of names.\n\n</details>",
"# %load _solutions/case2_observations11.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 8**\n\n- Exclude the duplicate values (i.e. keep the first occurrence while removing the other ones) from the `observations` data set and save the result as `observations_unique`. Use the `drop_duplicates()` method from Pandas.\n- How many observations are still left in the data set? \n\n<details><summary>Hints</summary>\n\n- `keep='first'` is the default option for `drop_duplicates`\n- The number of rows in a DataFrame is equal to the `len`gth\n\n</details>",
"# %load _solutions/case2_observations12.py\n\n# %load _solutions/case2_observations13.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 9**\n\nUse the `dropna()` method to find out: \n\n- For how many observations (rows) we have all the information available (i.e. no NaN values in any of the columns)? \n- For how many observations (rows) we do have the `species_ID` data available? \n- Remove the data without `species_ID` data from the observations and assign the result to a new variable `observations_with_ID`\n\n<details><summary>Hints</summary>\n\n- `dropna` by default removes all rows for which _any_ of the columns contains a `NaN` value.\n- To specify which specific columns to check, use the `subset` argument\n\n</details>",
"# %load _solutions/case2_observations14.py\n\n# %load _solutions/case2_observations15.py\n\n# %load _solutions/case2_observations16.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 10**\n\nFilter the `observations` data and select only those records that do not have a `species_ID` while having information on the `sex`. Store the result as variable `not_identified`.\n\n<details><summary>Hints</summary>\n\n- To combine logical operators element-wise in Pandas, use the `&` operator.\n- Pandas provides both a `isna()` and a `notna()` method to check the existence of `NaN` values.\n\n</details>",
"# %load _solutions/case2_observations17.py\n\n# %load _solutions/case2_observations18.py",
"Adding the names of the observed species",
"# Recap from previous exercises - remove duplicates and observations without species information\nobservations_unique_ = observations.drop_duplicates()\nobservations_data = observations_unique_.dropna(subset=['species_ID'])",
"In the data set observations, the column species_ID provides only an identifier instead of the full name. The name information is provided in a separate file species_names.csv:",
"species_names = pd.read_csv(\"data/species_names.csv\")\nspecies_names.head()",
"The species_names data set contains, for each identifier in the ID column, the scientific name of a species; in total it contains 38 different scientific names:",
"species_names.shape",
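As a toy illustration of joining on differently named key columns (made-up values, not the exercise solution), `pd.merge` works like this:

```python
import pandas as pd

# hypothetical miniature versions of the two data sets
left = pd.DataFrame({"species_ID": ["DM", "PF"], "weight": [40, 8]})
right = pd.DataFrame({"ID": ["DM", "PF"],
                      "name": ["Dipodomys merriami", "Perognathus flavus"]})

# left_on/right_on handle key columns that have different names
merged = pd.merge(left, right, left_on="species_ID", right_on="ID")
merged
```

Each row of `left` is matched against the row of `right` whose `ID` equals its `species_ID`, so the name columns get attached to the observations.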
"For further analysis, let's combine both in a single DataFrame in the following exercise.\n<div class=\"alert alert-success\">\n\n**EXERCISE 11**\n\nCombine the DataFrames `observations_data` and `species_names` by adding the corresponding species name information (name, class, kingdom,..) to the individual observations using the `pd.merge()` function. Assign the output to a new variable `survey_data`.\n\n<details><summary>Hints</summary>\n\n- This is an example of a database JOIN operation. Pandas provides the `pd.merge` function to join two data sets using a common identifier.\n- Take into account that our key-column is different for `observations` and `species_names`, respectively `species_ID` and `ID`. The `pd.merge()` function has `left_on` and `right_on` keywords to specify the name of the column in the left and right `DataFrame` to merge on.\n\n</details>",
"# %load _solutions/case2_observations19.py",
"Select subsets according to taxa of species",
"survey_data['taxa'].value_counts()\n#survey_data.groupby('taxa').size()",
"<div class=\"alert alert-success\">\n\n**EXERCISE 12**\n\n- Select the observations for which the `taxa` is equal to 'Rabbit', 'Bird' or 'Reptile'. Assign the result to a variable `non_rodent_species`. Use the `isin` method for the selection.\n\n<details><summary>Hints</summary>\n\n- You do not have to combine three different conditions, but use the `isin` operator with a list of names.\n\n</details>",
"# %load _solutions/case2_observations20.py\n\n# %load _solutions/case2_observations21.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 13**\n\nSelect the observations for which the `name` starts with the character 'r' (make sure it does not matter whether capital letters are used in the name). Call the resulting variable `r_species`.\n\n<details><summary>Hints</summary>\n\n- Remember the `.str.` construction to provide all kind of string functionalities? You can combine multiple of these after each other.\n- If the presence of capital letters should not matter, make everything lowercase first before comparing (`.lower()`) \n\n</details>",
"# %load _solutions/case2_observations22.py\n\n# %load _solutions/case2_observations23.py\n\nr_species[\"name\"].value_counts()",
"<div class=\"alert alert-success\">\n\n**EXERCISE 14**\n\nSelect the observations that are not Birds. Call the resulting variable <code>non_bird_species</code>.\n\n<details><summary>Hints</summary>\n\n- Logical operators like `==`, `!=`, `>`,... can still be used.\n\n</details>",
"# %load _solutions/case2_observations24.py\n\nlen(non_bird_species)",
"<div class=\"alert alert-success\">\n\n**EXERCISE 15**\n\nSelect the __Bird__ (taxa is Bird) observations from 1985-01 till 1989-12 using the `eventDate` column. Call the resulting variable `birds_85_89`.\n\n<details><summary>Hints</summary>\n\n- No hints, you can do this! (with the help of some `<=` and `&`, and don't forget to put brackets around each comparison that you combine)\n\n</details>",
"# %load _solutions/case2_observations25.py\n\n# %load _solutions/case2_observations26.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 16**\n\n- Drop the observations for which no `weight` information is available.\n- On the filtered data, compare the median weight for each of the species (use the `name` column)\n- Sort the output from high to low median weight (i.e. descending)\n\n__Note__ You can do this all in a single line statement, but don't have to do it as such!\n\n<details><summary>Hints</summary> \n\n- You will need `dropna`, `groupby`, `median` and `sort_values`.\n\n</details>",
"# %load _solutions/case2_observations27.py\n\n# %load _solutions/case2_observations28.py",
"Species abundance\n<div class=\"alert alert-success\">\n\n**EXERCISE 17**\n\nWhich 8 species (use the `name` column to identify the different species) have been observed most over the entire data set?\n\n<details><summary>Hints</summary>\n\n- Pandas provide a function to combine sorting and showing the first n records, see [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.nlargest.html)...\n\n</details>",
"# %load _solutions/case2_observations29.py\n\n# %load _solutions/case2_observations30.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 18**\n\n- What is the number of different species (`name`) in each of the `verbatimLocality` plots? Use the `nunique` method. Assign the output to a new variable `n_species_per_plot`.\n- Define a Matplotlib `Figure` (`fig`) and `Axes` (`ax`) to prepare a plot. Make a horizontal bar chart using Pandas `plot` function linked to the just created Matplotlib `ax`. Each bar represents the `species per plot/verbatimLocality`. Change the y-label to 'Plot number'.\n\n<details><summary>Hints</summary>\n\n- _...in each of the..._ should provide a hint to use `groupby` for this exercise. The `nunique` is the aggregation function for each of the groups.\n- `fig, ax = plt.subplots()` prepares a Matplotlib Figure and Axes.\n\n</details>",
"# %load _solutions/case2_observations31.py\n\n# %load _solutions/case2_observations32.py\n\n# %load _solutions/case2_observations33.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 19**\n\n- What is the number of plots (`verbatimLocality`) each of the species (`name`) has been observed in? Assign the output to a new variable `n_plots_per_species`. Sort the counts from low to high.\n- Make a horizontal bar chart using Pandas `plot` function to show the number of plots each of the species was found in (using the `n_plots_per_species` variable). \n\n<details><summary>Hints</summary>\n\n- Use the previous exercise to solve this one.\n\n</details>",
"# %load _solutions/case2_observations34.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 20**\n\n- Starting from the `survey_data`, calculate the amount of males and females present in each of the plots (`verbatimLocality`). The result should return the counts for each of the combinations of `sex` and `verbatimLocality`. Assign to a new variable `n_plot_sex` and ensure the counts are in a column named \"count\".\n- Use a `pivot_table` to convert the `n_plot_sex` DataFrame to a new DataFrame with the `verbatimLocality` as index and `male`/`female` as column names. Assign to a new variable `pivoted`.\n\n<details><summary>Hints</summary>\n\n- _...for each of the combinations..._ `groupby` can also be used with multiple columns at the same time.\n- If a `groupby` operation gives a Series as result, you can give that Series a name with the `.rename(..)` method.\n- `reset_index()` is useful function to convert multiple indices into columns again.\n\n</details>",
"# %load _solutions/case2_observations35.py\n\n# %load _solutions/case2_observations36.py",
"As such, we can use the variable pivoted to plot the result:",
"pivoted.plot(kind='bar', figsize=(12, 6), rot=0)",
"<div class=\"alert alert-success\">\n\n**EXERCISE 21**\n\nRecreate the previous plot with the `catplot` function from the Seaborn library directly starting from <code>survey_data</code>. \n\n<details><summary>Hints</summary>\n\n- Check the `kind` argument of the `catplot` function to find out how to use counts to define the bars instead of a `y` value.\n- To link a column to different colors, use the `hue` argument\n- Using `height` and `aspect`, the figure size can be optimized.\n\n\n</details>",
"# %load _solutions/case2_observations37.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 22**\n\n- Create a table, called `heatmap_prep`, based on the `survey_data` DataFrame with the row index the individual years, in the column the months of the year (1-> 12) and as values of the table, the counts for each of these year/month combinations.\n- Using the seaborn <a href=\"http://seaborn.pydata.org/generated/seaborn.heatmap.html\">documentation</a>, make a heatmap starting from the `heatmap_prep` variable.\n\n<details><summary>Hints</summary>\n\n- A `pivot_table` has an `aggfunc` parameter by which the aggregation of the cells combined into the year/month element are combined (e.g. mean, max, count,...). \n- You can use the `ID` to count the number of observations.\n- seaborn has an `heatmap` function which requires a short-form DataFrame, comparable to giving each element in a table a color value.\n\n</details>",
"# %load _solutions/case2_observations38.py",
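As a concrete sketch of the reshaping step (the sample records below are invented; only the column names `eventDate` and `ID` follow the exercise), the year/month table could be built like this:

```python
import pandas as pd

# Hypothetical mini-dataset standing in for survey_data (the records are made up).
survey_data = pd.DataFrame({
    "ID": range(6),
    "eventDate": pd.to_datetime([
        "2001-01-05", "2001-01-20", "2001-06-11",
        "2002-01-03", "2002-06-09", "2002-06-30",
    ]),
})
survey_data["year"] = survey_data["eventDate"].dt.year
survey_data["month"] = survey_data["eventDate"].dt.month

# Rows = years, columns = months, cell values = number of observations.
heatmap_prep = pd.pivot_table(
    survey_data, index="year", columns="month",
    values="ID", aggfunc="count",
)
print(heatmap_prep)
```

Seaborn's `heatmap` function can then be called directly on `heatmap_prep`.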
"Note that we started from a tidy data format (also called long format) and converted it to a wide format with the years as row index, the months as columns and the counts for each of these year/month combinations as values.\n<div class=\"alert alert-success\">\n\n**EXERCISE 23**\n\n- Make a summary table with the number of records of each of the species in each of the plots (called `verbatimLocality`). Each of the species `name`s is a row index and each of the `verbatimLocality` plots is a column name.\n- Use the Seaborn <a href=\"http://seaborn.pydata.org/generated/seaborn.heatmap.html\">documentation</a> to make a heatmap.\n\n<details><summary>Hints</summary>\n\n- Make sure to pass the correct columns to respectively the `index`, `columns`, `values` and `aggfunc` parameters of the `pivot_table` function. You can use the `ID` to count the number of observations for each name/locality combination (when counting rows, the exact column doesn't matter).\n\n</details>",
"# %load _solutions/case2_observations39.py\n\n# %load _solutions/case2_observations40.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 24**\n\nMake a plot visualizing the evolution of the number of observations for each of the individual __years__ (i.e. annual counts) using the `resample` method.\n\n<details><summary>Hints</summary>\n\n- You want to `resample` the data using the `eventDate` column to create annual counts. If the index is not a datetime-index, you can use the `on=` keyword to specify which datetime column to use.\n- `resample` needs an aggregation function on how to combine the values within a single 'group' (in this case data within a year). In this example, we want to know the `size` of each group, i.e. the number of records within each year.\n\n</details>",
"# %load _solutions/case2_observations41.py",
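A minimal sketch of the resample idea, using a handful of made-up observation dates (only the `eventDate` column name is taken from the exercise):

```python
import pandas as pd

# Invented observation dates standing in for the survey records.
survey_data = pd.DataFrame({
    "eventDate": pd.to_datetime(
        ["2000-03-01", "2000-07-15", "2001-02-02", "2001-11-20", "2001-12-31"]
    )
})

# Resample on the datetime column into year-start bins and count records per bin.
annual_counts = survey_data.resample("YS", on="eventDate").size()
print(annual_counts)
# annual_counts.plot() would then draw the evolution over time.
```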
"(OPTIONAL SECTION) Evolution of species during monitoring period\nIn this section, all plots can be made with the embedded Pandas plot function, unless specifically asked otherwise.\n<div class=\"alert alert-success\">\n\n**EXERCISE 25**\n\nUse the Pandas `plot` function to plot the number of records for `Dipodomys merriami` for each month of the year (January (1) -> December (12)), aggregated over all years.\n\n<details><summary>Hints</summary>\n\n- _...for each month of..._ requires `groupby`. \n- `resample` is not useful here, as we do not want to change the time-interval, but look at the month of the year (over all years)\n\n</details>",
"# %load _solutions/case2_observations42.py\n\n# %load _solutions/case2_observations43.py",
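The key difference with the previous exercise is that we group by the month number rather than resampling. A sketch on invented records for a single species:

```python
import pandas as pd

# Invented records for one species; month-of-year aggregates over all years.
obs = pd.DataFrame({
    "eventDate": pd.to_datetime(
        ["2000-01-10", "2001-01-22", "2000-04-05", "2002-04-18", "2002-04-30"]
    )
})

# groupby the month number (1-12), not resample: we pool the same month across years.
monthly_counts = obs.groupby(obs["eventDate"].dt.month).size()
print(monthly_counts)
# monthly_counts.plot(kind="bar") would give the bar chart asked for.
```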
"<div class=\"alert alert-success\">\n\n**EXERCISE 26**\n\nPlot, for the species 'Dipodomys merriami', 'Dipodomys ordii', 'Reithrodontomys megalotis' and 'Chaetodipus baileyi', the monthly number of records as a function of time for the whole monitoring period. Plot each of the individual species in a separate subplot and provide them all with the same y-axis scale\n\n<details><summary>Hints</summary>\n\n- `isin` is useful to select from within a list of elements.\n- `groupby` AND `resample` need to be combined. We do want to change the time-interval to represent data as a function of time (`resample`) and we want to do this _for each name/species_ (`groupby`). The order matters!\n- `unstack` is a Pandas function a bit similar to `pivot`. Check the [unstack documentation](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.unstack.html) as it might be helpful for this exercise.\n\n</details>",
"# %load _solutions/case2_observations44.py\n\n# %load _solutions/case2_observations45.py\n\n# %load _solutions/case2_observations46.py",
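The groupby-then-resample-then-unstack chain might look like this on a tiny invented two-species sample (here the datetime column is moved into the index first; the `on=` keyword shown in the hint is the alternative):

```python
import pandas as pd

# Invented two-species sample; column names follow the exercise.
obs = pd.DataFrame({
    "name": ["A", "A", "B", "B", "B"],
    "eventDate": pd.to_datetime(
        ["2000-01-03", "2000-01-25", "2000-01-10", "2000-02-14", "2000-02-20"]
    ),
}).set_index("eventDate")

# Per species, count records in monthly bins; order matters: groupby first, then resample.
month_evolution = obs.groupby("name").resample("MS").size()

# unstack moves the species level into columns -> one column per species,
# which plots naturally as subplots via .plot(subplots=True, sharey=True).
wide = month_evolution.unstack("name")
print(wide)
```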
"<div class=\"alert alert-success\">\n\n**EXERCISE 27**\n\nRecreate the same plot as in the previous exercise using the Seaborn `relplot` function with the `month_evolution` variable.\n\n<details><summary>Hints</summary>\n\n- We want to have the `counts` as a function of `eventDate`, so link these columns to y and x respectively.\n- Seaborn creates subplots via _facetting_ (splitting the data over multiple facets) by linking a column name to the `row`/`col` parameter. \n- Using `height` and `aspect`, the figure size can be optimized.\n\n</details>",
"# Given as solution..\nsubsetspecies = survey_data[survey_data[\"name\"].isin(['Dipodomys merriami', 'Dipodomys ordii',\n 'Reithrodontomys megalotis', 'Chaetodipus baileyi'])]\nmonth_evolution = subsetspecies.groupby(\"name\").resample('M', on='eventDate').size().rename(\"counts\")\nmonth_evolution = month_evolution.reset_index()\n\n# %load _solutions/case2_observations47.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 28**\n\nPlot the annual number of occurrences for each of the 'taxa' as a function of time using Seaborn. Plot each taxon in a separate subplot and do not share the y-axis among the facets.\n\n<details><summary>Hints</summary>\n\n- Combine `resample` and `groupby`!\n- Check out the previous exercise for the plot function.\n- Pass `sharey=False` to the `facet_kws` argument as a dictionary.\n\n</details>",
"# %load _solutions/case2_observations48.py\n\n# %load _solutions/case2_observations49.py\n\n# %load _solutions/case2_observations50.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 29**\n\nThe observations were taken by volunteers. You wonder on which day of the week the most observations were made. Calculate for each day of the week (`weekday`) the number of observations and make a barplot.\n\n<details><summary>Hints</summary>\n\n- Did you know the Python standard Library has a module `calendar` which contains names of week days, month names,...?\n\n</details>",
"# %load _solutions/case2_observations51.py",
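A sketch of the weekday counting, with invented dates and the stdlib `calendar` module supplying readable labels:

```python
import calendar
import pandas as pd

# Invented dates; 'eventDate' matches the survey column name.
obs = pd.DataFrame({
    "eventDate": pd.to_datetime(
        ["2020-01-06", "2020-01-07", "2020-01-13", "2020-01-18", "2020-01-20"]
    )
})

# dt.weekday gives 0 (Monday) .. 6 (Sunday); count observations per weekday.
per_day = obs.groupby(obs["eventDate"].dt.weekday).size()

# calendar.day_name maps the weekday numbers to names for the plot labels.
per_day.index = [calendar.day_name[d] for d in per_day.index]
print(per_day)
# per_day.plot(kind="barh") would show which weekday volunteers preferred.
```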
"Nice work!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
moonbury/pythonanywhere
|
github/PythonDataScienceEssentials/chapter_1/First steps.ipynb
|
gpl-3.0
|
[
"Let's first check the Python version you have installed on your machine.\nRemember, to run the examples, it must be 2.7.X",
"import sys\nprint \"Your Python version is\", sys.version",
"Now let's check if you have all the necessary toolkits installed and working properly:",
"errors = 0\n\ntry:\n import numpy as np\n print \"Numpy installed, version\", np.__version__\nexcept ImportError:\n print \"Numpy is not installed!\"\n errors += 1\n\ntry:\n import scipy\n print \"Scipy installed, version\", scipy.__version__\nexcept ImportError:\n print \"Scipy is not installed!\"\n errors += 1\n\ntry:\n import matplotlib\n print \"Matplotlib installed, version\", matplotlib.__version__\nexcept ImportError:\n print \"Matplotlib is not installed!\"\n errors += 1\n\ntry:\n import sklearn\n print \"Sklearn installed, version\", sklearn.__version__\nexcept ImportError:\n print \"Sklearn is not installed!\"\n errors += 1\n\ntry:\n import networkx\n print \"Networkx installed, version\", networkx.__version__\nexcept ImportError:\n print \"Networkx is not installed!\"\n errors += 1\n\ntry:\n import nltk\n print \"Nltk installed, version\", nltk.__version__\nexcept ImportError:\n print \"Nltk is not installed!\"\n errors += 1",
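The six try/except blocks above all follow the same pattern, so they could be folded into a loop. A sketch using `importlib` (the helper name `check_packages` and the deliberately bogus second module name are invented for this example):

```python
import importlib

def check_packages(names):
    """Return {name: version-or-None}; None marks a missing package."""
    found = {}
    for name in names:
        try:
            mod = importlib.import_module(name)
            # Not every module exposes __version__, so fall back gracefully.
            found[name] = getattr(mod, "__version__", "unknown")
        except ImportError:
            found[name] = None
    return found

# 'json' ships with Python; the second name is deliberately bogus.
status = check_packages(["json", "surely_not_installed_xyz"])
for name, version in sorted(status.items()):
    print("{}: {}".format(name, version if version else "NOT installed"))
```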
"Here's the verdict:",
"if errors == 0:\n print \"Your machine can run the code\"\nelse:\n print \"We found\", errors, \"errors. Please check them and install the missing toolkits\""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
FlorentSilve/Udacity_ML_nanodegree
|
projects/titanic_survival_exploration/titanic_survival_exploration.ipynb
|
mit
|
[
"Machine Learning Engineer Nanodegree\nIntroduction and Foundations\nProject: Titanic Survival Exploration\nIn 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.\n\nTip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. \n\nGetting Started\nTo begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.\nRun the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.\n\nTip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.",
"# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nfrom IPython.display import display # Allows the use of display() for DataFrames\n\n# Import supplementary visualizations code visuals.py\nimport visuals as vs\n\n# Pretty display for notebooks\n%matplotlib inline\n\n# Load the dataset\nin_file = 'titanic_data.csv'\nfull_data = pd.read_csv(in_file)\n\n# Print the first few entries of the RMS Titanic data\ndisplay(full_data.head())",
"From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:\n- Survived: Outcome of survival (0 = No; 1 = Yes)\n- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)\n- Name: Name of passenger\n- Sex: Sex of the passenger\n- Age: Age of the passenger (Some entries contain NaN)\n- SibSp: Number of siblings and spouses of the passenger aboard\n- Parch: Number of parents and children of the passenger aboard\n- Ticket: Ticket number of the passenger\n- Fare: Fare paid by the passenger\n- Cabin Cabin number of the passenger (Some entries contain NaN)\n- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)\nSince we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.\nRun the code cell below to remove Survived as a feature of the dataset and store it in outcomes.",
"# Store the 'Survived' feature in a new variable and remove it from the dataset\noutcomes = full_data['Survived']\ndata = full_data.drop('Survived', axis = 1)\n\n# Show the new dataset with 'Survived' removed\ndisplay(data.head())",
"The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].\nTo measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers. \nThink: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?",
"def accuracy_score(truth, pred):\n \"\"\" Returns accuracy score for input truth and predictions. \"\"\"\n \n # Ensure that the number of predictions matches number of outcomes\n if len(truth) == len(pred): \n \n # Calculate and return the accuracy as a percent\n return \"Predictions have an accuracy of {:.2f}%.\".format((truth == pred).mean()*100)\n \n else:\n return \"Number of predictions does not match number of outcomes!\"\n \n# Test the 'accuracy_score' function\npredictions = pd.Series(np.ones(5, dtype = int))\nprint accuracy_score(outcomes[:5], predictions)",
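To see what the `(truth == pred).mean()` trick inside `accuracy_score` computes, here is a tiny standalone example with made-up truth values:

```python
import numpy as np
import pandas as pd

# Toy truth/prediction vectors (invented), mirroring the five-passenger test above.
truth = pd.Series([0, 1, 1, 0, 1])
pred = pd.Series(np.ones(5, dtype=int))  # predict "survived" for everyone

# (truth == pred) is a boolean Series; its mean is the fraction of correct guesses.
accuracy = (truth == pred).mean()
print("Accuracy: {:.2f}%".format(accuracy * 100))  # 3 of 5 correct -> 60.00%
```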
"Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.\n\nMaking Predictions\nIf we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.\nThe predictions_0 function below will always predict that a passenger did not survive.",
"def predictions_0(data):\n \"\"\" Model with no features. Always predicts a passenger did not survive. \"\"\"\n\n predictions = []\n for _, passenger in data.iterrows():\n \n # Predict the survival of 'passenger'\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_0(data)",
"Question 1\nUsing the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?\nHint: Run the code cell below to see the accuracy of this prediction.",
"print accuracy_score(outcomes, predictions)",
"Answer: 61.62%\n\nLet's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.\nRun the code cell below to plot the survival outcomes of passengers based on their sex.",
"vs.survival_stats(data, outcomes, 'Sex')",
"Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.",
"def predictions_1(data):\n \"\"\" Model with one feature: \n - Predict a passenger survived if they are female. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n if passenger['Sex']=='female':\n predictions.append(1)\n else:\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_1(data)",
"Question 2\nHow accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?\nHint: Run the code cell below to see the accuracy of this prediction.",
"print accuracy_score(outcomes, predictions)",
"Answer: 78.68%\n\nUsing just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.\nRun the code cell below to plot the survival outcomes of male passengers based on their age.",
"vs.survival_stats(data, outcomes, 'Age', [\"Sex == 'male'\"])",
"Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.",
"def predictions_2(data):\n \"\"\" Model with two features: \n - Predict a passenger survived if they are female.\n - Predict a passenger survived if they are male and younger than 10. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n if passenger['Sex']=='female' or passenger['Age']<10:\n predictions.append(1)\n else:\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_2(data)",
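The row-by-row loop can also be written as a single vectorized expression; the four sample passengers below are invented, but the column names match the dataset:

```python
import pandas as pd

# Invented passengers with the same column names as the Titanic data.
data = pd.DataFrame({
    "Sex": ["female", "male", "male", "male"],
    "Age": [30.0, 8.0, 40.0, float("nan")],
})

# Vectorized form of predictions_2: survived if female, or if younger than 10.
# NaN ages compare False against 10, so unknown-age males are predicted not to survive.
predictions = ((data["Sex"] == "female") | (data["Age"] < 10)).astype(int)
print(predictions.tolist())  # [1, 1, 0, 0]
```

Vectorized comparisons avoid the per-row `iterrows()` overhead, which matters on larger datasets.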
"Question 3\nHow accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?\nHint: Run the code cell below to see the accuracy of this prediction.",
"print accuracy_score(outcomes, predictions)",
"Answer: 79.35%\n\nAdding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin over simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. \nPclass, Sex, Age, SibSp, and Parch are some suggested features to try.\nUse the survival_stats function below to examine various survival statistics.\nHint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: [\"Sex == 'male'\", \"Age < 18\"]",
"vs.survival_stats(data, outcomes, 'Age', [\"Sex == 'male'\", \"Age < 18\"])",
"After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.\nMake sure to keep track of the various features and conditions you tried before arriving at your final prediction model.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.",
"def predictions_3(data):\n \"\"\" Model with multiple features. Makes a prediction with an accuracy of at least 80%. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Third-class passengers who embarked in Southampton are predicted\n # not to survive; otherwise fall back to the predictions_2 rules\n if passenger['Pclass']==3 and passenger['Embarked']=='S':\n predictions.append(0)\n elif passenger['Sex']=='female' or passenger['Age']<10:\n predictions.append(1)\n else:\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_3(data)",
"Question 4\nDescribe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?\nHint: Run the code cell below to see the accuracy of your predictions.",
"print accuracy_score(outcomes, predictions)",
"Answer: To improve the accuracy of predictions, I tried to identify a subset of misclassified passengers (for instance a subset of females who did not survive and for which predictions_2 is inaccurate). Stratifying by passenger class was not sufficient (50% of females in class 3 still survive). However, stratifying by both class and port of embarkation showed that the majority of females in class 3 who embarked in Southampton did not survive.\nConclusion\nAfter several iterations of exploring and conditioning on the data, you have built a useful algorithm for predicting the survival of each passenger aboard the RMS Titanic. The technique applied in this project is a manual implementation of a simple machine learning model, the decision tree. A decision tree splits a set of data into smaller and smaller groups (called nodes), by one feature at a time. Each time a subset of the data is split, our predictions become more accurate if the resulting subgroups are more homogeneous (contain similar labels) than before. The advantage of having a computer do things for us is that it will be more exhaustive and more precise than our manual exploration above. This link provides another introduction into machine learning using a decision tree.\nA decision tree is just one of many models that come from supervised learning. In supervised learning, we attempt to use features of the data to predict or model things with objective outcome labels. That is to say, each of our data points has a known outcome value, such as a categorical, discrete label like 'Survived', or a numerical, continuous value like predicting the price of a house.\nQuestion 5\nThink of a real-world scenario where supervised learning could be applied. What would be the outcome variable that you are trying to predict? Name two features about the data used in this scenario that might be helpful for making the predictions. 
\nAnswer: A real-world scenario in Finance where supervised learning can be, and is, applied is assessing the credit risk of a given borrower. One question to answer could be the following: given a set of characteristics about a loan and the borrower in question, what is the probability that the borrower defaults over the lifetime of the loan? In that case, the outcome variable would be a binary variable 0/1 corresponding to default. An alternative question could be: what is the probability that the borrower defaults at every point in time over the original lifetime of the loan? In that case, the outcome variable is P(t), the probability of default at time t.\nFeatures to make this prediction include: income, age, loan-to-value ratio, debt-to-income ratio, credit score, employment status, size of the property, zip-code, etc.\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
danijel3/ASRDemos
|
notebooks/VoxforgeDataPrep.ipynb
|
apache-2.0
|
[
"Preparing the Voxforge database\nThis notebook will demonstrate how to prepare the free Voxforge database for training. This database is a medium-sized (~80 hours) database available online for free under the GPL license. A much more common database used in most research is the TIMIT, but that costs $250 and is also much smaller (~4h - although much more professionally developed than Voxforge). The best alternative today is the Librispeech database, but that has a few dozen GB of data (almost 1000h) and wouldn't be sensible for a simple demo. So Voxforge it is...\nThe first thing to do is realize what a speech corpus actually is: in its simplest form it is a collection of audio files (containing preferably speech only) with a set of transcripts of the speech. There are a few extensions to this that are worth noting:\n * phonemes - transcripts are usually presented as a list of words - although not a rule, it is often easier to start the recognition process with phonemes and go from there. Voxforge defines a list of 39 phonemes (+ silence) and contains a lexicon mapping the words into phonemes (more about that below)\n * aligned speech - the transcripts are usually just a sequence of words/phonemes, but they don't denote which word/phoneme occurs when - there are models that can learn from that (seq. learning, e.g. CTC, attention models), but having alignments is usually a big plus. TIMIT was hand-aligned by a group of professionals (which is why it's a popular resource for research), but Voxforge wasn't. Fortunately, we can use one of the many available tools to do this automatically (with a margin of error - more on that below)\n * meta-data - each recording session in the Voxforge database contains a readme file with useful information about the speaker and the environment that the recording took place in. When making a serious speech recognizer, this information can be very useful (e.g. 
for speaker adaptation - taking into account the speaker id, gender, age, etc...)\nDownloading the corpus\nTo start working with the corpus, it needs to be downloaded first. All the files can be found in the download section of the Voxforge website under this URL:\nhttp://www.repository.voxforge1.org/downloads/SpeechCorpus/Trunk/Audio/Main/16kHz_16bit/\nThere are 2 versions of the main corpus: sampled at 16kHz and 8kHz. The 16 kHz one is of better quality and is known as \"desktop quality speech\". While the original recordings were made at an even higher quality (44.1 kHz), 16k is completely sufficient for recognizing speech (higher quality doesn't help much). 8 kHz is known as the telephony quality and is a standard value for the old (uncompressed, aka T0) digital telephone signal. If you are making a recognizer that has to work in the telephony environment, you should use this data instead.\nTo download the whole dataset, a small program in Python is included in this demo. Be warned, this can take a long time (I think Voxforge is throttling the speed to save on costs) and restarts may be necessary. The Python method does check for failed downloads (compares file sizes) and restarts whatever wasn't downloaded completely, so you can run the method 2-3 times to make sure everything is ok.\nAlternatively, you can use a program like wget and enter this command (where \"audio\" is the dir to save the data to):\nwget -P audio -l 1 -N -nd -c -e robots=off -A tgz -r -np http://www.repository.voxforge1.org/downloads/SpeechCorpus/Trunk/Audio/Main/16kHz_16bit\n\nFirst let's import all the voxforge methods from the python directory. 
These will need the following libraries installed on your system:\n * numpy - for working with data\n * random, urllib, lxml, os, tarfile, gzip, re, pickle, shutil - these are standard system libraries and anyone should have them\n * scikits.audiolab - to load the audio files from the database (WAV and FLAC files)\n * tqdm - a simple library for progressbars that you can install using pip",
"import sys\n\nsys.path.append('../python')\n\nfrom voxforge import *",
"Ignore any warnings above (I couldn't be bothered to compile audiolab with Alsa). Below you will find the method to download the Voxforge database. You only need to do this once, so you can run it either here or from a console or use wget. Be warned that it takes a long time (as mentioned earlier) so it's a good idea to leave it running overnight.",
"downloadVoxforgeData('../audio')",
"Loading the corpus\nOnce the data is downloaded and stored in the 'audio' subdir of the main project dir, we can start loading the data into a Python datastructure. There are several methods that can be used for that. The following method will load a file and display its contents:",
"f=loadFile('../audio/Joel-20080716-qoz.tgz')\nprint f.props\nprint f.prompts\nprint f.data\n\n%xdel f",
"The loadBySpeaker method will load the whole folder and organize its contents by speakers (as a dictionary). Each utterance contains only the data and the prompts. For this demo, only 30 files are read - as this isn't a method we are going to ultimately use.",
"corp=loadBySpeaker('../audio', limit=30)",
"The corpus can also be extended with the phonetic transcription of the utterances using a lexicon file. Voxforge does provide such a file on its website and it is downloaded automatically (if it doesn't already exist).\nNote that a single word can have several transcriptions. In the lexicon, these alternatives will have sequential number suffixes added to the word (word, word2, word3, etc), but this particular function will do nothing about that. Choosing the right pronunciation variant has to be done either manually, or by using a more sophisticated program (a pre-trained ASR system) to choose the right version automatically.",
"addPhonemesSpk(corp,'../data/lex.tgz')\n\nprint corp.keys()\n\nspk=corp.keys()[0]\n\nprint corp[spk]\n\n%xdel corp",
"Aligned corpus\nAs mentioned earlier, this sort of corpus has its downsides. For one, we don't know when each phoneme occurs so we cannot train the system discriminatively. While it's still possible, it would be nice if we could start with a simpler example. Another problem is choosing the right pronunciation variant mentioned above.\nTo solve these issues, an automatic alignment was created using a different ASR system called Kaldi. This system is a very good ASR solution that implements various types of models. It also contains simple out-of-the-box scripts for training on Voxforge data.\nTo create the alignments using Kaldi, a working system had to be trained first and, what's interesting, the same Voxforge data was used to train the system. How was this done? Well, Kaldi uses (among other things) a classic Gaussian Mixture Model and trains it using the EM algorithm. Initially the alignment is assumed to be even throughout the file, but as the system is trained iteratively, the model gets better and thus the alignment gets more accurate. The system is trained with gradually better models to achieve even more accurate results and the solution provided here is generated using the \"tri3b\" model, as described in the scripts.\nThe alignments in Kaldi are stored in special binary files, but there are simple tools to help convert them into something easier to use. The type of file chosen for this example is the CTM file, which contains a series of lines in a text file, each line describing a single word or phoneme. The description has 5 columns: encoded file name, unused id (always 1), segment start, segment length and segment text (i.e. word or phoneme name/value). This file was generated using Kaldi, compressed using gzip and stored in 'ali.ctm.gz' in the 'data' directory of this project.\nPlease note that the number of files in this aligned set is smaller than the actual count in the whole Voxforge dataset. 
This is because there is a small percentage of errors in the database (around 100 files or so) and some recordings are of such poor quality that Kaldi couldn't generate a reasonable alignment for these files. We can simply ignore them here. This, however, doesn't mean that all the alignments present in the CTM are 100% accurate. There can still be mistakes there, but hopefully they are unlikely enough to not cause any issue.\nWhile this file contains everything that we need, it'd be useful to convert it into a datastructure that can be easily used in Python. The convertCTMToAli method is used for that:",
"convertCTMToAli('../data/ali.ctm.gz','../data/phones.list','../audio','../data/ali.pklz')",
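The five-column CTM layout described above is simple enough to parse by hand. Below is a minimal, hypothetical sketch of reading one such line in Python; the function name and the sample line are made up for illustration, and the project's own convertCTMToAli additionally resolves phone lists and audio archives:

```python
# Hypothetical sketch: parse one CTM line of the form
#   <utt-id> <channel (always 1)> <start-sec> <length-sec> <word-or-phone>
def parse_ctm_line(line):
    utt_id, channel, start, length, token = line.split()
    return {'utt': utt_id,
            'start': float(start),
            'length': float(length),
            'token': token}

rec = parse_ctm_line('anonymous-20081215-xyz_b0123 1 0.250 0.080 ah')
print(rec)
```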
"We store the generated data structure in a gzipped and pickled file, so we don't need to perform this step more than once. This file is already included in the repository, so you can skip the step above.\nWe can read the file like this:",
"import gzip\nimport pickle\nwith gzip.open('../data/ali.pklz') as f:\n ali=pickle.load(f)\n \nprint 'Number of utterances: {}'.format(len(ali))",
"Here is an example of the structure and its attributes loaded from that file:",
"print ali[100].spk\nprint ali[100].phones\nprint ali[100].ph_lens\nprint ali[100].archive\nprint ali[100].audiofile\nprint ali[100].data",
"Please note that the audio data is not yet loaded at this step (it's set to None).\nTest data\nBefore we go on, we need to prepare our test set. This needs to be completely independent from the training data and it needs to be the same for all the experiments we want to do, if we want to be able to make them comparable in any way. The test set also needs to be \"representative\" of the whole data we are working on (so its files need to be chosen randomly from all the data).\nThis isn't the only way we could perform our experiments - very often people use what is known as \"k-fold cross validation\", but that would take a lot of time to do for all our experiments, so choosing a single representative evaluation set is a more convenient option.\nNow, generally most corpora have a designated evaluation set: for example, TIMIT has just such a set of 192 files that is used by most papers on the subject. Voxforge doesn't have anything like that and there aren't many papers out there using it as a resource anyway. One of the most advanced uses of Voxforge is in Kaldi, and there they simply shuffle the training set and choose 20-30 random speakers from it. To make our experiments at least \"comparable\" to TIMIT, we will do a similar thing here, but we will save the list of speakers (and their files) so anyone can use the same one when conducting their experiments.\nWARNING If you want to compare the results of your own experiments to the ones from these notebooks, then don't run the code below and use the files provided in the repo. If you run the code below, you will reset the test ordering and your experiments won't be strictly comparable to the ones from these notebooks.",
"import random\n\n#collect the set of speaker names\nspk=set()\nfor utt in ali:\n spk.add(utt.spk)\n\nprint 'Number of speakers: {}'.format(len(spk))\n\n#choose 20 random speakers\ntst_spk=list(spk)\nrandom.shuffle(tst_spk)\ntst_spk=tst_spk[:20]\n\n\n#save the list for reference - if anyone else wants to use our list (will be saved in the repo)\nwith open('../data/test_spk.list', 'w') as f:\n for spk in tst_spk:\n f.write(\"{}\\n\".format(spk))\n\nali_test=filter(lambda x: x.spk in tst_spk, ali)\nali_train=filter(lambda x: not x.spk in tst_spk, ali)\n\nprint 'Number of test utterances: {}'.format(len(ali_test))\nprint 'Number of train utterances: {}'.format(len(ali_train))\n\n#shuffle the utterances, to make them more uniform\nrandom.shuffle(ali_test)\nrandom.shuffle(ali_train)\n\n#save the data for future use\nwith gzip.open('../data/ali_test.pklz','wb') as f:\n pickle.dump(ali_test,f,pickle.HIGHEST_PROTOCOL)\n \nwith gzip.open('../data/ali_train.pklz','wb') as f:\n pickle.dump(ali_train,f,pickle.HIGHEST_PROTOCOL) ",
"To make things more manageable for this demo, we will take 5% of the training set and work with that instead of the whole 80 hours. 5% should give us an amount similar to TIMIT. If you wish to re-run the experiments using the whole dataset, go to the bottom of this notebook for further instructions.",
"num=int(len(ali_train)*0.05)\n\nali_small=ali_train[:num]\n\nwith gzip.open('../data/ali_train_small.pklz','wb') as f:\n pickle.dump(ali_small,f,pickle.HIGHEST_PROTOCOL)",
"Here we load additional data using the loadAlignedCorpus method. It loads the alignment and the appropriate audio datafile for each utterance (it can take a while for larger corpora):",
"corp=loadAlignedCorpus('../data/ali_train_small.pklz','../audio')",
"We have to do the same for the test data:",
"corp_test=loadAlignedCorpus('../data/ali_test.pklz','../audio')",
"Now we can check if we have all the necessary data: phonemes, phoneme alignments and audio data.",
"print 'Number of utterances: {}'.format(len(corp))\n\nprint 'List of phonemes:\\n{}'.format(corp[0].phones)\nprint 'Lengths of phonemes:\\n{}'.format(corp[0].ph_lens)\nprint 'Audio:\\n{}'.format(corp[0].data)\n\nsamp_num=0\nfor utt in corp:\n samp_num+=utt.data.size\n\nprint 'Length of corpus: {} hours'.format(((samp_num/16000.0)/60.0)/60.0)",
"Feature extraction\nTo perform a simple test, we will use a standard set of audio features used in many, if not most, papers on speech recognition. This feature set will first split each file into a series of small chunks of equal size, giving about 100 such frames per second. Each chunk will then be converted into a vector of 39 real values. Furthermore, each vector will be assigned a phonetic class (a value from 0..39) thanks to the alignment created above. The problem can then be solved as a simple classification problem that maps a real vector to a phonetic class.\nThis particular set of features is calculated to match a specification developed in a classic toolkit known as HTK. All the details on this feature set can be found under the linked repository here. If you want to experiment with this feature set (highly encouraged), please read the description there.\nIn the code below, we extract the set of features for each utterance and store the results, together with the classification decision for each frame. For performance reasons, we will store all the files in HDF5 format using the h5py library. This will allow us to read data directly from the drive without wasting too much RAM. This isn't as important when doing small experiments, but it becomes relevant for the large ones.\nThe structure of the HDF5 file is broken into utterances. The file contains a list of utterances stored as groups in the root, and each utterance has 2 datasets: inputs and outputs. Normalized inputs are added later as well.\nSince we intend to use this procedure more than once, we will encapsulate it in a function:",
"import sys\n\nsys.path.append('../PyHTK/python')\n\nimport numpy as np\nfrom HTKFeat import MFCC_HTK\nimport h5py\n\nfrom tqdm import *\n\ndef extract_features(corpus, savefile):\n \n mfcc=MFCC_HTK()\n h5f=h5py.File(savefile,'w')\n\n uid=0\n for utt in tqdm(corpus):\n\n feat=mfcc.get_feats(utt.data)\n delta=mfcc.get_delta(feat)\n acc=mfcc.get_delta(delta)\n\n feat=np.hstack((feat,delta,acc))\n utt_len=feat.shape[0]\n\n o=[]\n for i in range(len(utt.phones)):\n num=utt.ph_lens[i]/10\n o.extend([utt.phones[i]]*num)\n\n # here we fix an off-by-one error that happens very infrequently\n if utt_len-len(o)==1:\n o.append(o[-1])\n\n assert len(o)==utt_len\n\n uid+=1\n #instead of a proper name, we simply use a unique identifier: utt00001, utt00002, ..., utt99999\n g=h5f.create_group('/utt{:05d}'.format(uid))\n \n g['in']=feat\n g['out']=o\n \n h5f.flush()\n \n h5f.close()",
"Now let's process the small training and test datasets:",
"extract_features(corp,'../data/mfcc_train_small.hdf5')\nextract_features(corp_test,'../data/mfcc_test.hdf5')",
"Normalization\nWhile usable as-is, many machine learning models will perform badly if the data isn't standardized. Standardization or normalization means making sure that all the samples are distributed on a reasonable scale - usually centered around 0 (with a mean of 0) and spread to a standard deviation of 1. The reason is that the data can come from various sources - some people are louder, some are quieter, some have higher pitched voices, some lower, some used more sensitive microphones than others, etc. That is why the audio between sessions can have various ranges of values. Normalization makes sure that all the recordings are tuned to a similar scale before processing.\nTo perform normalization we simply compute the mean and standard deviation of the given signal, then subtract the mean and divide by the standard deviation (thus making the new mean 0 and the new standard deviation 1). A common question is which signal we use to perform these calculations. Do we calculate it once for the whole corpus, or once per utterance? Maybe once per speaker, or once per session? Or maybe several times per utterance?\nGenerally, the longer the signal we do this on, the better (the statistics get more accurate), but performing it only once on the whole corpus doesn't make much sense, for the reasons given above. The reason we normalize the data is to remove the differences between recording sessions, so at minimum we should normalize each session separately. In practice, it's easier to just normalize each utterance, as they are long enough on their own. This is known as \"batch normalization\" (where each utterance is one batch).\nBut this makes one assumption: that the recording conditions don't change significantly throughout the whole utterance. In certain cases, it may actually be a good idea to split the utterance into several parts and normalize them separately, in case the volume changes throughout the recording, or maybe there is more than one speaker in a single file. This is solved best by using a technique known as \"online normalization\", which uses a sliding window to compute the statistics and can react to rapid changes in the values. This is, however, beyond the scope of this simple demo (and shouldn't really be necessary for this corpus anyway).",
"def normalize(corp_file):\n \n h5f=h5py.File(corp_file)\n\n b=0\n for utt in tqdm(h5f):\n \n f=h5f[utt]['in']\n n=f-np.mean(f)\n n/=np.std(n) \n h5f[utt]['norm']=n\n \n h5f.flush()\n \n h5f.close()\n \n\nnormalize('../data/mfcc_train_small.hdf5')\nnormalize('../data/mfcc_test.hdf5')",
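The "online normalization" mentioned above could be sketched roughly as follows. This is only an illustration of the sliding-window idea, not part of the corpus tools; the window size and edge handling here are arbitrary choices:

```python
import numpy as np

np.random.seed(0)

def online_normalize(x, win=101):
    # Normalize each sample by the mean/std of a centered sliding window,
    # so the statistics can follow slow changes in volume within a file.
    half = win // 2
    out = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        chunk = x[max(0, i - half):i + half + 1]
        out[i] = (x[i] - chunk.mean()) / (chunk.std() + 1e-8)
    return out

# a signal whose volume jumps halfway through
x = np.concatenate([np.random.randn(500), 10 + 5 * np.random.randn(500)])
y = online_normalize(x)
```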
"To see what's inside we can run the following command in the terminal:",
"!h5ls ../data/mfcc_test.hdf5/utt00001",
"Simple classification example\nTo finish, we will use a simple SGD classifier from the scikit-learn library to classify the phonemes from the database. We have all the datasets prepared above, so all we need to do is load the prepared arrays. We will use a special class called Corpus included in the data.py file. In the constructor we provide the path to the file and say that we wish to load the normalized inputs. Next we use the get() method to load the list of all the input and output values. This method returns a tuple - one value for inputs and one for outputs. Each of these is a list of arrays corresponding to individual utterances. We can then convert it into a single contiguous array using the concatenate and vstack methods:",
"from data import Corpus\nimport numpy as np\n\ntrain=Corpus('../data/mfcc_train_small.hdf5',load_normalized=True)\ntest=Corpus('../data/mfcc_test.hdf5',load_normalized=True)\n\ng=train.get()\ntr_in=np.vstack(g[0])\ntr_out=np.concatenate(g[1])\n\nprint 'Training input shape: {}'.format(tr_in.shape)\nprint 'Training output shape: {}'.format(tr_out.shape)\n\ng=test.get()\ntst_in=np.vstack(g[0])\ntst_out=np.concatenate(g[1])\n\nprint 'Test input shape: {}'.format(tst_in.shape)\nprint 'Test output shape: {}'.format(tst_out.shape)\n\ntrain.close()\ntest.close()",
"Here we create the SGD classifier model. Please note that the settings below work on the version 0.17 of scikit-learn, so it's recommended to upgrade. If you can't, then feel free to modify the settings to something that works for you. You may also turn on verbose to get more information on the training process. Here it's off to preserve space in the notebook.",
"import sklearn\nprint sklearn.__version__\n\nfrom sklearn.linear_model import SGDClassifier\n\nmodel=SGDClassifier(loss='log',n_jobs=-1,verbose=0,n_iter=100)",
"Here we train the model. It took 4 minutes for me:",
"%time model.fit(tr_in,tr_out)",
"Here we get about 52% accuracy, which is pretty bad for phoneme recognition. In other notebooks, we will try to improve on that.",
"acc=model.score(tst_in,tst_out)\nprint 'Accuracy: {:%}'.format(acc)",
"Other data\nHere we will also prepare the rest of the data needed to perform the other experiments. If you only wish to make simple experiments and don't want to spend time preparing large datasets, feel free to skip these steps. Be warned that the dataset for the full 80 hours of training data takes up to 10GB, so you will need that much space on your drive, as well as in RAM, to make it work using the code present in this notebook.",
"corp=loadAlignedCorpus('../data/ali_train.pklz','../audio')\nextract_features(corp,'../data/mfcc_train.hdf5')\nnormalize('../data/mfcc_train.hdf5')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
metpy/MetPy
|
v0.9/_downloads/8b48dbfbd7332023b4aeb5274ed5d62e/Point_Interpolation.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Point Interpolation\nCompares different point interpolation approaches.",
"import cartopy.crs as ccrs\nimport cartopy.feature as cfeature\nfrom matplotlib.colors import BoundaryNorm\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom metpy.cbook import get_test_data\nfrom metpy.interpolate import (interpolate_to_grid, remove_nan_observations,\n remove_repeat_coordinates)\nfrom metpy.plots import add_metpy_logo\n\ndef basic_map(proj):\n \"\"\"Make our basic default map for plotting\"\"\"\n fig = plt.figure(figsize=(15, 10))\n add_metpy_logo(fig, 0, 80, size='large')\n view = fig.add_axes([0, 0, 1, 1], projection=proj)\n view._hold = True # Work-around for CartoPy 0.16/Matplotlib 3.0.0 incompatibility\n view.set_extent([-120, -70, 20, 50])\n view.add_feature(cfeature.STATES.with_scale('50m'))\n view.add_feature(cfeature.OCEAN)\n view.add_feature(cfeature.COASTLINE)\n view.add_feature(cfeature.BORDERS, linestyle=':')\n return fig, view\n\n\ndef station_test_data(variable_names, proj_from=None, proj_to=None):\n with get_test_data('station_data.txt') as f:\n all_data = np.loadtxt(f, skiprows=1, delimiter=',',\n usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19),\n dtype=np.dtype([('stid', '3S'), ('lat', 'f'), ('lon', 'f'),\n ('slp', 'f'), ('air_temperature', 'f'),\n ('cloud_fraction', 'f'), ('dewpoint', 'f'),\n ('weather', '16S'),\n ('wind_dir', 'f'), ('wind_speed', 'f')]))\n\n all_stids = [s.decode('ascii') for s in all_data['stid']]\n\n data = np.concatenate([all_data[all_stids.index(site)].reshape(1, ) for site in all_stids])\n\n value = data[variable_names]\n lon = data['lon']\n lat = data['lat']\n\n if proj_from is not None and proj_to is not None:\n\n try:\n\n proj_points = proj_to.transform_points(proj_from, lon, lat)\n return proj_points[:, 0], proj_points[:, 1], value\n\n except Exception as e:\n\n print(e)\n return None\n\n return lon, lat, value\n\n\nfrom_proj = ccrs.Geodetic()\nto_proj = ccrs.AlbersEqualArea(central_longitude=-97.0000, central_latitude=38.0000)\n\nlevels = list(range(-20, 20, 1))\ncmap = 
plt.get_cmap('magma')\nnorm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)\n\nx, y, temp = station_test_data('air_temperature', from_proj, to_proj)\n\nx, y, temp = remove_nan_observations(x, y, temp)\nx, y, temp = remove_repeat_coordinates(x, y, temp)",
"Scipy.interpolate linear",
"gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='linear', hres=75000)\nimg = np.ma.masked_where(np.isnan(img), img)\nfig, view = basic_map(to_proj)\nmmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)\nfig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)",
"Natural neighbor interpolation (MetPy implementation)\nReference <https://github.com/Unidata/MetPy/files/138653/cwp-657.pdf>_",
"gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='natural_neighbor', hres=75000)\nimg = np.ma.masked_where(np.isnan(img), img)\nfig, view = basic_map(to_proj)\nmmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)\nfig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)",
"Cressman interpolation\nsearch_radius = 100 km\ngrid resolution = 75 km\nmin_neighbors = 1",
"gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='cressman', minimum_neighbors=1,\n hres=75000, search_radius=100000)\nimg = np.ma.masked_where(np.isnan(img), img)\nfig, view = basic_map(to_proj)\nmmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)\nfig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)",
"Barnes Interpolation\nsearch_radius = 100km\nmin_neighbors = 3",
"gx, gy, img1 = interpolate_to_grid(x, y, temp, interp_type='barnes', hres=75000,\n search_radius=100000)\nimg1 = np.ma.masked_where(np.isnan(img1), img1)\nfig, view = basic_map(to_proj)\nmmb = view.pcolormesh(gx, gy, img1, cmap=cmap, norm=norm)\nfig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)",
"Radial basis function interpolation\nlinear",
"gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='rbf', hres=75000, rbf_func='linear',\n rbf_smooth=0)\nimg = np.ma.masked_where(np.isnan(img), img)\nfig, view = basic_map(to_proj)\nmmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)\nfig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)\n\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
thewtex/SimpleITK-Notebooks
|
63_Registration_Initialization.ipynb
|
apache-2.0
|
[
"<h1 align=\"center\"> Registration Initialization: We Have to Start Somewhere</h1>\n\nInitialization is a critical aspect of most registration algorithms, given that most algorithms are formulated as an iterative optimization problem.\nIn many cases we perform initialization in an automatic manner by making assumptions with regard to the contents of the image and the imaging protocol. For instance, if we expect that images were acquired with the patient in a known orientation, we can align the geometric centers of the two volumes, or the centers of mass of the image contents if the anatomy is not centered in the image (this is what we previously did in this example).\nWhen the orientation is not known, or is known but incorrect, this approach will not yield a reasonable initial estimate for the registration.\nWhen working with clinical images, the DICOM tags define the orientation and position of the anatomy in the volume. The tags of interest are:\n<ul>\n <li> (0020|0032) Image Position (Patient): coordinates of the first transmitted voxel. </li>\n <li>(0020|0037) Image Orientation (Patient): directions of first row and column in 3D space. </li>\n <li>(0018|5100) Patient Position: Patient placement on the table \n <ul>\n <li> Head First Prone (HFP)</li>\n <li> Head First Supine (HFS)</li>\n <li> Head First Decubitus Right (HFDR)</li>\n <li> Head First Decubitus Left (HFDL)</li>\n <li> Feet First Prone (FFP)</li>\n <li> Feet First Supine (FFS)</li>\n <li> Feet First Decubitus Right (FFDR)</li>\n <li> Feet First Decubitus Left (FFDL)</li>\n </ul>\n </li>\n</ul>\n\nThe patient position is manually entered by the CT/MR operator and thus can be erroneous (HFP instead of FFP will result in a $180^\\circ$ orientation error).\nA heuristic, yet effective, solution is to use a sampling strategy over the parameter space. Note that this strategy is primarily useful in low-dimensional parameter spaces (rigid or possibly affine transformations).\nIn this notebook we illustrate how to sample the parameter space in a fixed pattern. We then initialize the registration with the parameters that correspond to the best similarity metric value obtained by our sampling.",
"import SimpleITK as sitk\nimport os\nimport numpy as np\n\nfrom ipywidgets import interact, fixed\nfrom downloaddata import fetch_data as fdata\n\nimport registration_callbacks as rc\nimport registration_utilities as ru\n\n# Always write output to a separate directory, we don't want to pollute the source directory. \nOUTPUT_DIR = 'Output'\n\n%matplotlib inline\n\n# This is the registration configuration which we use in all cases. The only parameter that we vary \n# is the initial_transform. \ndef multires_registration(fixed_image, moving_image, initial_transform):\n registration_method = sitk.ImageRegistrationMethod()\n registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)\n registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)\n registration_method.SetMetricSamplingPercentage(0.01)\n registration_method.SetInterpolator(sitk.sitkLinear)\n registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100, estimateLearningRate=registration_method.Once)\n registration_method.SetOptimizerScalesFromPhysicalShift() \n registration_method.SetInitialTransform(initial_transform)\n registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])\n registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas = [2,1,0])\n registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()\n\n registration_method.AddCommand(sitk.sitkStartEvent, rc.metric_start_plot)\n registration_method.AddCommand(sitk.sitkEndEvent, rc.metric_end_plot)\n registration_method.AddCommand(sitk.sitkMultiResolutionIterationEvent, rc.metric_update_multires_iterations) \n registration_method.AddCommand(sitk.sitkIterationEvent, lambda: rc.metric_plot_values(registration_method))\n\n final_transform = registration_method.Execute(fixed_image, moving_image)\n print('Final metric value: {0}'.format(registration_method.GetMetricValue()))\n print('Optimizer\\'s stopping condition, 
{0}'.format(registration_method.GetOptimizerStopConditionDescription()))\n return final_transform",
"Loading Data",
"data_directory = os.path.dirname(fdata(\"CIRS057A_MR_CT_DICOM/readme.txt\"))\n\nfixed_series_ID = \"1.2.840.113619.2.290.3.3233817346.783.1399004564.515\"\nmoving_series_ID = \"1.3.12.2.1107.5.2.18.41548.30000014030519285935000000933\"\n\nreader = sitk.ImageSeriesReader()\nfixed_image = sitk.ReadImage(reader.GetGDCMSeriesFileNames(data_directory, fixed_series_ID), sitk.sitkFloat32)\nmoving_image = sitk.ReadImage(reader.GetGDCMSeriesFileNames(data_directory, moving_series_ID), sitk.sitkFloat32)\n\n# To provide a reasonable display we need to window/level the images. By default we could have used the intensity\n# ranges found in the images [SimpleITK's StatisticsImageFilter], but these are not the best values for viewing.\n# Using an external viewer we identified the following settings.\nfixed_intensity_range = (-1183,544)\nmoving_intensity_range = (0,355)\n\ninteract(lambda image1_z, image2_z, image1, image2,:ru.display_scalar_images(image1_z, image2_z, image1, image2, \n fixed_intensity_range,\n moving_intensity_range,\n 'fixed image',\n 'moving image'), \n image1_z=(0,fixed_image.GetSize()[2]-1), \n image2_z=(0,moving_image.GetSize()[2]-1), \n image1 = fixed(fixed_image), \n image2=fixed(moving_image));",
"Arbitrarily rotate the moving image.",
"rotation_x = 0.0\nrotation_z = 0.0\n\ndef modify_rotation(rx_in_degrees, rz_in_degrees):\n global rotation_x, rotation_z\n \n rotation_x = np.radians(rx_in_degrees)\n rotation_z = np.radians(rz_in_degrees)\n \ninteract(modify_rotation, rx_in_degrees=(0.0,180.0,5.0), rz_in_degrees=(-90.0,180.0,5.0));\n\nresample = sitk.ResampleImageFilter()\nresample.SetReferenceImage(moving_image)\nresample.SetInterpolator(sitk.sitkLinear)\n# Rotate around the physical center of the image. \nrotation_center = moving_image.TransformContinuousIndexToPhysicalPoint([(index-1)/2.0 for index in moving_image.GetSize()])\ntransform = sitk.Euler3DTransform(rotation_center, rotation_x, 0, rotation_z, (0,0,0))\nresample.SetTransform(transform)\nmodified_moving_image = resample.Execute(moving_image)\n\ninteract(lambda image1_z, image2_z, image1, image2,:ru.display_scalar_images(image1_z, image2_z, image1, image2, \n moving_intensity_range,\n moving_intensity_range, 'original', 'rotated'), \n image1_z=(0,moving_image.GetSize()[2]-1), \n image2_z=(0,modified_moving_image.GetSize()[2]-1), \n image1 = fixed(moving_image), \n image2=fixed(modified_moving_image));",
"Register using standard initialization (assumes orientation is similar)",
"initial_transform = sitk.CenteredTransformInitializer(fixed_image, \n modified_moving_image, \n sitk.Euler3DTransform(), \n sitk.CenteredTransformInitializerFilter.GEOMETRY)\n\nfinal_transform = multires_registration(fixed_image, modified_moving_image, initial_transform)",
"Visually evaluate our results:",
"moving_resampled = sitk.Resample(modified_moving_image, fixed_image, final_transform, sitk.sitkLinear, 0.0, moving_image.GetPixelIDValue())\n\ninteract(ru.display_images_with_alpha, image_z=(0,fixed_image.GetSize()[2]), alpha=(0.0,1.0,0.05), \n image1 = fixed(sitk.IntensityWindowing(fixed_image, fixed_intensity_range[0], fixed_intensity_range[1])), \n image2=fixed(sitk.IntensityWindowing(moving_resampled, moving_intensity_range[0], moving_intensity_range[1])));",
"Register using heuristic initialization approach (using multiple orientations)\nAs we want to account for significant orientation differences due to erroneous patient position (HFS...), we evaluate the similarity measure at locations corresponding to the various orientation differences. This can be done in two ways, which will be illustrated below:\n<ul>\n<li>Use the ImageRegistrationMethod.MetricEvaluate() method.</li>\n<li>Use the Exhaustive optimizer.</li>\n</ul>\n\nThe former approach is more computationally intensive as it constructs and configures a metric object each time it is invoked. It is therefore more appropriate for use if the set of parameter values we want to evaluate is not on a rectilinear grid in the parameter space. The latter approach is appropriate if the set of parameter values is on a rectilinear grid, in which case it is more computationally efficient.\nIn both cases we use the CenteredTransformInitializer to obtain the initial translation.\nMetricEvaluate\nTo use the MetricEvaluate method we create an ImageRegistrationMethod and set its metric and interpolator. We then iterate over all parameter settings, set the initial transform and evaluate the metric. The minimal similarity measure value corresponds to the best parameter settings.",
"# Dictionary with all the orientations we will try. We omit the identity (x=0, y=0, z=0) as we always use it. This\n# set of rotations is arbitrary. For a complete grid coverage we would have 64 entries (0,pi/2,pi,1.5pi for each angle).\nall_orientations = {'x=0, y=0, z=90': (0.0,0.0,np.pi/2.0),\n 'x=0, y=0, z=-90': (0.0,0.0,-np.pi/2.0),\n 'x=0, y=0, z=180': (0.0,0.0,np.pi),\n 'x=180, y=0, z=0': (np.pi,0.0,0.0),\n 'x=180, y=0, z=90': (np.pi,0.0,np.pi/2.0),\n 'x=180, y=0, z=-90': (np.pi,0.0,-np.pi/2.0),\n 'x=180, y=0, z=180': (np.pi,0.0,np.pi)} \n\n# Registration framework setup.\nregistration_method = sitk.ImageRegistrationMethod()\nregistration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)\nregistration_method.SetMetricSamplingStrategy(registration_method.RANDOM)\nregistration_method.SetMetricSamplingPercentage(0.01)\nregistration_method.SetInterpolator(sitk.sitkLinear)\n\n# Evaluate the similarity metric using the eight possible orientations, translation remains the same for all.\ninitial_transform = sitk.Euler3DTransform(sitk.CenteredTransformInitializer(fixed_image, \n modified_moving_image, \n sitk.Euler3DTransform(), \n sitk.CenteredTransformInitializerFilter.GEOMETRY))\nregistration_method.SetInitialTransform(initial_transform, inPlace=False)\nbest_orientation = (0.0,0.0,0.0)\nbest_similarity_value = registration_method.MetricEvaluate(fixed_image, modified_moving_image)\n\n# Iterate over all other rotation parameter settings. \nfor key, orientation in all_orientations.items():\n initial_transform.SetRotation(*orientation)\n registration_method.SetInitialTransform(initial_transform)\n current_similarity_value = registration_method.MetricEvaluate(fixed_image, modified_moving_image)\n if current_similarity_value < best_similarity_value:\n best_similarity_value = current_similarity_value\n best_orientation = orientation\n\ninitial_transform.SetRotation(*best_orientation)\n\nfinal_transform = multires_registration(fixed_image, modified_moving_image, initial_transform) ",
"Visually evaluate our results:",
"moving_resampled = sitk.Resample(modified_moving_image, fixed_image, final_transform, sitk.sitkLinear, 0.0, moving_image.GetPixelIDValue())\n\ninteract(ru.display_images_with_alpha, image_z=(0,fixed_image.GetSize()[2]), alpha=(0.0,1.0,0.05), \n image1 = fixed(sitk.IntensityWindowing(fixed_image, fixed_intensity_range[0], fixed_intensity_range[1])), \n image2=fixed(sitk.IntensityWindowing(moving_resampled, moving_intensity_range[0], moving_intensity_range[1])));",
"Exhaustive optimizer\nThe exhaustive optimizer evaluates the similarity measure using a grid overlaid on the parameter space.\nThe grid is centered on the parameter values set by SetInitialTransform, and the locations of its vertices are determined by the <b>numberOfSteps</b>, <b>stepLength</b> and <b>optimizer scales</b>. To quote the documentation of this class: \"a side of the region is stepLength*(2*numberOfSteps[d]+1)*scaling[d]\".\nUsing this approach we have superfluous evaluations (15 evaluations, corresponding to three values for rotation around the x axis and five for rotation around the z axis, as compared to the 8 evaluations using the MetricEvaluate method).",
"initial_transform = sitk.CenteredTransformInitializer(fixed_image, \n modified_moving_image, \n sitk.Euler3DTransform(), \n sitk.CenteredTransformInitializerFilter.GEOMETRY)\nregistration_method = sitk.ImageRegistrationMethod()\nregistration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)\nregistration_method.SetMetricSamplingStrategy(registration_method.RANDOM)\nregistration_method.SetMetricSamplingPercentage(0.01)\nregistration_method.SetInterpolator(sitk.sitkLinear)\n# The order of parameters for the Euler3DTransform is [angle_x, angle_y, angle_z, t_x, t_y, t_z]. The parameter \n# sampling grid is centered on the initial_transform parameter values, that are all zero for the rotations. Given\n# the number of steps and their length and optimizer scales we have:\n# angle_x = -pi, 0, pi\n# angle_y = 0\n# angle_z = -pi, -pi/2, 0, pi/2, pi\nregistration_method.SetOptimizerAsExhaustive(numberOfSteps=[1,0,2,0,0,0], stepLength = np.pi)\nregistration_method.SetOptimizerScales([1,1,0.5,1,1,1])\n\n#Perform the registration in-place so that the initial_transform is modified.\nregistration_method.SetInitialTransform(initial_transform, inPlace=True)\nregistration_method.Execute(fixed_image, modified_moving_image)\n\nfinal_transform = multires_registration(fixed_image, modified_moving_image, initial_transform)",
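As a quick sanity check of the evaluation count quoted above, each parameter dimension contributes 2*numberOfSteps[d]+1 grid vertices; with the settings used in this cell that yields 3*1*5*1*1*1 = 15 metric evaluations. This is plain arithmetic, independent of SimpleITK:

```python
# numberOfSteps as used above: [angle_x, angle_y, angle_z, t_x, t_y, t_z]
number_of_steps = [1, 0, 2, 0, 0, 0]

vertices = 1
for n in number_of_steps:
    vertices *= 2 * n + 1

print(vertices)  # 3 values for angle_x times 5 values for angle_z -> 15
```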
"Visually evaluate our results:",
"moving_resampled = sitk.Resample(modified_moving_image, fixed_image, final_transform, sitk.sitkLinear, 0.0, moving_image.GetPixelIDValue())\n\ninteract(ru.display_images_with_alpha, image_z=(0,fixed_image.GetSize()[2]), alpha=(0.0,1.0,0.05), \n image1 = fixed(sitk.IntensityWindowing(fixed_image, fixed_intensity_range[0], fixed_intensity_range[1])), \n image2=fixed(sitk.IntensityWindowing(moving_resampled, moving_intensity_range[0], moving_intensity_range[1])));"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
eford/rebound
|
ipython_examples/FourierSpectrum.ipynb
|
gpl-3.0
|
[
"Fourier analysis & resonances\nA great benefit of being able to call rebound from within python is the ability to directly apply sophisticated analysis tools from scipy and other python libraries. Here we will do a simple Fourier analysis of a reduced Solar System consisting of Jupiter and Saturn. Let's begin by setting our units and adding these planets using JPL's horizons database:",
"import rebound\nimport numpy as np\nsim = rebound.Simulation()\nsim.units = ('AU', 'yr', 'Msun')\nsim.add(\"Sun\")\nsim.add(\"Jupiter\")\nsim.add(\"Saturn\")",
"Now let's set the integrator to whfast, and sacrificing accuracy for speed, set the timestep for the integration to about $10\\%$ of Jupiter's orbital period.",
"sim.integrator = \"whfast\"\nsim.dt = 1. # in years. About 10% of Jupiter's period\nsim.move_to_com()",
"The last line (moving to the center of mass frame) is important to take out the linear drift in positions due to the constant COM motion. Without it we would erase some of the signal at low frequencies.\nNow let's run the integration, storing time series for the two planets' eccentricities (for plotting) and x-positions (for the Fourier analysis). Additionally, we store the mean longitudes and pericenter longitudes (varpi) for reasons that will become clear below. Having some idea of what the secular timescales are in the Solar System, we'll run the integration for $3\\times 10^5$ yrs. We choose to collect $10^5$ outputs in order to resolve the planets' orbital periods ($\\sim 10$ yrs) in the Fourier spectrum.",
"Nout = 100000\ntmax = 3.e5\nNplanets = 2\n\nx = np.zeros((Nplanets,Nout))\necc = np.zeros((Nplanets,Nout))\nlongitude = np.zeros((Nplanets,Nout))\nvarpi = np.zeros((Nplanets,Nout))\n\ntimes = np.linspace(0.,tmax,Nout)\nps = sim.particles\n\nfor i,time in enumerate(times):\n sim.integrate(time) \n # note we used above the default exact_finish_time = 1, which changes the timestep near the outputs to match\n # the output times we want. This is what we want for a Fourier spectrum, but technically breaks WHFast's\n # symplectic nature. Not a big deal here.\n os = sim.calculate_orbits()\n for j in range(Nplanets):\n x[j][i] = ps[j+1].x # we use the 0 index in x for Jup and 1 for Sat, but the indices for ps start with the Sun at 0\n ecc[j][i] = os[j].e\n longitude[j][i] = os[j].l\n varpi[j][i] = os[j].Omega + os[j].omega",
"Let's see what the eccentricity evolution looks like with matplotlib:",
"%matplotlib inline\nlabels = [\"Jupiter\", \"Saturn\"]\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(12,5))\nax = plt.subplot(111)\nplt.plot(times,ecc[0],label=labels[0])\nplt.plot(times,ecc[1],label=labels[1])\nax.set_xlabel(\"Time (yrs)\")\nax.set_ylabel(\"Eccentricity\")\nplt.legend();",
"Now let's try to analyze the periodicities in this signal. Here we have a uniformly spaced time series, so we could run a Fast Fourier Transform, but as an example of the wider array of tools available through scipy, let's run a Lomb-Scargle periodogram (which allows for non-uniform time series). This could also be used when storing outputs at each timestep using the integrator IAS15 (which uses adaptive and therefore nonuniform timesteps).\nLet's check for periodicities with periods logarithmically spaced between 10 and $10^5$ yrs. From the documentation, we find that the lombscargle function requires a list of corresponding angular frequencies (ws), and we obtain the appropriate normalization for the plot. To avoid conversions to orbital elements, we analyze the time series of Jupiter's x-position.",
"from scipy import signal\nNpts = 3000\nlogPmin = np.log10(10.)\nlogPmax = np.log10(1.e5)\nPs = np.logspace(logPmin,logPmax,Npts)\nws = np.asarray([2*np.pi/P for P in Ps])\n\nperiodogram = signal.lombscargle(times,x[0],ws)\n\nfig = plt.figure(figsize=(12,5))\nax = plt.subplot(111)\nax.plot(Ps,np.sqrt(4*periodogram/Nout))\nax.set_xscale('log')\nax.set_xlim([10**logPmin,10**logPmax])\nax.set_ylim([0,0.15])\nax.set_xlabel(\"Period (yrs)\")\nax.set_ylabel(\"Power\")",
"We pick out the obvious signal in the eccentricity plot with a period of $\\approx 45000$ yrs, which is due to secular interactions between the two planets. There is quite a bit of power aliased into neighboring frequencies due to the short integration duration, with contributions from the second secular timescale, which is out at $\\sim 2\\times10^5$ yrs and causes a slower, low-amplitude modulation of the eccentricity signal plotted above (we limited the time of integration so that the example runs in a few seconds). \nAdditionally, though it was invisible on the scale of the eccentricity plot above, we clearly see a strong signal at Jupiter's orbital period of about 12 years. \nBut wait! Even on this scale set by the dominant frequencies of the problem, we see an additional blip just below $10^3$ yrs. Such a periodicity is actually visible in the above eccentricity plot if you inspect the thickness of the lines. Let's investigate by narrowing the period range:",
"fig = plt.figure(figsize=(12,5))\nax = plt.subplot(111)\nax.plot(Ps,np.sqrt(4*periodogram/Nout))\nax.set_xscale('log')\nax.set_xlim([600,1600])\nax.set_ylim([0,0.003])\nax.set_xlabel(\"Period (yrs)\")\nax.set_ylabel(\"Power\")",
"This is the right timescale to be due to resonant perturbations between giant planets ($\\sim 100$ orbits). In fact, Jupiter and Saturn are close to a 5:2 mean-motion resonance. This is the famous great inequality that Laplace showed was responsible for slight offsets in the predicted positions of the two giant planets. Let's check whether this is in fact responsible for the peak. \nIn this case, we have that the mean longitude of Jupiter $\\lambda_J$ cycles approximately 5 times for every 2 of Saturn's ($\\lambda_S$). The game is to construct a slowly-varying resonant angle, which here could be $\\phi_{5:2} = 5\\lambda_S - 2\\lambda_J - 3\\varpi_J$, where $\\varpi_J$ is Jupiter's longitude of pericenter. This last term is a much smaller contribution to the variation of $\\phi_{5:2}$ than the first two, but ensures that the coefficients in the resonant angle sum to zero and therefore that the physics do not depend on your choice of coordinates.\nTo see a clear trend, we have to shift each value of $\\phi_{5:2}$ into the range $[0,360]$ degrees, so we define a small helper function that does the wrapping and conversion to degrees:",
"def zeroTo360(val):\n while val < 0:\n val += 2*np.pi\n while val > 2*np.pi:\n val -= 2*np.pi\n return val*180/np.pi",
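As an aside (not part of the original notebook), the while-loop helper above can be replaced by a vectorized numpy one-liner when wrapping a whole array of angles at once:

```python
import numpy as np

# Vectorized alternative to the while-loop helper above: numpy's mod wraps
# an entire array of angles into [0, 2*pi) in one call, and np.degrees
# converts to the [0, 360) range used for plotting.
def zero_to_360(vals):
    return np.degrees(np.mod(vals, 2 * np.pi))

print(zero_to_360(np.array([-np.pi / 2, 3 * np.pi])))  # [270. 180.]
```

This avoids the per-element Python loop, which matters when wrapping all `Nout` samples of a resonant angle.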
"Now we construct $\\phi_{5:2}$ and plot it over the first 5000 yrs.",
"phi = [zeroTo360(5.*longitude[1][i] - 2.*longitude[0][i] - 3.*varpi[0][i]) for i in range(Nout)]\nfig = plt.figure(figsize=(12,5))\nax = plt.subplot(111)\nax.plot(times,phi)\nax.set_xlim([0,5.e3])\nax.set_ylim([0,360.])\nax.set_xlabel(\"time (yrs)\")\nax.set_ylabel(r\"$\\phi_{5:2}$\")",
"We see that the resonant angle $\\phi_{5:2}$ circulates, but with a long period of $\\approx 900$ yrs (compared to the orbital periods of $\\sim 10$ yrs), which precisely matches the blip we saw in the Lomb-Scargle periodogram. This is approximately the same oscillation period observed in the Solar System, despite our simplified setup!\nThis resonant angle is able to have a visible effect because its (small) effects build up coherently over many orbits. As a further illustration, other resonance angles like those at the 2:1 will circulate much faster (because Jupiter and Saturn's period ratio is not close to 2). We can easily plot this. Taking one of the 2:1 resonance angles $\\phi_{2:1} = 2\\lambda_S - \\lambda_J - \\varpi_J$,",
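A sketch of one way to quantify the circulation period read off the plot above (an illustration on synthetic data, not code from the notebook): unwrap the wrapped angle and fit a linear drift rate.

```python
import numpy as np

# Synthesize a wrapped angle with a known 900 yr circulation period,
# then recover that period by unwrapping the phase and fitting the drift.
times = np.linspace(0.0, 5.0e3, 5000)                     # years
true_period = 900.0                                       # years
phi = np.mod(2 * np.pi * times / true_period, 2 * np.pi)  # wrapped, radians

phi_unwrapped = np.unwrap(phi)                       # undo the 2*pi jumps
drift_rate = np.polyfit(times, phi_unwrapped, 1)[0]  # radians per year
estimated_period = 2 * np.pi / abs(drift_rate)
print(round(estimated_period))  # ~900
```

Applied to the simulated `phi` series (converted back to radians), this gives a numerical estimate rather than an eyeballed one.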
"phi2 = [zeroTo360(2*longitude[1][i] - longitude[0][i] - varpi[0][i]) for i in range(Nout)]\nfig = plt.figure(figsize=(12,5))\nax = plt.subplot(111)\nax.plot(times,phi2)\nax.set_xlim([0,5.e3])\nax.set_ylim([0,360.])\nax.set_xlabel(\"time (yrs)\")\nax.set_ylabel(r\"$\\phi_{2:1}$\")",
"In this case, since we are far from this particular resonance (the 2:1), the corresponding resonance angles vary on fast (orbital) timescales, and their effects simply average out."
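A quick numerical illustration of this averaging argument (an aside, not from the notebook): resonant terms scale with trig functions of the angle, so a fast-circulating angle averages to nearly zero over a fixed window while a slow one does not.

```python
import numpy as np

# Compare the window-averaged cosine of a fast- vs. slow-circulating angle.
times = np.linspace(0.0, 100.0, 10000)  # years
fast = 2 * np.pi * times / 5.0          # circulates every ~5 yr (far from resonance)
slow = 2 * np.pi * times / 900.0        # circulates every ~900 yr (near resonance)

fast_avg = abs(np.mean(np.cos(fast)))
slow_avg = abs(np.mean(np.cos(slow)))
print(fast_avg, slow_avg)  # fast average is tiny; slow average is order unity
```

This is the essence of why only near-resonant angles produce coherent, visible effects.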
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
CUBoulder-ASTR2600/lectures
|
lecture_19_intro_classes.ipynb
|
isc
|
[
"Classes\nIn a very simplistic view, classes are just pre-defined storage containers with additional functionality (methods) that knows how to access the currently stored data.\nSo, for example, the simplest class possible is actually an empty container you can just use to store stuff in, very similar to dictionaries, or structures in other languages:",
"class Mammal:\n pass",
"This is sometimes useful to just quickly store things together in the same object for logical separation:",
"mammal = Mammal() # now I have an \"object\" or \"instance\" of a class\n\nmammal\n\nmammal.n_of_legs = 4 # this is a new data attribute.\n# as usual, Python doesn't need it to be defined before.\n\nmammal.noise = 'blarg'\n\nmammal.__dict__ # this is a useful internal attribute listing all data attributes\n\nprint(mammal.n_of_legs) # note this attribute becomes <tab>-complete-able",
"Now, for a mid-size to bigger project, we don't want to define things only on the go.\nCreating a class with some structure and functionality will increase our efficiency when working\nwith a lot of data that has a lot of sub-structure.",
"class Mammal:\n # these are data attributes of the class\n name = 'Mammal'\n n_of_legs = 0\n noise = None # indicating non-functionality\n nutrition_status = 0\n \n # this is a method of the class\n # methods are just like functions, but they always refer back to the\n # current object with the first argument being 'self', and after that\n # can take other arguments for functionality.\n def make_noise(self):\n if self.noise is not None:\n print(self.noise)\n else:\n print(\"Not implemented yet.\")\n \n def feed(self, units):\n \"This is a minimalist docstring.\"\n \n self.nutrition_status += units\n print(\"Was fed {} units of nutrition.\".format(units))\n\nprint(Mammal.n_of_legs)",
"Let's create an object of this class:",
"mammal = Mammal() # this is called \"instantiation of an object\"",
"Coding standards: the usual coding style is that class names are defined with capital letters, and the instantiated object is often given the same name in lowercase. (Unless the object becomes more specific, see later.)\nNow, let's use a method:",
"mammal.make_noise()",
"The noise attribute isn't set yet, so that's what we get.",
"mammal.noise = 'snort'\n\nmammal.noise = None\n\nmammal.make_noise()",
"\"Under the hood\" we have changed an attribute that the method reads. In other words, how a method works can be highly status dependent.\nThis status-like programming style can be hard to follow at times, but it also creates great opportunities, for example to write methods that just automatically do the right thing, because they read the object's own status from attributes that were set when that status changed.\nNote:\n1. instances are independent of each other\n2. You do not HAVE TO create an object to use things inside a class\nFirst, # 1: Independence",
"mammal2 = Mammal()\nmammal2.make_noise()\n\nmammal2.noise = 'burp'\n\nfor m in [mammal, mammal2]:\n m.make_noise()\n\nmammal.feed(5)\n\nfor m in [mammal, mammal2]:\n print(m.nutrition_status)",
"Now #2:\nClasses can be used for accessing class-based data that are NOT supposed to change between instances, like the name 'Mammal' for example.",
"Mammal.name\n\nMammal.n_of_legs",
"Using __init__ to initialize a class\nIt's a bit inconvenient to set things up after instantiating a new object,\nso here's how to do it in one go, using the special __init__ method:",
"import datetime as dt\n\nclass Mammal:\n # these are data attributes of the class\n name = 'Mammal'\n n_of_legs = 0\n noise = None\n nutrition_status = 0\n \n # Always refer to self first in class methods!\n # This 'self' is used to attach data to itself when being \n # 'alive' as an instance later on!\n def __init__(self, noise, legs):\n \"\"\"The initialization method. Always called __init__ ! \"\"\"\n self.noise = noise \n self.n_of_legs = legs \n self.creation_time = dt.datetime.now().isoformat()\n \n # this is a method of the class\n # methods are just like functions, but they always refer back to the\n # current object with the first argument being 'self', and after that\n # can take other arguments for functionality.\n def make_noise(self):\n if self.noise is not None:\n print(self.noise)\n else:\n print(\"Not implemented yet.\")\n \n def feed(self, units):\n \"This is a minimalist docstring.\"\n \n self.nutrition_status += units\n print(\"Was fed {} units of nutrition.\".format(units))\n\nmammal = Mammal('bark', 4)\n\nmammal.make_noise()\n\nmammal.n_of_legs",
"Note the difference between class attributes and instance attributes.\nInstance attributes only exist after instantiation of an object (an 'alive' version of the mere theoretical class).\nWhile class attributes always exist.",
"# Mammal.creation_time  # raises AttributeError: the class itself has no instance attributes\n\nmammal.creation_time\n\nMammal.name\n# I did not create this attribute with the __init__ method, yet it exists",
"The __init__ function is often used to execute something that takes a bit more time than standard operation, maybe like connecting to a remote database and reading out some data.\nBy generating an instance of that class, the data that was read out stays alive within that object and can be accessed whenever required later on.\nFor broader applicability, let's leave the Mammals alone for now and talk about how to apply classes to Planets, but keep them in mind for when we discuss inheritance later.",
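A hedged sketch of the caching pattern just described (the names and the simulated "database" are illustrative, not a real API): do the expensive work once in `__init__` and keep the result alive on the instance.

```python
import time

# Expensive setup runs once at instantiation; the cached result then
# lives on the instance for fast access later.
class CachedCatalog:
    def __init__(self):
        time.sleep(0.1)  # stand-in for a slow remote-database query
        self.records = {"Earth": 12700, "Mars": 6800}  # cached result

    def diameter(self, name):
        return self.records[name]  # fast lookup, no re-query

catalog = CachedCatalog()         # the slow part happens once, here
print(catalog.diameter("Earth"))  # fast thereafter -> 12700
```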
"# Define the class. Planet is the name of the class.\nclass Planet:\n # note how i can make an argument optional, just like for functions\n def __init__(self, name, diameter=5000):\n \"\"\"The initialization of my class.\n This is a special method (function) that gets called every time you\n create a new class. Every method in a class has \"self\" as the first\n parameter. The additional parameters to __init__ are the class's\n input parameters (in this case name and diameter). You can set a\n default value to each parameter (diameter has a default of 5000).\n \"\"\"\n self.name = name\n self.diameter = diameter\n \n def __str__(self):\n s = \"This is {} with a diameter of {}\".format(self.name, self.diameter)\n return s\n \n# def __repr__(self):\n# return self.__str__()",
"Creating and Using a Class\nTo recap:\nTo use a class you've written, you first need to create an \"instance\" of the class. Very much like with lists or dictionaries (in fact, dictionaries and lists are classes!). Some classes have required input parameters. Some classes have optional input parameters. Some classes have no input parameters at all.",
"# Create 2 planets. The first by passing in as input the name and diameter.\n# With the second planet we just pass in the name so the diameter takes on\n# the default value (5000 in this case).\nplanet1 = Planet(\"Crypton\", 13000)\nplanet2 = Planet(\"Eternia\")\n\nprint(planet1)\n\nplanet1\n\n# Define the class. Planet is the name of the class.\nclass Planet:\n # note how i can make an argument optional, just like for functions\n def __init__(self, name, diameter = 5000):\n \"\"\"The initialization of my class.\n This is a special method (function) that gets called every time you\n create a new class. Every method in a class has \"self\" as the first\n parameter. The additional parameters to __init__ are the class's\n input parameters (in this case name and diameter). You can set a\n default value to each parameter (diameter has a default of 5000).\n \"\"\"\n self.name = name\n self.diameter = diameter\n \n def __str__(self):\n s = \"This is {} with a diameter of {}\".format(self.name, self.diameter)\n return s\n \n def __repr__(self):\n return self.__str__()\n\np = Planet('Rubycon', 10000)\np",
"Inheritance\nOne of the most powerful features of OO-programming is inheritance.\nThis means I can inherit features of class definitions in a so called sub-class:",
"class Mammal:\n n_of_legs = 0\n noise = None\n \n def make_noise(self):\n print(self.noise)\n \nclass Dog(Mammal):\n n_of_legs=4\n noise = 'bark'\n\ndog = Dog()\n\ndog.make_noise()",
"The Dog class has inherited the method make_noise from the Mammal class, because we assume all mammals make some kind of noise.\nThe Dog class fixes the noise to the pre-defined value, and from now on we do not have to deal with setting the noise anymore, because the Dog class specialized it for us.\nCreate and Use a Class Based on Planet (derived from the Planet class)",
"class Earth(Planet):\n def __init__(self):\n \"\"\" This is the initialization method for Earth. This will run every time\n an Earth is created. Notice there are no input parameters.\n \"\"\"\n # This is the initialization method for the mother class Planet. \n # Notice we pass in \"Earth\" and \"12700\".\n # Now every Earth will have the name \"Earth\" and a diameter of \"12700\".\n super().__init__(\"Earth\", 12700)\n print(\"THIS IS EARTH!\")\n # Here we are adding more variables (data). These belong to Earth and\n # not Planet. Again, the \"self\" in front of the variable allows us to\n # use these variables anywhere in this class.\n self.oceans = [\"Pacific\", \"Atlantic\", \"Indian\", \"Southern\", \"Arctic\"]\n self.continents = [\"North America\", \"South America\", \"Antarctica\", \\\n \"Africa\", \"Europe\", \"Asia\", \"Australia\"]\n\n# Create a new Earth\nearth = Earth()\n\nearth.diameter\n\nearth\n\n# Print the Earth's oceans list and continents list. Note that calling the\n# methods found in the Earth class looks the same as calling methods found in\n# the Planet class.\nprint(\"Ocean List: \", earth.oceans)\nprint(\"Continent List:\", earth.continents)\n\nearth.mass = 6e24\n\nearth.__dict__"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |