| repo_name | path | license | cells | types |
|---|---|---|---|---|
sdpython/ensae_teaching_cs
|
_doc/notebooks/td1a_soft/td1a_cython_edit_correction.ipynb
|
mit
|
[
"1A.soft - Calcul numérique et Cython - correction",
"from jyquickhelper import add_notebook_menu\nadd_notebook_menu()",
"Exercice : python/C appliqué à une distance d'édition\nOn reprend la fonction donnée dans l'énoncé.",
"def distance_edition(mot1, mot2):\n dist = { (-1,-1): 0 }\n for i,c in enumerate(mot1) :\n dist[i,-1] = dist[i-1,-1] + 1\n dist[-1,i] = dist[-1,i-1] + 1\n for j,d in enumerate(mot2) :\n opt = [ ]\n if (i-1,j) in dist : \n x = dist[i-1,j] + 1\n opt.append(x)\n if (i,j-1) in dist : \n x = dist[i,j-1] + 1\n opt.append(x)\n if (i-1,j-1) in dist :\n x = dist[i-1,j-1] + (1 if c != d else 0)\n opt.append(x)\n dist[i,j] = min(opt)\n return dist[len(mot1)-1,len(mot2)-1]\n\n%timeit distance_edition(\"idstzance\",\"distances\")",
"solution avec notebook\nLes préliminaires :",
"%load_ext cython",
"Puis :",
"%%cython --annotate\ncimport cython\n\ndef cidistance_edition(str mot1, str mot2):\n cdef int dist [500][500]\n cdef int cost, c \n cdef int l1 = len(mot1)\n cdef int l2 = len(mot2)\n \n dist[0][0] = 0\n for i in range(l1):\n dist[i+1][0] = dist[i][0] + 1\n dist[0][i+1] = dist[0][i] + 1\n for j in range(l2):\n cost = dist[i][j+1] + 1\n c = dist[i+1][j] + 1\n if c < cost : cost = c\n c = dist[i][j]\n if mot1[i] != mot2[j] : c += 1\n if c < cost : cost = c\n dist[i+1][j+1] = cost\n cost = dist[l1][l2]\n return cost\n\nmot1, mot2 = \"idstzance\",\"distances\"\n%timeit cidistance_edition(mot1, mot2)",
"solution sans notebook",
"import sys\nfrom pyquickhelper.loghelper import run_cmd\n\ncode = \"\"\"\ndef cdistance_edition(str mot1, str mot2):\n cdef int dist [500][500]\n cdef int cost, c \n cdef int l1 = len(mot1)\n cdef int l2 = len(mot2)\n \n dist[0][0] = 0\n for i in range(l1):\n dist[i+1][0] = dist[i][0] + 1\n dist[0][i+1] = dist[0][i] + 1\n for j in range(l2):\n cost = dist[i][j+1] + 1\n c = dist[i+1][j] + 1\n if c < cost : cost = c\n c = dist[i][j]\n if mot1[i] != mot2[j] : c += 1\n if c < cost : cost = c\n dist[i+1][j+1] = cost\n cost = dist[l1][l2]\n return cost\n\"\"\"\n\nname = \"cedit_distance\"\nwith open(name + \".pyx\",\"w\") as f : f.write(code)\n\nsetup_code = \"\"\"\nfrom distutils.core import setup\nfrom Cython.Build import cythonize\nsetup(\n ext_modules = cythonize(\"__NAME__.pyx\",\n compiler_directives={'language_level' : \"3\"})\n)\n\"\"\".replace(\"__NAME__\",name)\n\nwith open(\"setup.py\",\"w\") as f:\n f.write(setup_code)\n\ncmd = \"{0} setup.py build_ext --inplace\".format(sys.executable)\n\nout,err = run_cmd(cmd)\nif err is not None and err != '': \n raise Exception(err)\n \nimport pyximport\npyximport.install()\nimport cedit_distance\n \nfrom cedit_distance import cdistance_edition\n\nmot1, mot2 = \"idstzance\",\"distances\"\n%timeit cdistance_edition(mot1, mot2)",
"La version Cython est 10 fois plus rapide. Et cela ne semble pas dépendre de la dimension du problème.",
"mot1 = mot1 * 10\nmot2 = mot2 * 10\n%timeit distance_edition(mot1,mot2)\n%timeit cdistance_edition(mot1, mot2)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chseifert/tutorials
|
visual-perception/Contrast-Effects.ipynb
|
apache-2.0
|
[
"Contrast Effects\nAuthors\nNdèye Gagnessiry Ndiaye and Christin Seifert\nLicense\nThis work is licensed under the Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/\nThis notebook illustrates 3 contrast effects:\n- Simultaneous Brightness Contrast \n- Chevreul Illusion \n- Contrast Crispening\nSimultaneous Brightness Contrast\nSimultaneous Brightness Contrast is the general effect where a gray patch placed on a dark background looks lighter than the same gray patch on a light background (foreground and background affect each other). The effect is based on lateral inhibition.\nAlso see the following video as an example:\nhttps://www.youtube.com/watch?v=ZYh4SxE7Xp8",
"import numpy as np\nimport matplotlib.pyplot as plt\n",
"The following image shows a gray square on different backgrounds. The inner square always has the same color (84% gray), and is successively shown on 0%, 50%, 100%, and 150% gray background patches. Note, how the inner squares are perceived differently (square on the right looks considerably darker than the square on the left). \nSuggestion: Change the gray values of the inner and outer squares and see what happens.",
"# defining the inner square as 3x3 array with an initial gray value\ninner_gray_value = 120\ninner_square = np.full((3,3), inner_gray_value, np.double)\n\n# defining the outer squares and overlaying the inner square\na = np.zeros((5,5), np.double)\na[1:4, 1:4] = inner_square\n\nb = np.full((5,5), 50, np.double)\nb[1:4, 1:4] = inner_square\n\nc = np.full((5,5), 100, np.double)\nc[1:4, 1:4] = inner_square\n\nd = np.full((5,5), 150, np.double)\nd[1:4, 1:4] = inner_square\n\nsimultaneous=np.hstack((a,b,c,d))\n\n\nim=plt.imshow(simultaneous, cmap='gray',interpolation='nearest',vmin=0, vmax=255) \n#plt.rcParams[\"figure.figsize\"] = (70,10)\nplt.axis('off')\nplt.colorbar(im, orientation='horizontal')\nplt.show()\n",
"Chevreul Illusion\nThe following images visualizes the Chevreul illusion. We use a sequence of gray bands (200%, 150%, 100%, 75% and 50% gray). One band has a uniform gray value. When putting the bands next to each other, the gray values seem to be darker at the edges. This is due to lateral inhibition, a feature of our visual system that increases edge contrasts and helps us to better detect outlines of shapes.",
"e = np.full((9,5), 200, np.double)\nf = np.full((9,5), 150, np.double)\ng = np.full((9,5), 100, np.double)\nh = np.full((9,5), 75, np.double)\ni = np.full((9,5), 50, np.double)\nimage1= np.hstack((e,f,g,h,i))\n\ne[:,4] = 255\nf[:,4] = 255\ng[:,4] = 255\nh[:,4] = 255\ni[:,4] = 255\nimage2=np.hstack((e,f,g,h,i))\n\nplt.subplot(1,2,1)\nplt.imshow(image1, cmap='gray',vmin=0, vmax=255,interpolation='nearest',aspect=4) \nplt.title('Bands')\nplt.axis('off')\n\n\nplt.subplot(1,2,2)\nplt.imshow(image2, cmap='gray',vmin=0, vmax=255,interpolation='nearest',aspect=4) \nplt.title('Bands with white breaks')\nplt.axis('off')\n\nplt.show()",
"Contrast Crispening\nThe following images show the gray strips on a gray-scale background. Left image: All vertical gray bands are the same. Note how different parts of the vertical gray bands are enhanced (i.e., difference better perceivable) depending on the gray value of the background. In fact, differences are enhanced when the gray value in the foreground is closer to the gray value in the background. On the right, the same vertical bands are shown but without the background. In this image you can (perceptually) verify that all vertical gray bands are indeed the same.",
"strips = np.linspace( 0, 255, 10, np.double) \nstrips = strips.reshape((-1, 1))\nM = np.linspace( 255, 0, 10, np.double) \nn = np.ones((20, 10), np.double)\n\nbackground = n[:,:]*M\nbackground[5:15,::2] = strips\n\nwithout_background = np.full((20,10), 255, np.double)\nwithout_background[5:15,::2] = strips\n\nplt.subplot(1,2,1)\nplt.imshow(background, cmap='gray',vmin=0, vmax=255,interpolation='nearest') \nplt.tick_params(axis='both', left='off', top='off', right='off', bottom='off', labelleft='off', labeltop='off', labelright='off', labelbottom='off')\n\n\nplt.subplot(1,2,2)\nplt.imshow(without_background, cmap='gray',vmin=0, vmax=255,interpolation='nearest')\nplt.tick_params(axis='both', left='off', top='off', right='off', bottom='off', labelleft='off', labeltop='off', labelright='off', labelbottom='off')\n\n\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tpin3694/tpin3694.github.io
|
machine-learning/remove_punctuation.ipynb
|
mit
|
[
"Title: Remove Punctuation\nSlug: remove_punctuation\nSummary: How to remove punctuation from unstructured text data for machine learning in Python. \nDate: 2016-09-08 12:00\nCategory: Machine Learning\nTags: Preprocessing Text\nAuthors: Chris Albon\nPreliminaries",
"# Load libraries\nimport string\nimport numpy as np",
"Create Text Data",
"# Create text\ntext_data = ['Hi!!!! I. Love. This. Song....', \n '10000% Agree!!!! #LoveIT', \n 'Right?!?!']",
"Remove Punctuation",
"# Create function using string.punctuation to remove all punctuation\ndef remove_punctuation(sentence: str) -> str:\n return sentence.translate(str.maketrans('', '', string.punctuation))\n\n# Apply function\n[remove_punctuation(sentence) for sentence in text_data]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
wcmckee/wcmckee.com
|
posts/getsdrawn.ipynb
|
mit
|
[
"<h1>GetsDrawn DotCom</h1>\n\nThis is a python script to generate the website GetsDrawn. It takes data from /r/RedditGetsDrawn and makes something awesome.\nThe script has envolved and been rewritten several times. \nThe first script for rgdsnatch was written after I got banned from posting my artwork on /r/RedditGetsDrawn. The plan was to create a new site that displayed stuff from /r/RedditGetsDrawn. \nCurrently it only displays the most recent 25 items on redditgetsdrawn. The script looks at the newest 25 reference photos on RedditGetsDrawn. It focuses only on jpeg/png images and ignores and links to none .jpg or .png ending files. \nIt is needed to instead of ignoring them files - get the image or images in some cases, from the link.\nThe photos are always submitted from imgur.\nStill filter out the i.imgur files, but take the links and filter them through a python imgur module returning the .jpeg or .png files. \nThis is moving forward from rgdsnatch.py because I am stuck on it. \nTODO\nFix the links that don't link to png/jpeg and link to webaddress. \nNeeds to get the images that are at that web address and embed them.\nDisplay artwork submitted under the images. \nUpload artwork to user. Sends them a message on redditgetsdrawn with links. \nMore pandas\nSaves reference images to imgs/year/month/day/reference/username-reference.png\nSaves art images to imgs/year/month/day/art/username-line-bw-colour.png \nCreates index.html file with:\nTitle of site and logo: GetsDrawn\nLast updated date and time. \nPath of image file /imgs/year/month/day/username-reference.png. \n(This needs changed to just their username).\nSave off .meta data from reddit of each photo, saving it to reference folder.\nusername-yrmnthday.meta - contains info such as author, title, upvotes, downvotes.\nCurrently saving .meta files to a meta folder - along side art and reference. \nFolder sorting system of files. \nwebsitename/index.html-style.css-imgs/YEAR(15)-MONTH(2)-DAY(4)/art-reference-meta\nInside art folder\nCurrently it generates USERNAME-line/bw/colour.png 50/50 white files. Maybe should be getting art replies from reddit?\nInside reference folder\nReference fold is working decent. \nit creates USERNAME-reference.png / jpeg files. \nCurrently saves username-line-bw-colour.png to imgs folder. Instead get it to save to imgs/year/month/day/usernames.png.\nScript checks the year/month/day and if folder isnt created, it creates it. If folder is there, exit. \nMaybe get the reference image and save it with the line/bw/color.pngs\nThe script now filters the jpeg and png image and skips links to imgur pages. This needs to be fixed by getting the images from the imgur pages.\nIt renames the image files to the redditor username followed by a -reference tag (and ending with png of course).\nIt opens these files up with PIL and checks the sizes. \nIt needs to resize the images that are larger than 800px to 800px.\nThese images need to be linked in the index.html instead of the imgur altenatives. \nInstead of the jpeg/png files on imgur they are downloaded to the server with this script. \nFilter through as images are getting downloaded and if it has been less than certain time or if the image has been submitted before \nExtending the subreddits it gets data from to cycle though a list, run script though list of subreddits.\nBrowse certain days - Current day by default but option to scroll through other days.\nFilters - male/female/animals/couples etc\nFunction that returns only male portraits. \ntags to add to photos. 
\nFilter images with tags",
"import os \nimport requests\nfrom bs4 import BeautifulSoup\nimport re\nimport json\nimport time\nimport praw\nimport dominate\nfrom dominate.tags import * \nfrom time import gmtime, strftime\n#import nose\n#import unittest\nimport numpy as np\nimport pandas as pd\nfrom pandas import *\nfrom PIL import Image\nfrom pprint import pprint\n#import pyttsx\nimport shutil\n\ngtsdrndir = ('/home/wcmckee/getsdrawndotcom')\n\nos.chdir(gtsdrndir)\n\nr = praw.Reddit(user_agent='getsdrawndotcom')\n\n#getmin = r.get_redditor('itwillbemine')\n\n#mincom = getmin.get_comments()\n\n#engine = pyttsx.init()\n\n#engine.say('The quick brown fox jumped over the lazy dog.')\n#engine.runAndWait()\n\n#shtweet = []\n\n#for mi in mincom:\n# print mi\n# shtweet.append(mi)\n\nbodycom = []\nbodyicv = dict()\n\n#beginz = pyttsx.init()\n\n#for shtz in shtweet:\n# print shtz.downs\n# print shtz.ups\n# print shtz.body\n# print shtz.replies\n #beginz.say(shtz.author)\n #beginz.say(shtz.body)\n #beginz.runAndWait()\n \n# bodycom.append(shtz.body)\n #bodyic\n\n#bodycom \n\ngetnewr = r.get_subreddit('redditgetsdrawn')\n\nrdnew = getnewr.get_new()\n\nlisrgc = []\nlisauth = []\n\nfor uz in rdnew:\n #print uz\n lisrgc.append(uz)\n\ngtdrndic = dict()\n\nimgdir = ('/home/wcmckee/getsdrawndotcom/imgs')\n\nartlist = os.listdir(imgdir)\n\nfrom time import time\n\nyearz = strftime(\"%y\", gmtime())\nmonthz = strftime(\"%m\", gmtime())\ndayz = strftime(\"%d\", gmtime())\n\n\n#strftime(\"%y %m %d\", gmtime())\n\nimgzdir = ('imgs/')\nyrzpat = (imgzdir + yearz)\nmonzpath = (yrzpat + '/' + monthz)\ndayzpath = (monzpath + '/' + dayz)\nrmgzdays = (dayzpath + '/reference')\nimgzdays = (dayzpath + '/art')\nmetzdays = (dayzpath + '/meta')\n\nrepathz = ('imgs/' + yearz + '/' + monthz + '/' + dayz + '/')\n\nmetzdays\n\nimgzdays\n\nrepathz\n\ndef ospacheck():\n if os.path.isdir(imgzdir + yearz) == True:\n print 'its true'\n else:\n print 'its false'\n os.mkdir(imgzdir + yearz)\n \n\nospacheck()\n\n#if os.path.isdir(imgzdir + yearz) == True:\n# print 'its true'\n#else:\n# print 'its false'\n# os.mkdir(imgzdir + yearz)\n\nlizmon = ['monzpath', 'dayzpath', 'imgzdays', 'rmgzdays', 'metzdays']\n\nfor liz in lizmon:\n if os.path.isdir(liz) == True:\n print 'its true'\n else:\n print 'its false'\n os.mkdir(liz)\n\nfullhom = ('/home/wcmckee/getsdrawndotcom/')\n\n#artlist\n\nhttpad = ('http://getsdrawn.com/imgs')\n\n#im = Image.new(\"RGB\", (512, 512), \"white\")\n#im.save(file + \".thumbnail\", \"JPEG\")\n\nrmgzdays = (dayzpath + '/reference')\nimgzdays = (dayzpath + '/art')\nmetzdays = (dayzpath + '/meta')\n\nos.chdir(fullhom + metzdays)\n\nmetadict = dict()",
"if i save the data to the file how am i going to get it to update as the post is archieved. Such as up and down votes.",
"for lisz in lisrgc:\n metadict.update({'up': lisz.ups})\n metadict.update({'down': lisz.downs})\n metadict.update({'title': lisz.title})\n metadict.update({'created': lisz.created})\n #metadict.update({'createdutc': lisz.created_utc})\n #print lisz.ups\n #print lisz.downs\n #print lisz.created\n #print lisz.comments\n\nmetadict",
"Need to save json object.\nDict is created but it isnt saving. Looping through lisrgc twice, should only require the one loop.\nCycle through lisr and append to dict/concert to json, and also cycle through lisr.author meta folders saving the json that was created.",
"for lisr in lisrgc:\n gtdrndic.update({'title': lisr.title})\n lisauth.append(str(lisr.author))\n for osliz in os.listdir(fullhom + metzdays):\n with open(str(lisr.author) + '.meta', \"w\") as f:\n rstrin = lisr.title.encode('ascii', 'ignore').decode('ascii')\n #print matdict\n #metadict = dict()\n #for lisz in lisrgc:\n # metadict.update({'up': lisz.ups})\n # metadict.update({'down': lisz.downs})\n # metadict.update({'title': lisz.title})\n # metadict.update({'created': lisz.created})\n f.write(rstrin)\n\n\n#matdict",
"I have it creating a meta folder and creating/writing username.meta files. It wrote 'test' in each folder, but now it writes the photo author title of post.. the username/image data. It should be writing more than author title - maybe upvotes/downvotes, subreddit, time published etc.",
"#os.listdir(dayzpath)",
"Instead of creating these white images, why not download the art replies of the reference photo.",
"#for lisa in lisauth:\n# #print lisa + '-line.png'\n# im = Image.new(\"RGB\", (512, 512), \"white\")\n# im.save(lisa + '-line.png')\n# im = Image.new(\"RGB\", (512, 512), \"white\")\n# im.save(lisa + '-bw.png')\n\n #print lisa + '-bw.png'\n# im = Image.new(\"RGB\", (512, 512), \"white\")\n# im.save(lisa + '-colour.png')\n\n #print lisa + '-colour.png'\n\nos.listdir('/home/wcmckee/getsdrawndotcom/imgs')\n\n#lisauth",
"I want to save the list of usernames that submit images as png files in a dir. \nCurrently when I call the list of authors it returns Redditor(user_name='theusername'). I want to return 'theusername'.\nOnce this is resolved I can add '-line.png' '-bw.png' '-colour.png' to each folder.",
"#lisr.author\n\nnamlis = []\n\nopsinz = open('/home/wcmckee/visignsys/index.meta', 'r')\npanz = opsinz.read()\n\nos.chdir('/home/wcmckee/getsdrawndotcom/' + rmgzdays)",
"Filter the non jpeg/png links. Need to perform request or imgur api to get the jpeg/png files from the link. Hey maybe bs4?",
"from imgurpython import ImgurClient\n\nopps = open('/home/wcmckee/ps.txt', 'r')\nopzs = open('/home/wcmckee/ps2.txt', 'r')\noprd = opps.read()\nopzrd = opzs.read()\n\nclient = ImgurClient(oprd, opzrd)\n\n# Example request\n#items = client.gallery()\n#for item in items:\n# print(item.link)\n \n\n#itz = client.get_album_images()\n\nlinklis = []",
"I need to get the image ids from each url. Strip the http://imgur.com/ from the string. The gallery id is the random characters after. if it's an album a is added. if multi imgs then , is used to seprate. \nDoesnt currently work.",
"for rdz in lisrgc:\n if 'http://imgur.com' in rdz.url:\n print rdz.url\n #itz = client.get_album_images()\n# reimg = requests.get(rdz.url)\n## retxt = reimg.text\n# souptxt = BeautifulSoup(''.join(retxt))\n# soupurz = souptxt.findAll('img')\n# for soupuz in soupurz:\n# imgurl = soupuz['src']\n# print imgurl\n# linklis.append(imgurl)\n \n #try:\n # imzdata = requests.get(imgurl)\n\nlinklis\n\nif '.jpg' in linklis:\n print 'yes'\nelse:\n print 'no'\n\n#panz()\nfor rdz in lisrgc:\n (rdz.title)\n #a(rdz.url)\n if 'http://i.imgur.com' in rdz.url:\n #print rdz.url\n print (rdz.url)\n url = rdz.url\n response = requests.get(url, stream=True)\n with open(str(rdz.author) + '-reference.png', 'wb') as out_file:\n shutil.copyfileobj(response.raw, out_file)\n del response\n\napsize = []\n\naptype = []\n\nbasewidth = 600\n\nimgdict = dict()\n\nfor rmglis in os.listdir('/home/wcmckee/getsdrawndotcom/' + rmgzdays):\n #print rmglis\n im = Image.open(rmglis)\n #print im.size\n imgdict.update({rmglis : im.size})\n #im.thumbnail(size, Image.ANTIALIAS)\n #im.save(file + \".thumbnail\", \"JPEG\")\n apsize.append(im.size)\n aptype.append(rmglis)\n\n#for imdva in imgdict.values():\n #print imdva\n #for deva in imdva:\n #print deva\n # if deva < 1000:\n # print 'omg less than 1000'\n # else:\n # print 'omg more than 1000'\n # print deva / 2\n #print imgdict.values\n # Needs to update imgdict.values with this new number. Must halve height also.\n\n#basewidth = 300\n#img = Image.open('somepic.jpg')\n#wpercent = (basewidth/float(img.size[0]))\n#hsize = int((float(img.size[1])*float(wpercent)))\n#img = img.resize((basewidth,hsize), PIL.Image.ANTIALIAS)\n#img.save('sompic.jpg')\n\n#os.chdir(metzdays)\n\n#for numz in apsize:\n# print numz[0]\n # if numz[0] > 800:\n# print ('greater than 800')\n# else:\n# print ('less than 800!')\n\nreliz = []\n\nfor refls in os.listdir('/home/wcmckee/getsdrawndotcom/' + rmgzdays):\n #print rmgzdays + refls\n reliz.append(rmgzdays + '/' + refls)\n\nreliz\n\naptype\n\nopad = open('/home/wcmckee/ad.html', 'r')\n\nopred = opad.read()\n\nstr2 = opred.replace(\"\\n\", \"\")\n\nstr2\n\ndoc = dominate.document(title='GetsDrawn')\n\nwith doc.head:\n link(rel='stylesheet', href='style.css')\n script(type ='text/javascript', src='script.js')\n str(str2)\n \n with div():\n attr(cls='header')\n h1('GetsDrawn')\n p(img('imgs/getsdrawn-bw.png', src='imgs/getsdrawn-bw.png'))\n #p(img('imgs/15/01/02/ReptileLover82-reference.png', src= 'imgs/15/01/02/ReptileLover82-reference.png'))\n h1('Updated ', strftime(\"%a, %d %b %Y %H:%M:%S +0000\", gmtime()))\n p(panz)\n p(bodycom)\n \n \n\nwith doc:\n with div(id='body').add(ol()):\n for rdz in reliz:\n #h1(rdz.title)\n #a(rdz.url)\n #p(img(rdz, src='%s' % rdz))\n #print rdz\n p(img(rdz, src = rdz))\n p(rdz)\n\n\n \n #print rdz.url\n #if '.jpg' in rdz.url:\n # img(rdz.urlz)\n #else:\n # a(rdz.urlz)\n #h1(str(rdz.author))\n \n #li(img(i.lower(), src='%s' % i))\n\n with div():\n attr(cls='body')\n p('GetsDrawn is open source')\n a('https://github.com/getsdrawn/getsdrawndotcom')\n a('https://reddit.com/r/redditgetsdrawn')\n\n#print doc\n\ndocre = doc.render()\n\n#s = docre.decode('ascii', 'ignore')\n\nyourstring = docre.encode('ascii', 'ignore').decode('ascii')\n\nindfil = ('/home/wcmckee/getsdrawndotcom/index.html')\n\nmkind = open(indfil, 'w')\nmkind.write(yourstring)\nmkind.close()\n\n#os.system('scp -r /home/wcmckee/getsdrawndotcom/ wcmckee@getsdrawn.com:/home/wcmckee/getsdrawndotcom')\n\n#rsync -azP source destination\n\n#updatehtm = raw_input('Update index? 
Y/n')\n#updateref = raw_input('Update reference? Y/n')\n\n#if 'y' or '' in updatehtm:\n# os.system('scp -r /home/wcmckee/getsdrawndotcom/index.html wcmckee@getsdrawn.com:/home/wcmckee/getsdrawndotcom/index.html')\n#elif 'n' in updatehtm:\n# print 'not uploading'\n#if 'y' or '' in updateref:\n# os.system('rsync -azP /home/wcmckee/getsdrawndotcom/ wcmckee@getsdrawn.com:/home/wcmckee/getsdrawndotcom/')\n\nos.system('scp -r /home/wcmckee/getsdrawndotcom/index.html wcmckee@getsdrawn.com:/home/wcmckee/getsdrawndotcom/index.html')\n\n#os.system('scp -r /home/wcmckee/getsdrawndotcom/style.css wcmckee@getsdrawn.com:/home/wcmckee/getsdrawndotcom/style.css')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chi-hung/notebooks
|
BASH.ipynb
|
mit
|
[
"import os",
"Let's play iPython and BASH a bit\ncount number of paths in $PATH:",
"path=!echo $PATH\n\nprint path\n\npath[0].split(\":\")\n\nprint len(path[0].split(\":\"))",
"which is the same as the following command in BASH shell:",
"!echo $PATH|tr \":\" \" \"|wc -w",
"change the language environment",
"!locale\n\n!export LANG='en_US.UTF-8'\n\n!locale",
"look for files:\nthe flag -ld of \"ls\" means \"list directory\".",
"!ls -ld /etc/p*\n\n!ls -ld /etc/p* | wc -l\n\n!ls -ld /etc/p????\n\n!ls -ld /etc/p???? | wc -l\n\n!ls -ld /etc/p[aeiou]*\n\n!ls -ld /etc/p[aeiou]* | wc -l\n\n!ls -ld /etc/p[!aeiou]*\n\n!ls -ld /etc/p[^aeiou]*",
"where [!aeiou] means: not a or e or i or u.\nIndeed,in this case, regular expressoin [^aeiou] also works.",
"!touch d{m,n,o}t\n\n!ls",
"regular expression",
"import re\n\n!wget http://linux.vbird.org/linux_basic/0330regularex/regular_express.txt\n\n!cat -n regular_express.txt\n\n!cat regular_express.txt |wc -l\n\n!grep -n 'the' regular_express.txt",
"the above BASH command is the same as the following code in Python:",
"pattern = re.compile('the')\n\nfor line in open(\"regular_express.txt\", \"r\"):\n if pattern.search(line) is not None:\n print line\n\n!grep -nv 'the' regular_express.txt\n\n!grep -ni 'the' regular_express.txt\n\n!grep -n 'air' regular_express.txt\n\n!grep -ni 't[ae]st' regular_express.txt\n\n!grep -n 't[ae]st' regular_express.txt\n\n!grep -n '[^g]oo' regular_express.txt\n\n!grep -n '[[:digit:]]' regular_express.txt\n\n!grep -n '[[:lower:]]' regular_express.txt",
"get the line which is not started by any capital or lower-case alphabets:",
"!grep -n '^[^A-Za-z]' regular_express.txt",
"get the line which is ended by '!':",
"!grep -n '!$' regular_express.txt",
"get the line which is ended by '.' :",
"!grep -n '.$' regular_express.txt\n\n!grep -n 's.c' regular_express.txt\n\n!grep -n 's[a-zA-Z]c' regular_express.txt\n\n!grep -n 'oo' regular_express.txt\n\n!grep -n 'ooo*' regular_express.txt\n\n!grep -n 'g*g' regular_express.txt",
"using 'ooo*' will lead to the same result as using 'oo'",
"!grep -nE 'goo+g' regular_express.txt\n\n!ps -el |grep -E '^[0-9]+ R'",
"-E flag has to be added if one want to use expressions such as + or |.",
"!dpkg -L iproute2 | grep -E '/bin|/sbin'\n\n!dpkg -L iproute2 | grep -E '/bin|/sbin' | wc -l\n\n!dpkg -L iproute2 | grep -E '/s?bin' | wc -l",
"Remark\n?: showing once or 0 times\n*: showing for any times (including 0 times)\n+: showing at least once\nRemark:\n-i, --ignore-case\n-v --invert-match\n-n, --line-number\n-E: extended",
"f=open(\"regular_express.txt\", \"r\")\nfile=f.read()\n\nprint repr(file)\n\nfor line in open(\"regular_express.txt\", \"r\"):\n print line\n\nos.mkdir(\"tmp\")\n\nos.listdir(os.getcwd())\n\nls\n\nos.removedirs(\"tmp\")\n\nos.listdir(\".\")",
"learn a bit of os.walk():",
"for root,dirs,files in os.walk(os.getcwd()):\n print root,dirs,files\n print\n\nfor root, dirs, files in os.walk(os.getcwd()):\n for file in files:\n print os.path.join(root,file)\n\nfor root, dirs, files in os.walk(os.getcwd()):\n for file in files:\n if file.endswith('.txt'):\n print file\n\n!whereis regex\n\n!find / -name 'ifconfig'\n\n!find -type f -user chweng -name '*.txt'\n\n!find -type d -user chweng -name '0*'\n\n!sudo fdisk -l | grep -nE \"^Disk /dev/[hs]d\"\n\nsudo find /etc -type f | wc -l\n\ndu -hl /etc/",
"bash\nchweng@chweng-VirtualBox:~$ jobs\n[1]+ Running sleep 100 &\nchweng@chweng-VirtualBox:~$ fg 1 # move process 1 to the foreground\nsleep 100\nthen, we can press ctrl+z, which will move the process to the background and pause it.\nthen, type \"bg 1\" in order to start it again in the background\nprocess\nflags: \n-e:all processes\n-l:state\n-f: also print UID(user ID) and PPID(parent process ID)",
"!ps -el",
"the \"top\" command\nhttp://mugurel.sumanariu.ro/linux/the-difference-among-virt-res-and-shr-in-top-output/\n\nRES stands for the resident size, which is an accurate representation of how much actual physical memory a process is consuming. (This also corresponds directly to the %MEM column.) This will virtually always be less than the VIRT size, since most programs depend on the C library.\nSHR indicates how much of the VIRT size is actually sharable (memory or libraries). In the case of libraries, it does not necessarily mean that the entire library is resident. For example, if a program only uses a few functions in a library, the whole library is mapped and will be counted in VIRT and SHR, but only the parts of the library file containing the functions being used will actually be loaded in and be counted under RES.\n\nuse \"top -o RES\" to sort by resident size(RES)\n16.11.2016\nThe following cell is the content of the script testPS.sh:\n```BASH\n!/bin/bash\nps -f\nread\n```\nNow, we execute it:\n```BASH\nchweng@chweng-VirtualBox:~/code/exercises$ ./testPS.sh \nUID PID PPID C STIME TTY TIME CMD\nchweng 9960 2374 0 16:26 pts/31 00:00:00 bash\nchweng 9974 9960 0 16:26 pts/31 00:00:00 /bin/bash ./testPS.sh\nchweng 9975 9974 0 16:26 pts/31 00:00:00 ps -f\nchweng@chweng-VirtualBox:~/code/exercises$ source testPS.sh \nUID PID PPID C STIME TTY TIME CMD\nchweng 9960 2374 0 16:26 pts/31 00:00:00 bash\nchweng 9980 9960 0 16:26 pts/31 00:00:00 ps -f\n```\n```BASH\nchweng@chweng-VirtualBox:~$ var1=1111\nchweng@chweng-VirtualBox:~$ var2=3333\nchweng@chweng-VirtualBox:~$ echo \"${var1}222\"\n1111222\nchweng@chweng-VirtualBox:~$ set | grep \"var1\"\nvar1=1111\nchweng@chweng-VirtualBox:~$ set | grep \"var2\"\nvar2=3333\n```\nBASH\nchweng@chweng-VirtualBox:$ export var1\nchweng@chweng-VirtualBox:$ bash\nchweng@chweng-VirtualBox:$ echo $var1\n1111\nchweng@chweng-VirtualBox:$ echo $var2\nSome examples that uses BASH variables:",
"%%bash\nvar=12345\necho \"The length of var1=$var is ${#var}.\"\n\n%%bash\n\nset $(eval du -sh ~$user);dir_sz=$1\necho \"$1,$2\"\necho \"${dir_sz}\"\n\necho \"\"\n\nset $(eval df -h|grep \" /$\");fs_sz=$2\necho \"$1,$2\"\necho \"${fs_sz}\"\n\n%%bash\n\nset $(eval du -sh ~$user);dir_sz=$1\nset $(eval df -h|grep \" /$\");fs_sz=$2\necho \"Size of my home directory is ${dir_sz}.\"\necho \"Size of my file system size is ${fs_sz}.\"",
"In BASH, a line is excuted successfully if the exit status is 0.",
"%%bash\n\n:\necho \"exit status=$?\"\n\n%%bash\n\nls /dhuoewyr242q\necho \"exit status=$?\"\n\n%%bash\n\ntrue\necho \"exit status=$?\"\n\n%%bash\n\nfalse\necho \"exit status=$?\"\n\n%%bash\n\nvalue=123\ntest $value==\"123\"\necho \"exit status=$?\"\necho\"\"\ntest $value==\"456\"\necho \"exit status=$?\"",
"The above result is wrong. it's necessary to wrap == with SPACES, as the follows:",
"%%bash\n\nvalue=123\ntest $value == \"123\"\necho \"exit status=$?\"\necho\"\"\ntest $value == \"456\"\necho \"exit status=$?\"",
"the command test can be replaced by its synonym [ ]:",
"%%bash\nhelp [\n\n%%bash\n\nvalue=123\n[ $value == \"123\" ]\necho \"exit status=$?\"\necho\"\"\n[ $value == \"456\" ]\necho \"exit status=$?\"\n\n%%bash\n\n/usr/bin/[ 0 == 1 ]\necho \"exit status=$?\"",
"example: ex4-4.sh:\nuse the advanced-test:[[]]\nto compare different integer strings in different forms (it could be that one in decimal and another in octal format)",
"%%bash\n\n#!/bin/bash\n# using [ and [[\n\nfile=/etc/passwd\n\nif [[ -e $file ]]\nthen\n echo \"Password file exists.\"\nfi\n\n# [[ Octal and hexadecimal evaluation ]]\n# Thank you, Moritz Gronbach, for pointing this out.\n\ndecimal=15\noctal=017 # = 15 (decimal)\nhex=0x0f # = 15 (decimal)\n\nif [ \"$decimal\" -eq \"$octal\" ]\nthen\n echo \"$decimal equals $octal\"\nelse\n echo \"$decimal is not equal to $octal\" # 15 is not equal to 017\nfi # Doesn't evaluate within [ single brackets ]!\n\nif [[ \"$decimal\" -eq \"$octal\" ]]\nthen\n echo \"$decimal equals $octal\" # 15 equals 017\nelse\n echo \"$decimal is not equal to $octal\"\nfi # Evaluates within [[ double brackets ]]!\n\nif [[ \"$decimal\" -eq \"$hex\" ]]\nthen\n echo \"$decimal equals $hex\" # 15 equals 0x0f\nelse\n echo \"$decimal is not equal to $hex\"\nfi # [[ $hexadecimal ]] also evaluates!",
"example: ex4-4.sh:",
"!mkdir /home/chweng/a\n\n!touch /home/chweng/a/123.txt\n\n%%bash\n#!/bin/bash\n# using file test operator\n\nDEST=\"~/b\"\nSRC=\"~/a\"\n\n# Make sure backup dir exits\nif [ ! -d $DEST ]\nthen\n mkdir -p $DEST\nfi\n\n# If source directory does not exits, die...\nif [ ! -d $SRC ]\nthen\n echo \"$SRC directory not found. Cannot make backup to $DEST\"\n exit 1\nfi\n\n# Okay, dump backup using tar\necho \"Backup directory $DEST...\"\necho \"Source directory $SRC...\"\n/bin/tar -Jcf $DEST/backup.tar.xz $SRC 2>/dev/null\n\n# Find out if backup failed or not\nif [ $? -eq 0 ] \nthen\n echo \"Backup done!\"\nelse\n echo \"Backup failed\"\nfi",
"See if an integer A is greater equal than another integer B:",
"%%bash\ni=5\nif [ $i -ge 0 ];then echo \"$i >= 0\";fi",
"alternatively, one can write it like this (with the help of )",
"%%bash\ni=5\nif (($i >= 0));then echo \"$i >= 0\";fi\n\n%%bash\ni=5\nif [ $i >= 0 ];then echo \"$i >= 0\";fi\n\n%%bash\ni=05\nif [ $i -ge 0 ];then echo \"$i >= 0\";fi",
"arithmatic calculations are enclosed by (()):",
"%%bash\necho $((7**2))\n\n%%bash\necho $((7%3))\n\n%%bash\n#!/bin/bash\n# calculate the available % of disk space\n\necho \"Current Mount Points:\"\nmount | grep -E 'ext[234]|xfs' | cut -f 3 -d ' ' \n\n#read -p \"Enter a Mount Point: \" mntpnt\nmntpnt=\"/home\"\n\nsizekb=$(df $mntpnt | tail -1 | tr -s ' ' | cut -f 2 -d ' ')\navailkb=$(df $mntpnt | tail -1 | tr -s ' ' | cut -f 4 -d ' ')\n\navailpct=$(echo \"scale=4; $availkb/$sizekb * 100\" | bc)\n\nprintf \"There is %5.2f%% available in %s\\n\" $availpct $mntpnt\n\nexit 0",
"Another code which do exactly the same thing:",
"%%bash\n#!/bin/bash\n# calculate the available % of disk space\n\necho \"Current Mount Points:\"\nmount | egrep 'ext[234]|xfs' | cut -f 3 -d ' ' # -f: field; -d:delimiter\n\n#read -p \"Enter a Mount Point: \" mntpnt\nmntpnt=\"/home\"\n\ndf_out=\"$(df $mntpnt | tail -1)\"\n\nset $df_out\n\navailpct=$(echo \"scale=4; ${4}/${2} * 100\" | bc)\n\nprintf \"There is %5.2f%% available in %s\\n\" $availpct $mntpnt\n\nexit 0\n\n%%bash\n\n#!/bin/bash\n# shift left is double, shift right is half\n\ndeclare -i number\n\n#read -p \"Enter a number: \" number\nnumber=-4\n\necho \" Double $number is: $((number << 1))\"\necho \" Half of $number is: $((number >> 1))\"\n\nexit 0",
"therefore, it is arithmatic shift in the above script.\n17112016\nlocal variables can be declared as readonly",
"%%bash\n\n#!/bin/bash\n# declare constants variables\n\nreadonly DATA=/home/sales/data/feb09.dat\necho $DATA\necho\n\nDATA=/tmp/foo\n# Error ... readonly variable\n\necho $DATA\n\nexit 0",
"switch case",
"%%bash\n#!/bin/bash\n# Testing ranges of characters.\n\nKeypress=5\n\n#echo; echo \"Hit a key, then hit return.\"\n#read Keypress\n\ncase \"$Keypress\" in\n [[:lower:]] ) echo \"Lowercase letter\";;\n [[:upper:]] ) echo \"Uppercase letter\";;\n [0-9] ) echo \"Digit\";;\n * ) echo \"Punctuation, whitespace, or other\";;\nesac # Allows ranges of characters in [square brackets],\n #+ or POSIX ranges in [[double square brackets.\n\n%%bash\n#!/bin/bash\n# menu case\n\necho -n \"\n\n Menu of available commands: \n =================================\n 1. full directory listing\n 2. display current directory name\n 3. display the date\n \n q. quit \n =================================\n Select a number from the list: \"\n #read answer\n answer=2\n\ncase \"$answer\" in\n q*|exit|bye ) echo \"Quitting!\" ; exit ;;\n 1) echo \"The contents of the current directory:\"\n ls -al ;;\n 2) echo \"The name of the current directory is $(pwd)\" ;;\n 3) echo -n \"The current date is: \" \n date +%m/%d/%Y ;;\n *) echo \"Only choices 1, 2, 3 or q are valid\" ;;\nesac\n\nexit",
"in the above example, \\$(pwd) can be replaced by $PWD if the env variable PWD exists\nthe \"for each\" loop",
"for planet in \"Mercury 36\" \"Venus 67\" \"Earth 93\" \"Mars 142\" \"Jupiter 483\"\n\n%%bash\n#!/bin/bash\n# Planets revisited.\n# Associate the name of each planet with its distance from the sun.\n\nfor planet in \"Mercury 36\" \"Venus 67\" \"Earth 93\" \"Mars 142\" \"Jupiter 483\"\ndo\n set -- $planet # Parses variable \"planet\"\n #+ and sets positional parameters.\n # The \"--\" prevents nasty surprises if $planet is null or\n #+ begins with a dash.\n\n# May need to save original positional parameters,\n#+ since they get overwritten.\n# One way of doing this is to use an array,\n# original_params=(\"$@\")\n\n echo \"$1 $2,000,000 miles from the sun\"\n #-------two tabs---concatenate zeroes onto parameter $2\ndone\n\n# (Thanks, S.C., for additional clarification.)\n\nexit 0",
"another way to create a loop, as what people normally do in Java:",
"%%bash\n#!/bin/bash\n\n#echo $1\n#file=$1\n\ncd /home/chweng/code/exercises/\nfile=\"hello.sh\"\n \nif [ -f $file ]\nthen\n echo \"the file $file exists\"\nfi \n\nfor((j=1;j<=5;j++))\ndo\n echo $j, \"Hello World\"\ndone",
"review: when to use the enhanced-test [[ ]]? \nA: when && or || operator is used",
"%%bash\n#!/bin/bash\n\n#echo $1\n#file=$1\n\ncd /home/chweng/code/exercises/\nfile=\"hello.sh\"\n \nif [[ -f $file && true ]]\nthen\n echo \"the file $file exists\"\nfi \n\nfor((j=1;j<=5;j++))\ndo\n echo $j, \"Hello World\"\ndone",
"while loop:",
"%%bash\n\n#!/bin/bash\n# increment number\n\n# set n to 1\nn=1\n\n# continue until $n equals 5\nwhile [ $n -le 5 ] \ndo\n echo \"Welcome $n times.\"\n n=$(( n+1 )) # increments $n\ndone\n\n\n%%bash\n\n#!/bin/bash\n# increment number\n\n# set n to 1\nn=1\n\n# continue until $n equals 5\nwhile (( n <= 5 )) \ndo\n echo \"Welcome $n times.\"\n (( n++ )) # increments $n\ndone\n\n%%bash\n#!/bin/bash\n# while can read data\n\nls -al | while read perms links owner group size mon day time file\ndo\n [[ \"$perms\" != \"total\" && $size -gt 100 ]] && echo \"$file $size\"\ndone\n\nexit",
"break & continue:",
"%%bash\n#!/bin/bash\n# break, continue usage\n\nLIMIT=19 # Upper limit\n\necho\necho \"Printing Numbers 1 through 20 (but not 3 and 11).\"\n\na=0\n\nwhile (( a <= LIMIT))\ndo\n ((a++))\n \n if [[ \"$a\" -eq 3 || \"$a\" -eq 11 ]] # Excludes 3 and 11.\n then\n continue # Skip rest of this particular loop iteration.\n fi\n\n echo -n \"$a \" # This will not execute for 3 and 11.\ndone\n\n# Exercise:\n# Why does the loop print up to 20?\n\necho; echo\n\necho Printing Numbers 1 through 20, but something happens after 2.\n\n##################################################################\n# Same loop, but substituting 'break' for 'continue'.\n\na=0\n\nwhile [ \"$a\" -le \"$LIMIT\" ]\ndo\n a=$(($a+1))\n\n if [ \"$a\" -gt 2 ] \n then\n break # Skip entire rest of loop.\n fi\n\n echo -n \"$a \"\ndone\nexit 0\n\n%%bash\n#!/bin/bash\n# The \"continue N\" command, continuing at the Nth level loop.\n\nfor outer in I II III IV V # outer loop\ndo\n echo; echo -n \"Group $outer: \"\n \n # --------------------------------------------------------------------\n for inner in 1 2 3 4 5 6 7 8 9 10 # inner loop\n do\n if [[ \"$inner\" -eq 7 && \"$outer\" = \"III\" ]]\n then\n continue 2 # Continue at loop on 2nd level, that is \"outer loop\".\n # Replace above line with a simple \"continue\"\n # to see normal loop behavior.\n fi\n\n echo -n \"$inner \" # 7 8 9 10 will not echo on \"Group III.\"\n done\n # --------------------------------------------------------------------\ndone\n\necho; echo\n\nexit 0\n\n\n%%bash\n#!/bin/bash\n# The \"continue N\" command, continuing at the Nth level loop.\n\nfor outer in I II III IV V # outer loop\ndo\n echo; echo -n \"Group $outer: \"\n \n # --------------------------------------------------------------------\n for inner in 1 2 3 4 5 6 7 8 9 10 # inner loop\n do\n if [[ \"$inner\" -eq 7 && \"$outer\" = \"III\" ]]\n then\n break 2 # Continue at loop on 2nd level, that is \"outer loop\".\n # Replace above line with a simple \"continue\"\n # to see normal loop behavior.\n fi\n\n echo -n \"$inner \" # 7 8 9 10 will not echo on \"Group III.\"\n done\n # --------------------------------------------------------------------\ndone\n\necho; echo\n\nexit 0\n",
"function",
"%%bash\n#!/bin/bash\n# Exercising functions (simple).\n\nJUST_A_SECOND=1\n\nfunky ()\n{ # This is about as simple as functions get.\n echo \"This is a funky function.\"\n echo \"Now exiting funky function.\"\n} # Function declaration must precede call.\n\n\nfun ()\n{ # A somewhat more complex function.\n i=0\n REPEATS=5\n\n echo\n echo \"And now the fun really begins.\"\n echo\n\n sleep $JUST_A_SECOND # Hey, wait a second!\n while [ $i -lt $REPEATS ]\n do\n echo \"----------FUNCTIONS---------->\"\n echo \"<------------ARE-------------\"\n echo \"<------------FUN------------>\"\n echo\n ((i++))\n done\n}\n\n# Now, call the functions.\n\nfunky\nfun\n\nexit $?\n",
"bash\nchweng@chweng-VirtualBox:~$ env |grep \"PWD\"\nPWD=/home/chweng\nchweng@chweng-VirtualBox:~$ \nchweng@chweng-VirtualBox:~$ cd Desktop/\nchweng@chweng-VirtualBox:~/Desktop$ env |grep \"PWD\"\nPWD=/home/chweng/Desktop\nOLDPWD=/home/chweng",
"%%bash\n#!/bin/bash\n# Global and local variables inside a function.\n\nfunc ()\n{\n local loc_var=23 # Declared as local variable.\n echo # Uses the 'local' builtin.\n echo \"\\\"loc_var\\\" in function = $loc_var\"\n global_var=999 # Not declared as local.\n # Therefore, defaults to global. \n echo \"\\\"global_var\\\" in function = $global_var\"\n} \n\nfunc\n\n# Now, to see if local variable \"loc_var\" exists outside the function.\n\necho\necho \"\\\"loc_var\\\" outside function = $loc_var\"\n # $loc_var outside function = \n # No, $loc_var not visible globally.\necho \"\\\"global_var\\\" outside function = $global_var\"\n # $global_var outside function = 999\n # $global_var is visible globally.\necho \n\nexit 0\n# In contrast to C, a Bash variable declared inside a function\n#+ is local ONLY if declared as such.\n\n\n%%bash\n#!/bin/bash\n# passing data to function\n\n# DECLARE FUNCTIONS\nshifter() # function to demonstrate parameter \n # list management in a function\n{\n echo \"$# parameters passed to $0\"\n while (( $# > 0 ))\n do\n echo \"$*\"\n shift \n done\n}\n\n# MAIN \n#read -p \"Please type a list of five words (then press Return): \" varlist\nvarlist=\"i my me mine myself\"\n\nset $varlist # this creates positional parameters in the parent\n\nshifter $* # call the function and pass argument list\n\necho \"$# parameters in the parent \"\necho \"Parameters: $*\"\n\nexit\n\n%%bash\n#!/bin/bash\n# Functions and parameters\n\nDEFAULT=default # Default param value.\n\nfunc2 () {\n if [ -z \"$1\" ] # Is parameter #1 zero length?\n then\n echo \"-Parameter #1 is zero length.-\" # Or no parameter passed.\n else\n echo \"-Parameter #1 is \\\"$1\\\".-\"\n fi\n\n variable=${1-$DEFAULT} # What does\n\n echo \"variable = $variable\" #+ parameter substitution show?\n # ---------------------------\n # It distinguishes between\n #+ no param and a null param.\n if [ \"$2\" ]\n then\n echo \"-Parameter #2 is \\\"$2\\\".-\"\n fi\n\n return 0\n}\n\necho\n\necho \"Nothing passed.\"\nfunc2 # Called with no params\necho\n\necho \"Zero-length parameter passed.\"\nfunc2 \"\" # Called with zero-length param\necho\n\necho \"Null parameter passed.\"\nfunc2 \"$uninitialized_param\" # Called with uninitialized param\necho\n\necho \"One parameter passed.\"\nfunc2 first # Called with one param\necho\n\necho \"Two parameters passed.\"\nfunc2 first second # Called with two params\necho\n\necho \"\\\"\\\" \\\"second\\\" passed.\"\nfunc2 \"\" second # Called with zero-length first parameter\necho # and ASCII string as a second one.\n\nexit 0\n\n%%bash\n\n#!/bin/bash\n# using stdout passing data\n\n# Declare function\naddup() # function to add the number to itself\n{\n echo \"$((numvar + numvar))\"\n}\n\n# MAIN \nwhile : # start infinite loop\ndo\n clear # clear the screen\n declare -i numvar=0 # declare integer variable\n \n # read user input into variable(s)\n echo; echo\n #read -p \"Please enter a number (0 = quit the script): \" numvar otherwords\n numvar=100\n \n if (( numvar == 0 )) # test the user input\n then\n exit $numvar\n else\n result=$(addup) # call the function addup \n # and get data from function\n echo \"$numvar + $numvar = $result\"\n #read -p \"Press any key to continue...\"\n fi\n break # this is added by myself because I'd like to print the output in the notebook\ndone\n",
"list open files:",
"!lsof / |grep \"/home/chweng/.ipython\"\n\n!strace -c find /etc -name \"python*\"",
"library trace: trace library calls demanded by the specified process.",
"!ltrace -c find /etc -name \"python*\"",
"awk:",
"%%bash\n\n#Example 1: Printing the First Field of the /etc/hosts File to stdout\n\n#cat /etc/hosts | awk '{print $1}'\n\n#Pipes data to awk\n\nawk '{print \"field one:\\t\" $1}' /etc/hosts\n#Uses /etc/hosts as an input file\n\n#'{print}' #Prints the current record\n#'{print $0}' #Prints the current record (more specific)\n#'{print $1}' #Prints the first field in the current record\n#'{print \"field one:\" $1}' #Prints some text before field 1\n#'{print \"field one:\\t\" $1}' #Prints some text, a tab, then field 1\n#'{print \"field three:\" $3; print $1}' #Prints fields on two lines in\n \n\n!awk ' { print \"\" ; print $0 }' ~/code/module08",
"```bash\n1. ls -al /etc | awk '$1 ~ /^d/ {print \"dir: \",$9}'\n\n\nll /etc | awk '$1 ~ /^d/ && $9 !~ /^./ {print \"dir: \",$9}'\n\n\nll /sbin |awk '/^-/ && $2 > 1 {print \"file:\",$9 \"\\t links: \",$2}'\n\n\ncat /etc/services | awk '$1 == \"ssh\" {print \"service: \",$1,$2}'\n\n\nss -an | awk '/^ESTAB/ && $4 ~ /:22$/ {print \"ssh from:\",$5}'\n\n\nmount | awk '$5 ~ /ext[234]/ || $5 == \"xfs\" {print $3,\"(\"$5\")\"}'\n\n\nps -ef | awk '$2 == 1 , $2 == 10 {print}'\n```",
"!ls -al /home/chweng\n\n!ls -al /home/chweng/ | awk '/^d/ {print \"dir: \",$9}'\n\n!mount |grep \"ext\"\n\n!mount | awk '/ ext[234] / {print \"device: \",$1,\"\\tmount point: \" $3}'\n\n!mount | awk '/ ext[234] / {print \"device: %5s \\tmount point: %5s\",$1, $3}'\n\n!ip addr show",
"select ipv4's ip:",
"!ip addr show | awk '/inet / {print $2}'\n\n!cat /etc/passwd | awk -F : '/^chweng/ {print \"id:\",$1,\" \\thome:\",$6}'",
"bash\nchweng@VirtualBox:~$ sudo fdisk -l | awk '/^Disk \\/dev\\/[hs]d/ {print $2,$3,$4}'\n/dev/sda: 1 TiB,",
"!cat /etc/group | awk -F : '/^sudo/ {print $1 ,\"users are:\", $4}'\n\nimport os\nos.chdir(\"/home/chweng/code\")",
"bash\nchweng@ubuntu221:~/code$ javac -d . TestDiceThrowEx1.java \nchweng@ubuntu221:~/code$ ls\nRoadLog exercises module03 module05 module07 tw\nTestDiceThrowEx1.java module02 module04 module06 module08\nchweng@ubuntu221:~/code$ java tw.loop.TestDiceThrowEx1 \ndiceNumber=5\nTry Again.\ndiceNumber=5\nTry Again.\ndiceNumber=6\nTry Again.\ndiceNumber=6\nTry Again.\ndiceNumber=4\nTry Again.\ndiceNumber=5\nTry Again.\ndiceNumber=6\nTry Again.\ndiceNumber=2\nYou Win.",
"%%bash\nls -l /home/chweng/code/exercises/ |awk '/^-/ && $2 = 1 {print \"file:\",$9 \"\\t links: \",$2}'",
"bash\nstrace -c -f java tw.loop.TestDiceThrowEx1\nsort:",
"!ps -ef | awk '$2 == 1 , $2 == 10 {print}'",
"```bash\ninput data from nbafile file\n\n\nawk '$3 == 82 {print $1,\" \\t\",$5}' nbafile\n\n\nawk '$3 < 78' nbafile\n\n\nawk '$2 ~ /c.*l/' nbafile\n\n\nawk '$1 ~ /^s/ && $4 > 80 {print $1 \"\\t\\t\" $4}' nbafile\n```",
"%%bash\n\necho \"This is a book\" | awk '\n { print \"length of the string : \",$0,\" is : \",length($0) }'"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
plissonf/DeepPlay
|
notebooks/web_scraping.ipynb
|
mit
|
[
"AIDA Freediving Records\nThe project DeepPlay aims at exploring and displaying the world of competitive freediving using web-scraping, machine learning and data visualizations (e.g. D3.js). The main source of information is the official website of AIDA, International Association for the Development of Apnea. The present work has been created within 10 days including exploratory data analysis.\n1- Scraping the data from the website\n2- Data preparation / cleaning / extension (separate name / country, get GPS locations, get gender...)\n3- Early data exploration (see exploratory_data_analysis.html)\n-\nLoad modules",
"from bs4 import BeautifulSoup\nfrom lxml import html\nimport requests as rq\nimport pandas as pd\nimport re\nimport logging",
"-\nThe method get_discipline_value(key) selects one of 6 disciplines (dictionary keys: STA, DYN, DNF, CWT, CNF, FIM) and allocates its corresponding value (id) to a new url, discipline_url.\nIf the discipline is mispelled or inexistent, get_discipline_value throws the sentence \"Check your spelling ... is not a freediving discipline\".\nThe method is called within the following method scraper( ) function to obtain html pages associated with a discipline.",
"def get_discipline_value(key):\n\n disc = {'STA': 8 ,\n 'DYN': 6,\n 'DNF': 7,\n 'CWT': 3,\n 'CNF': 4,\n 'FIM': 5\n }\n if key in disc.keys():\n value = disc[key]\n discipline_url = '{}{}'.format('&disciplineId=', value) \n return discipline_url\n else:\n logging.warning('Check your spelling. ' + key + ' is not a freediving discipline')\n\nget_discipline_value('NFT')",
"-\nThe method cleanser( ) changes the list of lists named 'data' which is collected all html pages for each discipline into a cleaned and labelled dataframe df. The method uses regular expressions. It will also be called within the method scraper( ).",
"def cleanser(a_list):\n \n df = pd.DataFrame(a_list)\n df.columns = ['Ranking', 'Name', 'Results', 'Announced', 'Points', 'Penalties', 'Date', 'Place']\n df['Ranking'] = df['Ranking'].str.replace('.', '')\n df['Country'] = df['Name'].str.extract('.*\\((.*)\\).*', expand=True)\n df['Name'] = df['Name'].str.replace(r\"\\(.*\\)\",\"\")\n df['Results'] = df['Results'].str.replace('m', '')\n df['Date'] = pd.to_datetime(df['Date'])\n df = df.drop_duplicates(['Name', 'Results', 'Announced', 'Points', 'Penalties', 'Date', 'Place', 'Country'])\n return df",
"-\nThe method scraper( ) crawls through an entire freediving discipline, identifies how many pages it consists of (max_pages), obtains html code from all urls and save this code into a list of lists (data). The later is saved into a cleaned data frame using cleanser( ), ready for data analysis",
"def scraper(key):\n \n #Obtain html code for url and Parse the page\n base_url = 'https://www.aidainternational.org/Ranking/Rankings?page='\n url = '{}1{}'.format(base_url, get_discipline_value(key))\n\n page = rq.get(url)\n soup = BeautifulSoup(page.content, \"lxml\")\n\n\n #Use regex to identify the maximum number of pages for the discipline of interest\n page_count = soup.findAll(text=re.compile(r\"Page .+ of .+\"))\n max_pages = str(page_count).split(' ')[3].split('\\\\')[0]\n total_obs = int(max_pages)*20\n\n data = []\n for p in range(1, int(max_pages)+1):\n\n #For each page, create corresponding url, request the library, obtain html code and parse the page\n url = '{}{}{}'.format(base_url, p, get_discipline_value(key))\n\n #The break plays the role of safety guard if dictionary key is wrong (not spelled properly or non-existent) then the request\n #for library is not executed (and not going through the for loop to generate the data), an empty dataframe is saved\n if url == '{}{}None'.format(base_url, p):\n break\n else:\n new_page = rq.get(url)\n new_soup = BeautifulSoup(new_page.content, \"lxml\")\n\n #For each page, each parsed page is saved into the list named \"data\"\n rows = new_soup.table.tbody.findAll('tr')\n for row in rows:\n cols = row.find_all('td')\n cols = [ele.text.strip() for ele in cols]\n data.append([ele for ele in cols if ele])\n\n p += 1\n\n #Results from list \"data\" are cleaned using \"cleanser\" method and saved in a dataframe clean_df\n clean_df = cleanser(data)\n pd.set_option('max_rows', int(total_obs))\n pd.set_option('expand_frame_repr', True)\n\n #Dataframe df is saved in file results_key.csv to access results offline\n filename = '/Users/fabienplisson/Desktop/Github_shares/DeepPlay/deepplay/data/cleaned/results_{}.csv'.format(key)\n clean_df.to_csv(filename, encoding ='utf-8')\n logging.warning('Finished!')\n #with open(filename,'a') as f:\n #f.write(clean_df.encode('uft-8'))\n #f.closed\n\n\nscraper('DYN')",
"-\nFuture Steps\n\nIntegrating all methods into class using Object-oriented programming (OOP)\nTidying up data with more specific regular expressions\nApplying web-scraping to other websites to collect other features and datasets that share similar types of information (athlete name, country, record values (time, distance), location of the event, date)."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
danielfrg/danielfrg.github.io-source
|
content/blog/notebooks/2015/09/docker-selenium-crawler.ipynb
|
apache-2.0
|
[
"TD;DR: Using selenium inside a docker container to crawl webistes that need javascript or user interaction + a cluster of those using docker swarm.\nWhile simple HTTP requests are good enough 90% to get the data you want from a website I am always looking for better ways to optimize my crawlers specially in websites that require javascript and user interaction, a login or a click in the right place sometimes give you the access you need. I am looking at to you government websites!\nRecently I have seen more solutions to some of these problems in python such as Splash from ScrapingHub that is basically a QT browser with an scriptable API. I havent tried it and it definetly looks like a viable option but if I am going to render a webpage I want to do it in a \"real\" (Chrome) browser.\nAn easy way to use Chrome (or Firefox or any other popular browser) with an scriptable and multi-language API is using Selenium, which is generally used to test websites, making clicks, fake/real logins and more. To make this process reproducible I used docker to install Selenium, Chrome and the Selenium Chrome driver in a single container, this image can be found at docker-selenium but more and in docker hub also more (and probably) better selenium images can be found in Docker Hub and the result should be the same.\nBrowser\nGetting Selenium running its just as easy as having docker and:\n$ docker pull danielfrg/selenium\n$ docker run -it -p 4444:4444 danielfrg/selenium\nAnd now you'll have a selenium instance running in port 4444. If you use docker-machine get the docker IP from there and point your brower to that port. In my case http://192.168.99.101:4444/wd/hub - this is the same URL you use to point to the Selenium Python API.\nNow in Python we can use it for example to crawl the homepage of yelp.com.\nDisclaimer: This is just an example. I don't know the terms of service of yelp, if you want to try this in a bigger scale read the TOS.\nCode\nNow into the code: Create a Remote Driver, point it to the Selenium URL. Query for yelp.com and just for fun get a screenshot.",
"from selenium import webdriver\nfrom selenium.webdriver.common.desired_capabilities import DesiredCapabilities\n\ndriver = webdriver.Remote(command_executor='http://192.168.99.101:4444/wd/hub',\n desired_capabilities=DesiredCapabilities.CHROME)\n\ndriver.get(\"http://www.yelp.com\")\n\nimage = driver.get_screenshot_as_base64()\n\nfrom IPython.display import HTML\nHTML(\"\"\"<img src=\"data:image/png;base64,{0}\">\"\"\".format(image))",
"Here you can see that the render of the website is correct and in my case it pointed me to the Austin website.\nNow we can use the Python API to can query for the content that we want from the website. To help me a little bit while looking at HTML source and searching for the css (or xpath) that I need I love the selector gadget plugin. For this example I want to get the \"Best of Yelp: Austin\" section that has some javascript and a scroll button with the different categories. The Selection in this case would be: #best-of-yelp-module .navigation li.",
"best = driver.find_element_by_id('best-of-yelp-module')\n\nnavigation = best.find_element_by_class_name('navigation')\n\nsections = navigation.find_elements_by_tag_name('li')\n\nlen(sections)",
"So now I know there are 23 links on that section, 21 categories since there are two buttons to scroll.\nNow I am going to click on each one of this buttons this is going to change the content of another part of the site: .main-content and after that I am getting the name of the category and the list of businesses .main-content .biz-name. A little note: I am waiting for 1 second after the click() event while the content of the webpage updates, this is to wait for request that might happen and the fade in/out effects.",
"import time\n\nbiz = {}\n\nfor section in sections:\n section.click()\n time.sleep(1)\n content = best.find_element_by_class_name('main-content')\n sec_name = content.text.split('\\n')[0]\n biz_names = content.find_elements_by_class_name('biz-name')\n biz_names = [name.text for name in biz_names if name.text]\n biz[sec_name] = biz_names",
"After about 30 seconds I have the content on a dictionary and I can take a look at it.",
"biz",
"After that we can just stop the driver and stop the docker container.",
"driver.quit()",
"Scale\nOk, that looks pretty cool but how to scale this?\nFortunately with docker now you have a lot of options such as Kubernetes and Docker Swarm. In this case I decided to use Docker Swarm since I was lucky to test Rackspace new container service while I was at PyTexas this last weekend.\nThe process to start the container is exactly the same since docker-swarm serves the same docker API, you just need to point docker client to the docker-swarm cluster. Once thats done I can just execute the same docker commands to create multiple docker containers in the cluster.\n```\n$ docker pull danielfrg/selenium # Pull the container in all the nodes\n$ docker run -d -p 4444 danielfrg/selenium # Multiple times to start multiple containers\n$ docker run -d -p 4444 danielfrg/selenium \n$ docker run -d -p 4444 danielfrg/selenium\n$ docker run -d -p 4444 danielfrg/selenium\n$ docker ps\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n34867f93db5a danielfrg/selenium \"sh /opt/selenium/ent\" 4 seconds ago Up 2 seconds xxx.xx.x.3:49154->4444/tcp 40291b37-aa07-43fd-a7de-1f61673a89a1-n3/sleepy_rosalind\nb45d64827e21 danielfrg/selenium \"sh /opt/selenium/ent\" 6 seconds ago Up 4 seconds xxx.xx.x.2:49154->4444/tcp 40291b37-aa07-43fd-a7de-1f61673a89a1-n2/focused_brown\n8a9da9801ee3 danielfrg/selenium \"sh /opt/selenium/ent\" 8 seconds ago Up 7 seconds xxx.xx.x.1:49154->4444/tcp 40291b37-aa07-43fd-a7de-1f61673a89a1-n1/kickass_euclid\n59de7a4811ae danielfrg/selenium \"sh /opt/selenium/ent\" 13 seconds ago Up 11 seconds xxx.xx.x.3:49153->4444/tcp 40291b37-aa07-43fd-a7de-1f61673a89a1-n3/elegant_sammet\n11b13963a2e0 danielfrg/selenium \"sh /opt/selenium/ent\" 15 seconds ago Up 13 seconds xxx.xx.x.2:49153->4444/tcp 40291b37-aa07-43fd-a7de-1f61673a89a1-n2/dreamy_curie\n7beecb6a9e8a danielfrg/selenium \"sh /opt/selenium/ent\" 16 seconds ago Up 15 seconds xxx.xxx.x.1:49153->4444/tcp 40291b37-aa07-43fd-a7de-1f61673a89a1-n1/lonely_bell\n```\nNow that the containers are running we can use the Python docker API to get the IP and port of the containers.",
"import os\nfrom docker import Client\nfrom docker.utils import kwargs_from_env\n\nkwargs = kwargs_from_env()\nkwargs['tls'].assert_hostname = False\nclient = Client(**kwargs)\n\ncontainers = client.containers()\n\nseleniums = [c for c in containers if c['Image'] == 'danielfrg/selenium']\n\nurls = [s['Ports'][0]['IP'] + ':' + str(s['Ports'][0]['PublicPort']) for s in seleniums]\n\nurls",
"Now with this we have a pool of seleniums running that we can use to crawl not only one single page but a larger number of pages.\nFinal thoughts\nThis was a simple experiment for a more complete crawler that is based on newer technologies and that can target a larger number of websites. As I mention earlier data in the goverment website is very helpful in a lot of cases but is usually in some old system that requires clicks, search forms and stuff like that, this technique can be useful in those cases.\nFor a complete system maybe a better integration with Scrapy is needed and there are some experiements between scrapy + selenium:\n\nhttp://stackoverflow.com/questions/17975471/selenium-with-scrapy-for-dynamic-page\nhttps://github.com/voliveirajr/seleniumcrawler"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nyoungb2/CLdb
|
doc/examples/Methanosarcina/arrayBlast.ipynb
|
gpl-2.0
|
[
"Description:\n\nThis notebook goes through the analysis of blasting spacers vs a database containing potential protospacers.\nAny blast datbase can be used.\nA script is provided to easily blast all of the genomes in CLdb.\n\nBefore running this notebook:\n\nrun the Setup notebook\n\nUser-defined variables",
"# directory where you want the spacer blasting to be done\n## CHANGE THIS!\nworkDir = \"/home/nyoungb2/t/CLdb_Methanosarcina/\"",
"Init",
"import os\nfrom IPython.display import FileLinks\n%load_ext rpy2.ipython\n\nblastDir = os.path.join(workDir, 'arrayBlast')\nif not os.path.isdir(blastDir):\n os.makedirs(blastDir)",
"Blast vs all E. coli genomes in the CLdb\nSpacer blast\nSelecting spacer sequences",
"!cd $blastDir; \\\n CLdb -- array2fasta -cluster -cut 1 > spacers_cut1.fna\n\n## getting number of spacer sequences\n!printf 'Number of unique spacers:\\t'\n!cd $blastDir; \\\n grep -c \">\" spacers_cut1.fna; \\\n echo; \\\n head -n 6 spacers_cut1.fna",
"Blast\n\nThe array blast commands are sub-sub commands.\nSO, that's why when you type: 'CLdb -- arrayBlast -h', you get then following:",
"!CLdb -- arrayBlast -h\n\n# listing all arrayBlast subcommands\n!CLdb -- arrayBlast --list",
"Sub-sub-command help:\nCLdb arrayBlast -- run -- -h",
"# spacer blast\n## Really just a thin wrapper around blastn that will blast the spacers against all genomes in CLdb\n### If you want to blast some else, just run the blast yourself (use 'perldoc CLdb_arrayBlast.pl' for more info)\n\n!cd $blastDir; \\\n CLdb -- arrayBlast -- run \\\n -query spacers_cut1.fna \\\n -fork 10 \\\n > spacers_cut1_blastn.xml\n \n# NOTE: using 10 CPUs ",
"Note:\n\nUsing spacer clustering cutoff of 1, so just 'unique' spacer sequences",
"# converting .xml to .srl\n## This retains all of the info in the xml, but produces a smaller file that is faster to load and write.\n## For large blast runs (xml files > 100 Mb), this will take a little while\n\n!cd $blastDir; \\\n CLdb -- arrayBlast -- xml2srl \\\n spacers_cut1_blastn.xml \\\n > spacers_cut1_blastn.srl\n\n# checking output file\n!cd $blastDir; \\\n ls -thlc spacers_cut1_blastn.*",
"Note:\n\nYou could pipe the output from CLdb -- arrayBlast -- run directly into CLdb -- arrayBlast -- xml2srl\nExample: CLdb -- arrayBlast -- run -query spacers_cut1.fna | CLdb -- arrayBlast -- xml2srl\n\nDR Blast\n\nBLASTing DR sequences against the same subjects.\nThis is needed to filter out spacer blast hits to CRISPR arrays.",
"# selecting DRs, so we can blast them.\n## This is needed blast filter out spacers that hit CRISPR arrays\n\n!cd $blastDir; \\\n CLdb -- array2fasta \\\n -cluster -cut 1 -r \\\n > DRs_cut1.fna\n\n## getting number of spacer sequences\n!printf 'Number of unique DRs:\\t'\n!cd $blastDir; \\\n grep -c \">\" DRs_cut1.fna; \\\n echo; \\\n head -n 6 DRs_cut1.fna\n\n# DR blast, just like spacer blast\n\n!cd $blastDir; \\\n CLdb -- arrayBlast -- run \\\n -query DRs_cut1.fna \\\n -fork 10 \\\n > DRs_cut1_blastn.xml\n\n# converting .xml to .srl\n## This retains all of the info in the xml, but produces a smaller file that is faster to load and write.\n## For large blast runs (xml files > 100 Mb), this will take a little while\n\n!cd $blastDir; \\\n CLdb -- arrayBlast -- xml2srl \\\n DRs_cut1_blastn.xml \\\n > DRs_cut1_blastn.srl\n\n\n# checking output file\n!cd $blastDir; \\\n ls -thlc DRs_cut1_blastn.*",
"Filtering out spacer hits to CRISPRs\n\nBased on DR hits falling adjacent to spacer hits",
"# filtering spacer hits to arrays\n## This is based on the premise that DR hits adjacent to spacer hits signify that the spacer is hitting a CRISPR array\n\n!cd $blastDir; \\\n CLdb -- arrayBlast -- filter_arrayHits \\\n spacers_cut1_blastn.srl \\\n DRs_cut1_blastn.srl \\\n > spacers_cut1_blastn_filt.srl\n \n \n# checking output file\n!cd $blastDir; \\\n echo; \\\n ls -thlc spacers_cut1_blastn_filt.srl",
"Converting blast .srl to .csv",
"!cd $blastDir; \\\n CLdb -- arrayBlast -- srl2csv \\\n spacers_cut1_blastn_filt.srl \\\n > spacers_cut1_blastn_filt.txt\n \n# assessing table of blast hits \n!cd $blastDir; \\\n head spacers_cut1_blastn_filt.txt\n\n#!printf \"Number of blast hits: \"\nnhits = !cd $blastDir; wc -l spacers_cut1_blastn_filt.txt\nnhits = nhits[0].split(' ')[0]\nprint \"Number of blast hits: {}\".format(int(nhits) - 1)",
"Conclusions\n\nWe have some BLAST hits!\nLet's add some information to the blast results\n\nadding crRNA info\n\nAdding crRNA region to .srl (actually the DNA template, refering to it as 'crDNA')\nHow much of the adjacent DR sequences are included in the crDNA is determined by the user (default: 10bp on either side)",
"!cd $blastDir; \\\n CLdb -- arrayBlast -- add_crRNA \\\n < spacers_cut1_blastn_filt.srl \\\n > spacers_cut1_blastn_filt_crDNA.srl\n \n## checking output\n!cd $blastDir; echo; \\\n ls -thlc spacers_cut1_blastn_filt_crDNA.srl",
"Adding protospacer info\n\nSimilar to adding crRNA info, but getting information from the blast database(s)",
"!cd $blastDir; \\\n CLdb -- arrayBlast -- add_proto -workers 24 \\\n < spacers_cut1_blastn_filt_crDNA.srl \\\n > spacers_cut1_blastn_filt_crDNA_proto.srl \n \n## checking output\n!cd $blastDir; echo; \\\n ls -thlc spacers_cut1_blastn_filt_crDNA_proto.srl ",
"Aligning crRNA & protospacer\n\nAlignment added to the .srl file.\nThese are individual alignments between the spacer and protospacer (using clustalw).",
"# aligning crRNA and protospacer\n\n!cd $blastDir; \\\n CLdb -- arrayBlast -- align_proto \\\n < spacers_cut1_blastn_filt_crDNA_proto.srl \\\n > spacers_cut1_blastn_filt_crDNA_proto_aln.srl\n \n## checking output\n!cd $blastDir; echo; \\\n ls -thlc spacers_cut1_blastn_filt_crDNA_proto_aln.srl",
"Getting alignments (fasta)\n\nYou can parse out the alignments you want (parse by subtype, taxon_name, etc.)\nOR you can parse them after writing (add subtype, taxon_name, e-value, etc. to the sequence name)\nThe alignments will be oriented to the crDNA (5'-3' for the crRNA)\nSee CLdb -- arrayBlast --perldoc -- get_align for info on what -outfmt does",
"# getting alignment of protospacer and crDNA and writing as a fasta\n\n!cd $blastDir; \\\n CLdb -- arrayBlast -- get_align \\\n -outfmt taxon_name,subtype,evalue \\\n < spacers_cut1_blastn_filt_crDNA_proto_aln.srl \\\n > blastn_crDNA-proto_aln.fna\n \n\n## checking output\n!cd $blastDir; echo; \\\n ls -thlc blastn_crDNA-proto_aln.fna\n\n# let's take a look at the sequences \n!cd $blastDir; \\\n head -n 10 blastn_crDNA-proto_aln.fna",
"The alignment is really just for each crDNA-protospacer pair (every 4 lines)\nFirst, the crRNA (sequence name starts with \"crRNA\")\nSecond, the protospacer (sequence name start with \"proto\")\nExample: \n\n~~~\n\ncrDNA\nATAGACA\nproto\nATAGACA\n~~~\n\n\n\nNote: the lower case letters are sequence adjacent to the actual protospacer\n\n\nIf you wanted each crRNA-protospacer pair individually, you could use the split bash command like this:\n\nWARNING: this produces a lot of files, so we will make them in a new directory",
"# file/directory\ninFile = os.path.join(blastDir, 'blastn_crDNA-proto_aln.fna')\noutDir = os.path.join(blastDir, 'blastn_crDNA-proto_aln_split')\nif not os.path.isdir(outDir):\n os.makedirs(outDir)\noutPre = os.path.join(outDir, 'split') \n\n# spliting alignment\n!split -l 4 -d -a 4 $inFile $outPre\n\n# checking out files\n!printf 'Number of files produced: '\n!find $outDir -name \"split*\" | wc -l",
"Getting PAMs\n\nLooking at the alignments can be very useful, but it can help to extract what we specifically want and summarize.\nLet's pull out the potential PAM regions from the alignment \nWe are going to assume that the PAM is:\n4bp long \n1bp away from the 5' end of the protospacer ('right' side of the alignment)",
"!cd $blastDir; \\\n CLdb -- arrayBlast -- get_PAM -PAM 1 4 -f - \\\n < blastn_crDNA-proto_aln.fna \\\n > blastn_crDNA-proto_aln_PAM.fna\n\n\n## checking output\n!cd $blastDir; echo; \\\n head blastn_crDNA-proto_aln_PAM.fna",
"Summary table of each unique PAM sequence",
"!cd $blastDir; \\\n egrep -v \"^>\" blastn_crDNA-proto_aln_PAM.fna | \\\n sort | uniq -c | sort -k 1 -n -r ",
"Spacer (crDNA) - protospacer mismatch distribution\n\nLet's look at potential spacer targeting by assessing mismatches of the crDNAs and protospacers.\nThis script will make 2 summary files of mismatches for each crDNA-protospacer alignment\nGiven that spacers can vary in length, the alignment is relative to a user-defined 'SEED' region.\nI'm going to use the default SEED region: '-SEED -8 -1'\nThis SEED region covers the last 8 bp of the alignment",
"!cd $blastDir; \\\n CLdb -- arrayBlast -- get_SEEDstats \\\n --SEED -8 -1 \\\n -prefix blastn_crDNA-proto_aln \\\n < blastn_crDNA-proto_aln.fna \n \n## checking output files\n!cd $blastDir; echo; \\\n ls -thlc blastn_crDNA-proto_aln_SEEDstats*.txt\n\n# checking files\n!cd $blastDir; \\\n head -n 6 blastn_crDNA-proto_aln_SEEDstats-byPos.txt\n \nprint '-' * 50\n\n\n!cd $blastDir; \\\n head -n 6 blastn_crDNA-proto_aln_SEEDstats-summary.txt",
"Plotting mismatches",
"%%R\nlibrary(ggplot2)\nlibrary(gridExtra)\n\n%%R -i blastDir -w 800\n\n# loading table\ninfile = file.path(blastDir, 'blastn_crDNA-proto_aln_SEEDstats-byPos.txt')\n\ntbl = read.delim(infile)\ntbl = tbl[tbl$region != 'protospacer', ]\n\n# mismatches and counts, & normalized mismatches\np.mis = ggplot(tbl, aes(pos_rel_SEED, mismatches, group=region, fill=region)) +\n geom_bar(stat='identity', position='dodge') +\n labs(y='Number\\nof mismatches', x='Alignment position relative to SEED start')\np.count = ggplot(tbl, aes(pos_rel_SEED, count, group=region, fill=region)) + \n geom_bar(stat='identity', position='dodge') +\n labs(y='Number of alignments\\nwith nucleotide\\nat position', x='Alignment position relative to SEED start')\np.norm = ggplot(tbl, aes(pos_rel_SEED, mismatch_norm, group=region, fill=region)) +\n geom_bar(stat='identity', position='dodge') +\n labs(y='Mismatchs /\\nalignments', x='Alignment position relative to SEED start')\n\ngrid.arrange(p.count, p.mis, p.norm, ncol=1)",
"Notes on plots:\n\nThe top plot shows that some spacer-protospacer are not quiet as long as the rest, which is why the 'Number of alignment with nucleotide at position' drops at the higher positions (> ~30).\nThere are fewer mismatches in the middle of the spacer-protospacer alignments.\n\nArray Blast vs NCBI's nt database\n\nSee the E. coli example for conducting a spacer blast vs the nt database."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
echopen/PRJ-medtec_sigproc
|
DeviceCaracterization/FirstCharacterization/first characterization.ipynb
|
mit
|
[
"First raw image characterization\nLets study the two raw images we have : one of a plate and one of a hand",
"import numpy as np\nimport scipy.signal as sp\nimport matplotlib.pyplot as plt\nplt.close('all')\n\n# Constants\nNsample = 1689 # number of samples by line\nNline = 64 # number of lines\n\n# Load data\nplate = np.loadtxt('plate.txt')\nhand = np.loadtxt('hand.txt')",
"The plate\nLet mesure the SNR and have a look at the data. for computing the SNR,\nI compute the variance one the empty half side of the image. This will be my\nestimate of the noise power. To compute the power of the signal, i mesure the\nhighest possible amplitude and square it.",
"# image plotting\n\nplt.figure(1)\nplt.title('plate envelope')\nplt.imshow(np.abs(sp.hilbert(plate)), cmap='gray', aspect='auto')\nplt.colorbar()\nplt.xlabel('samples')\nplt.ylabel('lines')\n\nNempty = 32 # separe the empty from the not empty side\nNplot = 46 # one line chosen for graph display\n\n# repere plotting\n\nplt.plot([0,Nsample-1],[Nempty-0.5,Nempty-0.5],'b--',label='empty border')\nplt.plot([0,Nsample-1],[Nplot,Nplot],'r--',label='plot line')\nplt.legend()\nplt.show()\n\n# graph plotting\n\nplt.figure(2)\nplt.title('one raw line plotting')\nplt.plot(plate[Nplot,:],'r')\nplt.xlabel('samples')\nplt.ylabel('amplitude')\nplt.show() \n\n# SNR computation\n\nPnoise = np.var(plate[:Nempty,:])\nPsignal = ((np.max(plate) - np.min(plate))/2)**2\nprint('SNR :', '{:.1f}'.format(10*np.log10(Psignal/Pnoise)),'dB',\"/\",Pnoise,\" noise var\")",
"Result : actually the data is coded on 14 bits. But the reference of the red pitaya is 20V and the input is only 1V So on the +/-8192 possible values only 410 are useful. Some noise is visible so i computed the SNR. For that i took the plate image and computed the variance on the half top. It give me the power of the noise. I Then took max and min of the whole signal to compute max amplitude and thus the max signal power. This give 40.4 db of SNR. This agreed s with the value stated by jerome.\nConclusion : If the noise we can see is analog noise, an ADC of 10bit should be enough (60 dB of dynamic range for 40db of SNR). Moreover, if we don't find a way of fully use the 14bits of the red pytaia, this will still be a improvement since we pass from +/-410 values to +/-512.\nThe hand",
"# plot it for fun\n\nplt.figure(3)\nplt.title('hand envelope')\nplt.imshow(abs(sp.hilbert(hand)), cmap='gray', aspect='auto')\nplt.colorbar()\nplt.xlabel('samples')\nplt.ylabel('lines')\nplt.show()",
"What does the noise look like?\nBasing ourselves on the fact that the Acquisition rate is 125MHz and Decimation 8.",
"ACQ = 125000000\nDECIMATION = 8\nSizeFFT = len(plate[Nempty])\n\nXScale = range(SizeFFT)\n\nprint len(XScale),SizeFFT\nfor i in range(SizeFFT):\n XScale[i] = (1.0*ACQ/DECIMATION)*float(XScale[i])/(1.0*SizeFFT)\n \nplt.figure(3)\nFFT = np.fft.fft(plate[Nempty])\n# We clean the low frequencies\nfor i in range(10):\n FFT[i]=0\n FFT[-i]=0\n \nplt.plot(XScale,np.real(FFT),\"r\")\nplt.plot(XScale,np.imag(FFT),\"b\")\nplt.plot(XScale,np.abs(FFT),\"g\")\nplt.xlabel('Frequency')\nplt.ylabel(\"FFT'ed\")\nplt.show()",
"Let's average the noise over the first Nempty lines",
"FFT = np.zeros(SizeFFT)\nfor k in range(Nempty):\n FFT = FFT + np.abs(np.fft.fft(plate[k]))\nFFT = FFT/Nempty\n\nfor i in range(10):\n FFT[i] = 0\n FFT[-i] = 0\n\nplt.plot(XScale,np.abs(FFT),\"g\")\nplt.xlabel('Frequency')\nplt.ylabel(\"FFT'ed\")\nplt.show()\n\n# Let's save the noise pattern \nnp.savez_compressed(\"plate.noise\",XScale=XScale,FFT=FFT)\n\n# We observe an interesting pattern\nplt.plot(XScale[20:400],np.abs(FFT[20:400]),\"g\")\nplt.xlabel('Frequency')\nplt.ylabel(\"FFT'ed\")\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/tensorflow-without-a-phd
|
tensorflow-rnn-tutorial/old-school-tensorflow/tutorial/00_RNN_predictions_playground.ipynb
|
apache-2.0
|
[
"An RNN for short-term predictions\nThis model will try to predict the next value in a short sequence based on historical data. This can be used for example to forecast demand based on a couple of weeks of sales data.\n<div class=\"alert alert-block alert-info\">\nThings to do:<br/>\n<ol start=\"0\">\n<li> Run the notebook. Initially it uses a linear model (the simplest one). Look at the RMSE (Root Means Square Error) metrics at the end of the training and how they compare against a couple of simplistic models: random predictions (RMSErnd), predict same as last value (RMSEsal), predict based on trend from two last values (RMSEtfl).\n<li> Now implement the DNN (Dense Neural Network) model [here](#assignment1) using `tf.layers.dense`. See how it performs.\n<li> Swap in the CNN (Convolutional Neural Network) model [here](#assignment2). It is already implemented in function CNN_model. See how it performs.\n<li> Implement the RNN model [here](#assignment3) using a single `tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE)`. See how it performs.\n<li> Make the RNN cell 2-deep [here](#assignment4) using `tf.nn.rnn_cell.MultiRNNCell`. See if this improves things. Try also training for 10 epochs instead of 5.\n<li> You can now go and check out the solutions in file [00_RNN_predictions_solution.ipynb](00_RNN_predictions_solution.ipynb). Its final cell has a loop that benchmarks all the neural network architectures. Try it and then if you have the time, try reducing the data sequence length from 16 to 8 (SEQLEN=8) and see if you can still predict the next value with so little context.\n</ol>\n</div>",
"import numpy as np\nimport utils_datagen\nimport utils_display\nfrom matplotlib import pyplot as plt\nimport tensorflow as tf\nprint(\"Tensorflow version: \" + tf.__version__)",
"Generate fake dataset",
"DATA_SEQ_LEN = 1024*128\ndata = np.concatenate([utils_datagen.create_time_series(waveform, DATA_SEQ_LEN) for waveform in utils_datagen.Waveforms])\nutils_display.picture_this_1(data, DATA_SEQ_LEN)",
"Hyperparameters",
"NB_EPOCHS = 5 # number of times the data is repeated during training\nRNN_CELLSIZE = 32 # size of the RNN cells\nSEQLEN = 16 # unrolled sequence length\nBATCHSIZE = 32 # mini-batch size",
"Visualize training sequences\nThis is what the neural network will see during training.",
"utils_display.picture_this_2(data, BATCHSIZE, SEQLEN) # execute multiple times to see different sample sequences",
"The model definition\nWhen executed, these functions instantiate the Tensorflow graph for our model.",
"# three simplistic predictive models: can you beat them ?\ndef simplistic_models(X):\n # \"random\" model\n Yrnd = tf.random_uniform([tf.shape(X)[0]], -2.0, 2.0) # tf.shape(X)[0] is the batch size\n # \"same as last\" model\n Ysal = X[:,-1]\n # \"trend from last two\" model\n Ytfl = X[:,-1] + (X[:,-1] - X[:,-2])\n return Yrnd, Ysal, Ytfl\n\n# linear model (RMSE: 0.36, with shuffling: 0.17)\ndef linear_model(X):\n Yout = tf.layers.dense(X, 1) # output shape [BATCHSIZE, 1]\n return Yout",
"<a name=\"assignment1\"></a>\n<div class=\"alert alert-block alert-info\">\n**Assignment #1**: Implement the DNN (Dense Neural Network) model using a single `tf.layers.dense` layer. Do not forget to use the DNN_model function when [instantiating the model](#instantiate)\n</div>",
"# 2-layer dense model (RMSE: 0.15-0.18, if training data is not shuffled: 0.38)\ndef DNN_model(X):\n # X shape [BATCHSIZE, SEQLEN]\n \n # --- dummy model: please implement a real one ---\n # to test it, do not forget to use this function (DNN_model) when instantiating the model\n Y = X * tf.Variable(tf.ones([]), name=\"dummy1\") # Y shape [BATCHSIZE, SEQLEN]\n # --- end of dummy model ---\n \n Yout = tf.layers.dense(Y, 1, activation=None) # output shape [BATCHSIZE, 1]. Predicting vectors of 1 element.\n return Yout",
"<a name=\"assignment2\"></a>\n<div class=\"alert alert-block alert-info\">\n**Assignment #2**: Swap in the CNN (Convolutional Neural Network) model. It is already implemented in function CNN_model below so all you have to do is read through the CNN_model code and then use the CNN_model function when [instantiating the model](#instantiate).\n</div>",
"# convolutional (RMSE: 0.31, with shuffling: 0.16)\ndef CNN_model(X):\n X = tf.expand_dims(X, axis=2) # [BATCHSIZE, SEQLEN, 1] is necessary for conv model\n Y = tf.layers.conv1d(X, filters=8, kernel_size=4, activation=tf.nn.relu, padding=\"same\") # [BATCHSIZE, SEQLEN, 8]\n Y = tf.layers.conv1d(Y, filters=16, kernel_size=3, activation=tf.nn.relu, padding=\"same\") # [BATCHSIZE, SEQLEN, 8]\n Y = tf.layers.conv1d(Y, filters=8, kernel_size=1, activation=tf.nn.relu, padding=\"same\") # [BATCHSIZE, SEQLEN, 8]\n Y = tf.layers.max_pooling1d(Y, pool_size=2, strides=2) # [BATCHSIZE, SEQLEN//2, 8]\n Y = tf.layers.conv1d(Y, filters=8, kernel_size=3, activation=tf.nn.relu, padding=\"same\") # [BATCHSIZE, SEQLEN//2, 8]\n Y = tf.layers.max_pooling1d(Y, pool_size=2, strides=2) # [BATCHSIZE, SEQLEN//4, 8]\n # mis-using a conv layer as linear regression :-)\n Yout = tf.layers.conv1d(Y, filters=1, kernel_size=SEQLEN//4, activation=None, padding=\"valid\") # output shape [BATCHSIZE, 1, 1]\n Yout = tf.squeeze(Yout, axis=-1) # output shape [BATCHSIZE, 1]\n return Yout",
"<a name=\"assignment3\"></a>\n<div class=\"alert alert-block alert-info\">\n**Assignment #3**: Implement the RNN (Recurrent Neural Network) model using `tf.nn.rnn_cell.GRUCell` and `tf.nn.dynamic_rnn`. Do not forget to use the RNN_model_N function when [instantiating the model](#instantiate).</div>\n\n<a name=\"assignment4\"></a>\n<div class=\"alert alert-block alert-info\">\n**Assignment #4**: Make the RNN cell 2-deep [here](#assignment2) using `tf.nn.rnn_cell.MultiRNNCell`. See if this improves things. Try also training for 10 epochs instead of 5. Finally try to compute the loss on the last n elemets of the predicted sequence instead of the last (n=SEQLEN//2 for example). Do not forget to use the RNN_model_N function when [instantiating the model](#instantiate).\n</div>\n\n\n<div style=\"text-align: right; font-family: monospace\">\n X shape [BATCHSIZE, SEQLEN, 1]<br/>\n Y shape [BATCHSIZE, SEQLEN, 1]<br/>\n H shape [BATCHSIZE, RNN_CELLSIZE*NLAYERS]\n</div>",
"# RNN model (RMSE: 0.38, with shuffling 0.14, the same with loss on last 8)\ndef RNN_model(X, n=1):\n X = tf.expand_dims(X, axis=2) # shape [BATCHSIZE, SEQLEN, 1] is necessary for RNN model\n batchsize = tf.shape(X)[0] # allow for variable batch size\n \n # --- dummy model: please implement a real RNN model ---\n # to test it, do not forget to use this function (RNN_model) when instantiating the model\n Yn = X * tf.ones([RNN_CELLSIZE], name=\"dummy2\") # Yn shape [BATCHSIZE, SEQLEN, RNN_CELLSIZE]\n # TODO: create a tf.nn.rnn_cell.GRUCell\n # TODO: unroll the cell using tf.nn.dynamic_rnn(..., dtype=tf.float32)\n # --- end of dummy model ---\n \n # This is the regression layer. It is already implemented.\n # Yn [BATCHSIZE, SEQLEN, RNN_CELLSIZE]\n Yn = tf.reshape(Yn, [batchsize*SEQLEN, RNN_CELLSIZE])\n Yr = tf.layers.dense(Yn, 1) # Yr [BATCHSIZE*SEQLEN, 1] predicting vectors of 1 element\n Yr = tf.reshape(Yr, [batchsize, SEQLEN, 1]) # Yr [BATCHSIZE, SEQLEN, 1]\n \n # In this RNN model, you can compute the loss on the last predicted item or the last n predicted items\n # Last n with n=SEQLEN//2 is slightly better. This is a hyperparameter you can adjust in the RNN_model_N\n # function below.\n Yout = Yr[:,-n:SEQLEN,:] # last item(s) in sequence: output shape [BATCHSIZE, n, 1]\n Yout = tf.squeeze(Yout, axis=-1) # remove the last dimension (1): output shape [BATCHSIZE, n]\n return Yout\n\ndef RNN_model_N(X): return RNN_model(X, n=SEQLEN//2)\n\ndef model_fn(features, labels, model):\n X = features # shape [BATCHSIZE, SEQLEN]\n \n Y = model(X)\n\n last_label = labels[:, -1] # last item in sequence: the target value to predict\n last_labels = labels[:, -tf.shape(Y)[1]:SEQLEN] # last p items in sequence (as many as in Y), useful for RNN_model(X, n>1)\n\n loss = tf.losses.mean_squared_error(Y, last_labels) # loss computed on last label(s)\n optimizer = tf.train.AdamOptimizer(learning_rate=0.01)\n train_op = optimizer.minimize(loss)\n Yrnd, Ysal, Ytfl = simplistic_models(X)\n eval_metrics = {\"RMSE\": tf.sqrt(loss),\n # compare agains three simplistic predictive models: can you beat them ?\n \"RMSErnd\": tf.sqrt(tf.losses.mean_squared_error(Yrnd, last_label)),\n \"RMSEsal\": tf.sqrt(tf.losses.mean_squared_error(Ysal, last_label)),\n \"RMSEtfl\": tf.sqrt(tf.losses.mean_squared_error(Ytfl, last_label))}\n \n Yout = Y[:,-1]\n return Yout, loss, eval_metrics, train_op",
"prepare training dataset",
"# training to predict the same sequence shifted by one (next value)\nlabeldata = np.roll(data, -1)\n# slice data into sequences\ntraindata = np.reshape(data, [-1, SEQLEN])\nlabeldata = np.reshape(labeldata, [-1, SEQLEN])\n\n# also make an evaluation dataset by randomly subsampling our fake data\nEVAL_SEQUENCES = DATA_SEQ_LEN*4//SEQLEN//4\njoined_data = np.stack([traindata, labeldata], axis=1) # new shape is [N_sequences, 2(train/eval), SEQLEN]\njoined_evaldata = joined_data[np.random.choice(joined_data.shape[0], EVAL_SEQUENCES, replace=False)]\nevaldata = joined_evaldata[:,0,:]\nevallabels = joined_evaldata[:,1,:]\n\ndef datasets(nb_epochs):\n # Dataset API for batching, shuffling, repeating\n dataset = tf.data.Dataset.from_tensor_slices((traindata, labeldata))\n dataset = dataset.repeat(NB_EPOCHS)\n dataset = dataset.shuffle(DATA_SEQ_LEN*4//SEQLEN) # important ! Number of sequences in shuffle buffer: all of them\n dataset = dataset.batch(BATCHSIZE)\n \n # Dataset API for batching\n evaldataset = tf.data.Dataset.from_tensor_slices((evaldata, evallabels))\n evaldataset = evaldataset.repeat()\n evaldataset = evaldataset.batch(EVAL_SEQUENCES) # just one batch with everything\n\n # Some boilerplate code...\n \n # this creates a Tensorflow iterator of the correct type and shape\n # compatible with both our training and eval datasets\n tf_iter = tf.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)\n # it can be initialized to iterate through the training dataset\n dataset_init_op = tf_iter.make_initializer(dataset)\n # or it can be initialized to iterate through the eval dataset\n evaldataset_init_op = tf_iter.make_initializer(evaldataset)\n # Returns the tensorflow nodes needed by our model_fn.\n features, labels = tf_iter.get_next()\n # When these nodes will be executed (sess.run) in the training or eval loop,\n # they will output the next batch of data.\n\n # Note: when you do not need to swap the dataset (like here between train/eval) just use\n # features, labels = dataset.make_one_shot_iterator().get_next()\n # TODO: easier with tf.estimator.inputs.numpy_input_fn ???\n \n return features, labels, dataset_init_op, evaldataset_init_op",
"<a name=\"instantiate\"></a>\nInstantiate the model",
"tf.reset_default_graph() # restart model graph from scratch\n# instantiate the dataset\nfeatures, labels, dataset_init_op, evaldataset_init_op = datasets(NB_EPOCHS)\n# instantiate the model\nYout, loss, eval_metrics, train_op = model_fn(features, labels, linear_model)",
"Initialize Tensorflow session\nThis resets all neuron weights and biases to initial random values",
"# variable initialization\nsess = tf.Session()\ninit = tf.global_variables_initializer()\nsess.run(init)",
"The training loop\nYou can re-execute this cell to continue training",
"count = 0\nlosses = []\nindices = []\nsess.run(dataset_init_op)\nwhile True:\n try: loss_, _ = sess.run([loss, train_op])\n except tf.errors.OutOfRangeError: break\n # print progress\n if count%300 == 0:\n epoch = count // (DATA_SEQ_LEN*4//BATCHSIZE//SEQLEN)\n print(\"epoch \" + str(epoch) + \", batch \" + str(count) + \", loss=\" + str(loss_))\n if count%10 == 0:\n losses.append(np.mean(loss_))\n indices.append(count)\n count += 1\n \n# final evaluation\nsess.run(evaldataset_init_op)\neval_metrics_, Yout_ = sess.run([eval_metrics, Yout])\nprint(\"Final accuracy on eval dataset:\")\nprint(str(eval_metrics_))\n\nplt.ylim(ymax=np.amax(losses[1:])) # ignore first value(s) for scaling\nplt.plot(indices, losses)\nplt.show()\n\n# execute multiple times to see different sample sequences\nutils_display.picture_this_3(Yout_, evaldata, evallabels, SEQLEN)",
"Copyright 2018 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
BillMills/python-lesson-notebooks
|
intro-python-2.ipynb
|
mit
|
[
"Introduction to Python: Loops and Conditions\nIn the previous lesson, we learned about how to store information in variables, and store instructions in functions for later use. In this lesson, we'll learn the basic tools for making our programs smarter: loops, which allow our programs to repeat themselves many times, and conditions, which allow our programs to make simple decisions for themselves. First though, we'll start by learning about a new type of variable, called a list.\nPart 1: Lists\nSo far, we've put strings and numbers into our variables. Another type of information we can handle in Python is called a list. We can create a list like so:",
"shopping = ['cheese', 'bananas', 'circuitboards']",
"Key things to note:\n - we start with a variable name and an equals sign, like before.\n - the list is surrounded by []\n - the elements of the list are separated by ,\nOnce we've created our list, we can ask for individual elements of it like so:",
"print( shopping[0] )\nprint( shopping[1] )\nprint( shopping[2] )",
"Notice that the first element in the list is referred to by 0; we call these numbers the 'index' of the array element, and they always count starting at zero for the first element.\nIf instead we want to count from the back of the array, we start with -1 and go down from there:",
"print( shopping[-1] )\nprint( shopping[-2] )\nprint( shopping[-3] )",
"We can ask our array how long it is:",
"print( len(shopping) )",
"And we can even sort our array:",
"sorted_shopping = sorted(shopping)\nprint( sorted_shopping )",
"Lists are useful when we have a whole lot of conceptually similar data, or data that has a meaningful order; if you have a sensor that takes the same reading every second, you would probably want to store that data in a list, so that you can preserve what order those measurements came in.\n\nChallenge Problem #1\nWrite a function that takes a list of numbers as an argument, and returns another list; this returned list should have the largest number in the original list as its first element, and the length of the original list as its second element. So, if the input list is [5, 7, 1, 3], the output list should be [7, 4].\n\nPart 2: Loops\nNow that we understand lists, we can learn about one of the most fundamental tools in programming: the for loop. Suppose you had a list of data, and a function that you wanted to apply to each one:",
"def getLeadingBase(read):\n '''\n input: a string representing a read of a genome\n output: the leading base of the input read.\n '''\n \n return read[0]\n\nmyReads = ['GGATC', 'AAACC', 'TTCGT']\n\nprint(getLeadingBase(myReads[0]))\nprint(getLeadingBase(myReads[1]))\nprint(getLeadingBase(myReads[2]))",
"This works fine, but it's a bit tedious; just like last time when we got sick of cutting and pasting our temperature conversion code, it's impractical to cut and paste that print statement for everything in the list - what if there were 3 billion reads in our list, instead of only 3? We can ask Python to repeat the same block of code over and over again, only changing the element of myReads that we're looking at by using a for loop:",
"for read in myReads:\n print(getLeadingBase( read ))",
"Python has run the stuff inside the for loop once for every value in the list provided after the in keyword. A common task is often to loop over a range of numbers; for this, Python provides the helper function range:",
"range(6)\n\nrange(2,4)",
"Give range one number, and it returns an iterator that from 0 up to but not including that number; give range two numbers, and it reutns an iterator counting from the first (inclusive), up to but not including the last. Another common idiom is to use a range of indices to do the same thing we did above:",
"for i in range(len(myReads)):\n print(getLeadingBase( myReads[i] ))",
"This does the exact same thing as above, but gives us a numerical index i, which we could use for something else (referring to another list, doing something special every thrid item...).\n\nChallenge Problem #2\nLists have a handy helper function append(x), which adds the argument to the end of the list. So for example, if I had\nmyList = [1,2,3]\nmyList.append(4)\nmyList would now be [1,2,3,4]. Write a function called addPrefix that takes a list of strings and a prefix as an argument, and returns another list the same as the original, but with prefix added to the front of every string. So for example, \naddPrefix(['GA', 'TC', 'GC'], 'CC') would reurn ['CCGA', 'CCTC', 'CCGC'].\n\nPart 3: Conditions\nSo far, we've learned a lot about how to get Python to repeat itself, using functions and for loops. But in real science, while we may do many similar things in an analysis, they aren't usually all completely identical; based on circumstances, we often have to make decisions and adapt to our observations. The fundamental tool for doing that in Python is the conditional statement, and it's the last tool we need before we can dive into our future lessons.\nSuppose we had some genetic reads, but we only wanted to consider ones that were more than 10 bases long. We could check with a condition:",
"myReads = ['ATGTC', 'G', 'ATG', 'ATGC']\n\nfor read in myReads:\n if len(read) > 3:\n print(read)\n",
"So while we looped through the entire list, we only printed out reads that passed our condition of being longer than 3 bases. We can also add alternative conditions to check for other cases:",
"for read in myReads:\n if len(read) > 3:\n print(read)\n elif len(read) == 3:\n print(read, 'is just barely long enough')",
"Finally, we can add a catch all statement to the end to do something with all the items that didn't satisfy any condition:",
"for read in myReads:\n if len(read) > 3:\n print(read)\n elif len(read) == 3:\n print(read, 'is just barely long enough')\n else:\n print(read, 'is too short.')",
"All conditions start with an if statement, but the number of elifs afterwards is up to you - you can check as many alternate conditions as you like (including none). Similarly, a catchall else can do something for all the leftovers, but it isn't required.\nAbove we saw a couple examples of making logical expressions to check in a condition; these are conditions that evaluate to True or False, like 7 < 3 (False), or 0 == 0 (True) - notice the double equals sign asks the question 'are these two things equal?'.\nFinally, we can combine conditions together using the words and and or:",
"for read in myReads:\n if len(read) > 2 and len(read) < 5:\n print(read, 'length is greater than 2 and less than 5')\n elif len(read) < 3 or len(read) == 4:\n print(read, 'length is either less than 3 or exactly 4')\n else:\n print(read, 'didnt match any conditions.')\n \n ",
"Challenge Problem #3\nStrings can be indexed the same way as lists - so if you have myword = 'Python', then myword[2] will be t. Write a function geneComplement that takes a genome as an argument, and returns its genetic complement - ie, A is swapped with T, and G is swapped with C, so geneComplement('GGCATT') would return CCGTAA."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
david4096/bioapi-examples
|
python_notebooks/1kg_reference_service.ipynb
|
apache-2.0
|
[
"GA4GH 1000 Genomes Reference Service Example\nThis example illustrates how to access the available reference sequences offered by a GA4GH instance. \nInitialize the client\nIn this step we create a client object which will be used to communicate with the server. It is initialized using the URL.",
"from ga4gh.client import client\nc = client.HttpClient(\"http://1kgenomes.ga4gh.org\")",
"Search reference sets\nReference sets collect together named reference sequences as part of released assemblies. The API provides methods for accessing reference sequences.\nThe Thousand Genomes data presented here are mapped to GRCh37, and so this server makes that reference genome available. Datasets and reference genomes are decoupled in the data model, so it is possible to use the same reference set in multiple datasets.\nHere, we list the details of the Reference Set.",
"for reference_set in c.search_reference_sets():\n ncbi37 = reference_set\n print \"name: {}\".format(ncbi37.name)\n print \"ncbi_taxon_id: {}\".format(ncbi37.ncbi_taxon_id)\n print \"description: {}\".format(ncbi37.description)\n print \"source_uri: {}\".format(ncbi37.source_uri)",
"Obtaining individual Reference Sets by ID\nThe API can also obtain an individual reference set if the id is known. In this case, we can observe that only one is available. But in the future, more sets might be implemented.",
"reference_set = c.get_reference_set(reference_set_id=ncbi37.id)\nprint reference_set",
"Search References\nFrom the previous call, we have obtained the parameter required to obtain references which belong to ncbi37. We use its unique identifier to constrain the search for named sequences. As there are 86 of them, we have only chosen to show a few.",
"counter = 0\nfor reference in c.search_references(reference_set_id=ncbi37.id):\n if reference.name == \"1\":\n base_id_ref = reference\n counter += 1\n if counter > 5:\n break\n print reference",
"Get Reference by ID\nReference sequence messages, like those above, can be referenced by their identifier directly. This identifier points to chromosome 1 in this server instance.",
"reference = c.get_reference(reference_id=base_id_ref.id)\nprint reference",
"List Reference Bases\nUsing the reference_id from above we can construct a query to list the alleles present on a sequence using start and end offsets.",
"reference_bases = c.list_reference_bases(base_id_ref.id, start=15000, end= 16000)\nprint reference_bases\nprint len(reference_bases)",
"For documentation on the service, and more information go to.\nhttps://ga4gh-schemas.readthedocs.io/en/latest/schemas/reference_service.proto.html"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nholtz/structural-analysis
|
theory/frame2d/01-Derivation-Local-Stiffness.ipynb
|
cc0-1.0
|
[
"Plane Frame Member Local Stiffness Matrix\nThis page develops the various member stiffness relationships needed to analyze a planar frame by the matrix stiffness method.\nNumbering and Sign Convention, Local Coordinates\n\nThe above figure shows the numbering and sign convention used for the six end displacements and forces for a general 2D frame member. These are shown relative to the local coordinate system of the member, with origin at the left, or j-end, and x-axis coincident with the axis of the member. The y-axis is positive upwards. It is sometimes necessary to distinguish member ends; the term j-end is used to refer to the left end or origin, and k-end is the other end (note that the CAL manual uses the i- and j- end for the same purpose).\nOf course $f_0$ and $f_3$ are axial forces, $f_1$ and $f_4$ are shear forces, and $f_2$ and $f_5$ are bending moments.\n$\\newcommand{\\mat}[1]{\\left[\\begin{matrix}#1\\end{matrix}\\right]}$\nThe local member stiffness matrix, $\\mat{K_l}$, expresses the end forces as a function of the end displacements, thus:\n$$\n\\mat{f_0\\ f_1\\ f_2\\ f_3\\ f_4\\ f_5} =\n\\mat{K_l}\n\\mat{\\delta_0\\ \\delta_1\\ \\delta_2\\ \\delta_3\\ \\delta_4\\ \\delta_5}\n$$\nDerivation of the Local Member Stiffness Matrix, $\\mat{K_l}$",
"from sympy import *\ninit_printing(use_latex='mathjax')\nfrom sympy.matrices import Matrix",
"Moment Flexibility Coefficients\nWe will first determine the inverse relationship for end moments -- the rotation flexibility matrix, $\\mat{F}$, that expresses the end rotations as a function of the end moments, as shown in the following figure.\n$$\n\\mat{\\delta_2\\ \\delta_5} = \\mat{F} \\mat{f_2\\f_5}\n$$\n\nNote that at each end, there are two other forces acting - axial and shear: $f_0$, $f_1$ and $f_3$, $f_4$. These are not\nshown, for simplicity of figure.\nThe method of virtual work is probably the simplest and most direct way to do develop the rotation flexibility matrix.\nThe following figure shows unit moments placed individually at each end (at location #s 2 and 5). Below each segment\nis the corresponnding bending moment diagram, $m_i$:\n\nThe figure also\nshows the resulting rotational displacements $\\alpha_{ij}$ for each segment; these are the coefficients of the flexibility matrix:\n$$\n\\mat{F} = \\mat{\\alpha_{22} & \\alpha_{52}\\ \\alpha_{25} & \\alpha_{55}}\n$$\nThe coefficients of $\\mat{F}$, $\\alpha_{ij}$, are the end displacements due to unit values of the end forces.\nSpecifically $\\alpha_{ij}$ is the rotation at $i$ due to a unit moment at $j$ and is calculated using the method \nof virtual work (integrating products of bending moments):\n$$\n\\mat{F} = \\mat{\\int_0^L\\frac{m_2 m_2}{EI} dx & \\int_0^L\\frac{m_2 m_5}{EI} dx \\ \n \\int_0^L\\frac{m_5 m_2}{EI} dx & \\int_0^L\\frac{m_5 m_5}{EI} dx }\n$$",
"L,E,I,A,x = symbols('L E I A x')\n\nm2 = x/L - 1 # unit moment at DOF 2\nm5 = x/L # unit moment at DOF 5\nEI = E*I\nF = Matrix([[m2*m2, m2*m5],\n [m5*m2, m5*m5]]).integrate((x,0,L))/EI\nF",
"Moment Stiffness Coefficients\nThe moment stiffness matrix, $\\mat{M}$ expresses the two end moments as functions\nof the end rotations.\n$$\n\\mat{f_2\\f_5} = \\mat{M} \\mat{\\delta_2 \\ \\delta_5}\n$$\n$$\n\\mat{M} = \\mat{F}^{-1}\n$$\nand that is just the inverse of the 2x2 rotation flexibility matrix.",
"M = F**-1\nM",
"Total End Forces - Accounting For Axial and Shear Effects\nThe matrix $\\mat{T_f}$ gives the total set of end forces in terms of the two end moments (i.e., the end forces that are in equilibrium with the end moments):\n$$\n\\mat{f_0\\ f_1\\ f_2\\ f_3\\ f_4\\ f_5} = \\mat{T_f} \\mat{f_2\\f_5}\n$$",
"# transform end moments to end thrust,shear,moment\nTf = Matrix([[0,0],[1/L,1/L],[1,0],[0,0],[-1/L,-1/L],[0,1]])\nTf\n\nf2,f5 = symbols('f2 f5') # confirm that that is OK\nTf*Matrix([[f2],\n [f5]])",
"Total Stiffness Coefficients\nColumn $j$ of the member stiffness matrix $\\mat{K_l}$ consists of all six end forces that\nare consistent with a unit value of displacement $j$ and zero values of the other five displacements.\nTherefore column 2 consists of the six end forces that occur when $\\delta_2 = 1$\nand all other displacements are 0. That can be computed:\n$$\n\\mat{T_f} \\mat{M} \\mat{1\\0}\n$$",
"Kl = zeros(6,6) # build the member stiffness matrix one column at a time\n\nKl[:,2] = Tf * M * Matrix([[1],[0]]) # column 2\nKl[:,2]",
"Column 5 is similar, but with $\\delta_5 = 1$.",
"Kl[:,5] = Tf * M * Matrix([[0],[1]]) # column 5\nKl[:,5]",
"The forces due to end translations are determined by mapping the unit translation to\nend rotations of the elastic curve from the chord. \n\nThat is, for the case of $\\delta_1 = 1$, (and notabley \n$\\delta_2 = \\delta_5 = 0$), then the end moments will be consistent with end rotations\nof $1/L$. Column 1 will be calculated:\n$$ \\mat{T_f} \\mat{M} \\mat{1/L\\1/L} $$",
"Kl[:,1] = Tf * M * Matrix([[1/L],[1/L]]) # column 1\nKl[:,1]",
"The end forces due to a unit value of $\\delta_4$ are the same as those for $\\delta_1$, but with reversed signs.",
"Kl[:,4] = Tf * M * Matrix([[-1/L],[-1/L]]) # column 4\nKl[:,4]",
"Now the axial forces and displacements, which are de-coupled from shears and moments:",
"ac = Matrix([E*A/L,0,0,-E*A/L,0,0])\nac\n\nKl[:,0] = ac # column 0\nKl[:,3] = -ac # column 3",
"The member stiffness matrix in local coordinates, $\\mat{K_l}$, is:",
"Kl",
"And, for ease of copying into other software, here it is again.",
"print(Kl)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gwulfs/research_public
|
lectures/long_short_equity/Long-Short Equity Strategies.ipynb
|
apache-2.0
|
[
"Long-Short Equity Strategies\nBy Delaney Granizo-Mackenzie\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\nhttps://github.com/quantopian/research_public\n\nNotebook released under the Creative Commons Attribution 4.0 License. Please do not remove this attribution.\nLong-short equity refers to the fact that the strategy is both long and short in the equity market. This is a rather general statement, but has over time grown to mean a specific family of strategies. These strategies rank all stocks in the market using some model. The strategy then goes long (buys) the top $n$ equities of the ranking, and goes short on (sells) the bottom $n$ while maintaining equal dollar volume between the long and short positions. This has the advantage of being statistically robust, as by ranking stocks and entering hundreds or thousands of positions, you are making many bets on your ranking model rather than just a few risky bets. You are also betting purely on the quality of your ranking scheme, as the equal dollar volume long and short positions ensure that the strategy will remain market neutral (immune to market movements).\nRanking Scheme\nA ranking scheme is any model that can assign each stocks a number, where higher is better or worse. Examples could be value factors, technical indicators, pricing models, or a combination of all of the above. The Ranking Universes by Factors lecture will cover ranking schemes in more detail. Ranking schemes are the secret sauce of any long-short equity strategy, so developing them is nontrivial.\nMaking a Bet on the Ranking Scheme\nOnce we have determined a ranking scheme, we would like to be able to profit from it. We do this by investing an equal amount of money long into the top of the ranking, and short into the bottom. This ensures that the strategy will make money proportionally to the quality of the ranking only, and will be market neutral.\nLong and Short Baskets\nIf you are ranking $m$ equities, have $d$ dollars to invest, and your total target number of positions to hold is $2n$, then the long and short baskets are created as follors. For each equity in spots $1, \\dots, n$ in the ranking, sell $\\frac{1}{2n} * d$ dollars of that equity. For each equity in spots $m - n, \\dots, m$ in the ranking, buy $\\frac{1}{2n} * d$ dollars of that equity. \nFriction Because of Prices\nBecause equity prices will not always divide $\\frac{1}{2n} * d$ evenly, and equities must be bought in integer amounts, there will be some imprecision and the algorithm should get as close as it can to this number. Most algorithms will have access to some leverage during execution, so it is fine to buy slightly more than $\\frac{1}{2n} * d$ dollars per equity. This does, however, cause some friction at low capital amounts. For a strategy running $d = 100000$, and $n = 500$, we see that \n$$\\frac{1}{2n} * d = \\frac{1}{1000} * 100000 = 100$$\nThis will cause big problems for expensive equities, and cause the algorithm to be overlevered. This is alleviated by trading fewer equities or increasing the capital, $d$. Luckily, long-short equity strategies tend to be very high capicity, so there is for most purposes no ceiling on the amount of money one can invest. For more information on algorithm capacities, refer to the algorithm capacity lecture when it is released.\nReturns Come From The Ranking Spread\nThe returns of a long-short equity strategy are dependent on how well the ranking spreads out the high and low returns. 
To see how this works, consider this hypothetical example.",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# We'll generate a random factor\ncurrent_factor_values = np.random.normal(0, 1, 10000)\nequity_names = ['Equity ' + str(x) for x in range(10000)]\n# Put it into a dataframe\nfactor_data = pd.Series(current_factor_values, index = equity_names)\nfactor_data = pd.DataFrame(factor_data, columns=['Factor Value'])\n# Take a look at the dataframe\nfactor_data.head(10)\n\n# Now let's say our future returns are dependent on our factor values\nfuture_returns = current_factor_values + np.random.normal(0, 1, 10000)\n\nreturns_data = pd.Series(future_returns, index=equity_names)\nreturns_data = pd.DataFrame(returns_data, columns=['Returns'])\n# Put both the factor values and returns into one dataframe\ndata = returns_data.join(factor_data)\n# Take a look\ndata.head(10)",
"Now that we have factor values and returns, we can see what would happen if we ranked our equities based on factor values, and then entered the long and short positions.",
"# Rank the equities\nranked_data = data.sort('Factor Value')\n\n# Compute the returns of each basket\n# Baskets of size 500, so we create an empty array of shape (10000/500)\nnumber_of_baskets = 10000/500\nbasket_returns = np.zeros(number_of_baskets)\n\nfor i in range(number_of_baskets):\n start = i * 500\n end = i * 500 + 500 \n basket_returns[i] = ranked_data[start:end]['Returns'].mean()\n\n# Plot the returns of each basket\nplt.bar(range(number_of_baskets), basket_returns)\nplt.ylabel('Returns')\nplt.xlabel('Basket')\nplt.legend(['Returns of Each Basket']);",
"Let's compute the returns if we go long the top basket and short the bottom basket.",
"basket_returns[number_of_baskets-1] - basket_returns[0]",
"Market Neutrality is Built-In\nThe nice thing about making money based on the spread of the ranking is that it is unaffected by what the market does.",
"# We'll generate a random factor\ncurrent_factor_values = np.random.normal(0, 1, 10000)\nequity_names = ['Equity ' + str(x) for x in range(10000)]\n# Put it into a dataframe\nfactor_data = pd.Series(current_factor_values, index = equity_names)\nfactor_data = pd.DataFrame(factor_data, columns=['Factor Value'])\n\n# Now let's say our future returns are dependent on our factor values\nfuture_returns = -10 + current_factor_values + np.random.normal(0, 1, 10000)\n\nreturns_data = pd.Series(future_returns, index=equity_names)\nreturns_data = pd.DataFrame(returns_data, columns=['Returns'])\n# Put both the factor values and returns into one dataframe\ndata = returns_data.join(factor_data)\n\n# Rank the equities\nranked_data = data.sort('Factor Value')\n\n# Compute the returns of each basket\n# Baskets of size 500, so we create an empty array of shape (10000/500\nnumber_of_baskets = 10000/500\nbasket_returns = np.zeros(number_of_baskets)\n\nfor i in range(number_of_baskets):\n start = i * 500\n end = i * 500 + 500 \n basket_returns[i] = ranked_data[start:end]['Returns'].mean()\n\nbasket_returns[number_of_baskets-1] - basket_returns[0]",
"Choice and Evaluation of a Ranking Scheme\nThe ranking scheme is where a long-short equity strategy gets its edge, and is the most crucial component. Choosing a good ranking scheme is the entire trick, and there is no easy answer. A good starting point is to pick existing known techniques, and see if you can modify them slightly to get increased returns. More information on ranking scheme construction can be found in the notebooks listed below.\nDuring research of your ranking scheme, it's important to determine whether or not your ranking scheme is actually predictive of future returns. This can be accomplished with spearman rank correlation\nInformation on construction and evaluation of ranking schemes is available in the following notebooks:\n* Ranking Universes by Factors\n* Spearman Rank Correlation\nBoth can be found in the Quantopian Lecture Series.\nLong-Short is a Modular Strategy\nTo execute a long-short equity, you effectively only have to determine the ranking scheme. Everything after that is mechanical. Once you have one long-short equity strategy, you can swap in different ranking schemes and leave everything else in place. It's a very convenient way to quickly iterate over ideas you have without having to worry about tweaking code every time.\nThe ranking schemes can come from pretty much any model as well. It doesn't have to be a value based factor model, it could be a machine learning technique that predicted returns one-month ahead and ranked based on that.\nWe will be releasing sample long-short algorithms to go along with this notebook. Please stay tuned for those.\nAdditional Considerations\nRebalancing Frequency\nEvery ranking system will be predictive of returns over a slightly different timeframe. A price-based mean reversion may be predictive over a few days, while a value-based factor model may be predictive over many months. It is important to determine the timeframe over which your model should be predictive, and statistically verify that before executing your strategy. You do want to overfit by trying to optimize the relabancing frequency, you will inevitably find one that is randomly better than others, but not necessary because of anything in your model.\nOnce you have determined the timeframe on which your ranking scheme is predictive, try to rebalance at about that frequency so you're taking full advantage of your models.\nCapital Capacity\nEvery strategy has a minimum and maximum amount of capital it can trade before it stops being profitable. We will be releasing a full notebook discussing these concepts, but in the meantime consider the following.\nNumber of Equities Traded\nTransaction Costs\nTrading many equities will result in high transaction costs. Say that you want to purchase $1000$ equities, you will incur thousands of dollars of costs per rebalance. Your capital base must be high enough that the transaction costs are a small percentage of the returns being generated by your strategy. Say that you are running $100,000$ dollars and making $1\\%$ per month, then the $1000$ dollars of transaction fees per month would take up your all of returns. You would need to be running the strategy on millions of dollars for it to be profitable over $1000$ equities.\nThe minimum capacity is quite high as such, and dependent largely on the number of equities traded. However, the maximum capacity is also incredibly high, with long-short equity strategies capable of trading hundreds of millions of dollars without losing their edge. 
This is true because the strategy rebalances relatively infrequently, and the total dollar volume is divided by the number of equities traded. So if you turn over your entire portfolio of $100,000,000$ every month while running 1000 equities, you are only running $100,000$ dollar-volume per month through each equity, which isn't enough to be a significant market share for most securities."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/tf-estimator-tutorials
|
04_Times_Series/01.0 - TF ARRegressor - Estimator + Numpy.ipynb
|
apache-2.0
|
[
"import tensorflow as tf\nimport tensorflow.contrib.timeseries as ts\nimport multiprocessing\nimport pandas as pd\nimport numpy as np\nimport shutil\nfrom datetime import datetime\nimport matplotlib.pyplot as plt\n#%matplotlib inline\n\nprint(tf.__version__)\n\nMODEL_NAME = 'ts-model-01'\n\nTRAIN_DATA_FILE = 'data/train-data.csv'\nTEST_DATA_FILE = 'data/test-data.csv'\n\nRESUME_TRAINING = False\nMULTI_THREADING = True",
"Steps to use the ARRegressor Estimator API\n\nDefine the metadata\nDefine a data (Numpy) input function\nDefine a create Estimator function\nInstantiate an esimator with the run_config and required params\nTrain the estimaor\nEvaluate the estimator\nPredict using the estimator\n\n1. Define Metadata",
"HEADER = ['time_index','value']\nTIME_INDEX_FEATURE_NAME = 'time_index'\nVALUE_FEATURE_NAMES = ['value']",
"2. Define a Data Input Function",
"def generate_input_fn(file_name, mode, header_lines=1, batch_size = None, windows_size = None, tail_count=None):\n \n dataframe = pd.read_csv(file_name, names=HEADER, skiprows=header_lines)\n \n\n if tail_count is not None:\n dataframe = dataframe.tail(tail_count)\n \n print(\"Dataset Size: {}\".format(len(dataframe)))\n print(\"\")\n \n \n data = {\n ts.TrainEvalFeatures.TIMES: dataframe[TIME_INDEX_FEATURE_NAME],\n ts.TrainEvalFeatures.VALUES: dataframe[VALUE_FEATURE_NAMES],\n }\n \n reader = ts.NumpyReader(data)\n \n num_threads = multiprocessing.cpu_count() if MULTI_THREADING else 1\n \n if mode == tf.estimator.ModeKeys.TRAIN:\n input_fn = tf.contrib.timeseries.RandomWindowInputFn(\n reader, \n batch_size=batch_size, \n window_size=windows_size,\n num_threads= num_threads\n )\n \n elif mode == tf.estimator.ModeKeys.EVAL:\n input_fn = tf.contrib.timeseries.WholeDatasetInputFn(reader)\n \n return input_fn",
"3. Define a Create Estimator Function",
"def create_estimator(run_config, hparams):\n\n estimator = ts.ARRegressor(\n periodicities= hparams.periodicities, \n input_window_size= hparams.input_window_size, \n output_window_size= hparams.output_window_size,\n num_features=len(VALUE_FEATURE_NAMES),\n loss=hparams.loss,\n hidden_layer_sizes = hparams.hidden_units,\n optimizer = tf.train.AdagradOptimizer(learning_rate=hparams.learning_rate),\n config=run_config\n )\n \n print(\"\")\n print(\"Estimator Type: {}\".format(type(estimator)))\n print(\"\")\n\n return estimator",
"4. Instantiate and Estimator with the Required Parameters",
"CHECKPOINT_STEPS=1000\n\nhparams = tf.contrib.training.HParams(\n training_steps = 10000,\n periodicities = [200],\n input_window_size = 40,\n output_window_size=10,\n batch_size = 15,\n loss = tf.contrib.timeseries.ARModel.NORMAL_LIKELIHOOD_LOSS, # NORMAL_LIKELIHOOD_LOSS | SQUARED_LOSS\n hidden_units = None,\n learning_rate = 0.1\n \n)\n\n\nmodel_dir = 'trained_models/{}'.format(MODEL_NAME)\n\nrun_config = tf.estimator.RunConfig().replace(\n save_checkpoints_steps=CHECKPOINT_STEPS,\n tf_random_seed=19830610,\n model_dir=model_dir\n)\n \nprint(\"Model directory: {}\".format(run_config.model_dir))\nprint(\"Hyper-parameters: {}\".format(hparams))\nprint(\"\")\n\ntrain_input_fn = generate_input_fn(\n file_name=TRAIN_DATA_FILE,\n mode = tf.estimator.ModeKeys.TRAIN,\n batch_size=hparams.batch_size,\n windows_size = hparams.input_window_size + hparams.output_window_size\n)\n\nestimator = create_estimator(run_config, hparams)",
"5. Train the Estimator",
"if not RESUME_TRAINING:\n shutil.rmtree(model_dir, ignore_errors=True)\n \ntf.logging.set_verbosity(tf.logging.INFO)\n\ntime_start = datetime.utcnow() \nprint(\"Estimator training started at {}\".format(time_start.strftime(\"%H:%M:%S\")))\nprint(\".......................................\")\n\nestimator.train(input_fn=train_input_fn, steps=hparams.training_steps)\n\ntime_end = datetime.utcnow() \nprint(\".......................................\")\nprint(\"Estimator training finished at {}\".format(time_end.strftime(\"%H:%M:%S\")))\nprint(\"\")\ntime_elapsed = time_end - time_start\nprint(\"Estimator training elapsed time: {} seconds\".format(time_elapsed.total_seconds()))",
"6. Evalute the Estimator",
"hparams = tf.contrib.training.HParams(\n training_steps = 10000,\n periodicities = [200],\n input_window_size = 40,\n output_window_size=10,\n batch_size = 15,\n loss = tf.contrib.timeseries.ARModel.SQUARED_LOSS, # NORMAL_LIKELIHOOD_LOSS | SQUARED_LOSS\n hidden_units = None,\n learning_rate = 0.1\n \n)\n\nestimator = create_estimator(run_config, hparams)\n\neval_input_fn = generate_input_fn(\n file_name=TRAIN_DATA_FILE,\n mode = tf.estimator.ModeKeys.EVAL,\n)\n\ntf.logging.set_verbosity(tf.logging.WARN)\nevaluation = estimator.evaluate(input_fn=eval_input_fn, steps=1)\nprint(\"\")\nprint(evaluation.keys())\nprint(\"\")\nprint(\"Evaluation Loss ({}) : {}\".format(hparams.loss, evaluation['loss']))\n\ndef compute_rmse(a, b):\n rmse = np.sqrt(np.sum(np.square(a - b)) / len(a))\n return round(rmse,5)\n\ndef compute_mae(a, b):\n mae = np.sqrt(np.sum(np.abs(a - b)) / len(a))\n return round(mae,5)\n\nx_current = evaluation['times'][0]\ny_current_actual = evaluation['observed'][0].reshape(-1)\ny_current_estimated = evaluation['mean'][0].reshape(-1)\n\nrmse = compute_rmse(y_current_actual, y_current_estimated)\nmae = compute_mae(y_current_actual, y_current_estimated)\nprint(\"Evaluation RMSE {}\".format(rmse))\nprint(\"Evaluation MAE {}\".format(mae))\n\nplt.figure(figsize=(20, 10))\n\nplt.title(\"Time Series Data\")\nplt.plot(x_current, y_current_actual, label='actual')\nplt.plot(x_current, y_current_estimated, label='estimated')\nplt.xlabel(\"Time Index\")\nplt.ylabel(\"Value\")\nplt.legend(loc=2)\nplt.show()",
"7. Predict using the Estimator",
"FORECAST_STEPS = [10,50,100,150,200,250,300]\n\ntf.logging.set_verbosity(tf.logging.ERROR)\n\neval_input_fn = generate_input_fn(\n file_name=TRAIN_DATA_FILE,\n mode = tf.estimator.ModeKeys.EVAL,\n tail_count = hparams.input_window_size + hparams.output_window_size\n)\n\n\nevaluation = estimator.evaluate(input_fn=eval_input_fn, steps=1)\n\ndf_test = pd.read_csv(TEST_DATA_FILE, names=['time_index','value'], header=0)\nprint(\"Test Dataset Size: {}\".format(len(df_test)))\nprint(\"\")\n\nfor steps in FORECAST_STEPS:\n\n forecasts = estimator.predict(input_fn=ts.predict_continuation_input_fn(evaluation, steps=steps))\n forecasts = tuple(forecasts)[0]\n \n x_next = forecasts['times']\n \n y_next_forecast = forecasts['mean']\n y_next_actual = df_test.value[:steps].values\n \n rmse = compute_rmse(y_next_actual, y_next_forecast)\n mae = compute_mae(y_next_actual, y_next_forecast)\n\n print(\"Forecast Steps {}: RMSE {} - MAE {}\".format(steps,rmse,mae))\n\nprint(\"\")\nprint(forecasts.keys())\n\nplt.close('all')\nplt.figure(figsize=(20, 10))\n\nplt.title(\"Time Series Data\")\nplt.plot(x_next, y_next_actual, label='actual')\nplt.plot(x_next, y_next_forecast, label='forecasted')\nplt.xlabel(\"Time Index\")\nplt.ylabel(\"Value\")\nplt.legend(loc=2)\nplt.show()\n\nx_all = np.concatenate( (x_current, x_next) , axis=0)\ny_actual_all = np.concatenate((y_current_actual, y_next_actual), axis=0)\n\nplt.close('all')\nplt.figure(figsize=(20, 10))\n\nplt.title(\"Time Series Data\")\nplt.plot(x_all, y_actual_all, label='actual')\nplt.plot(x_current, y_current_estimated, label='estimated')\nplt.plot(x_next, y_next_forecast, label='forecasted')\nplt.xlabel(\"Time Index\")\nplt.ylabel(\"Value\")\nplt.legend(loc=2)\nplt.show()",
"Questions:\n\nhow to create a serving function and save the model?\nforecast errror range?"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/applied-machine-learning-intensive
|
content/06_other_models/03_decision_trees_and_random_forests/colab.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/06_other_models/00_decision_trees_and_random_forests/colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 Google LLC.",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Decision Trees and Random Forests\nIn this lab we will apply decision trees and random forests to perform machine learning tasks. These two model types are relatively easy to understand, but they are very powerful tools.\nRandom forests build upon decision tree models, so we'll start by creating a decision tree and then move to random forests.\nLoad Data\nLet's start by loading some data. We'll use the familiar iris dataset from scikit-learn.",
"import pandas as pd\n\nfrom sklearn.datasets import load_iris\n\niris_bunch = load_iris()\n\nfeature_names = iris_bunch.feature_names\ntarget_name = 'species'\n\niris_df = pd.DataFrame(\n iris_bunch.data,\n columns=feature_names\n)\n\niris_df[target_name] = iris_bunch.target\n\niris_df.head()",
"Decision Trees\nDecision trees are models that create a tree structure that has a condition at each non-terminal leaf in the tree. The condition is used to choose which branch to traverse down the tree.\nLet's see what this would look like with a simple example.\nLet's say we want to determine if a piece of fruit is a lemon, lime, orange, or grapefruit. We might have a tree that looks like:\ntxt\n ----------\n -----------| color? |-----------\n | ---------- |\n | | |\n <green> <orange> <yellow>\n | | |\n | | |\n ======== | =========\n | lime | | | lemon |\n ======== --------- =========\n -----| size? |-----\n | --------- |\n | |\n <small> <large>\n | |\n | |\n ========== ==============\n | orange | | grapefruit |\n ========== ==============\nThis would roughly translate to the following code:\n```python\ndef fruit_type(fruit):\n if fruit.color == \"green\":\n return \"lime\"\n if fruit.color == \"yellow\":\n return \"lemon\"\n if fruit.color == \"orange\":\n if fruit.size == \"small\":\n return \"orange\"\n if fruit.size == \"large\":\n return \"grapefruit\"\n```\nAs you can see, the decision tree is very easy to interpret. If you use a decision tree to make predictions and then need to determine why the tree made the decision that it did, it is very easy to inspect.\nAlso, decision trees don't benefit from scaling or normalizing your data, which is different from many types of models.\nCreate a Decision Tree\nNow that we have the data loaded, we can create a decision tree. We'll use the DecisionTreeClassifier from scikit-learn to perform this task.\nNote that there is also a DecisionTreeRegressor that can be used for regression models. In practice, you'll typically see decision trees applied to classification problems more than regression.\nTo build and train the model, we create an instance of the classifier and then call the fit() method that is used for all scikit-learn models.",
"from sklearn import tree\n\ndt = tree.DecisionTreeClassifier()\n\ndt.fit(\n iris_df[feature_names],\n iris_df[target_name]\n)",
"If this were a real application, we'd keep some data to the side for testing.\nVisualize the Tree\nWe now have a decision tree and can use it to make predictions. But before we do that, let's take a look at the tree itself.\nTo do this we create a StringIO object that we can export dot data to. DOT is a graph description language with Python-graphing utilities that we can plot with.",
"import io\nimport pydotplus\n\nfrom IPython.display import Image \n\ndot_data = io.StringIO() \n\ntree.export_graphviz(\n dt,\n out_file=dot_data, \n feature_names=feature_names\n) \n\ngraph = pydotplus.graph_from_dot_data(dot_data.getvalue()) \n\nImage(graph.create_png()) ",
"That tree looks pretty complex. Many branches in the tree is a sign that we may have overfit the model. Let's create the tree again; this time we'll limit the depth.",
"from sklearn import tree\n\ndt = tree.DecisionTreeClassifier(max_depth=2)\n\ndt.fit(\n iris_df[feature_names],\n iris_df[target_name]\n)",
"And plot to see the branching.",
"import io\nimport pydotplus\n\nfrom IPython.display import Image \n\ndot_data = io.StringIO() \n\ntree.export_graphviz(\n dt,\n out_file=dot_data, \n feature_names=feature_names\n) \n\ngraph = pydotplus.graph_from_dot_data(dot_data.getvalue()) \n\nImage(graph.create_png()) ",
"This tree is less likely to be overfitting since we forced it to have a depth of 2. Holding out a test sample and performing validation would be a good way to check.\nWhat are the gini, samples, and value items shown in the tree?\ngini is is the Gini impurity. This is a measure of the chance that you'll misclassify a random element in the dataset at this decision point. Smaller gini is better.\nsamples is a count of the number of samples that have met the criteria to reach this leaf.\nWithin value is the count of each class of data that has made it to this leaf. Summing value should equal sample.\nHyperparameters\nThere are many hyperparameters you can tweak in your decision tree models. One of those is criterion. criterion determines the quality measure that the model will use to determine the shape of the tree.\nThe possible criterion values are gini and entropy. gini is the Gini Impuirty while entropy is a measure of Information Gain.\nIn the example below, we switch the classifier to use \"entropy\" for criterion. You'll see in the resultant tree that we now see \"entropy\" instead of \"gini\", but the resultant trees are the same. For more complex models, though, it may be worthwhile to test the different criterion.",
"import io\nimport pydotplus\n\nfrom IPython.display import Image \nfrom sklearn import tree\n\ndt = tree.DecisionTreeClassifier(\n max_depth=2, \n criterion=\"entropy\"\n)\n\ndt.fit(\n iris_df[feature_names],\n iris_df[target_name]\n)\n\ndot_data = io.StringIO() \n\ntree.export_graphviz(\n dt,\n out_file=dot_data, \n feature_names=feature_names\n) \n\ngraph = pydotplus.graph_from_dot_data(dot_data.getvalue()) \n\nImage(graph.create_png()) ",
"We've limited the depth of the tree using max_depth. We can also limit the number of samples required to be present in a node for it to be considered for splitting using min_samples_split. We can also limit the minimum size of a leaf node using min_samples_leaf. All of these hyperparameters help you to prevent your model from overfitting.\nThere are many other hyperparameters that can be found in the DecisionTreeClassifier documentation.\nExercise 1: Tuning Decision Tree Hyperparameters\nIn this exercise we will use a decision tree to classify wine quality in the Red Wine Quality dataset.\nThe target column in the dataset is quality. Quality is an integer value between 1 and 10 (inclusive). You'll use the other columns in the dataset to build a decision tree to predict wine quality.\nFor this exercise:\n\nHold out some data for final testing of model generalization.\nUse GridSearchCV to compare some hyperparameters for your model. You can choose which parameters to test.\nPrint the hyperparameters of the best performing model.\nPrint the accuracy of the best performing model and the holdout dataset.\nVisualize the best performing tree.\n\nUse as many text and code cells as you need to perform this exercise. We'll get you started with the code to authenticate and download the dataset.\nFirst upload your kaggle.json file, and then run the code block below.",
"! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'",
"Next, download the wine quality dataset.",
"! kaggle datasets download uciml/red-wine-quality-cortez-et-al-2009\n! ls",
"Student Solution",
"# Your Code Goes Here",
"Random Forests\nRandom forests are a simple yet powerful machine learning tool based on decision trees. Random forests are easy to understand, yet they touch upon many advanced machine learning concepts, such as ensemble learning and bagging. These models can be used for both classification and regression. Also, since they are built from decision trees, they are not sensitive to unscaled data.\nYou can think of a random forest as a group decision made by a number of decision trees. For classification problems, the random forest creates multiple decision trees with different subsets of the data. When it is asked to classify a data point, it will ask all of the trees what they think and then take the majority decision.\nFor regression problems, the random forest will again use the opinions of multiple decision trees, but it will take the mean (or some other summation) of the responses and use that as the regression value.\nThis type of modeling, where one model consists of other models, is called ensemble learning. Ensemble learning can often lead to better models because taking the combined, differing opinions of a group of models can reduce overfitting.\nCreate a Random Forest\nCreating a random forest is as easy as creating a decision tree.\nscikit-learn provides a RandomForestClassifier and a RandomForestRegressor, which can be used to combine the predictive power of multiple decision trees.",
"import pandas as pd\n\nfrom sklearn.datasets import load_iris\nfrom sklearn.ensemble import RandomForestClassifier\n\niris_bunch = load_iris()\n\nfeature_names = iris_bunch.feature_names\ntarget_name = 'species'\n\niris_df = pd.DataFrame(\n iris_bunch.data,\n columns=feature_names\n)\n\niris_df[target_name] = iris_bunch.target\n\nrf = RandomForestClassifier()\nrf.fit(\n iris_df[feature_names],\n iris_df[target_name]\n)",
"You can look at different trees in the random forest to see how their decision branching differs. By default there are 100 decision trees created for the model.\nLet's view a few.\nRun the code below a few times, and see if you notice a difference in the trees that are shown.",
"import pydotplus\nimport random\n\nfrom IPython.display import Image \nfrom sklearn.externals.six import StringIO \n\ndot_data = StringIO() \n\ntree.export_graphviz(\n random.choice(rf.estimators_),\n out_file=dot_data, \n feature_names=feature_names\n) \n\ngraph = pydotplus.graph_from_dot_data(dot_data.getvalue()) \n\nImage(graph.create_png()) ",
"Make Predictions\nJust like any other scikit-learn model, you can use the predict() method to make predictions.",
"print(rf.predict([iris_df.iloc[121][feature_names]]))",
"Hyperparameters\nMany of the hyperparameters available in decision trees are also available in random forest models. There are, however, some hyperparameters that are only available in random forests.\nThe two most important are bootstrap and oob_score. These two hyperparameters are relevant to ensemble learning.\nbootstrap determines if the model will use bootstrap sampling. When you bootstrap, only a sample of the dataset will be used for training each tree in the forest. The full dataset will be used as the source of the sampling for each tree, but each sample will have a different set of data points, perhaps with some repetition. In bootstrapping, there is also \"replacement\" of the data, which means a data point can occur in more that one tree.\noob_score stands for \"Out of bag score.\" When you create a bootstrap sample, this is referred to as a bag in machine learning parlance. When the tree is being scored, only data points in the bag sampled for the tree will be used unless oob_score is set to true.\nExercise 2: Feature Importance\nIn this exercise we will use the UCI Abalone dataset to determine the age of sea snails.\nThe target feature in the dataset is rings, which is a proxy for age in the snails. This is a numeric value, but it is stored as an integer and has a biological limit. So we can think of this as a classification problem and use a RandomForestClassifier.\nYou will download the dataset and train a random forest classifier. After you have fit the classifier, the feature_importances_ attribute of the model will be populated. Use the importance scores to print the least important feature.\nNote that some of the features are categorical string values. You'll need to convert these to numeric values to use them in the model.\nUse as many text and code blocks as you need to perform this exercise.\nStudent Solution",
"# Your Code Goes Here",
""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jan-rybizki/Chempy
|
tutorials/2-Nucleosynthetic_yields.ipynb
|
mit
|
[
"Nucleosynthetic yields\nThese are key to every chemical evolution model. Chempy supports three nucleosynthetic channels at the moment:\n- Core-Collapse Supernova (CC-SN)\n- Supernova of type Ia (SN Ia)\n- Winds from Asymptotic Giant Branch phase of stars (AGB)",
"%pylab inline\n\nfrom Chempy.parameter import ModelParameters\nfrom Chempy.yields import SN2_feedback, AGB_feedback, SN1a_feedback, Hypernova_feedback\n\nfrom Chempy.infall import PRIMORDIAL_INFALL, INFALL\n\n # This loads the default parameters, you can check and change them in paramter.py\n\na = ModelParameters()\n\n# Implemented SN Ia yield tables\n\na.yield_table_name_1a_list \n\n# AGB yields implemented\n\na.yield_table_name_agb_list \n\n# CC-SN yields implemented\n\na.yield_table_name_sn2_list \n\n# Hypernova yields (is mixed with Nomoto2013 CC-SN yields for stars more massive than 25Msun)\n\na.yield_table_name_hn_list \n\n# Here we show the available mass and metallicity range for each yield set\n\n# First for CC-SNe\n\nprint('Available CC-SN yield parameter range')\nfor item in a.yield_table_name_sn2_list:\n basic_sn2 = SN2_feedback()\n getattr(basic_sn2, item)()\n print('----------------------------------')\n print('yield table name: ',item)\n print('provided masses: ', basic_sn2.masses)\n print('provided metallicities',basic_sn2.metallicities)\n ",
"Hyper Nova (HN) is only provided for Nomoto 2013 CC-SN yields and it is mixed 50/50 with it for stars with mass >= 25 Msun",
"# Then for Hypernovae\n\nprint('Available HN yield parameter range')\nfor item in a.yield_table_name_hn_list:\n basic_hn = Hypernova_feedback()\n getattr(basic_hn, item)()\n print('----------------------------------')\n print('yield table name: ',item)\n print('provided masses: ', basic_hn.masses)\n print('provided metallicities',basic_hn.metallicities)\n\n# Here for AGB stars\n\nprint('Available AGB yield parameter range')\nfor item in a.yield_table_name_agb_list:\n basic_agb = AGB_feedback()\n getattr(basic_agb, item)()\n print('----------------------------------')\n print('yield table name: ',item)\n print('provided masses: ', basic_agb.masses)\n print('provided metallicities',basic_agb.metallicities)\n\n# And for SN Ia\n\nprint('Available SN Ia yield parameter range')\nfor item in a.yield_table_name_1a_list:\n basic_1a = SN1a_feedback()\n getattr(basic_1a, item)()\n print('----------------------------------')\n print('yield table name: ',item)\n print('provided masses: ', basic_1a.masses)\n print('provided metallicities',basic_1a.metallicities)\n\nfrom Chempy.data_to_test import elements_plot\nfrom Chempy.solar_abundance import solar_abundances",
"Elements availability\nusually not all elements are provided by a yield table. We have a handy plotting routine to show which elements are given. We check for the default and the alternative yield table.",
"# To get the element list we initialise the solar abundance class\n\nbasic_solar = solar_abundances()\n\n\n\n\n# we load the default yield set:\n\nbasic_sn2 = SN2_feedback()\ngetattr(basic_sn2, \"Nomoto2013\")()\nbasic_1a = SN1a_feedback()\ngetattr(basic_1a, \"Seitenzahl\")()\nbasic_agb = AGB_feedback()\ngetattr(basic_agb, \"Karakas_net_yield\")()\n\n\n#Now we plot the elements available for the default yield set and which elements are available for specific surveys and come from which nucleosynthetic channel\n\nelements_plot('default', basic_agb.elements,basic_sn2.elements,basic_1a.elements,['C','N','O'], basic_solar.table,40)\n\n# Then we load the alternative yield set:\n\nbasic_sn2 = SN2_feedback()\ngetattr(basic_sn2, \"chieffi04\")()\nbasic_1a = SN1a_feedback()\ngetattr(basic_1a, \"Thielemann\")()\nbasic_agb = AGB_feedback()\ngetattr(basic_agb, \"Ventura_net\")()\n\n#And again plot the elements available \n\nelements_plot('alternative', basic_agb.elements,basic_sn2.elements,basic_1a.elements,['C','N','O'], basic_solar.table,40)",
"CC-SN yields\nHere we visualise the yield in [X/Fe] for the whole grid in masses and metallicities for two different yields sets\n- Interestingly CC-SN ejecta can be Solar in their alpha-enhancement for low-mass progenitors (=13Msun)\n- Ths effect is even stronger for the Chieffi04 yields",
"# We need solar abundances for normalisation of the feedback\n\nbasic_solar.Asplund09()\n\n# Then we plot the [Mg/Fe] of Nomoto+ 2013 for all masses and metallicities \n\nfrom Chempy.data_to_test import yield_plot\nbasic_sn2 = SN2_feedback()\ngetattr(basic_sn2, \"Nomoto2013\")()\nyield_plot('Nomoto+2013', basic_sn2, basic_solar, 'Mg')\n\n# And we plot the same for Chieffi+ 2004 CC-yields\n\nbasic_sn2 = SN2_feedback()\ngetattr(basic_sn2, \"chieffi04\")()\nyield_plot('Chieffi+04', basic_sn2, basic_solar, 'Mg')",
"Yield comparison\nWe can plot the differences of the two yield tables for different elements (They are copied into the output/ folder). Here only the result for Ti is displayed.",
"# Now we plot a comparison for different elements between Nomoto+ 2013 and Chieffi+ 2004 CC-yields: \n# You can look into the output/ folder and see the comparison for all those elements\n\nfrom Chempy.data_to_test import yield_comparison_plot\nbasic_sn2 = SN2_feedback()\ngetattr(basic_sn2, \"Nomoto2013\")()\nbasic_sn2_chieffi = SN2_feedback()\ngetattr(basic_sn2_chieffi, \"chieffi04\")()\nfor element in ['C', 'N', 'O', 'Mg', 'Ca', 'Na', 'Al', 'Mn','Ti']:\n yield_comparison_plot('Nomoto13', 'Chieffi04', basic_sn2, basic_sn2_chieffi, basic_solar, element)",
"AGB yield comparison\nWe have a look at the Carbon and Nitrogen yields.\nWe see that high mass AGB stars produce less fraction of C than low-mass AGB stars and that its vice versa for N. The C/N ratio should be IMF sensitive.",
"# We can also plot a comparison between Karakas+ 2010 and Ventura+ 2013 AGB-yields\n# Here we plot the fractional N yield\n\nfrom Chempy.data_to_test import fractional_yield_comparison_plot\nbasic_agb = AGB_feedback()\ngetattr(basic_agb, \"Karakas_net_yield\")()\nbasic_agb_ventura = AGB_feedback()\ngetattr(basic_agb_ventura, \"Ventura_net\")()\n\nfractional_yield_comparison_plot('Karakas10', 'Ventura13', basic_agb, basic_agb_ventura, basic_solar, 'N')\n#The next line produces an error in the 0.2 version. Needs checking \n#fractional_yield_comparison_plot('Karakas10', 'Ventura13', basic_agb, basic_agb_ventura, basic_solar, 'C')\n",
"Yield table query and remnant fraction\n\nHere you see how the yield tables are queried (the metallicity accesses the yield table)\nFor net yield the remnant fraction + the 'unprocessed mass in winds' sums to unity.\nThe changes come from destroyed Hydrogen that is fused into other elements",
"# Different entries of the yield table are queried\n\nprint('Mass, Remnant mass fraction, Unprocessed mass in winds fraction, destroyed Hydrogen of total mass')\nfor i in range(len(basic_agb.masses)):\n print(basic_agb.table[0.02]['Mass'][i],basic_agb.table[0.02]['mass_in_remnants'][i],basic_agb.table[0.02]['unprocessed_mass_in_winds'][i],basic_agb.table[0.02]['H'][i])",
"SN Ia yields\nHere we see that the SNIa ejecta differ quite strongly for our two yieldtables",
"# Here we compare the yields for different iron-peak elements for Seitenzahl+ 2013 and Thielemann+ 2003 SNIa tables\n\nbasic_1a = SN1a_feedback()\ngetattr(basic_1a, 'Seitenzahl')()\nbasic_1a_alternative = SN1a_feedback()\ngetattr(basic_1a_alternative, 'Thielemann')()\nprint('Mass fraction of SN1a ejecta: Cr, Mn, Fe and Ni')\nprint('Seitenzahl2013')\nprint(basic_1a.table[0.02]['Cr'],basic_1a.table[0.02]['Mn'],basic_1a.table[0.02]['Fe'],basic_1a.table[0.02]['Ni'])\nprint('Thielemann2003')\nprint(basic_1a_alternative.table[0.02]['Cr'],basic_1a_alternative.table[0.02]['Mn'],basic_1a_alternative.table[0.02]['Fe'],basic_1a_alternative.table[0.02]['Ni'])\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
SHDShim/pytheos
|
examples/6_p_scale_test_Yokoo_Pt.ipynb
|
apache-2.0
|
[
"%cat 0Source_Citation.txt\n\n%matplotlib inline\n# %matplotlib notebook # for interactive",
"For high dpi displays.",
"%config InlineBackend.figure_format = 'retina'",
"0. General note\nThis example compares pressure calculated from pytheos and original publication for the platinum scale by Yokoo 2009.\n1. Global setup",
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom uncertainties import unumpy as unp\nimport pytheos as eos",
"3. Compare",
"eta = np.linspace(1., 0.60, 21)\nprint(eta)\n\nyokoo_pt = eos.platinum.Yokoo2009()\n\nyokoo_pt.print_equations()\n\nyokoo_pt.print_equations()\n\nyokoo_pt.print_parameters()\n\nv0 = 60.37930856339099\n\nyokoo_pt.three_r\n\nv = v0 * (eta) \ntemp = 3000.\n\np = yokoo_pt.cal_p(v, temp * np.ones_like(v))",
"<img src='./tables/Yokoo_Pt.png'>",
"print('for T = ', temp)\nfor eta_i, p_i in zip(eta, p):\n print(\"{0: .3f} {1: .2f}\".format(eta_i, p_i))",
"It is alarming that even 300 K isotherm does not match with table value. The difference is 1%.",
"v = yokoo_pt.cal_v(p, temp * np.ones_like(p), min_strain=0.6)\nprint(1.-(v/v0))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tanmay987/deepLearning
|
intro-to-tflearn/TFLearn_Sentiment_Analysis.ipynb
|
mit
|
[
"Sentiment analysis with TFLearn\nIn this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.\nWe'll start off by importing all the modules we'll need, then load and prepare the data.",
"import pandas as pd\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nfrom tflearn.data_utils import to_categorical\nprint(1)",
"Preparing the data\nFollowing along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.\nRead the data\nUse the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.",
"reviews = pd.read_csv('reviews.txt', header=None)\nlabels = pd.read_csv('labels.txt', header=None)\n",
"Counting word frequency\nTo start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.\n\nExercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.",
"from collections import Counter\ntotal_counts=Counter()\nfor idx, row in reviews.iterrows():\n #print(row)\n for words in row[0].split(' '):\n #print(words)\n total_counts[words] += 1\n\nprint(\"Total words in data set: \", len(total_counts))\n\n",
"Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.",
"vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]\nprint(vocab[:60])",
"What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.",
"print(vocab[-1], ': ', total_counts[vocab[-1]])",
"The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.\nNote: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.\nNow for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.\n\nExercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.",
"word2idx={word: i for i,word in enumerate(vocab)}\nprint(word2idx)",
"Text to vector function\nNow we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:\n\nInitialize the word vector with np.zeros, it should be the length of the vocabulary.\nSplit the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.\nFor each word in that list, increment the element in the index associated with that word, which you get from word2idx.\n\nNote: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.",
"def text_to_vector(text):\n wordVector=np.zeros(len(vocab),dtype=np.int_)\n for word in text.split(' '):\n if word is not None and word.lower() in word2idx.keys() :\n wordVector[word2idx[word.lower()]]+=1\n return wordVector",
"If you do this right, the following code should return\n```\ntext_to_vector('The tea is for a party to celebrate '\n 'the movie so she has no time for a cake')[:65]\narray([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,\n 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])\n```",
"if None in word2idx.keys():\n print ('hi')\n\nlen(text_to_vector('The tea is for a party to celebrate '\n 'the movie so she has no time for a cake'))\n",
"Now, run through our entire review data set and convert each review to a word vector.",
"word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)\nfor ii, (_, text) in enumerate(reviews.iterrows()):\n word_vectors[ii] = text_to_vector(text[0])\n \n \n\n# Printing out the first 5 word vectors\nword_vectors[:5, :23]",
"Train, Validation, Test sets\nNow that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.",
"Y = (labels=='positive').astype(np.int_)\nrecords = len(labels)\n\nshuffle = np.arange(records)\nnp.random.shuffle(shuffle)\ntest_fraction = 0.9\n\ntrain_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]\ntrainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split].flatten(), 2)\ntestX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split].flatten(), 2)\n\ntrainY",
"Building the network\nTFLearn lets you build the network by defining the layers. \nInput layer\nFor the input layer, you just need to tell it how many units you have. For example, \nnet = tflearn.input_data([None, 100])\nwould create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.\nThe number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.\nAdding layers\nTo add new hidden layers, you use \nnet = tflearn.fully_connected(net, n_units, activation='ReLU')\nThis adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).\nOutput layer\nThe last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.\nnet = tflearn.fully_connected(net, 2, activation='softmax')\nTraining\nTo set how you train the network, use \nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nAgain, this is passing in the network you've been building. The keywords: \n\noptimizer sets the training method, here stochastic gradient descent\nlearning_rate is the learning rate\nloss determines how the network error is calculated. In this example, with the categorical cross-entropy.\n\nFinally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like \nnet = tflearn.input_data([None, 10]) # Input\nnet = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden\nnet = tflearn.fully_connected(net, 2, activation='softmax') # Output\nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nmodel = tflearn.DNN(net)\n\nExercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.",
"# Network building\n'''def build_model():\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n \n #### Your code ####\n net = tflearn.input_data([None, 10000]) # Input\n net = tflearn.fully_connected(net, 200, activation='ReLU') # Hidden\n net = tflearn.fully_connected(net, 1, activation='softmax') # Output\n net = tflearn.regression(net, optimizer='sgd', \n learning_rate=0.1, \n loss='categorical_crossentropy')\n model = tflearn.DNN(net)\n return model\n '''\ndef build_model():\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n \n # Inputs\n net = tflearn.input_data([None, 10000])\n\n # Hidden layer(s)\n net = tflearn.fully_connected(net, 200, activation='ReLU')\n net = tflearn.fully_connected(net, 25, activation='ReLU')\n\n # Output layer\n net = tflearn.fully_connected(net, 2, activation='softmax')\n net = tflearn.regression(net, optimizer='sgd', \n learning_rate=0.1, \n loss='categorical_crossentropy')\n \n model = tflearn.DNN(net)\n return model",
"Intializing the model\nNext we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.\n\nNote: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.",
"model = build_model()\nprint(1)",
"Training the network\nNow that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.\nYou can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.",
"# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=50)",
"Testing\nAfter you're satisified with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.",
"predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)\ntest_accuracy = np.mean(predictions == testY[:,0], axis=0)\nprint(\"Test accuracy: \", test_accuracy)",
"Try out your own text!",
"# Helper function that uses your model to predict sentiment\ndef test_sentence(sentence):\n positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]\n print('Sentence: {}'.format(sentence))\n print('P(positive) = {:.3f} :'.format(positive_prob), \n 'Positive' if positive_prob > 0.5 else 'Negative')\n\nsentence = \"Moonlight is by far the best movie of 2016.\"\ntest_sentence(sentence)\n\nsentence = \"Its not a bad movie\"\ntest_sentence(sentence)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nishadsingh1/clipper
|
examples/tutorial/tutorial_part_two.ipynb
|
apache-2.0
|
[
"Clipper Tutorial: Part 2\nIn this part of the tutorial, you will put on your data scientist hat and train and deploy some models to Clipper to improve your application accuracy.\nConnect to Clipper (again)\nBecause this is a separate Python instance, you must create a new Clipper object and connect to your running Clipper instance. Make sure you enter the same information here as you did in part one.",
"import sys\nimport os\nfrom clipper_admin import Clipper\n# Change the username if necessary\nuser = \"\"\n# Set the path to the SSH key\nkey = \"\"\n# Set the SSH host\nhost = \"\"\nclipper = Clipper(host, user, key)",
"Load Cifar\nBecause this is a new notebook, you must load the CIFAR dataset again. This time, you will be using it to train and evaluate machine learning models.\nSet cifar_loc to the same location you did in the \"Download the Images\" section of part one of the tutorial. You will load into Python the number of training and test datapoints specified in \"Extract the images\" section of part one.",
"cifar_loc = \"\"\nimport cifar_utils\ntrain_x, train_y = cifar_utils.filter_data(\n *cifar_utils.load_cifar(cifar_loc, cifar_filename=\"cifar_train.data\", norm=True))\ntest_x, test_y = cifar_utils.filter_data(\n *cifar_utils.load_cifar(cifar_loc, cifar_filename=\"cifar_test.data\", norm=True))",
"Train Logistic Regression Model\nWhen tackling a new problem with machine learning, it's always good to start with simple models and only add complexity when needed. Start by training a logistic regression binary classifier using Scikit-Learn. This model gets about 68% accuracy on the offline evaluation dataset if you use 10,000 training examples. It gets about 74% if you use all 50,000 examples.",
"from sklearn import linear_model as lm \ndef train_sklearn_model(m, train_x, train_y):\n m.fit(train_x, train_y)\n return m\nlr_model = train_sklearn_model(lm.LogisticRegression(), train_x, train_y)\nprint(\"Logistic Regression test score: %f\" % lr_model.score(test_x, test_y))",
"Deploy Logistic Regression Model\nWhile 68-74% accuracy on a CIFAR binary classification task is significantly below state of the art, it's already much better than the 50% accuracy your application yields right now by guessing randomly.\nYou can deploy your logistic regression model directly to Clipper without having to worry about how to serialize the model or integrate it with application code.\nTo deploy a model to Clipper, you must assign it a name (\"sklearn_cifar\"), a version (1), and then provide some metadata about the model itself. In this case, you are specifying that you want to run the model using the sklearn_cifar_container Docker image in the Clipper repo on Docker Hub. You can assign the model descriptive labels, and specify the input type that this model expects. Finally, you can specify how many replicas of the model (how many Docker containers) to launch. Adding more replicas increases the throughput of this model.\nAfter completing this step, Clipper will be managing a new container in Docker with your model in it:\n<img src=\"img/deploy_sklearn_model.png\" style=\"width: 500px;\"/>\n\nOnce again, because you are deploying a Docker image this command may take awhile to download the image. Thanks for being patient!",
"model_name = \"birds_vs_planes_classifier\"\n\nmodel_added = clipper.deploy_model(\n model_name,\n 1,\n lr_model,\n \"clipper/sklearn_cifar_container:latest\",\n \"doubles\",\n num_containers=1\n)\nprint(\"Model deploy successful? {success}\".format(success=model_added))",
"Link your app to your model\nTo use your newly deployed model to generate predictions, it needs to be linked to your Clipper application. Let Clipper know that your \"cifar_demo\" app should use the \"birds_vs_planes_classifier\" model to serve predictions.\nIn the future, when you deploy new versions of the \"birds_vs_planes_classifier\" model, queries to your Clipper application will route to them.",
"clipper.link_model_to_app(app_name, model_name)",
"You can view which models your app is linked to by running the code below.",
"clipper.get_linked_models(app_name)",
"Now that you've deployed and linked your model to your app, go ahead and check back on your running frontend application from part 1. You should see the accuracy rise from around 50% to the accuracy of your SKLearn model (68-74%), without having to stop or modify your application at all!\nLoad TensorFlow Model\nTo improve the accuracy of your application further, you will now deploy a TensorFlow convolutional neural network. This model takes a few hours to train, so you will download the trained model parameters rather than training it from scratch. This model gets about 88% accuracy on the test dataset.\nThere is a pre-trained TensorFlow model stored in the repo using git-lfs. Once you install git-lfs, you can download the model with the command git lfs pull. If you don't want to deploy a TensorFlow model, you can skip this step.",
"import os\nimport tensorflow as tf\nimport numpy as np\ntf_cifar_model_path = os.path.abspath(\"tf_cifar_model/cifar10_model_full\")\ntf_session = tf.Session('', tf.Graph())\nwith tf_session.graph.as_default():\n saver = tf.train.import_meta_graph(\"%s.meta\" % tf_cifar_model_path)\n saver.restore(tf_session, tf_cifar_model_path)\n\ndef tensorflow_score(session, test_x, test_y):\n \"\"\"\n NOTE: This predict method expects pre-whitened (normalized) images\n \"\"\"\n logits = session.run('softmax_logits:0',\n feed_dict={'x:0': test_x})\n relevant_activations = logits[:, [cifar_utils.negative_class, cifar_utils.positive_class]]\n preds = np.argmax(relevant_activations, axis=1)\n return float(np.sum(preds == test_y)) / float(len(test_y))\nprint(\"TensorFlow CNN test score: %f\" % tensorflow_score(tf_session, test_x, test_y))",
"Deploy TensorFlow Model\nSimilar to deploying the logistic regression model, you can now deploy your TensorFlow neural network. Note that you will specify a different model container to use this time: the tf_cifar_container. In this case, you are providing Clipper a serialized version of the model. The container has been set up to reconstruct the original model from the serialized representation.\nAfter completing this step, Clipper will send queries to the newly-deployed TensorFlow model instead of the logistic regression Scikit-Learn model, improving the application's accuracy.\n<img src=\"img/tf_replaces_sklearn_model.png\" style=\"width: 600px;\"/>\n\nOnce again, please patient while the Docker image is downloaded.",
"model_added = clipper.deploy_model(\n model_name,\n 2,\n os.path.abspath(\"tf_cifar_model\"),\n \"clipper/tf_cifar_container:latest\",\n \"doubles\",\n num_containers=1\n)\nprint(\"Model deploy successful? {success}\".format(success=model_added))",
"Inspect Clipper Metrics\nClipper also records various system performance metrics. You can inspect the current state of these metrics with the inspect_instance() command.",
"clipper.inspect_instance()",
"Congratulations! You've now successfully completed the tutorial. You started Clipper, created an application and queried it from a frontend client, and deployed two models trained in two different machine learning frameworks (Scikit-Learn and TensorFlow) to the running system.\nHead back to the notebook from part 1. When you're done watching the accuracy of your application, stop the cell (hit the little \"stop\" square in the notebook toolbar).\n<img src=\"img/warning.jpg\" style=\"width: 400px;\"/>\n\nThis step will stop and remove all Clipper Docker containers running on the host. This command will not affect other Docker containers running on the host machine.\n\nCleanup\nWhen you're completely done with the tutorial and want to shut down your Clipper instance, you can run the stop_all() command to stop all the Clipper Docker containers.\nIf you check the accuracy of your frontend application a final time, you should see accuracy around 88%. If the accuracy is below that, you can try sending more feedback to increase the weight on the TensorFlow model even more.",
"clipper.stop_all()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sjschmidt44/bike_share
|
bike_share_data_2.ipynb
|
mit
|
[
"Plot bike-share data with Matplotlib",
"from pandas import DataFrame, Series\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nweather = pd.read_table('daily_weather.tsv')\nusage = pd.read_table('usage_2012.tsv')\nstations = pd.read_table('stations.tsv')\n\nnewseasons = {'Summer': 'Spring', 'Spring': 'Winter', 'Fall': 'Summer', 'Winter': 'Fall'}\nweather['season_desc'] = weather['season_desc'].map(newseasons)\nweather['Day'] = pd.DatetimeIndex(weather.date).date\nweather['Month'] = pd.DatetimeIndex(weather.date).month",
"Question 1: Plot Daily Temp of 2012\nPlot the daily temperature over the course of the year. (This should probably be a line chart.) Create a bar chart that shows the average temperature and humidity by month.",
"weather['temp'].plot()\n# weather.plot(kind='line', y='temp', x='Day')\nplt.show()\n\nweather[['Month', 'humidity', 'temp']].groupby('Month').aggregate(np.mean).plot(kind='bar')\nplt.show()",
"Question 2: Rental Volumes compared to Temp\nUse a scatterplot to show how the daily rental volume varies with temperature. Use a different series (with different colors) for each season.",
"w = weather[['season_desc', 'temp', 'total_riders']]\nw_fal = w.loc[w['season_desc'] == 'Fall']\nw_win = w.loc[w['season_desc'] == 'Winter']\nw_spr = w.loc[w['season_desc'] == 'Spring']\nw_sum = w.loc[w['season_desc'] == 'Summer']\n\nplt.scatter(w_fal['temp'], w_fal['total_riders'], c='y', label='Fall', s=100, alpha=.5)\nplt.scatter(w_win['temp'], w_win['total_riders'], c='r', label='Winter', s=100, alpha=.5)\nplt.scatter(w_spr['temp'], w_spr['total_riders'], c='b', label='Sprint', s=100, alpha=.5)\nplt.scatter(w_sum['temp'], w_sum['total_riders'], c='g', label='Summer', s=100, alpha=.5)\n\nplt.legend(loc='lower right')\nplt.xlabel('Temperature')\nplt.ylabel('Total Riders')\nplt.show()",
"Question 3: Daily Rentals compared to Windspeed\nCreate another scatterplot to show how daily rental volume varies with windspeed. As above, use a different series for each season.",
"w = weather[['season_desc', 'windspeed', 'total_riders']]\nw_fal = w.loc[w['season_desc'] == 'Fall']\nw_win = w.loc[w['season_desc'] == 'Winter']\nw_spr = w.loc[w['season_desc'] == 'Spring']\nw_sum = w.loc[w['season_desc'] == 'Summer']\n\nplt.scatter(w_fal['windspeed'], w_fal['total_riders'], c='y', label='Fall', s=100, alpha=.5)\nplt.scatter(w_win['windspeed'], w_win['total_riders'], c='r', label='Winter', s=100, alpha=.5)\nplt.scatter(w_spr['windspeed'], w_spr['total_riders'], c='b', label='Sprint', s=100, alpha=.5)\nplt.scatter(w_sum['windspeed'], w_sum['total_riders'], c='g', label='Summer', s=100, alpha=.5)\n\nplt.legend(loc='lower right')\nplt.xlabel('Wind Speed')\nplt.ylabel('Total Riders')\nplt.show()",
"Question 4: Rental Volumes by Geographical Location\nHow do the rental volumes vary with geography? Compute the average daily rentals for each station and use this as the radius for a scatterplot of each station's latitude and longitude.",
"s = stations[['station','lat','long']]\nu = pd.concat([usage['station_start']], axis=1, keys=['station'])\ncounts = u['station'].value_counts()\nc = DataFrame(counts.index, columns=['station'])\nc['counts'] = counts.values\nm = pd.merge(s, c, on='station')\n\nplt.scatter(m['long'], m['lat'], c='b', label='Location', s=(m['counts'] * .05), alpha=.1)\n\nplt.legend(loc='lower right')\nplt.xlabel('Longitude')\nplt.ylabel('Latitude')\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
zentonllo/tfg-tensorflow
|
cloud/datalab/notebooks_ejemplo/BigQuery+Magic+Commands+and+DML.ipynb
|
mit
|
[
"BigQuery Magic Commands and DML\nThe examples in this notebook introduce features of BigQuery Standard SQL and BigQuery SQL Data Manipulation Language (beta). BigQuery Standard SQL is compliant with the SQL 2011 standard. You've already seen the use of the magic command %%bq in the Hello BigQuery and BigQuery Commands notebooks. This command and others in the Google Cloud Datalab API support BigQuery Standard SQL.\nUsing the BigQuery Magic command with Standard SQL\nFirst, we will cover some more uses of the %%bq magic command. Let's define a query to work with:",
"%%bq query --name UniqueNames2013\nWITH UniqueNames2013 AS\n(SELECT DISTINCT name\n FROM `bigquery-public-data.usa_names.usa_1910_2013`\n WHERE Year = 2013)\nSELECT * FROM UniqueNames2013",
"Now let's list all available commands to work with %%bq",
"%%bq -h",
"The dryrun argument in %%bq can be helpful to confirm the syntax of the SQL query. Instead of executing the query, it will only return some statistics:",
"%%bq dryrun -q UniqueNames2013",
"Now, let's get a small sample of the results using the sample argument in %%bq:",
"%%bq sample -q UniqueNames2013",
"Finally, We can use the execute command in %%bq to display the results of our query:",
"%%bq execute -q UniqueNames2013",
"Using Google BigQuery SQL Data Manipulation Language\nBelow, we will demonstrate how to use Google BigQuery SQL Data Manipulation Language (DML) in Datalab.\nPreparation\nFirst, let's import the BigQuery module, and create a sample dataset and table to help demonstrate the features of Google BigQuery DML.",
"import google.datalab.bigquery as bq\n\n# Create a new dataset (this will be deleted later in the notebook)\nsample_dataset = bq.Dataset('sampleDML')\nif not sample_dataset.exists():\n sample_dataset.create(friendly_name = 'Sample Dataset for testing DML', description = 'Created from Sample Notebook in Google Cloud Datalab')\n sample_dataset.exists()\n\n# To create a table, we need to create a schema for it.\n# Its easiest to create a schema from some existing data, so this\n# example demonstrates using an example object\nfruit_row = {\n 'name': 'string value',\n 'count': 0\n}\n\nsample_table1 = bq.Table(\"sampleDML.fruit_basket\").create(schema = bq.Schema.from_data([fruit_row]), \n overwrite = True)",
"Inserting Data\nWe can add rows to our newly created fruit_basket table by using an INSERT statement in our BigQuery Standard SQL query.",
"%%bq query\nINSERT sampleDML.fruit_basket (name, count)\nVALUES('banana', 5),\n ('orange', 10),\n ('apple', 15),\n ('mango', 20)",
"You may rewrite the previous query as:",
"%%bq query\nINSERT sampleDML.fruit_basket (name, count)\nSELECT * \nFROM UNNEST([('peach', 25), ('watermelon', 30)])",
"You can also use a WITH clause with INSERT and SELECT.",
"%%bq query\nINSERT sampleDML.fruit_basket(name, count)\nWITH w AS (\n SELECT ARRAY<STRUCT<name string, count int64>>\n [('cherry', 35),\n ('cranberry', 40),\n ('pear', 45)] col\n)\nSELECT name, count FROM w, UNNEST(w.col)",
"Here is an example that copies one table's contents into another. First we will create a new table.",
"fruit_row_detailed = {\n 'name': 'string value',\n 'count': 0,\n 'readytoeat': False\n}\nsample_table2 = bq.Table(\"sampleDML.fruit_basket_detailed\").create(schema = bq.Schema.from_data([fruit_row_detailed]), \n overwrite = True)\n\n%%bq query\nINSERT sampleDML.fruit_basket_detailed (name, count, readytoeat)\nSELECT name, count, false\nFROM sampleDML.fruit_basket",
"Updating Data\nYou can update rows in the fruit_basket table by using an UPDATE statement in the BigQuery Standard SQL query. We will try to do this using the Datalab BigQuery API.",
"%%bq query\nUPDATE sampleDML.fruit_basket_detailed\nSET readytoeat = True\nWHERE name = 'banana'",
"To view the contents of a table in BigQuery, use %%bq tables view command:",
"%%bq tables view --name sampleDML.fruit_basket_detailed",
"Deleting Data\nYou can delete rows in the fruit_basket table by using a DELETE statement in the BigQuery Standard SQL query.",
"%%bq query\nDELETE sampleDML.fruit_basket\nWHERE name in ('cherry', 'cranberry')",
"Use the following query to delete the corresponding entries in sampleDML.fruit_basket_detailed",
"%%bq query\nDELETE sampleDML.fruit_basket_detailed\nWHERE NOT EXISTS\n (SELECT * FROM sampleDML.fruit_basket\n WHERE fruit_basket_detailed.name = fruit_basket.name)",
"Deleting Resources",
"# Clear out sample resources\nsample_dataset.delete(delete_contents = True)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
graphistry/pygraphistry
|
demos/for_analysis.ipynb
|
bsd-3-clause
|
[
"Tutorial: Data Analysis in Graphistry\n\nRegister\nLoad table\nPlot: \nSimple: input is a list of edges\nArbitrary: input is a table (hypergraph transform)\n\n\nAdvanced plotting\nFurther reading\nPyGraphistry\nPyGraphistry demos: database connectors, ...\ngraph-app-kit: Streamlit graph dashboarding\nUI Guide\nCSV upload notebook app\n\n\n\n1. Register",
"import graphistry\n\n# To specify Graphistry account & server, use:\n# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')\n# For more options, see https://github.com/graphistry/pygraphistry#configure\n",
"2. Load table\nGraphistry works seamlessly with dataframes like Pandas and GPU RAPIDS cuDF",
"import pandas as pd\n\ndf = pd.read_csv('./data/honeypot.csv')\n\ndf.sample(3)",
"3. Plot\nA. Simple graphs\n\nBuild up a set of bindings. Simple graphs are:\nrequired: edge table, with src+dst ID columns, and optional additional property columns\noptional: node table, with matching node ID column\n\n\nSee UI Guide for in-tool activity\n\nDemo graph schema:\n\nInput table: Above alerts df with columns | attackerIP | victimIP |\nEdges: Link df's columns attackerIP -> victimIP\nNodes: Unspecified; Graphistry defaults to generating based on the edges\nNode colors: Graphistry defaults to inferring the commmunity\nNode sizes: Graphistry defaults to the number of edges (\"degree\")",
"g = graphistry.edges(df, 'attackerIP', 'victimIP')\n\ng.plot()",
"B. Hypergraphs -- Plot arbitrary tables\nThe hypergraph transform is a convenient method to transform tables into graphs:\n\nIt extracts entities from the table and links them together\nEntities get linked together when they are from the same row\n\nApproach 1: Treat each row as a node, and link it to each cell value in it\nDemo graph schema:\n* Edges: row -> attackerIP, row -> victimIP, row -> victimPort, row -> volnName\n* Nodes: row, attackerIP, victimIP, victimPort, vulnName\n* Node colors: Automatic based on inferred commmunity\n* node sizes: Number of edges",
"hg1 = graphistry.hypergraph(\n df,\n\n # Optional: Subset of columns to turn into nodes; defaults to all\n entity_types=['attackerIP', 'victimIP', 'victimPort', 'vulnName'],\n\n # Optional: merge nodes when their IDs appear in multiple columns\n # ... so replace nodes attackerIP::1.1.1.1 and victimIP::1.1.1.1\n # ... with just one node ip::1.1.1.1\n opts={\n 'CATEGORIES': {\n 'ip': ['attackerIP', 'victimIP']\n }\n })\n\nhg1_g = hg1['graph']\nhg1_g.plot()",
"Approach 2: Link values from column entries\nFor more advanced hypergraph control, we can skip the row node, and control which edges are generated, by enabling direct.\nDemo graph schema:\n* Edges: \n * attackerIP -> victimIP, attackerIP -> victimPort, attackerIP -> vulnName\n * victimPort -> victimIP\n * vulnName -> victimIP\n* Nodes: attackerIP, victimIP, victimPort, vulnName\n* Default colors: Automatic based on inferred commmunity\n* Default node size: Number of edges",
"hg2 = graphistry.hypergraph(\n df,\n entity_types=['attackerIP', 'victimIP', 'victimPort', 'vulnName'],\n direct=True,\n opts={\n # Optional: Without, creates edges that are all-to-all for each row \n 'EDGES': {\n 'attackerIP': ['victimIP', 'victimPort', 'vulnName'],\n 'victimPort': ['victimIP'],\n 'vulnName': ['victimIP']\n },\n\n # Optional: merge nodes when their IDs appear in multiple columns\n # ... so replace nodes attackerIP::1.1.1.1 and victimIP::1.1.1.1\n # ... with just one node ip::1.1.1.1\n 'CATEGORIES': {\n 'ip': ['attackerIP', 'victimIP']\n }\n })\n\nhg2_g = hg2['graph']\nhg2_g.plot()",
"3. Advanced plotting\nYou can then drive visual styles based on node and edge attributes\nThis demo starts by computing a node table. By default, you do not need to explictly provide a table of nodes, but then you may lack data for node properties:\n\nRegular inferred graph nodes will only have id and degree\nHypergraph edges and row nodes will have many properties, but hypergraph entity nodes will only have id, type/category, and degree\n\nDemo schema:\n\nNode table: | node_id | type | attacks |\nPoint size: number of attacks\nPoint icon & color: attacker vs victim\nEdge color: based on first attack",
"# Cell:\n# Compute nodes_df by combining entities in attackerIP and victimIP\n# As part of this, compute attack counts for each node \n\ntargets_df = (\n df\n [['victimIP']]\n .drop_duplicates()\n .rename(columns={'victimIP': 'node_id'})\n .assign(type='victim')\n)\n\nattackers_df = (\n df\n .groupby(['attackerIP'])\n .agg(attacks=pd.NamedAgg(column=\"attackerIP\", aggfunc=\"count\"))\n .reset_index()\n .rename(columns={'attackerIP': 'node_id'}).assign(type='attacker')\n)\n\nnodes_df = pd.concat([targets_df, attackers_df])\n\nnodes_df.sort_values(by='attacks', ascending=False)[:5]\n\n# Cell:\n# Add\n\n\n# New encodings features requires api=3: `graphistry.register(api=3, username='...', password='...')\n\ng2 = (g\n .nodes(nodes_df, 'node_id')\n\n # 'red', '#f00', '#ff0000'\n .encode_point_color('type', categorical_mapping={\n 'attacker': 'red',\n 'victim': 'white'\n }, default_mapping='gray')\n\n # Icons: https://fontawesome.com/v4.7/cheatsheet/\n .encode_point_icon('type', categorical_mapping={\n 'attacker': 'bomb',\n 'victim': 'laptop'\n })\n\n # Gradient\n .encode_edge_color('time(min)', palette=['blue', 'purple', 'red'], as_continuous=True)\n\n .encode_point_size('attacks')\n\n .addStyle(bg={'color': '#eee'}, page={'title': 'My Graph'})\n\n # Options: https://hub.graphistry.com/docs/api/1/rest/url/\n .settings(url_params={'play': 1000, 'pointSize': 0.5})\n)\n\ng2.plot(as_files=False)",
"Advanced bindings work with hypergraphs too\nHypergraphs precompute a lot of values on nodes and edges, which we can use to drive clearer visualizations",
"hg2_g._nodes.sample(3)\n\nhg2_g._edges.sample(3)\n\n(hg2_g\n\n .encode_point_color('type', categorical_mapping={\n 'attackerIP': 'yellow',\n 'victimIP': 'blue'\n }, default_mapping='gray')\n\n .encode_point_icon('type', categorical_mapping={\n 'attackerIP': 'bomb',\n 'victimIP': 'laptop'\n }, default_mapping='')\n\n .encode_edge_color('time(min)', palette=['blue', 'purple', 'red'], as_continuous=True)\n\n .settings(url_params={'pointsOfInterestMax': 10})\n\n).plot()",
"Further reading:\n\nPyGraphistry\nPyGraphistry demos: database connectors, ...\ngraph-app-kit: Streamlit graph dashboarding\nUI Guide\nCSV upload notebook app"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
merryjman/astronomy
|
H-R Project/sample.ipynb
|
gpl-3.0
|
[
"Star catalogue analysis\nThanks to UCF Physics undergrad Tyler Townsend for contributing to the development of this notebook.",
"# Import modules that contain functions we need\nimport pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt",
"Getting the data",
"# Read in data that will be used for the calculations.\n# Using pandas read_csv method, we can create a data frame\ndata = pd.read_csv(\"https://github.com/adamlamee/CODINGinK12-data/raw/master/stars.csv\")\n\n# We wish too look at the first 5 rows of our data set\ndata.head(5)",
"Star map",
"fig = plt.figure(figsize=(15, 4))\nplt.scatter(data.ra,data.dec, s=0.01)\nplt.xlim(24, 0)\nplt.title(\"All the Stars in the Catalogue\")\nplt.xlabel('right ascension')\nplt.ylabel('declination')",
"Does hotter mean brighter?\nReferences\n\nThe data came from The Astronomy Nexus and their colletion of the Hipparcos, Yale Bright Star, and Gliese catalogues (huge zip file here).\nReversed H-R diagram from The Electric Universe"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jmhsi/justin_tinker
|
data_science/courses/deeplearning1/nbs/lesson3.ipynb
|
apache-2.0
|
[
"Training a better model",
"from theano.sandbox import cuda\n\n%matplotlib inline\nimport utils; reload(utils)\nfrom utils import *\nfrom __future__ import division, print_function\n\n#path = \"data/dogscats/sample/\"\npath = \"data/dogscats/\"\nmodel_path = path + 'models/'\nif not os.path.exists(model_path): os.mkdir(model_path)\n\nbatch_size=64",
"Are we underfitting?\nOur validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:\n\nHow is this possible?\nIs this desirable?\n\nThe answer to (1) is that this is happening because of dropout. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability p (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.\nThe purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.\nSo the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!\n(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.)\nRemoving dropout\nOur high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:\n- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)\n- Split the model between the convolutional (conv) layers and the dense layers\n- Pre-calculate the output of the conv layers, so that we don't have to redundently re-calculate them on every epoch\n- Create a new model with just the dense layers, and dropout p set to zero\n- Train this new model using the output of the conv layers as training data.\nAs before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent...",
"model = vgg_ft(2)",
"...and load our fine-tuned weights.",
"model.load_weights(model_path+'finetune3.h5')",
"We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer:",
"layers = model.layers\n\nlast_conv_idx = [index for index,layer in enumerate(layers) \n if type(layer) is Convolution2D][-1]\n\nlast_conv_idx\n\nlayers[last_conv_idx]\n\nconv_layers = layers[:last_conv_idx+1]\nconv_model = Sequential(conv_layers)\n# Dense layers - also known as fully connected or 'FC' layers\nfc_layers = layers[last_conv_idx+1:]",
"Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of \"recipes\" that can get us a long way!",
"batches = get_batches(path+'train', shuffle=False, batch_size=batch_size)\nval_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)\n\nval_classes = val_batches.classes\ntrn_classes = batches.classes\nval_labels = onehot(val_classes)\ntrn_labels = onehot(trn_classes)\n\nbatches.class_indices\n\nval_features = conv_model.predict_generator(val_batches, val_batches.nb_sample)\n\ntrn_features = conv_model.predict_generator(batches, batches.nb_sample)\n\nsave_array(model_path + 'train_convlayer_features.bc', trn_features)\nsave_array(model_path + 'valid_convlayer_features.bc', val_features)\n\ntrn_features = load_array(model_path+'train_convlayer_features.bc')\nval_features = load_array(model_path+'valid_convlayer_features.bc')\n\ntrn_features.shape",
"For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.",
"# Copy the weights from the pre-trained model.\n# NB: Since we're removing dropout, we want to half the weights\ndef proc_wgts(layer): return [o/2 for o in layer.get_weights()]\n\n# Such a finely tuned model needs to be updated very slowly!\nopt = RMSprop(lr=0.00001, rho=0.7)\n\ndef get_fc_model():\n model = Sequential([\n MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),\n Flatten(),\n Dense(4096, activation='relu'),\n Dropout(0.),\n Dense(4096, activation='relu'),\n Dropout(0.),\n Dense(2, activation='softmax')\n ])\n\n for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))\n\n model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])\n return model\n\nfc_model = get_fc_model()",
"And fit the model in the usual way:",
"fc_model.fit(trn_features, trn_labels, nb_epoch=8, \n batch_size=batch_size, validation_data=(val_features, val_labels))\n\nfc_model.save_weights(model_path+'no_dropout.h5')\n\nfc_model.load_weights(model_path+'no_dropout.h5')",
"Reducing overfitting\nNow that we've gotten the model to overfit, we can take a number of steps to reduce this.\nApproaches to reducing overfitting\nWe do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularlization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):\n\nAdd more data\nUse data augmentation\nUse architectures that generalize well\nAdd regularization\nReduce architecture complexity.\n\nWe'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.\nWhich types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)\nWe recommend always using at least some light data augmentation, unless you have so much data that your model will never see the same input twice.\nAbout data augmentation\nKeras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch randomly is changed according to these settings. Here's how to define a generator that includes data augmentation:",
"# dim_ordering='tf' uses tensorflow dimension ordering,\n# which is the same order as matplotlib uses for display.\n# Therefore when just using for display purposes, this is more convenient\ngen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, \n height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, \n channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')",
"Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).",
"# Create a 'batch' of a single image\nimg = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0)\n# Request the generator to create batches from this image\naug_iter = gen.flow(img)\n\n# Get eight examples of these augmented images\naug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]\n\n# The original\nplt.imshow(img[0])",
"As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches.",
"# Augmented data\nplots(aug_imgs, (20,7), 2)\n\n# Ensure that we return to theano dimension ordering\nK.set_image_dim_ordering('th')",
"Adding data augmentation\nLet's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it:",
"gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, \n height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)\n\nbatches = get_batches(path+'train', gen, batch_size=batch_size)\n# NB: We don't want to augment or shuffle the validation set\nval_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)",
"When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.\nTherefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable:",
"fc_model = get_fc_model()\n\nfor layer in conv_model.layers: layer.trainable = False\n# Look how easy it is to connect two models together!\nconv_model.add(fc_model)",
"Now we can compile, train, and save our model as usual - note that we use fit_generator() since we want to pull random images from the directories on every batch.",
"conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])\n\nconv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8, \n validation_data=val_batches, nb_val_samples=val_batches.nb_sample)\n\nconv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3, \n validation_data=val_batches, nb_val_samples=val_batches.nb_sample)\n\nconv_model.save_weights(model_path + 'aug1.h5')\n\nconv_model.load_weights(model_path + 'aug1.h5')",
"Batch normalization\nAbout batch normalization\nBatch normalization (batchnorm) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called normalization. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.\nPrior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights.\nBatchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that all modern networks should use batchnorm, or something equivalent. There are two reasons for this:\n1. Adding batchnorm to a model can result in 10x or more improvements in training speed\n2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to reduce overfitting.\nAs promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:\n1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitary mean\n2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.\nThis ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so.\nAdding batchnorm to the model\nWe can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers):",
"conv_layers[-1].output_shape[1:]\n\ndef get_bn_layers(p):\n return [\n MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),\n Flatten(),\n Dense(4096, activation='relu'),\n BatchNormalization(),\n Dropout(p),\n Dense(4096, activation='relu'),\n BatchNormalization(),\n Dropout(p),\n Dense(1000, activation='softmax')\n ]\n\ndef load_fc_weights_from_vgg16bn(model):\n \"Load weights for model from the dense layers of the Vgg16BN model.\"\n # See imagenet_batchnorm.ipynb for info on how the weights for\n # Vgg16BN can be generated from the standard Vgg16 weights.\n from vgg16bn import Vgg16BN\n vgg16_bn = Vgg16BN()\n _, fc_layers = split_at(vgg16_bn.model, Convolution2D)\n copy_weights(fc_layers, model.layers)\n\np=0.6\n\nbn_model = Sequential(get_bn_layers(0.6))\n\nload_fc_weights_from_vgg16bn(bn_model)\n\ndef proc_wgts(layer, prev_p, new_p):\n scal = (1-prev_p)/(1-new_p)\n return [o*scal for o in layer.get_weights()]\n\nfor l in bn_model.layers: \n if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6))\n\nbn_model.pop()\nfor layer in bn_model.layers: layer.trainable=False\n\nbn_model.add(Dense(2,activation='softmax'))\n\nbn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])\n\nbn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels))\n\nbn_model.save_weights(model_path+'bn.h5')\n\nbn_model.load_weights(model_path+'bn.h5')\n\nbn_layers = get_bn_layers(0.6)\nbn_layers.pop()\nbn_layers.append(Dense(2,activation='softmax'))\n\nfinal_model = Sequential(conv_layers)\nfor layer in final_model.layers: layer.trainable = False\nfor layer in bn_layers: final_model.add(layer)\n\nfor l1,l2 in zip(bn_model.layers, bn_layers):\n l2.set_weights(l1.get_weights())\n\nfinal_model.compile(optimizer=Adam(), \n loss='categorical_crossentropy', metrics=['accuracy'])\n\nfinal_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, \n validation_data=val_batches, nb_val_samples=val_batches.nb_sample)\n\nfinal_model.save_weights(model_path + 'final1.h5')\n\nfinal_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, \n validation_data=val_batches, nb_val_samples=val_batches.nb_sample)\n\nfinal_model.save_weights(model_path + 'final2.h5')\n\nfinal_model.optimizer.lr=0.001\n\nfinal_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4, \n validation_data=val_batches, nb_val_samples=val_batches.nb_sample)\n\nbn_model.save_weights(model_path + 'final3.h5')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
phoebe-project/phoebe2-docs
|
development/examples/plbum_method_compare.ipynb
|
gpl-3.0
|
[
"Comparing pblum methods\nHere we'll look into the influence of pblum_method on the resulting luminosities as a function of the stellar distortion (only applicable for alternate backends).",
"import phoebe\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nb = phoebe.default_binary()\n\nb.add_dataset('lc')\n\nb.add_compute('ellc')",
"And to avoid any issues with falling outside the atmosphere grids, we'll set a simple flat limb-darkening model and disable irradiation.",
"b.set_value_all('ld_mode', 'manual')\nb.set_value_all('ld_func', 'linear')\nb.set_value_all('ld_coeffs', [0.5])\nb.set_value_all('irrad_method', 'none')\n\nb.set_value_all('atm', 'ck2004')\n\nrequiv_max = b.get_value('requiv_max', component='primary', context='component')\nrequiv_max_factors = np.arange(0.3,1.0,0.05)\nsb_pblum_abs = np.zeros_like(requiv_max_factors)\nph_pblum_abs = np.zeros_like(requiv_max_factors)\n\nfor i,requiv_max_factor in enumerate(requiv_max_factors):\n b.set_value('requiv', component='primary', value=requiv_max_factor*requiv_max)\n \n sb_pblum_abs[i] = b.compute_pblums(compute='ellc01', pblum_method='stefan-boltzmann', pblum_abs=True)['pblum_abs@primary@lc01'].value\n ph_pblum_abs[i] = b.compute_pblums(compute='ellc01', pblum_method='phoebe', pblum_abs=True)['pblum_abs@primary@lc01'].value",
"Here we can see that Stefan-Boltzmann (which assumes spherical stars) is an increasingly bad approximation as the distortion of the star increase (as expected). But even in the quite detached case, the luminosities are not in great agreement. For this reason it is important to not trust absolute pblum values when using pblum_method='stefan-boltzmann', but rather just use them as a nuisance parameter or original estimate to adjust the light-levels.",
"_ = plt.plot(requiv_max_factors, sb_pblum_abs, 'k-', label='Stefan-Boltzmann')\n_ = plt.plot(requiv_max_factors, ph_pblum_abs, 'b-', label='PHOEBE mesh')\n_ = plt.xlabel('requiv / requiv_max')\n_ = plt.ylabel('L (W)')\n_ = plt.legend()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
piskvorky/gensim
|
docs/notebooks/Varembed.ipynb
|
lgpl-2.1
|
[
"VarEmbed Tutorial\nVarembed is a word embedding model incorporating morphological information, capturing shared sub-word features. Unlike previous work that constructs word embeddings directly from morphemes, varembed combines morphological and distributional information in a unified probabilistic framework. Varembed thus yields improvements on intrinsic word similarity evaluations. Check out the original paper, arXiv:1608.01056 accepted in EMNLP 2016.\nVarembed is now integrated into Gensim providing ability to load already trained varembed models into gensim with additional functionalities over word vectors already present in gensim.\nThis Tutorial\nIn this tutorial you will learn how to train, load and evaluate varembed model on your data.\nTrain Model\nThe authors provide their code to train a varembed model. Checkout the repository MorphologicalPriorsForWordEmbeddings for to train a varembed model. You'll need to use that code if you want to train a model. \nLoad Varembed Model\nNow that you have an already trained varembed model, you can easily load the varembed word vectors directly into Gensim. <br>\nFor that, you need to provide the path to the word vectors pickle file generated after you train the model and run the script to package varembed embeddings provided in the varembed source code repository.\nWe'll use a varembed model trained on Lee Corpus as the vocabulary, which is already available in gensim.",
"from gensim.models.wrappers import varembed\n\nvector_file = '../../gensim/test/test_data/varembed_leecorpus_vectors.pkl'\nmodel = varembed.VarEmbed.load_varembed_format(vectors=vector_file)",
"This loads a varembed model into Gensim. Also if you want to load with morphemes added into the varembed vectors, you just need to also provide the path to the trained morfessor model binary as an argument. This works as an optional parameter, if not provided, it would just load the varembed vectors without morphemes.",
"morfessor_file = '../../gensim/test/test_data/varembed_leecorpus_morfessor.bin'\nmodel_with_morphemes = varembed.VarEmbed.load_varembed_format(vectors=vector_file, morfessor_model=morfessor_file)",
"This helps load trained varembed models into Gensim. Now you can use this for any of the Keyed Vector functionalities, like 'most_similar', 'similarity' and so on, already provided in gensim.",
"model.most_similar('government')\n\nmodel.similarity('peace', 'grim')",
"Conclusion\nIn this tutorial, we learnt how to load already trained varembed models vectors into gensim and easily use and evaluate it. That's it!\nResources\n\nVarembed Source Code\nGensim\nLee Corpus"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
yevheniyc/Projects
|
1j_NLP_Python/ex01.ipynb
|
mit
|
[
"Exercise 01: extract text from HTML\nThe course repo has a subdirectory called html which includes some example HTML files.",
"%sx ls html/",
"Select one of those files to use as an example, and take a look at its HTML content.",
"file = \"html/article1.html\"\nprint(open(file, \"r\").readlines())",
"Next, use Beautiful Soup to extract text out of the HTML. Following the DOM structure of the HTML document, select the <div/> that encloses the article text, then iterate through the <p/> paragraphs to extract the text from each.",
"from bs4 import BeautifulSoup\n\nwith open(file, \"r\") as f:\n soup = BeautifulSoup(f, \"html.parser\")\n\n for div in soup.find_all(\"div\", id=\"article-body\"):\n for p in div.find_all(\"p\"):\n print(p.get_text(), \"\\n\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
dev/_downloads/88563c785f9a977b7ce2000e660aeacf/30_annotate_raw.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Annotating continuous data\nThis tutorial describes adding annotations to a ~mne.io.Raw object,\nand how annotations are used in later stages of data processing.\nAs usual we'll start by importing the modules we need, loading some\nexample data <sample-dataset>, and (since we won't actually analyze the\nraw data in this tutorial) cropping the ~mne.io.Raw object to just 60\nseconds before loading it into RAM to save memory:",
"import os\nfrom datetime import timedelta\nimport mne\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)\nraw.crop(tmax=60).load_data()",
"~mne.Annotations in MNE-Python are a way of storing short strings of\ninformation about temporal spans of a ~mne.io.Raw object. Below the\nsurface, ~mne.Annotations are list-like <list> objects,\nwhere each element comprises three pieces of information: an onset time\n(in seconds), a duration (also in seconds), and a description (a text\nstring). Additionally, the ~mne.Annotations object itself also keeps\ntrack of orig_time, which is a POSIX timestamp_ denoting a real-world\ntime relative to which the annotation onsets should be interpreted.\nCreating annotations programmatically\nIf you know in advance what spans of the ~mne.io.Raw object you want\nto annotate, ~mne.Annotations can be created programmatically, and\nyou can even pass lists or arrays to the ~mne.Annotations\nconstructor to annotate multiple spans at once:",
"my_annot = mne.Annotations(onset=[3, 5, 7], # in seconds\n duration=[1, 0.5, 0.25], # in seconds, too\n description=['AAA', 'BBB', 'CCC'])\nprint(my_annot)",
"Notice that orig_time is None, because we haven't specified it. In\nthose cases, when you add the annotations to a ~mne.io.Raw object,\nit is assumed that the orig_time matches the time of the first sample of\nthe recording, so orig_time will be set to match the recording\nmeasurement date (raw.info['meas_date']).",
"raw.set_annotations(my_annot)\nprint(raw.annotations)\n\n# convert meas_date (a tuple of seconds, microseconds) into a float:\nmeas_date = raw.info['meas_date']\norig_time = raw.annotations.orig_time\nprint(meas_date == orig_time)",
"Since the example data comes from a Neuromag system that starts counting\nsample numbers before the recording begins, adding my_annot to the\n~mne.io.Raw object also involved another automatic change: an offset\nequalling the time of the first recorded sample (raw.first_samp /\nraw.info['sfreq']) was added to the onset values of each annotation\n(see time-as-index for more info on raw.first_samp):",
"time_of_first_sample = raw.first_samp / raw.info['sfreq']\nprint(my_annot.onset + time_of_first_sample)\nprint(raw.annotations.onset)",
"If you know that your annotation onsets are relative to some other time, you\ncan set orig_time before you call :meth:~mne.io.Raw.set_annotations,\nand the onset times will get adjusted based on the time difference between\nyour specified orig_time and raw.info['meas_date'], but without the\nadditional adjustment for raw.first_samp. orig_time can be specified\nin various ways (see the documentation of ~mne.Annotations for the\noptions); here we'll use an ISO 8601_ formatted string, and set it to be 50\nseconds later than raw.info['meas_date'].",
"time_format = '%Y-%m-%d %H:%M:%S.%f'\nnew_orig_time = (meas_date + timedelta(seconds=50)).strftime(time_format)\nprint(new_orig_time)\n\nlater_annot = mne.Annotations(onset=[3, 5, 7],\n duration=[1, 0.5, 0.25],\n description=['DDD', 'EEE', 'FFF'],\n orig_time=new_orig_time)\n\nraw2 = raw.copy().set_annotations(later_annot)\nprint(later_annot.onset)\nprint(raw2.annotations.onset)",
"<div class=\"alert alert-info\"><h4>Note</h4><p>If your annotations fall outside the range of data times in the\n `~mne.io.Raw` object, the annotations outside the data range will\n not be added to ``raw.annotations``, and a warning will be issued.</p></div>\n\nNow that your annotations have been added to a ~mne.io.Raw object,\nyou can see them when you visualize the ~mne.io.Raw object:",
"fig = raw.plot(start=2, duration=6)",
"The three annotations appear as differently colored rectangles because they\nhave different description values (which are printed along the top\nedge of the plot area). Notice also that colored spans appear in the small\nscroll bar at the bottom of the plot window, making it easy to quickly view\nwhere in a ~mne.io.Raw object the annotations are so you can easily\nbrowse through the data to find and examine them.\nAnnotating Raw objects interactively\nAnnotations can also be added to a ~mne.io.Raw object interactively\nby clicking-and-dragging the mouse in the plot window. To do this, you must\nfirst enter \"annotation mode\" by pressing :kbd:a while the plot window is\nfocused; this will bring up the annotation controls:",
"fig = raw.plot(start=2, duration=6)\nfig.fake_keypress('a')",
"The drop-down-menu on the left determines which existing label will be\ncreated by the next click-and-drag operation in the main plot window. New\nannotation descriptions can be added by clicking the :guilabel:Add\ndescription button; the new description will be added to the list of\ndescriptions and automatically selected.\nThe following functions relate to which description is currently selected in\nthe drop-down-menu:\nWith :guilabel:Remove description you can remove description\nincluding the annotations.\nWith :guilabel:Edit description you can edit\nthe description of either only one annotation (the one currently selected)\nor all annotations of a description.\nWith :guilabel:Set Visible you can show or hide descriptions.\nDuring interactive annotation it is also possible to adjust the start and end\ntimes of existing annotations, by clicking-and-dragging on the left or right\nedges of the highlighting rectangle corresponding to that annotation. When\nan annotation is selected (the background of the label at the bottom changes\nto darker) the values for start and stop are visible in two spinboxes and\ncan also be edited there.\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>Calling :meth:`~mne.io.Raw.set_annotations` **replaces** any annotations\n currently stored in the `~mne.io.Raw` object, so be careful when\n working with annotations that were created interactively (you could lose\n a lot of work if you accidentally overwrite your interactive\n annotations). A good safeguard is to run\n ``interactive_annot = raw.annotations`` after you finish an interactive\n annotation session, so that the annotations are stored in a separate\n variable outside the `~mne.io.Raw` object.</p></div>\n\nHow annotations affect preprocessing and analysis\nYou may have noticed that the description for new labels in the annotation\ncontrols window defaults to BAD_. The reason for this is that annotation\nis often used to mark bad temporal spans of data (such as movement artifacts\nor environmental interference that cannot be removed in other ways such as\nprojection <tut-projectors-background> or filtering). Several\nMNE-Python operations\nare \"annotation aware\" and will avoid using data that is annotated with a\ndescription that begins with \"bad\" or \"BAD\"; such operations typically have a\nboolean reject_by_annotation parameter. Examples of such operations are\nindependent components analysis (mne.preprocessing.ICA), functions\nfor finding heartbeat and blink artifacts\n(:func:~mne.preprocessing.find_ecg_events,\n:func:~mne.preprocessing.find_eog_events), and creation of epoched data\nfrom continuous data (mne.Epochs). See tut-reject-data-spans\nfor details.\nOperations on Annotations objects\n~mne.Annotations objects can be combined by simply adding them with\nthe + operator, as long as they share the same orig_time:",
"new_annot = mne.Annotations(onset=3.75, duration=0.75, description='AAA')\nraw.set_annotations(my_annot + new_annot)\nraw.plot(start=2, duration=6)",
"Notice that it is possible to create overlapping annotations, even when they\nshare the same description. This is not possible when annotating\ninteractively; click-and-dragging to create a new annotation that overlaps\nwith an existing annotation with the same description will cause the old and\nnew annotations to be merged.\nIndividual annotations can be accessed by indexing an\n~mne.Annotations object, and subsets of the annotations can be\nachieved by either slicing or indexing with a list, tuple, or array of\nindices:",
"print(raw.annotations[0]) # just the first annotation\nprint(raw.annotations[:2]) # the first two annotations\nprint(raw.annotations[(3, 2)]) # the fourth and third annotations",
"You can also iterate over the annotations within an ~mne.Annotations\nobject:",
"for ann in raw.annotations:\n descr = ann['description']\n start = ann['onset']\n end = ann['onset'] + ann['duration']\n print(\"'{}' goes from {} to {}\".format(descr, start, end))",
"Note that iterating, indexing and slicing ~mne.Annotations all\nreturn a copy, so changes to an indexed, sliced, or iterated element will not\nmodify the original ~mne.Annotations object.",
"# later_annot WILL be changed, because we're modifying the first element of\n# later_annot.onset directly:\nlater_annot.onset[0] = 99\n\n# later_annot WILL NOT be changed, because later_annot[0] returns a copy\n# before the 'onset' field is changed:\nlater_annot[0]['onset'] = 77\n\nprint(later_annot[0]['onset'])",
"Reading and writing Annotations to/from a file\n~mne.Annotations objects have a :meth:~mne.Annotations.save method\nwhich can write :file:.fif, :file:.csv, and :file:.txt formats (the\nformat to write is inferred from the file extension in the filename you\nprovide). Be aware that the format of the onset information that is written\nto the file depends on the file extension. While :file:.csv files store the\nonset as timestamps, :file:.txt files write floats (in seconds). There is a\ncorresponding :func:~mne.read_annotations function to load them from disk:",
"raw.annotations.save('saved-annotations.csv', overwrite=True)\nannot_from_file = mne.read_annotations('saved-annotations.csv')\nprint(annot_from_file)",
".. LINKS"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.19/_downloads/374e7fb88f562b8ceb7b99b07e106d9b/plot_10_raw_overview.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"The Raw data structure: continuous data\nThis tutorial covers the basics of working with raw EEG/MEG data in Python. It\nintroduces the :class:~mne.io.Raw data structure in detail, including how to\nload, query, subselect, export, and plot data from a :class:~mne.io.Raw\nobject. For more info on visualization of :class:~mne.io.Raw objects, see\ntut-visualize-raw. For info on creating a :class:~mne.io.Raw object\nfrom simulated data in a :class:NumPy array <numpy.ndarray>, see\ntut_creating_data_structures.\n :depth: 2\nAs usual we'll start by importing the modules we need:",
"import os\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport mne",
"Loading continuous data\n^^^^^^^^^^^^^^^^^^^^^^^\n.. sidebar:: Datasets in MNE-Python\nThere are ``data_path`` functions for several example datasets in\nMNE-Python (e.g., :func:`mne.datasets.kiloword.data_path`,\n:func:`mne.datasets.spm_face.data_path`, etc). All of them will check the\ndefault download location first to see if the dataset is already on your\ncomputer, and only download it if necessary. The default download\nlocation is also configurable; see the documentation of any of the\n``data_path`` functions for more information.\n\nAs mentioned in the introductory tutorial <tut-overview>,\nMNE-Python data structures are based around\nthe :file:.fif file format from Neuromag. This tutorial uses an\nexample dataset <sample-dataset> in :file:.fif format, so here we'll\nuse the function :func:mne.io.read_raw_fif to load the raw data; there are\nreader functions for a wide variety of other data formats\n<data-formats> as well.\nThere are also several other example datasets\n<datasets> that can be downloaded with just a few lines\nof code. Functions for downloading example datasets are in the\n:mod:mne.datasets submodule; here we'll use\n:func:mne.datasets.sample.data_path to download the \"sample-dataset\"\ndataset, which contains EEG, MEG, and structural MRI data from one subject\nperforming an audiovisual experiment. When it's done downloading,\n:func:~mne.datasets.sample.data_path will return the folder location where\nit put the files; you can navigate there with your file browser if you want\nto examine the files yourself. Once we have the file path, we can load the\ndata with :func:~mne.io.read_raw_fif. This will return a\n:class:~mne.io.Raw object, which we'll store in a variable called raw.",
"sample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file)",
"As you can see above, :func:~mne.io.read_raw_fif automatically displays\nsome information about the file it's loading. For example, here it tells us\nthat there are three \"projection items\" in the file along with the recorded\ndata; those are :term:SSP projectors <projector> calculated to remove\nenvironmental noise from the MEG signals, and are discussed in a the tutorial\ntut-projectors-background.\nIn addition to the information displayed during loading, you can\nget a glimpse of the basic details of a :class:~mne.io.Raw object by\nprinting it:",
"print(raw)",
"By default, the :samp:mne.io.read_raw_{*} family of functions will not\nload the data into memory (instead the data on disk are memory-mapped_,\nmeaning the data are only read from disk as-needed). Some operations (such as\nfiltering) require that the data be copied into RAM; to do that we could have\npassed the preload=True parameter to :func:~mne.io.read_raw_fif, but we\ncan also copy the data into RAM at any time using the\n:meth:~mne.io.Raw.load_data method. However, since this particular tutorial\ndoesn't do any serious analysis of the data, we'll first\n:meth:~mne.io.Raw.crop the :class:~mne.io.Raw object to 60 seconds so it\nuses less memory and runs more smoothly on our documentation server.",
"raw.crop(tmax=60).load_data()",
"Querying the Raw object\n^^^^^^^^^^^^^^^^^^^^^^^\n.. sidebar:: Attributes vs. Methods\n**Attributes** are usually static properties of Python objects — things\nthat are pre-computed and stored as part of the object's representation\nin memory. Attributes are accessed with the ``.`` operator and do not\nrequire parentheses after the attribute name (example: ``raw.ch_names``).\n\n**Methods** are like specialized functions attached to an object.\nUsually they require additional user input and/or need some computation\nto yield a result. Methods always have parentheses at the end; additional\narguments (if any) go inside those parentheses (examples:\n``raw.estimate_rank()``, ``raw.drop_channels(['EEG 030', 'MEG 2242'])``).\n\nWe saw above that printing the :class:~mne.io.Raw object displays some\nbasic information like the total number of channels, the number of time\npoints at which the data were sampled, total duration, and the approximate\nsize in memory. Much more information is available through the various\nattributes and methods of the :class:~mne.io.Raw class. Some useful\nattributes of :class:~mne.io.Raw objects include a list of the channel\nnames (:attr:~mne.io.Raw.ch_names), an array of the sample times in seconds\n(:attr:~mne.io.Raw.times), and the total number of samples\n(:attr:~mne.io.Raw.n_times); a list of all attributes and methods is given\nin the documentation of the :class:~mne.io.Raw class.\nThe Raw.info attribute\n~~~~~~~~~~~~~~~~~~~~~~~~~~\nThere is also quite a lot of information stored in the raw.info\nattribute, which stores an :class:~mne.Info object that is similar to a\n:class:Python dictionary <dict> (in that it has fields accessed via named\nkeys). Like Python dictionaries, raw.info has a .keys() method that\nshows all the available field names; unlike Python dictionaries, printing\nraw.info will print a nicely-formatted glimpse of each field's data. See\ntut-info-class for more on what is stored in :class:~mne.Info\nobjects, and how to interact with them.",
"n_time_samps = raw.n_times\ntime_secs = raw.times\nch_names = raw.ch_names\nn_chan = len(ch_names) # note: there is no raw.n_channels attribute\nprint('the (cropped) sample data object has {} time samples and {} channels.'\n ''.format(n_time_samps, n_chan))\nprint('The last time sample is at {} seconds.'.format(time_secs[-1]))\nprint('The first few channel names are {}.'.format(', '.join(ch_names[:3])))\nprint() # insert a blank line in the output\n\n# some examples of raw.info:\nprint('bad channels:', raw.info['bads']) # chs marked \"bad\" during acquisition\nprint(raw.info['sfreq'], 'Hz') # sampling frequency\nprint(raw.info['description'], '\\n') # miscellaneous acquisition info\n\nprint(raw.info)",
"<div class=\"alert alert-info\"><h4>Note</h4><p>Most of the fields of ``raw.info`` reflect metadata recorded at\n acquisition time, and should not be changed by the user. There are a few\n exceptions (such as ``raw.info['bads']`` and ``raw.info['projs']``), but\n in most cases there are dedicated MNE-Python functions or methods to\n update the :class:`~mne.Info` object safely (such as\n :meth:`~mne.io.Raw.add_proj` to update ``raw.info['projs']``).</p></div>\n\nTime, sample number, and sample index\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n.. sidebar:: Sample numbering in VectorView data\nFor data from VectorView systems, it is important to distinguish *sample\nnumber* from *sample index*. See :term:`first_samp` for more information.\n\nOne method of :class:~mne.io.Raw objects that is frequently useful is\n:meth:~mne.io.Raw.time_as_index, which converts a time (in seconds) into\nthe integer index of the sample occurring closest to that time. The method\ncan also take a list or array of times, and will return an array of indices.\nIt is important to remember that there may not be a data sample at exactly\nthe time requested, so the number of samples between time = 1 second and\ntime = 2 seconds may be different than the number of samples between\ntime = 2 and time = 3:",
"print(raw.time_as_index(20))\nprint(raw.time_as_index([20, 30, 40]), '\\n')\n\nprint(np.diff(raw.time_as_index([1, 2, 3])))",
"Modifying Raw objects\n^^^^^^^^^^^^^^^^^^^^^^^^^\n.. sidebar:: len(raw)\nAlthough the :class:`~mne.io.Raw` object underlyingly stores data samples\nin a :class:`NumPy array <numpy.ndarray>` of shape (n_channels,\nn_timepoints), the :class:`~mne.io.Raw` object behaves differently from\n:class:`NumPy arrays <numpy.ndarray>` with respect to the :func:`len`\nfunction. ``len(raw)`` will return the number of timepoints (length along\ndata axis 1), not the number of channels (length along data axis 0).\nHence in this section you'll see ``len(raw.ch_names)`` to get the number\nof channels.\n\n:class:~mne.io.Raw objects have a number of methods that modify the\n:class:~mne.io.Raw instance in-place and return a reference to the modified\ninstance. This can be useful for method chaining_\n(e.g., raw.crop(...).filter(...).pick_channels(...).plot())\nbut it also poses a problem during interactive analysis: if you modify your\n:class:~mne.io.Raw object for an exploratory plot or analysis (say, by\ndropping some channels), you will then need to re-load the data (and repeat\nany earlier processing steps) to undo the channel-dropping and try something\nelse. For that reason, the examples in this section frequently use the\n:meth:~mne.io.Raw.copy method before the other methods being demonstrated,\nso that the original :class:~mne.io.Raw object is still available in the\nvariable raw for use in later examples.\nSelecting, dropping, and reordering channels\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nAltering the channels of a :class:~mne.io.Raw object can be done in several\nways. As a first example, we'll use the :meth:~mne.io.Raw.pick_types method\nto restrict the :class:~mne.io.Raw object to just the EEG and EOG channels:",
"eeg_and_eog = raw.copy().pick_types(meg=False, eeg=True, eog=True)\nprint(len(raw.ch_names), '→', len(eeg_and_eog.ch_names))",
"Similar to the :meth:~mne.io.Raw.pick_types method, there is also the\n:meth:~mne.io.Raw.pick_channels method to pick channels by name, and a\ncorresponding :meth:~mne.io.Raw.drop_channels method to remove channels by\nname:",
"raw_temp = raw.copy()\nprint('Number of channels in raw_temp:')\nprint(len(raw_temp.ch_names), end=' → drop two → ')\nraw_temp.drop_channels(['EEG 037', 'EEG 059'])\nprint(len(raw_temp.ch_names), end=' → pick three → ')\nraw_temp.pick_channels(['MEG 1811', 'EEG 017', 'EOG 061'])\nprint(len(raw_temp.ch_names))",
"If you want the channels in a specific order (e.g., for plotting),\n:meth:~mne.io.Raw.reorder_channels works just like\n:meth:~mne.io.Raw.pick_channels but also reorders the channels; for\nexample, here we pick the EOG and frontal EEG channels, putting the EOG\nfirst and the EEG in reverse order:",
"channel_names = ['EOG 061', 'EEG 003', 'EEG 002', 'EEG 001']\neog_and_frontal_eeg = raw.copy().reorder_channels(channel_names)\nprint(eog_and_frontal_eeg.ch_names)",
"Changing channel name and type\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n.. sidebar:: Long channel names\nDue to limitations in the :file:`.fif` file format (which MNE-Python uses\nto save :class:`~mne.io.Raw` objects), channel names are limited to a\nmaximum of 15 characters.\n\nYou may have noticed that the EEG channel names in the sample data are\nnumbered rather than labelled according to a standard nomenclature such as\nthe 10-20 <ten_twenty_> or 10-05 <ten_oh_five_> systems, or perhaps it\nbothers you that the channel names contain spaces. It is possible to rename\nchannels using the :meth:~mne.io.Raw.rename_channels method, which takes a\nPython dictionary to map old names to new names. You need not rename all\nchannels at once; provide only the dictionary entries for the channels you\nwant to rename. Here's a frivolous example:",
"raw.rename_channels({'EOG 061': 'blink detector'})",
"This next example replaces spaces in the channel names with underscores,\nusing a Python dict comprehension_:",
"print(raw.ch_names[-3:])\nchannel_renaming_dict = {name: name.replace(' ', '_') for name in raw.ch_names}\nraw.rename_channels(channel_renaming_dict)\nprint(raw.ch_names[-3:])",
"If for some reason the channel types in your :class:~mne.io.Raw object are\ninaccurate, you can change the type of any channel with the\n:meth:~mne.io.Raw.set_channel_types method. The method takes a\n:class:dictionary <dict> mapping channel names to types; allowed types are\necg, eeg, emg, eog, exci, ias, misc, resp, seeg, stim, syst, ecog, hbo,\nhbr. A common use case for changing channel type is when using frontal EEG\nelectrodes as makeshift EOG channels:",
"raw.set_channel_types({'EEG_001': 'eog'})\nprint(raw.copy().pick_types(meg=False, eog=True).ch_names)",
"Selection in the time domain\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIf you want to limit the time domain of a :class:~mne.io.Raw object, you\ncan use the :meth:~mne.io.Raw.crop method, which modifies the\n:class:~mne.io.Raw object in place (we've seen this already at the start of\nthis tutorial, when we cropped the :class:~mne.io.Raw object to 60 seconds\nto reduce memory demands). :meth:~mne.io.Raw.crop takes parameters tmin\nand tmax, both in seconds (here we'll again use :meth:~mne.io.Raw.copy\nfirst to avoid changing the original :class:~mne.io.Raw object):",
"raw_selection = raw.copy().crop(tmin=10, tmax=12.5)\nprint(raw_selection)",
":meth:~mne.io.Raw.crop also modifies the :attr:~mne.io.Raw.first_samp and\n:attr:~mne.io.Raw.times attributes, so that the first sample of the cropped\nobject now corresponds to time = 0. Accordingly, if you wanted to re-crop\nraw_selection from 11 to 12.5 seconds (instead of 10 to 12.5 as above)\nthen the subsequent call to :meth:~mne.io.Raw.crop should get tmin=1\n(not tmin=11), and leave tmax unspecified to keep everything from\ntmin up to the end of the object:",
"print(raw_selection.times.min(), raw_selection.times.max())\nraw_selection.crop(tmin=1)\nprint(raw_selection.times.min(), raw_selection.times.max())",
"Remember that sample times don't always align exactly with requested tmin\nor tmax values (due to sampling), which is why the max values of the\ncropped files don't exactly match the requested tmax (see\ntime-as-index for further details).\nIf you need to select discontinuous spans of a :class:~mne.io.Raw object —\nor combine two or more separate :class:~mne.io.Raw objects — you can use\nthe :meth:~mne.io.Raw.append method:",
"raw_selection1 = raw.copy().crop(tmin=30, tmax=30.1) # 0.1 seconds\nraw_selection2 = raw.copy().crop(tmin=40, tmax=41.1) # 1.1 seconds\nraw_selection3 = raw.copy().crop(tmin=50, tmax=51.3) # 1.3 seconds\nraw_selection1.append([raw_selection2, raw_selection3]) # 2.5 seconds total\nprint(raw_selection1.times.min(), raw_selection1.times.max())",
"<div class=\"alert alert-danger\"><h4>Warning</h4><p>Be careful when concatenating :class:`~mne.io.Raw` objects from different\n recordings, especially when saving: :meth:`~mne.io.Raw.append` only\n preserves the ``info`` attribute of the initial :class:`~mne.io.Raw`\n object (the one outside the :meth:`~mne.io.Raw.append` method call).</p></div>\n\nExtracting data from Raw objects\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nSo far we've been looking at ways to modify a :class:~mne.io.Raw object.\nThis section shows how to extract the data from a :class:~mne.io.Raw object\ninto a :class:NumPy array <numpy.ndarray>, for analysis or plotting using\nfunctions outside of MNE-Python. To select portions of the data,\n:class:~mne.io.Raw objects can be indexed using square brackets. However,\nindexing :class:~mne.io.Raw works differently than indexing a :class:NumPy\narray <numpy.ndarray> in two ways:\n\n\nAlong with the requested sample value(s) MNE-Python also returns an array\n of times (in seconds) corresponding to the requested samples. The data\n array and the times array are returned together as elements of a tuple.\n\n\nThe data array will always be 2-dimensional even if you request only a\n single time sample or a single channel.\n\n\nExtracting data by index\n~~~~~~~~~~~~~~~~~~~~~~~~\nTo illustrate the above two points, let's select a couple seconds of data\nfrom the first channel:",
"sampling_freq = raw.info['sfreq']\nstart_stop_seconds = np.array([11, 13])\nstart_sample, stop_sample = (start_stop_seconds * sampling_freq).astype(int)\nchannel_index = 0\nraw_selection = raw[channel_index, start_sample:stop_sample]\nprint(raw_selection)",
"You can see that it contains 2 arrays. This combination of data and times\nmakes it easy to plot selections of raw data (although note that we're\ntransposing the data array so that each channel is a column instead of a row,\nto match what matplotlib expects when plotting 2-dimensional y against\n1-dimensional x):",
"x = raw_selection[1]\ny = raw_selection[0].T\nplt.plot(x, y)",
"Extracting channels by name\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe :class:~mne.io.Raw object can also be indexed with the names of\nchannels instead of their index numbers. You can pass a single string to get\njust one channel, or a list of strings to select multiple channels. As with\ninteger indexing, this will return a tuple of (data_array, times_array)\nthat can be easily plotted. Since we're plotting 2 channels this time, we'll\nadd a vertical offset to one channel so it's not plotted right on top\nof the other one:",
"channel_names = ['MEG_0712', 'MEG_1022']\ntwo_meg_chans = raw[channel_names, start_sample:stop_sample]\ny_offset = np.array([5e-11, 0]) # just enough to separate the channel traces\nx = two_meg_chans[1]\ny = two_meg_chans[0].T + y_offset\nlines = plt.plot(x, y)\nplt.legend(lines, channel_names)",
"Extracting channels by type\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThere are several ways to select all channels of a given type from a\n:class:~mne.io.Raw object. The safest method is to use\n:func:mne.pick_types to obtain the integer indices of the channels you\nwant, then use those indices with the square-bracket indexing method shown\nabove. The :func:~mne.pick_types function uses the :class:~mne.Info\nattribute of the :class:~mne.io.Raw object to determine channel types, and\ntakes boolean or string parameters to indicate which type(s) to retain. The\nmeg parameter defaults to True, and all others default to False,\nso to get just the EEG channels, we pass eeg=True and meg=False:",
"eeg_channel_indices = mne.pick_types(raw.info, meg=False, eeg=True)\neeg_data, times = raw[eeg_channel_indices]\nprint(eeg_data.shape)",
"Some of the parameters of :func:mne.pick_types accept string arguments as\nwell as booleans. For example, the meg parameter can take values\n'mag', 'grad', 'planar1', or 'planar2' to select only\nmagnetometers, all gradiometers, or a specific type of gradiometer. See the\ndocstring of :meth:mne.pick_types for full details.\nThe Raw.get_data() method\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIf you only want the data (not the corresponding array of times),\n:class:~mne.io.Raw objects have a :meth:~mne.io.Raw.get_data method. Used\nwith no parameters specified, it will extract all data from all channels, in\na (n_channels, n_timepoints) :class:NumPy array <numpy.ndarray>:",
"data = raw.get_data()\nprint(data.shape)",
"If you want the array of times, :meth:~mne.io.Raw.get_data has an optional\nreturn_times parameter:",
"data, times = raw.get_data(return_times=True)\nprint(data.shape)\nprint(times.shape)",
"The :meth:~mne.io.Raw.get_data method can also be used to extract specific\nchannel(s) and sample ranges, via its picks, start, and stop\nparameters. The picks parameter accepts integer channel indices, channel\nnames, or channel types, and preserves the requested channel order given as\nits picks parameter.",
"first_channel_data = raw.get_data(picks=0)\neeg_and_eog_data = raw.get_data(picks=['eeg', 'eog'])\ntwo_meg_chans_data = raw.get_data(picks=['MEG_0712', 'MEG_1022'],\n start=1000, stop=2000)\n\nprint(first_channel_data.shape)\nprint(eeg_and_eog_data.shape)\nprint(two_meg_chans_data.shape)",
"Summary of ways to extract data from Raw objects\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe following table summarizes the various ways of extracting data from a\n:class:~mne.io.Raw object.\n.. cssclass:: table-bordered\n.. rst-class:: midvalign\n+-------------------------------------+-------------------------+\n| Python code | Result |\n| | |\n| | |\n+=====================================+=========================+\n| raw.get_data() | :class:NumPy array |\n| | <numpy.ndarray> |\n| | (n_chans × n_samps) |\n+-------------------------------------+-------------------------+\n| raw[:] | :class:tuple of (data |\n+-------------------------------------+ (n_chans × n_samps), |\n| raw.get_data(return_times=True) | times (1 × n_samps)) |\n+-------------------------------------+-------------------------+\n| raw[0, 1000:2000] | |\n+-------------------------------------+ |\n| raw['MEG 0113', 1000:2000] | |\n+-------------------------------------+ |\n| raw.get_data(picks=0, | :class:`tuple` of |\n| start=1000, stop=2000, | (data (1 × 1000), |\n| return_times=True) | times (1 × 1000)) |\n+-------------------------------------+ |\n| raw.get_data(picks='MEG 0113', | |\n| start=1000, stop=2000, | |\n| return_times=True) | |\n+-------------------------------------+-------------------------+\n| raw[7:9, 1000:2000] | |\n+-------------------------------------+ |\n| raw[[2, 5], 1000:2000] | :class:tuple of |\n+-------------------------------------+ (data (2 × 1000), |\n| raw[['EEG 030', 'EOG 061'], | times (1 × 1000)) |\n| 1000:2000] | |\n+-------------------------------------+-------------------------+\nExporting and saving Raw objects\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n:class:~mne.io.Raw objects have a built-in :meth:~mne.io.Raw.save method,\nwhich can be used to write a partially processed :class:~mne.io.Raw object\nto disk as a :file:.fif file, such that it can be re-loaded later with its\nvarious attributes intact (but see precision for an important\nnote about numerical precision when saving).\nThere are a few other ways to export just the sensor data from a\n:class:~mne.io.Raw object. One is to use indexing or the\n:meth:~mne.io.Raw.get_data method to extract the data, and use\n:func:numpy.save to save the data array:",
"data = raw.get_data()\nnp.save(file='my_data.npy', arr=data)",
"It is also possible to export the data to a :class:Pandas DataFrame\n<pandas.DataFrame> object, and use the saving methods that :mod:Pandas\n<pandas> affords. The :class:~mne.io.Raw object's\n:meth:~mne.io.Raw.to_data_frame method is similar to\n:meth:~mne.io.Raw.get_data in that it has a picks parameter for\nrestricting which channels are exported, and start and stop\nparameters for restricting the time domain. Note that, by default, times will\nbe converted to milliseconds, rounded to the nearest millisecond, and used as\nthe DataFrame index; see the scaling_time parameter in the documentation\nof :meth:~mne.io.Raw.to_data_frame for more details.",
"sampling_freq = raw.info['sfreq']\nstart_end_secs = np.array([10, 13])\nstart_sample, stop_sample = (start_end_secs * sampling_freq).astype(int)\ndf = raw.to_data_frame(picks=['eeg'], start=start_sample, stop=stop_sample)\n# then save using df.to_csv(...), df.to_hdf(...), etc\nprint(df.head())",
"<div class=\"alert alert-info\"><h4>Note</h4><p>When exporting data as a :class:`NumPy array <numpy.ndarray>` or\n :class:`Pandas DataFrame <pandas.DataFrame>`, be sure to properly account\n for the `unit of representation <units>` in your subsequent\n analyses.</p></div>\n\n.. LINKS\nhttps://docs.python.org/3/tutorial/datastructures.html#dictionaries"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
milani/cycleindex
|
examples/gama-network.ipynb
|
bsd-3-clause
|
[
"Gahuku-Gama\nIn this tutorial, we learn how to use cycleindex package to calculate balance ratios $R_l$ for Gahuku-Gama network which is a signed network of tribes of Gahuku-Gama aliance structure 1.\nFirst, let's import packages we use throughout this tutorial.",
"import time\nimport numpy as np\nfrom cycleindex.sampling import nrsampling, vxsampling\nfrom cycleindex import clean_matrix, cycle_count, balance_ratio",
"Now, define the network",
"gama_pos = np.array(\n [[0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1],\n [1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1],\n [0,0,0,1,0,1,1,1,0,0,0,0,0,0,0,0],\n [0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0],\n [0,0,0,0,0,0,1,0,1,0,0,0,0,1,0,0],\n [0,0,1,0,0,0,1,1,0,0,1,1,0,0,0,0],\n [0,0,1,0,1,1,0,1,0,0,1,1,1,0,0,0],\n [0,0,1,1,0,1,1,0,0,0,1,1,0,0,0,0],\n [0,0,0,0,1,0,0,0,0,1,0,0,1,0,0,0],\n [0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0],\n [0,0,0,0,0,1,1,1,0,0,0,1,0,0,0,0],\n [0,0,0,0,0,1,1,1,0,0,1,0,0,0,0,0],\n [0,0,0,0,0,0,1,0,1,1,0,0,0,1,0,0],\n [0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0],\n [1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1],\n [1,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0]]\n)\n\ngama_neg = np.array(\n [[0,0,1,1,1,1,0,0,0,0,0,1,0,0,0,0],\n [0,0,1,0,1,1,0,0,1,1,0,0,0,0,0,0],\n [1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n [1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n [1,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1],\n [1,1,0,0,0,0,0,0,1,0,0,0,1,0,0,1],\n [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],\n [0,1,0,0,0,1,0,0,0,0,1,0,0,0,1,0],\n [0,1,0,0,0,0,0,0,0,0,1,0,0,0,1,0],\n [0,0,0,0,0,0,0,0,1,1,0,0,1,0,1,1],\n [1,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1],\n [0,0,0,0,0,1,0,0,0,0,1,0,0,0,1,1],\n [0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,1],\n [0,0,0,0,1,0,0,0,1,1,1,1,1,0,0,0],\n [0,0,0,0,1,1,0,0,0,0,1,1,1,1,0,0]]\n)\n\ngama = gama_pos - gama_neg\n\nprint(\"# nodes: {}\".format(len(gama)))\nprint(\"# positive edges: {}\".format(np.sum(np.where(gama > 0))))\nprint(\"# negative edges: {}\".format(np.sum(np.where(gama < 0))))",
"Preprocess\nWe know that isolated vertices, sinks and sources do not contribute in our calculations. So it is better to remove them. The function clean_matrix helps us with that. Gama network is not a good example, as it contains no isolated vertices nor sinks or sources. But we do it for demonstration purposes.",
"gama_reduced = clean_matrix(gama)\n\nprint(\"# nodes: {}\".format(len(gama_reduced)))\nprint(\"# positive edges: {}\".format(np.sum(np.where(gama_reduced > 0))))\nprint(\"# negative edges: {}\".format(np.sum(np.where(gama_reduced < 0))))",
"Counting cycles\nWe start by counting cycles in the network. To do that, we use cycle_count function. It gets the adjacency matrix and the maximum cycle length we need.",
"?cycle_count # run to see the documentation on the pager.\n\nstart = time.time()\ncounts = cycle_count(gama_reduced,5)\nprint(\"Runtime: {:.2f}s\".format(time.time() - start))\n\nprint(counts)\nprint(np.array(counts)) # Numpy deals with floating-point issues better.",
"The first list shows $N_l^+ - N_l^-$ and the second list shows $N_l^+ + N_l^-$ for $l \\in {0,1,...,5}$ where $N_l^+$ and $N_l^-$ are number of positive and negative simple cycles of length l. For weighted networks, the weight of a cycle is equal to multiplication of the weights of the edges in the cycle.\nIt is easy to calculate $N_l^+$ and $N_l^-$ using these two lists.\nCalculating exact balance ratios\nUse balance_ratio function to calculate $R_l = \\dfrac{N_l^-}{N_l^+ + N_l^-} $. This function has a few tricky parameters that we will discuss later. For now, we want exact ratios as the network is small.",
"start = time.time()\nratios = balance_ratio(gama_reduced, 5, exact=True)\nprint(\"Runtime: {:.2f}s\".format(time.time() - start))\nprint(ratios)",
"Estimating balance ratios defining the sampling algorithm and number of samples needed.\nCycleindex provides two functions for graph sampling:\n\nvxsampling is an implementation of vertex expansion algorithm. It chooses a node at random and tries to expand the forming subgraph by selecting the neighbouring nodes at random and adding them to the subgraph 2.\nnrsampling which tries to sample subgraphs uniformly at random 2. Choose this algorithm if the degree distribution of the network at hand is skewed.\n\nUsing these two functions, we are able to estimate balance ratios where exact calculation is not feasible.",
"start = time.time()\nratios = balance_ratio(gama_reduced, 5, exact=False, n_samples=3000, parallel=False, sampling_func=vxsampling)\nprint(\"Runtime: {:.2f}s\".format(time.time() - start))\nprint(ratios)",
"As you can see, the ratios are not accurate, but good enough. We can also use multiple processes to calculate the ratio.",
"start = time.time()\nratios = balance_ratio(gama_reduced, 5, exact=False, n_samples=3000, parallel=True, sampling_func=vxsampling)\nprint(\"Runtime: {:.2f}s\".format(time.time() - start))\nprint(ratios)",
"The PC I am using has only two cores, so the improvement is not that much. When more cores are available, the algorithm uses all of them.\nEstimating balance ratios upto the desired accuracy\nIn the previous section, we used n_samples argument to specify how many samples to use for estimation. Often, we are not sure how many samples we need to have an accurate estimation. We can use accuracy parameter to specify how accurate we expect the result to be. The function then samples the graph until the ratios converge, i.e. the standard deviation falls below the accuracy specified.",
"start = time.time()\nratios = balance_ratio(gama_reduced, 5, exact=False, accuracy=0.01, parallel=True, sampling_func=vxsampling)\nprint(\"Runtime: {:.2f}s\".format(time.time() - start))\nprint(ratios)",
"References\n[1] http://konect.uni-koblenz.de/networks/ucidata-gama\n[2] Lu X., Bressan S. (2012) Sampling Connected Induced Subgraphs Uniformly at Random. In: Ailamaki A., Bowers S. (eds) Scientific and Statistical Database Management. SSDBM 2012. Lecture Notes in Computer Science, vol 7338. Springer, Berlin, Heidelberg"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
empet/LinAlgCS
|
LeastSqSol.ipynb
|
bsd-3-clause
|
[
"Solutia celor mai mici patrate a unui sistem liniar $Ax=b$\nConsideram o matrice $A=[c_1|c_2|\\ldots|c_n]\\in\\mathbb{R}^{m\\times n}$, $m\\geq n$, un vector $b\\in\\mathbb{R}^m$ si sistemul liniar\n$Ax=b$. \nUn sistem de ecuatii liniare in care numarul ecuatiilor este strict mai mare decat numarul, $n$, al necunoscutelor se numeste sistem supradeterminat.\nDaca sistemul supradeterminat $Ax=b$ este incompatibil, atunci nu exista nici un vector $x\\in\\mathbb{R}^n$ astfel incat $Ax=b$.\nIn acest caz se determina solutia celor mai mici patrate, adica un vector $x^\\in\\mathbb{R}^n$, astfel incat dintre toti vectorii $x$ din $\\mathbb{R}^n$, $x^$ minimizeaza distanta de la $b$ la $Ax$:\n$$ ||b-Ax^||\\leq ||b-Ax||\\quad \\Leftrightarrow \\quad ||b-Ax^||^2\\leq ||b-Ax||^2 \\quad \\forall\\: x\\in\\mathbb{R}^n.$$\nRepetam succint cum se determina solutia celor mai mici patrate (vezi Cursul 7):\nSistemul $Ax=b$ fiind incompatibil, vectorul $b\\in\\mathbb{R}^m$ nu apartine subspatiului $col(A)$, generat de coloanele $c_1, c_2\\ldots, c_n\\in\\mathbb{R}^m$ ale matricii $A$. Proiectia ortogonala, $b'$, a vectorului $b$ pe subspatiul $col(A)$ este o combinatie liniara a coloanelor:\n$$b'=x_1^c_1+x_2^c_2+\\cdots+x_n^c_n=Ax^, \\quad \\mbox{unde}\\:\\: x^=(x_1^, x_2^, \\ldots, x_n^)^T$$\nVectorul $x^*$ astfel asociat este solutia celor mai mici patrate a sistemului $Ax=b$. \n$r=b-Ax^$ este reziduul sistemului, iar norma acestui vector la patrat, $||r||^2=||b-Ax^||^2$ este eroarea la patrat, eroare ce se comite interpretand pe $x^*$ ca solutie a sistemului $Ax=b$.\nIn figura urmatoare ilustram problematica aflarii solutiei celor mai mici patrate.",
"from IPython.display import Image\nImage(filename='Imag/leastsq.png')",
"Solutia celor mai mici patrate a sistemului Ax=b si caracteristici ale solutiei si problemei celor mai mici patrate se obtin apeland functia numpy.linalg.lstsq:\n[xstar, rez2, rangA, singA]=np.linalg.lstsq(A,b)\nFunctia np.linalg.lstsq returneaza o lista de obiecte:\n\n\nxstar este array-ul care contine solutia, $x^*$;\n\n\nrez2 este norma la patrat a reziduului: $\\mbox{rez2}=\\|b-Ax^*\\|^2$ \n\n\nrangA este rangul matricii A\n\n\nsingA sunt valorile singulare ale lui A (se vor studia in Cursul 12)\n\n\nSolutia celor mai mici patrate este solutie a sistemului normal $A^TAx=A^Tb$.\n\n\nDeoarece rang($A^TA$)=rang($A$) (vezi Cursul 7), rezulta ca daca coloanele $c_1, c_2, \\ldots, c_n$ ale matricii $A$ sunt liniar independente,\n atunci $A$ are rangul $n$, iar sistemul normal are o unica solutie,\n$x^*$.\n\n\nDaca rangul lui $A$ este mai mic decat $n$ sistemul normal este compatibil nedeterminat si admite o infinitate de solutii $x^*$, astfel incat pentru fiecare dintre ele eroarea de aproximare este minima:\n\n\n$||b-Ax^*||^2\\leq ||b-Ax||^2$, $\\forall x\\in\\mathbb{R}^n$.\nDam cateva exemple de calcul a solutiei celor mai mici patrate:",
"import numpy as np\nA=np.array([[1,2], [-3,1], [5,-1.2], [-2.14, -3.4]], float)\nb=np.array([-5, 2,1,3], float)\n[xstar, rez2, rang, sing]=np.linalg.lstsq(A,b)\nprint A.shape\nprint 'Rangul matricii A este:', rang\nprint 'Solutia celor mai mici patrate este:', xstar\nprint 'Norma reziduului la patrat:', rez2",
"Daca $m=n$ matricea $A$ este patratica. Daca $A$ este nesingulara, atunci solutia celor mai mici patrate este unica solutie pe care sistemul $Ax=b$ o admite:",
"A=np.array([[1,0,-1], [2,3,-5], [4,-2, 1]],float)\nb=np.array([3, -1, 7], float)\n[xstar, rez2, rang, sing]=np.linalg.lstsq(A,b)\n\nprint 'Rangul matricii A este:', rang\nprint 'Solutia celor mai mici patrate este:', xstar\nprint 'Norma reziduului la patrat:', rez2",
"Rezolvand uzual sistemul normal $A^TAx=A^Tb$ obtinem solutia celor mai mici patrate ca mai sus:",
"M=np.dot(A.transpose(), A)# \nc=np.dot(A.transpose(), b)# Mx=c --> x=inv(M)c\nxx=np.dot(np.linalg.inv(M), c)\nprint xx",
"Sa ilustram solutiile analitice ale sistemului $A^TAx=A^Tb$, in cazul in care rangul matricii $A$, \ndeci si al matricii $A^TA$ este mai mic decat $n$.\nApoi invocam functia np.linalg.lstsq(A,b) si identificam printre solutiile analitice, pe cea numerica returnata de functia np.linalg.leastsq.\nPentru aceasta, construim dintr-o lista o matrice din $\\mathbb{R}^{6\\times 4}$:",
"L=2*[1, -3, 0, 0, -3,0,0,1, 0,0,1,-3]\nA=np.asarray(L, float).reshape((6,4))#convertim lista la un array pe care il redimensionam\nb=np.array([2,8,-3,2,-2,5], float) # vectorul termenilor liberi ai sistemului Ax=b\nprint A\nM=np.dot(A.transpose(), A)#M=A^TA\nc=np.dot(A.transpose(), b)# c=A^T b\nprint M\nprint c",
"Forma scara redusa a matricii prelungite, $\\overline{M}=[M|c]$, a sistemului $Mx=c$ este:\n$$S_{\\overline{M}}^0=\\left[\\begin{array}{rrrrr} 1&0&0&-1/3& -1\\0&1&0&-1/9&-1\\0&0&1&-3&1\\0&0&0&0&0\\end{array}\\right]$$\nPrin urmare sistemul $Mx=c$ este compatibil nedeterminat si daca rezolvam sistemul echivalent definit de forma scara redusa, de mai sus, avem solutiile:\n$$\\begin{array}{ll}\\begin{array}{lll}\nx_1^&=& \\displaystyle\\frac{x_4^}{3}-1\\\nx_2^&=&\\displaystyle\\frac{x_4^}{9}-1\\\nx_3^&=&3x_4^+1\\end{array}\\quad x_4^*\\in\\mathbb{R}\\end{array}$$\nAstfel, pentru fiecare alegere particulara a lui $x_4^$ (necunoscuta secundara a sistemului) obtinem o alta solutie, $x^=(x_4^/3-1, x_4^/9-1, 3x_4^+1, x_4^)^T$, a celor mai mici patrate. Toate aceste solutii au particularitatea ca $Ax^=b'$, unde $b'$ este proiectia ortogonala a lui $b$ pe subspatiul coloanelor, col(A). $b'$ este unic, in timp ce $x^$ nu este unic. Cu alte cuvinte aplicatia liniara $L:\\mathbb{R}^4\\to col(A)$, $L(x)=Ax$ nu este injectiva, pentru ca exista o infinitate de vectori $x^$ care sunt aplicati in acelasi vector $b'\\in col(A)$.\nSolutia numerica a celor mai mici patrate pentru sistemul $Ax=b$, unde $A$ si $b$ sunt matricile de mai sus este:",
"[xstar, rez2, rang, sing]=np.linalg.lstsq(A,b)\n\nprint 'Rangul matricii A este:', rang\nprint 'Solutia celor mai mici patrate este:', xstar\nprint 'Norma reziduului la patrat:', rez2",
"Remarcam ca din infinitatea de solutii analitice, numeric este calculata una singura, corespunzatoare lui\n$x_4^=xstar[3]$ (Atentie! indexarea coordonatelor lui $x^$ este tipica pentru matematica, iar ale lui xstar \n este cea din Python).\nDaca incercam sa transmitem ca argument $A$ al functiei np.linalg.lstsq(A,b) o matrice avand numarul de linii mai mic decat numarul de coloane,\neste afisat un mesaj de eroare, pentru ca solutia celor mai mici patrate exista doar pentru sistemele $Ax=b$, unde numarul liniilor lui $A$ este mai mare sau\negal cu numarul coloanelor:",
"A=np.array([[2, 3, -4], [1,5,-2]], float)\nb=np.array([-1,3,7], float)\n[xstar, rez2, rang, sing]=np.linalg.lstsq(A,b)",
"Aplicatie: Constructia unui model adecvat pentru date\nIn experimentele de laborator sau intr-o observatie statistica se inregistreaza valorile a doua variabile\n$X$ si $Y$, monitorizate: $(x_1,y_1), (x_2, y_2), \\ldots, (x_n,y_n)$.\nPentru a putea face predictii relativ la valorile variabilei $Y$ pe baza valorilor lui $X$,\nse determina din datele de observatie, o relatie functionala, $Y=f(X)$, intre cele doua variabile, care aproximeaza intr-un anume sens datele observate $(x_i, y_i)$, prin $(x_i, \\hat{y}_i=f(x_i))$. Pentru o valoare $X=a$, valoarea predictionata pentru $Y$ este atunci\n$\\hat{y}=f(a)$.\nDeterminarea unei relatii functionale din date se numeste in statistica si machine learning, ajustarea unui model la date (fitting a model to data).\nCel mai simplu model pentru un set de date $(x_i,y_i)$, $i=\\overline{1,n}$, $n>2$, este modelul liniar, $y=ax+b$.\nDaca punctele $(x_i,y_i)$ nu sunt coliniare, atunci nu exista o dreapta care sa le contina si prin urmare sistemul rezultat impunand ca aceste puncte sa verifice ecuatia $y=ax+b$:\n$$\\begin{array}{ccc}\nax_1+b&=&y_1\\\nax_2+b&=&y_2\\\n\\vdots& &\\\nax_n+b&=&y_n\\end{array}$$\neste un sistem incompatibil supradeterminat, in necunoscutele $a,b$, care sunt parametrii dreptei.\nSolutia celor mai mici patrate a acestui sistem este $(a^, b^)$ si ea defineste o dreapta\nde ecuatie $y=a^x+b^$, numita dreapta celor mai mici patrate, deoarece (vezi Cursul 7) aproximand valorile $y_i$ prin $\\hat{y}_i=a^x_i+b^$, suma erorilor la patrat este minima:\n$$\\sum_{i=1}^n (y_i-(a^x_i+b^))^2\\leq \\sum_{i=1}^n (y_i-(ax_i+b))^2, \\quad \\forall\\:\\: a, b\\in\\mathbb{R},$$\nadica dintre toate dreptele din plan, de ecuatie $y=ax+b$, dreapta $y=a^x+b^$ aproximeaza cel mai bine datele.\nIn Machine learning in locul erorii globale la patrat, $Er=\\sum_{i=1}^n(y_i-\\hat{y}i)^2$, returnate de np.linalg.lstsq,\nse analizeaza media aritmetica a erorilor la patrat, calculate in fiecare punct: \n $$\\displaystyle\\frac{1}{n}\\sum{i=1}^n(y_i-\\hat{y}_i)^2$$\nDaca aceasta medie este \"rezonabila\", atunci modelul functional dedus este considerat adecvat pentru date.\nSa ilustram calculul dreptei celor mai mici patrate in Python, pornind de la datele statistice (fictive) ce reprezinta media generala pe tara, la bacalaureat, obtinuta de absolventii de liceu in cativa ani.\n<table border=\"1\" bordercolor=\"#000099\" style=\"background-color:#FFFFFF\" width=\"75%\" cellpadding=\"3\" cellspacing=\"0\">\n <tr>\n <td>Anul absolvirii (x)</td>\n <td>1970</td>\n <td>1978 </td>\n <td>1985</td>\n <td> 1990</td>\n <td>1995 </td>\n <td>2000 </td>\n <td>2006 </td>\n <td>2010 </td>\n </tr>\n <tr>\n <td>Media (y)</td>\n <td>8.75</td>\n <td>8.58</td>\n <td>8.95</td>\n <td>7.90</td>\n <td>8.15</td>\n <td>7.70 </td>\n <td>7.25 </td>\n <td>6.33 </td>\n </tr>\n</table>\n\nSa se determine dreapta celor mai mici patrate, si media patratului erorilor. In ipoteza ca aceasta dreapta este un model adecvat pentru datele inregistrate sa se predictioneze care va fi media la bacalaureat in anul 2014.",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\n \nx=np.array([1970, 1978, 1985, 1990, 1995, 2000, 2006, 2010], float)\ny=np.array([8.75, 8.58, 8.95, 7.90, 8.15, 7.70, 7.25, 6.33], float)\nplt.plot(x,y, 'go')",
"Matricea sistemului de ecuatii $ax_i+b=y_i$, $i=\\overline{1,8}$, o constituim apeland functia\nnp.vstack (tuple) \ncare genereaza un array avand drept linii, array-urile 1D din tuple. Prin transpunerea array-ului generat avem matricea sistemului:",
"A=np.vstack((x, np.ones(x.size))).transpose()\nprint A\n[[a, b], Er]=np.linalg.lstsq(A,y)[:2]# cerem returnarea primelor doua obiecte din lista globala\nprint 'dreapta celor mai mici patrate are parametrii a=', a, 'b=', b\nprint 'Media erorilor la patrat este', Er/len(x)\nprint 'Media la bacalaureat predictionata de model pentru anul 2014 este', a*2014+b\n\nxx=[1965, 2015]\n# dreapta este perfect determinata de doua puncte (1965, a*1965+b), (2015, a*2015+b)\nyy=[]\nfor an in xx:\n yy.append(a*an+b)\nplt.plot(xx, yy, 'r')# este trasata dreapta ce uneste punctele alese\nplt.plot(x,y, 'go')# marcam datele din nou pentru a vedea pozitia fata de dreapta model\nplt.plot(2014, a*2014+b, 'ro')# marcam pe dreapta punctul de coordonate (2014,a*2014+b) \n",
"Din punct de vedere algebric dreapta celor mai mici patrate exista si este unica pentru orice set de date\n$(x_i, y_i)$, $i=\\overline{1,n}$, $n>2$, care nu apartin unei drepte verticale $y=c$. Nu intotdeauna insa dreapta celor mai mici patrate este modelul adecvat pentru date, asa cum se vede din exemplul urmator:",
"\nx=[ 0.63, 1.19, 1.44, 2.13, 2.88, 3, 3.46, 3.66, 3.71, 4.19, 4.29, 4.46, 4.67, 4.83]\ny=[ 3.63, 1.42, 0.81, 0.43, -1.02, -0.22, 0.13, 0.93, 1.48, 2.97, 2.52, 3.41, 4.54, 5.07]\nx=np.array(x)\ny=np.array(y)\nA=np.vstack((x, np.ones(x.size))).transpose()\nplt.plot(x,y, color=\"green\", lw=2, ls='*', marker='o')\n[[a, b], Er]=np.linalg.lstsq(A,y)[:2]\nxx=[0, 5]\nyy=[]\nfor elem in xx:\n yy.append(a*elem+b)\nplt.plot(xx, yy, 'r')\nprint 'Dreapta celor mai mici patrate are parametrii a=', a, 'b=', b\nprint 'Eroarea la patrat a celor mai mici patrate este', Er\nprint 'Media erorii la patrat', Er/len(x)",
"Media erorii la patrat fiind mare, modelul liniar nu pare sa fie potrivit pentru aceste date.\nVizual\nne este sugerata ideea sa incercam un model patratic $y=a x^2+bx+c$.\nImpunand ca cele 14 puncte $(x_i, y_i)$ sa verifice ecuatia patratica,\n$ ax_i^2+bx_i+c=y_i$,\nobtinem un sistem de 14 ecuatii cu 3 necunoscute, $a, b, c$:\n$$\\left[\\begin{array}{ccc} x_1^2&x_1&1\\\n x_2^2&x_2&1\\\n\\vdots&\\vdots&\\vdots\\\nx_n^2&x_n&1\\end{array}\\right]\n\\left[\\begin{array}{c} a\\b\\c\\end{array}\\right]\n=\\left[\\begin{array}{c}y_1\\y_2\\\\vdots\\y_n\\end{array}\\right]$$\nSa-i determinam solutia celor mai mici patrate:",
"n=len(x)\nA=np.vstack((x*x, x, np.ones(x.size))).transpose()\nprint A\n[[a, b, c], Er]=np.linalg.lstsq(A,y)[:2]\nprint 'Parabola celor mai mici patrate are parametrii a=', a, 'b=', b, 'c=', c\nprint 'Eroarea medie', Er/n\nX=np.arange(x[0], x[n-1], 0.01)\nplt.plot(X, a*X*X+b*X+c, 'r')\n#nu am apelat plt.plot(x, a*x*x+b*x+c, 'r') pt ca plt.plot uneste punctele \n#consecutive (x[i], y[i]), (x[i+1], y[i+1])\n#prin segmente de dreapta si punctele fiind \"rare\" nu era trasata o parabola,\n# ci o succesiune de segmente ce o aproximeaza. Testati!!!\nplt.plot(x,y, color=\"green\", lw=2, ls='*', marker='o')",
"Atat eroarea medie, cat si figura de mai sus ilustreaza ca modelul patratic (parabola celor mai mici patrate) este mai adecvat pentru datele considerate.\nIn semestrul doi metoda cel mai mici patrate va fi rafinata prin analiza regresiei.\nIn secventele de cod de mai sus am urmat metoda manuala de calcul a dreptei, respectiv parabolei celor mai mici patrate,\nadica am constituit matricea sistemului supradeterminat asociat si am apelat functia lstsq, pentru a ilustra \nlegatura cu solutia celor mai mici patrate.\nSetului de date\n$(x_i, y_i)$, $i=\\overline{0,n-1}$, $n>k$, i se poate asocia polinomul de grad $k$ al celor mai mici patrate,\n$y=p[0]x^k+p[1]x^{k-1}+\\cdots+p[k-1]x+p[k]$, apeland direct functia np.polyfit(x,y,k).\nArray-ul x, ca argument al functiei, contine abscisele punctelor $(x_i, y_i)$, iar y, ordonatele lor.\nFunctia returneaza coeficientii polinomului si optional reziduul si alte informatii:\n numpy.polyfit",
"p=np.polyfit(x,y,2)\nprint p\n\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"./custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
aaai2018-paperid-62/aaai2018-paperid-62
|
parameter_figures.ipynb
|
mit
|
[
"Figure generation for all parameters\nThis Jupyter notebook generates figures to show the coverage for each parameter.\nThe data\nWe start by loading the CSV file into a pandas DataFrame.",
"import pandas as pd\n\nfile = 'data/evaluations.csv'\nconversion_dict = {'research_type': lambda x: int(x == 'E')}\nevaluation_data = pd.read_csv(file, sep=',', header=0, index_col=0, converters=conversion_dict)\n\nprint('Samples per conference\\n{}'.format(evaluation_data.groupby('conference').size()))",
"Generation\nWe will generate figures for four different categorisations: method, data, and experiment. The categories consist of the following variables: (method) problem, objective/goal, research method, research questions, and pseudo code; (data) training, validation, test, and results data; (experiment) hypothesis, prediction, method source code, hardware specification, software dependencies, experiment setup, experiment source code.",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib\n\nmatplotlib.style.use('ggplot')\n%matplotlib notebook\n\ncolors = matplotlib.cm.get_cmap().colors\nlen_colors = len(colors)\n\ndef plot_bars(data, keys, elements, filename, figsize=(4,4)):\n plot_scores = []\n for (key, element) in zip(keys, elements):\n plot_scores.append(data[element].mean(axis=0))\n \n fig = plt.figure(figsize=figsize)\n ax = plt.subplot(111)\n \n N = len(plot_scores)\n ind = np.arange(N)\n width = 0.7\n \n plot_colors = colors[0:len_colors:int(len_colors/N)]\n ax.bar(ind+0.5, plot_scores, width, align='center',\n alpha=0.5, color=plot_colors)\n for x, y in zip(ind, plot_scores):\n ax.text(x+0.5, 0.95, '{0:.0%}'.format(y),\n ha='center', va='top', size=12)\n ax.set_xlim(0,N)\n ax.set_xticks(ind+0.5)\n ax.set_xticklabels(keys, rotation=35)\n \n ax.set_ylim(0, 1.0)\n ax.set_yticks([0.25, 0.50, 0.75, 1.0])\n ax.set_yticklabels(['25%', r'50%', '75%', '100%'],\n fontdict={'horizontalalignment': 'right'})\n plt.tight_layout()\n plt.savefig('figures/{}.png'.format(filename), format='png',\n bbox_inches='tight')\n\nevaluation_data = evaluation_data.groupby('research_type').get_group(1)\n\nkeys = ['Results', 'Test', 'Valid-\\nation', 'Train']\ncolumns = ['results', 'test', 'validation', 'train'] \nplot_bars(evaluation_data[columns], keys, columns,\n 'freq_data', figsize=(4,3))\n\nkeys = ['Pseudo\\ncode', 'Research\\nquestion',\n 'Research\\nmethod', 'Objective/\\nGoal', 'Problem']\ncolumns = ['pseudocode', 'research_question',\n 'research_method', 'goal/objective', 'problem_description']\nplot_bars(evaluation_data[columns], keys, columns,\n 'freq_method', figsize=(4,3))\n\nkeys = ['Exp.\\ncode', 'Exp.\\nsetup', 'SW\\ndep.',\n 'HW\\nspec.', 'Method\\ncode', 'Prediction', 'Hypothesis']\ncolumns = ['open_experiment_code', 'experiment_setup',\n 'software_dependencies', 'hardware_specification',\n 'open_source_code', 'prediction', 'hypothesis']\nmethod_data = evaluation_data[columns]\nplot_bars(method_data, keys, columns, 'freq_experiment', figsize=(5,3))",
"Versions\nHere's a generated output to keep track of software versions used to run this Jupyter notebook.",
"import IPython\nimport platform\n\nprint('Python version: {}'.format(platform.python_version()))\nprint('IPython version: {}'.format(IPython.__version__))\nprint('matplotlib version: {}'.format(matplotlib.__version__))\nprint('numpy version: {}'.format(np.__version__))\nprint('pandas version: {}'.format(pd.__version__))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/lattice
|
docs/tutorials/premade_models.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"TF Lattice Premade Models\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lattice/tutorials/premade_models\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/premade_models.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/lattice/blob/master/docs/tutorials/premade_models.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/lattice/docs/tutorials/premade_models.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nOverview\nPremade Models are quick and easy ways to build TFL tf.keras.model instances for typical use cases. This guide outlines the steps needed to construct a TFL Premade Model and train/test it. \nSetup\nInstalling TF Lattice package:",
"#@test {\"skip\": true}\n!pip install tensorflow-lattice pydot",
"Importing required packages:",
"import tensorflow as tf\n\nimport copy\nimport logging\nimport numpy as np\nimport pandas as pd\nimport sys\nimport tensorflow_lattice as tfl\nlogging.disable(sys.maxsize)",
"Setting the default values used for training in this guide:",
"LEARNING_RATE = 0.01\nBATCH_SIZE = 128\nNUM_EPOCHS = 500\nPREFITTING_NUM_EPOCHS = 10",
"Downloading the UCI Statlog (Heart) dataset:",
"heart_csv_file = tf.keras.utils.get_file(\n 'heart.csv',\n 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')\nheart_df = pd.read_csv(heart_csv_file)\nthal_vocab_list = ['normal', 'fixed', 'reversible']\nheart_df['thal'] = heart_df['thal'].map(\n {v: i for i, v in enumerate(thal_vocab_list)})\nheart_df = heart_df.astype(float)\n\nheart_train_size = int(len(heart_df) * 0.8)\nheart_train_dict = dict(heart_df[:heart_train_size])\nheart_test_dict = dict(heart_df[heart_train_size:])\n\n# This ordering of input features should match the feature configs. If no\n# feature config relies explicitly on the data (i.e. all are 'quantiles'),\n# then you can construct the feature_names list by simply iterating over each\n# feature config and extracting it's name.\nfeature_names = [\n 'age', 'sex', 'cp', 'chol', 'fbs', 'trestbps', 'thalach', 'restecg',\n 'exang', 'oldpeak', 'slope', 'ca', 'thal'\n]\n\n# Since we have some features that manually construct their input keypoints,\n# we need an index mapping of the feature names.\nfeature_name_indices = {name: index for index, name in enumerate(feature_names)}\n\nlabel_name = 'target'\nheart_train_xs = [\n heart_train_dict[feature_name] for feature_name in feature_names\n]\nheart_test_xs = [heart_test_dict[feature_name] for feature_name in feature_names]\nheart_train_ys = heart_train_dict[label_name]\nheart_test_ys = heart_test_dict[label_name]",
"Feature Configs\nFeature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig), and lattice sizes for lattice models.\nNote that we must fully specify the feature config for any feature that we want our model to recognize. Otherwise the model will have no way of knowing that such a feature exists.\nDefining Our Feature Configs\nNow that we can compute our quantiles, we define a feature config for each feature that we want our model to take as input.",
"# Features:\n# - age\n# - sex\n# - cp chest pain type (4 values)\n# - trestbps resting blood pressure\n# - chol serum cholestoral in mg/dl\n# - fbs fasting blood sugar > 120 mg/dl\n# - restecg resting electrocardiographic results (values 0,1,2)\n# - thalach maximum heart rate achieved\n# - exang exercise induced angina\n# - oldpeak ST depression induced by exercise relative to rest\n# - slope the slope of the peak exercise ST segment\n# - ca number of major vessels (0-3) colored by flourosopy\n# - thal normal; fixed defect; reversable defect\n#\n# Feature configs are used to specify how each feature is calibrated and used.\nheart_feature_configs = [\n tfl.configs.FeatureConfig(\n name='age',\n lattice_size=3,\n monotonicity='increasing',\n # We must set the keypoints manually.\n pwl_calibration_num_keypoints=5,\n pwl_calibration_input_keypoints='quantiles',\n pwl_calibration_clip_max=100,\n # Per feature regularization.\n regularizer_configs=[\n tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),\n ],\n ),\n tfl.configs.FeatureConfig(\n name='sex',\n num_buckets=2,\n ),\n tfl.configs.FeatureConfig(\n name='cp',\n monotonicity='increasing',\n # Keypoints that are uniformly spaced.\n pwl_calibration_num_keypoints=4,\n pwl_calibration_input_keypoints=np.linspace(\n np.min(heart_train_xs[feature_name_indices['cp']]),\n np.max(heart_train_xs[feature_name_indices['cp']]),\n num=4),\n ),\n tfl.configs.FeatureConfig(\n name='chol',\n monotonicity='increasing',\n # Explicit input keypoints initialization.\n pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],\n # Calibration can be forced to span the full output range by clamping.\n pwl_calibration_clamp_min=True,\n pwl_calibration_clamp_max=True,\n # Per feature regularization.\n regularizer_configs=[\n tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),\n ],\n ),\n tfl.configs.FeatureConfig(\n name='fbs',\n # Partial monotonicity: output(0) <= output(1)\n monotonicity=[(0, 1)],\n num_buckets=2,\n ),\n tfl.configs.FeatureConfig(\n name='trestbps',\n monotonicity='decreasing',\n pwl_calibration_num_keypoints=5,\n pwl_calibration_input_keypoints='quantiles',\n ),\n tfl.configs.FeatureConfig(\n name='thalach',\n monotonicity='decreasing',\n pwl_calibration_num_keypoints=5,\n pwl_calibration_input_keypoints='quantiles',\n ),\n tfl.configs.FeatureConfig(\n name='restecg',\n # Partial monotonicity: output(0) <= output(1), output(0) <= output(2)\n monotonicity=[(0, 1), (0, 2)],\n num_buckets=3,\n ),\n tfl.configs.FeatureConfig(\n name='exang',\n # Partial monotonicity: output(0) <= output(1)\n monotonicity=[(0, 1)],\n num_buckets=2,\n ),\n tfl.configs.FeatureConfig(\n name='oldpeak',\n monotonicity='increasing',\n pwl_calibration_num_keypoints=5,\n pwl_calibration_input_keypoints='quantiles',\n ),\n tfl.configs.FeatureConfig(\n name='slope',\n # Partial monotonicity: output(0) <= output(1), output(1) <= output(2)\n monotonicity=[(0, 1), (1, 2)],\n num_buckets=3,\n ),\n tfl.configs.FeatureConfig(\n name='ca',\n monotonicity='increasing',\n pwl_calibration_num_keypoints=4,\n pwl_calibration_input_keypoints='quantiles',\n ),\n tfl.configs.FeatureConfig(\n name='thal',\n # Partial monotonicity:\n # output(normal) <= output(fixed)\n # output(normal) <= output(reversible)\n monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],\n num_buckets=3,\n # We must specify the vocabulary list in order to later set the\n # monotonicities since we used names and not indices.\n vocabulary_list=thal_vocab_list,\n ),\n]",
"Set Monotonicities and Keypoints\nNext we need to make sure to properly set the monotonicities for features where we used a custom vocabulary (such as 'thal' above).",
"tfl.premade_lib.set_categorical_monotonicities(heart_feature_configs)",
"Finally we can complete our feature configs by calculating and setting the keypoints.",
"feature_keypoints = tfl.premade_lib.compute_feature_keypoints(\n feature_configs=heart_feature_configs, features=heart_train_dict)\ntfl.premade_lib.set_feature_keypoints(\n feature_configs=heart_feature_configs,\n feature_keypoints=feature_keypoints,\n add_missing_feature_configs=False)",
"Calibrated Linear Model\nTo construct a TFL premade model, first construct a model configuration from tfl.configs. A calibrated linear model is constructed using the tfl.configs.CalibratedLinearConfig. It applies piecewise-linear and categorical calibration on the input features, followed by a linear combination and an optional output piecewise-linear calibration. When using output calibration or when output bounds are specified, the linear layer will apply weighted averaging on calibrated inputs.\nThis example creates a calibrated linear model on the first 5 features.",
"# Model config defines the model structure for the premade model.\nlinear_model_config = tfl.configs.CalibratedLinearConfig(\n feature_configs=heart_feature_configs[:5],\n use_bias=True,\n output_calibration=True,\n output_calibration_num_keypoints=10,\n # We initialize the output to [-2.0, 2.0] since we'll be using logits.\n output_initialization=np.linspace(-2.0, 2.0, num=10),\n regularizer_configs=[\n # Regularizer for the output calibrator.\n tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),\n ])\n# A CalibratedLinear premade model constructed from the given model config.\nlinear_model = tfl.premade.CalibratedLinear(linear_model_config)\n# Let's plot our model.\ntf.keras.utils.plot_model(linear_model, show_layer_names=False, rankdir='LR')",
"Now, as with any other tf.keras.Model, we compile and fit the model to our data.",
"linear_model.compile(\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[tf.keras.metrics.AUC(from_logits=True)],\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))\nlinear_model.fit(\n heart_train_xs[:5],\n heart_train_ys,\n epochs=NUM_EPOCHS,\n batch_size=BATCH_SIZE,\n verbose=False)",
"After training our model, we can evaluate it on our test set.",
"print('Test Set Evaluation...')\nprint(linear_model.evaluate(heart_test_xs[:5], heart_test_ys))",
"Calibrated Lattice Model\nA calibrated lattice model is constructed using tfl.configs.CalibratedLatticeConfig. A calibrated lattice model applies piecewise-linear and categorical calibration on the input features, followed by a lattice model and an optional output piecewise-linear calibration.\nThis example creates a calibrated lattice model on the first 5 features.",
"# This is a calibrated lattice model: inputs are calibrated, then combined\n# non-linearly using a lattice layer.\nlattice_model_config = tfl.configs.CalibratedLatticeConfig(\n feature_configs=heart_feature_configs[:5],\n # We initialize the output to [-2.0, 2.0] since we'll be using logits.\n output_initialization=[-2.0, 2.0],\n regularizer_configs=[\n # Torsion regularizer applied to the lattice to make it more linear.\n tfl.configs.RegularizerConfig(name='torsion', l2=1e-2),\n # Globally defined calibration regularizer is applied to all features.\n tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-2),\n ])\n# A CalibratedLattice premade model constructed from the given model config.\nlattice_model = tfl.premade.CalibratedLattice(lattice_model_config)\n# Let's plot our model.\ntf.keras.utils.plot_model(lattice_model, show_layer_names=False, rankdir='LR')",
"As before, we compile, fit, and evaluate our model.",
"lattice_model.compile(\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[tf.keras.metrics.AUC(from_logits=True)],\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))\nlattice_model.fit(\n heart_train_xs[:5],\n heart_train_ys,\n epochs=NUM_EPOCHS,\n batch_size=BATCH_SIZE,\n verbose=False)\nprint('Test Set Evaluation...')\nprint(lattice_model.evaluate(heart_test_xs[:5], heart_test_ys))",
"Calibrated Lattice Ensemble Model\nWhen the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their output instead of creating just a single huge lattice. Ensemble lattice models are constructed using tfl.configs.CalibratedLatticeEnsembleConfig. A calibrated lattice ensemble model applies piecewise-linear and categorical calibration on the input feature, followed by an ensemble of lattice models and an optional output piecewise-linear calibration.\nExplicit Lattice Ensemble Initialization\nIf you already know which subsets of features you want to feed into your lattices, then you can explicitly set the lattices using feature names. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.",
"# This is a calibrated lattice ensemble model: inputs are calibrated, then\n# combined non-linearly and averaged using multiple lattice layers.\nexplicit_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(\n feature_configs=heart_feature_configs,\n lattices=[['trestbps', 'chol', 'ca'], ['fbs', 'restecg', 'thal'],\n ['fbs', 'cp', 'oldpeak'], ['exang', 'slope', 'thalach'],\n ['restecg', 'age', 'sex']],\n num_lattices=5,\n lattice_rank=3,\n # We initialize the output to [-2.0, 2.0] since we'll be using logits.\n output_initialization=[-2.0, 2.0])\n# A CalibratedLatticeEnsemble premade model constructed from the given\n# model config.\nexplicit_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(\n explicit_ensemble_model_config)\n# Let's plot our model.\ntf.keras.utils.plot_model(\n explicit_ensemble_model, show_layer_names=False, rankdir='LR')",
"As before, we compile, fit, and evaluate our model.",
"explicit_ensemble_model.compile(\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[tf.keras.metrics.AUC(from_logits=True)],\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))\nexplicit_ensemble_model.fit(\n heart_train_xs,\n heart_train_ys,\n epochs=NUM_EPOCHS,\n batch_size=BATCH_SIZE,\n verbose=False)\nprint('Test Set Evaluation...')\nprint(explicit_ensemble_model.evaluate(heart_test_xs, heart_test_ys))",
"Random Lattice Ensemble\nIf you are not sure which subsets of features to feed into your lattices, another option is to use random subsets of features for each lattice. This example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.",
"# This is a calibrated lattice ensemble model: inputs are calibrated, then\n# combined non-linearly and averaged using multiple lattice layers.\nrandom_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(\n feature_configs=heart_feature_configs,\n lattices='random',\n num_lattices=5,\n lattice_rank=3,\n # We initialize the output to [-2.0, 2.0] since we'll be using logits.\n output_initialization=[-2.0, 2.0],\n random_seed=42)\n# Now we must set the random lattice structure and construct the model.\ntfl.premade_lib.set_random_lattice_ensemble(random_ensemble_model_config)\n# A CalibratedLatticeEnsemble premade model constructed from the given\n# model config.\nrandom_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(\n random_ensemble_model_config)\n# Let's plot our model.\ntf.keras.utils.plot_model(\n random_ensemble_model, show_layer_names=False, rankdir='LR')",
"As before, we compile, fit, and evaluate our model.",
"random_ensemble_model.compile(\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[tf.keras.metrics.AUC(from_logits=True)],\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))\nrandom_ensemble_model.fit(\n heart_train_xs,\n heart_train_ys,\n epochs=NUM_EPOCHS,\n batch_size=BATCH_SIZE,\n verbose=False)\nprint('Test Set Evaluation...')\nprint(random_ensemble_model.evaluate(heart_test_xs, heart_test_ys))",
"RTL Layer Random Lattice Ensemble\nWhen using a random lattice ensemble, you can specify that the model use a single tfl.layers.RTL layer. We note that tfl.layers.RTL only supports monotonicity constraints and must have the same lattice size for all features and no per-feature regularization. Note that using a tfl.layers.RTL layer lets you scale to much larger ensembles than using separate tfl.layers.Lattice instances.\nThis example creates a calibrated lattice ensemble model with 5 lattices and 3 features per lattice.",
"# Make sure our feature configs have the same lattice size, no per-feature\n# regularization, and only monotonicity constraints.\nrtl_layer_feature_configs = copy.deepcopy(heart_feature_configs)\nfor feature_config in rtl_layer_feature_configs:\n feature_config.lattice_size = 2\n feature_config.unimodality = 'none'\n feature_config.reflects_trust_in = None\n feature_config.dominates = None\n feature_config.regularizer_configs = None\n# This is a calibrated lattice ensemble model: inputs are calibrated, then\n# combined non-linearly and averaged using multiple lattice layers.\nrtl_layer_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(\n feature_configs=rtl_layer_feature_configs,\n lattices='rtl_layer',\n num_lattices=5,\n lattice_rank=3,\n # We initialize the output to [-2.0, 2.0] since we'll be using logits.\n output_initialization=[-2.0, 2.0],\n random_seed=42)\n# A CalibratedLatticeEnsemble premade model constructed from the given\n# model config. Note that we do not have to specify the lattices by calling\n# a helper function (like before with random) because the RTL Layer will take\n# care of that for us.\nrtl_layer_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(\n rtl_layer_ensemble_model_config)\n# Let's plot our model.\ntf.keras.utils.plot_model(\n rtl_layer_ensemble_model, show_layer_names=False, rankdir='LR')",
"As before, we compile, fit, and evaluate our model.",
"rtl_layer_ensemble_model.compile(\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[tf.keras.metrics.AUC(from_logits=True)],\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))\nrtl_layer_ensemble_model.fit(\n heart_train_xs,\n heart_train_ys,\n epochs=NUM_EPOCHS,\n batch_size=BATCH_SIZE,\n verbose=False)\nprint('Test Set Evaluation...')\nprint(rtl_layer_ensemble_model.evaluate(heart_test_xs, heart_test_ys))",
"Crystals Lattice Ensemble\nPremade also provides a heuristic feature arrangement algorithm, called Crystals. To use the Crystals algorithm, first we train a prefitting model that estimates pairwise feature interactions. We then arrange the final ensemble such that features with more non-linear interactions are in the same lattices.\nthe Premade Library offers helper functions for constructing the prefitting model configuration and extracting the crystals structure. Note that the prefitting model does not need to be fully trained, so a few epochs should be enough.\nThis example creates a calibrated lattice ensemble model with 5 lattice and 3 features per lattice.",
"# This is a calibrated lattice ensemble model: inputs are calibrated, then\n# combines non-linearly and averaged using multiple lattice layers.\ncrystals_ensemble_model_config = tfl.configs.CalibratedLatticeEnsembleConfig(\n feature_configs=heart_feature_configs,\n lattices='crystals',\n num_lattices=5,\n lattice_rank=3,\n # We initialize the output to [-2.0, 2.0] since we'll be using logits.\n output_initialization=[-2.0, 2.0],\n random_seed=42)\n# Now that we have our model config, we can construct a prefitting model config.\nprefitting_model_config = tfl.premade_lib.construct_prefitting_model_config(\n crystals_ensemble_model_config)\n# A CalibratedLatticeEnsemble premade model constructed from the given\n# prefitting model config.\nprefitting_model = tfl.premade.CalibratedLatticeEnsemble(\n prefitting_model_config)\n# We can compile and train our prefitting model as we like.\nprefitting_model.compile(\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))\nprefitting_model.fit(\n heart_train_xs,\n heart_train_ys,\n epochs=PREFITTING_NUM_EPOCHS,\n batch_size=BATCH_SIZE,\n verbose=False)\n# Now that we have our trained prefitting model, we can extract the crystals.\ntfl.premade_lib.set_crystals_lattice_ensemble(crystals_ensemble_model_config,\n prefitting_model_config,\n prefitting_model)\n# A CalibratedLatticeEnsemble premade model constructed from the given\n# model config.\ncrystals_ensemble_model = tfl.premade.CalibratedLatticeEnsemble(\n crystals_ensemble_model_config)\n# Let's plot our model.\ntf.keras.utils.plot_model(\n crystals_ensemble_model, show_layer_names=False, rankdir='LR')",
"As before, we compile, fit, and evaluate our model.",
"crystals_ensemble_model.compile(\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[tf.keras.metrics.AUC(from_logits=True)],\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))\ncrystals_ensemble_model.fit(\n heart_train_xs,\n heart_train_ys,\n epochs=NUM_EPOCHS,\n batch_size=BATCH_SIZE,\n verbose=False)\nprint('Test Set Evaluation...')\nprint(crystals_ensemble_model.evaluate(heart_test_xs, heart_test_ys))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
phobson/statsmodels
|
examples/notebooks/statespace_sarimax_stata.ipynb
|
bsd-3-clause
|
[
"SARIMAX: Introduction\nThis notebook replicates examples from the Stata ARIMA time series estimation and postestimation documentation.\nFirst, we replicate the four estimation examples http://www.stata.com/manuals13/tsarima.pdf:\n\nARIMA(1,1,1) model on the U.S. Wholesale Price Index (WPI) dataset.\nVariation of example 1 which adds an MA(4) term to the ARIMA(1,1,1) specification to allow for an additive seasonal effect.\nARIMA(2,1,0) x (1,1,0,12) model of monthly airline data. This example allows a multiplicative seasonal effect.\nARMA(1,1) model with exogenous regressors; describes consumption as an autoregressive process on which also the money supply is assumed to be an explanatory variable.\n\nSecond, we demonstrate postestimation capabilitites to replicate http://www.stata.com/manuals13/tsarimapostestimation.pdf. The model from example 4 is used to demonstrate:\n\nOne-step-ahead in-sample prediction\nn-step-ahead out-of-sample forecasting\nn-step-ahead in-sample dynamic prediction",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.stats import norm\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nfrom datetime import datetime\nimport requests\nfrom io import BytesIO",
"ARIMA Example 1: Arima\nAs can be seen in the graphs from Example 2, the Wholesale price index (WPI) is growing over time (i.e. is not stationary). Therefore an ARMA model is not a good specification. In this first example, we consider a model where the original time series is assumed to be integrated of order 1, so that the difference is assumed to be stationary, and fit a model with one autoregressive lag and one moving average lag, as well as an intercept term.\nThe postulated data process is then:\n$$\n\\Delta y_t = c + \\phi_1 \\Delta y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\epsilon_{t}\n$$\nwhere $c$ is the intercept of the ARMA model, $\\Delta$ is the first-difference operator, and we assume $\\epsilon_{t} \\sim N(0, \\sigma^2)$. This can be rewritten to emphasize lag polynomials as (this will be useful in example 2, below):\n$$\n(1 - \\phi_1 L ) \\Delta y_t = c + (1 + \\theta_1 L) \\epsilon_{t}\n$$\nwhere $L$ is the lag operator.\nNotice that one difference between the Stata output and the output below is that Stata estimates the following model:\n$$\n(\\Delta y_t - \\beta_0) = \\phi_1 ( \\Delta y_{t-1} - \\beta_0) + \\theta_1 \\epsilon_{t-1} + \\epsilon_{t}\n$$\nwhere $\\beta_0$ is the mean of the process $y_t$. This model is equivalent to the one estimated in the Statsmodels SARIMAX class, but the interpretation is different. To see the equivalence, note that:\n$$\n(\\Delta y_t - \\beta_0) = \\phi_1 ( \\Delta y_{t-1} - \\beta_0) + \\theta_1 \\epsilon_{t-1} + \\epsilon_{t} \\\n\\Delta y_t = (1 - \\phi_1) \\beta_0 + \\phi_1 \\Delta y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\epsilon_{t}\n$$\nso that $c = (1 - \\phi_1) \\beta_0$.",
"# Dataset\nwpi1 = requests.get('http://www.stata-press.com/data/r12/wpi1.dta').content\ndata = pd.read_stata(BytesIO(wpi1))\ndata.index = data.t\n\n# Fit the model\nmod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1))\nres = mod.fit()\nprint(res.summary())",
"Thus the maximum likelihood estimates imply that for the process above, we have:\n$$\n\\Delta y_t = 0.1050 + 0.8740 \\Delta y_{t-1} - 0.4206 \\epsilon_{t-1} + \\epsilon_{t}\n$$\nwhere $\\epsilon_{t} \\sim N(0, 0.5226)$. Finally, recall that $c = (1 - \\phi_1) \\beta_0$, and here $c = 0.1050$ and $\\phi_1 = 0.8740$. To compare with the output from Stata, we could calculate the mean:\n$$\\beta_0 = \\frac{c}{1 - \\phi_1} = \\frac{0.1050}{1 - 0.8740} = 0.83$$\nNote: these values are slightly different from the values in the Stata documentation because the optimizer in Statsmodels has found parameters here that yield a higher likelihood. Nonetheless, they are very close.\nARIMA Example 2: Arima with additive seasonal effects\nThis model is an extension of that from example 1. Here the data is assumed to follow the process:\n$$\n\\Delta y_t = c + \\phi_1 \\Delta y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\theta_4 \\epsilon_{t-4} + \\epsilon_{t}\n$$\nThe new part of this model is that there is allowed to be a annual seasonal effect (it is annual even though the periodicity is 4 because the dataset is quarterly). The second difference is that this model uses the log of the data rather than the level.\nBefore estimating the dataset, graphs showing:\n\nThe time series (in logs)\nThe first difference of the time series (in logs)\nThe autocorrelation function\nThe partial autocorrelation function.\n\nFrom the first two graphs, we note that the original time series does not appear to be stationary, whereas the first-difference does. This supports either estimating an ARMA model on the first-difference of the data, or estimating an ARIMA model with 1 order of integration (recall that we are taking the latter approach). The last two graphs support the use of an ARMA(1,1,1) model.",
"# Dataset\ndata = pd.read_stata(BytesIO(wpi1))\ndata.index = data.t\ndata['ln_wpi'] = np.log(data['wpi'])\ndata['D.ln_wpi'] = data['ln_wpi'].diff()\n\n# Graph data\nfig, axes = plt.subplots(1, 2, figsize=(15,4))\n\n# Levels\naxes[0].plot(data.index._mpl_repr(), data['wpi'], '-')\naxes[0].set(title='US Wholesale Price Index')\n\n# Log difference\naxes[1].plot(data.index._mpl_repr(), data['D.ln_wpi'], '-')\naxes[1].hlines(0, data.index[0], data.index[-1], 'r')\naxes[1].set(title='US Wholesale Price Index - difference of logs');\n\n# Graph data\nfig, axes = plt.subplots(1, 2, figsize=(15,4))\n\nfig = sm.graphics.tsa.plot_acf(data.ix[1:, 'D.ln_wpi'], lags=40, ax=axes[0])\nfig = sm.graphics.tsa.plot_pacf(data.ix[1:, 'D.ln_wpi'], lags=40, ax=axes[1])",
"To understand how to specify this model in Statsmodels, first recall that from example 1 we used the following code to specify the ARIMA(1,1,1) model:\npython\nmod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1))\nThe order argument is a tuple of the form (AR specification, Integration order, MA specification). The integration order must be an integer (for example, here we assumed one order of integration, so it was specified as 1. In a pure ARMA model where the underlying data is already stationary, it would be 0).\nFor the AR specification and MA specification components, there are two possiblities. The first is to specify the maximum degree of the corresponding lag polynomial, in which case the component is an integer. For example, if we wanted to specify an ARIMA(1,1,4) process, we would use:\npython\nmod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,4))\nand the corresponding data process would be:\n$$\ny_t = c + \\phi_1 y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\theta_2 \\epsilon_{t-2} + \\theta_3 \\epsilon_{t-3} + \\theta_4 \\epsilon_{t-4} + \\epsilon_{t}\n$$\nor\n$$\n(1 - \\phi_1 L)\\Delta y_t = c + (1 + \\theta_1 L + \\theta_2 L^2 + \\theta_3 L^3 + \\theta_4 L^4) \\epsilon_{t}\n$$\nWhen the specification parameter is given as a maximum degree of the lag polynomial, it implies that all polynomial terms up to that degree are included. Notice that this is not the model we want to use, because it would include terms for $\\epsilon_{t-2}$ and $\\epsilon_{t-3}$, which we don't want here.\nWhat we want is a polynomial that has terms for the 1st and 4th degrees, but leaves out the 2nd and 3rd terms. To do that, we need to provide a tuple for the specifiation parameter, where the tuple describes the lag polynomial itself. In particular, here we would want to use:\npython\nar = 1 # this is the maximum degree specification\nma = (1,0,0,1) # this is the lag polynomial specification\nmod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(ar,1,ma)))\nThis gives the following form for the process of the data:\n$$\n\\Delta y_t = c + \\phi_1 \\Delta y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\theta_4 \\epsilon_{t-4} + \\epsilon_{t} \\\n(1 - \\phi_1 L)\\Delta y_t = c + (1 + \\theta_1 L + \\theta_4 L^4) \\epsilon_{t}\n$$\nwhich is what we want.",
"# Fit the model\nmod = sm.tsa.statespace.SARIMAX(data['ln_wpi'], trend='c', order=(1,1,1))\nres = mod.fit()\nprint(res.summary())",
"ARIMA Example 3: Airline Model\nIn the previous example, we included a seasonal effect in an additive way, meaning that we added a term allowing the process to depend on the 4th MA lag. It may be instead that we want to model a seasonal effect in a multiplicative way. We often write the model then as an ARIMA $(p,d,q) \\times (P,D,Q)_s$, where the lowercast letters indicate the specification for the non-seasonal component, and the uppercase letters indicate the specification for the seasonal component; $s$ is the periodicity of the seasons (e.g. it is often 4 for quarterly data or 12 for monthly data). The data process can be written generically as:\n$$\n\\phi_p (L) \\tilde \\phi_P (L^s) \\Delta^d \\Delta_s^D y_t = A(t) + \\theta_q (L) \\tilde \\theta_Q (L^s) \\epsilon_t\n$$\nwhere:\n\n$\\phi_p (L)$ is the non-seasonal autoregressive lag polynomial\n$\\tilde \\phi_P (L^s)$ is the seasonal autoregressive lag polynomial\n$\\Delta^d \\Delta_s^D y_t$ is the time series, differenced $d$ times, and seasonally differenced $D$ times.\n$A(t)$ is the trend polynomial (including the intercept)\n$\\theta_q (L)$ is the non-seasonal moving average lag polynomial\n$\\tilde \\theta_Q (L^s)$ is the seasonal moving average lag polynomial\n\nsometimes we rewrite this as:\n$$\n\\phi_p (L) \\tilde \\phi_P (L^s) y_t^* = A(t) + \\theta_q (L) \\tilde \\theta_Q (L^s) \\epsilon_t\n$$\nwhere $y_t^* = \\Delta^d \\Delta_s^D y_t$. This emphasizes that just as in the simple case, after we take differences (here both non-seasonal and seasonal) to make the data stationary, the resulting model is just an ARMA model.\nAs an example, consider the airline model ARIMA $(2,1,0) \\times (1,1,0)_{12}$, with an intercept. The data process can be written in the form above as:\n$$\n(1 - \\phi_1 L - \\phi_2 L^2) (1 - \\tilde \\phi_1 L^{12}) \\Delta \\Delta_{12} y_t = c + \\epsilon_t\n$$\nHere, we have:\n\n$\\phi_p (L) = (1 - \\phi_1 L - \\phi_2 L^2)$\n$\\tilde \\phi_P (L^s) = (1 - \\phi_1 L^12)$\n$d = 1, D = 1, s=12$ indicating that $y_t^*$ is derived from $y_t$ by taking first-differences and then taking 12-th differences.\n$A(t) = c$ is the constant trend polynomial (i.e. just an intercept)\n$\\theta_q (L) = \\tilde \\theta_Q (L^s) = 1$ (i.e. there is no moving average effect)\n\nIt may still be confusing to see the two lag polynomials in front of the time-series variable, but notice that we can multiply the lag polynomials together to get the following model:\n$$\n(1 - \\phi_1 L - \\phi_2 L^2 - \\tilde \\phi_1 L^{12} + \\phi_1 \\tilde \\phi_1 L^{13} + \\phi_2 \\tilde \\phi_1 L^{14} ) y_t^* = c + \\epsilon_t\n$$\nwhich can be rewritten as:\n$$\ny_t^ = c + \\phi_1 y_{t-1}^ + \\phi_2 y_{t-2}^ + \\tilde \\phi_1 y_{t-12}^ - \\phi_1 \\tilde \\phi_1 y_{t-13}^ - \\phi_2 \\tilde \\phi_1 y_{t-14}^ + \\epsilon_t\n$$\nThis is similar to the additively seasonal model from example 2, but the coefficients in front of the autoregressive lags are actually combinations of the underlying seasonal and non-seasonal parameters.\nSpecifying the model in Statsmodels is done simply by adding the seasonal_order argument, which accepts a tuple of the form (Seasonal AR specification, Seasonal Integration order, Seasonal MA, Seasonal periodicity). The seasonal AR and MA specifications, as before, can be expressed as a maximum polynomial degree or as the lag polynomial itself. 
Seasonal periodicity is an integer.\nFor the airline model ARIMA $(2,1,0) \\times (1,1,0)_{12}$ with an intercept, the command is:\npython\nmod = sm.tsa.statespace.SARIMAX(data['lnair'], order=(2,1,0), seasonal_order=(1,1,0,12))",
"# Dataset\nair2 = requests.get('http://www.stata-press.com/data/r12/air2.dta').content\ndata = pd.read_stata(BytesIO(air2))\ndata.index = pd.date_range(start=datetime(data.time[0], 1, 1), periods=len(data), freq='MS')\ndata['lnair'] = np.log(data['air'])\n\n# Fit the model\nmod = sm.tsa.statespace.SARIMAX(data['lnair'], order=(2,1,0), seasonal_order=(1,1,0,12), simple_differencing=True)\nres = mod.fit()\nprint(res.summary())",
"Notice that here we used an additional argument simple_differencing=True. This controls how the order of integration is handled in ARIMA models. If simple_differencing=True, then the time series provided as endog is literatlly differenced and an ARMA model is fit to the resulting new time series. This implies that a number of initial periods are lost to the differencing process, however it may be necessary either to compare results to other packages (e.g. Stata's arima always uses simple differencing) or if the seasonal periodicity is large.\nThe default is simple_differencing=False, in which case the integration component is implemented as part of the state space formulation, and all of the original data can be used in estimation.\nARIMA Example 4: ARMAX (Friedman)\nThis model demonstrates the use of explanatory variables (the X part of ARMAX). When exogenous regressors are included, the SARIMAX module uses the concept of \"regression with SARIMA errors\" (see http://robjhyndman.com/hyndsight/arimax/ for details of regression with ARIMA errors versus alternative specifications), so that the model is specified as:\n$$\ny_t = \\beta_t x_t + u_t \\\n \\phi_p (L) \\tilde \\phi_P (L^s) \\Delta^d \\Delta_s^D u_t = A(t) +\n \\theta_q (L) \\tilde \\theta_Q (L^s) \\epsilon_t\n$$\nNotice that the first equation is just a linear regression, and the second equation just describes the process followed by the error component as SARIMA (as was described in example 3). One reason for this specification is that the estimated parameters have their natural interpretations.\nThis specification nests many simpler specifications. For example, regression with AR(2) errors is:\n$$\ny_t = \\beta_t x_t + u_t \\\n(1 - \\phi_1 L - \\phi_2 L^2) u_t = A(t) + \\epsilon_t\n$$\nThe model considered in this example is regression with ARMA(1,1) errors. The process is then written:\n$$\n\\text{consump}_t = \\beta_0 + \\beta_1 \\text{m2}_t + u_t \\\n(1 - \\phi_1 L) u_t = (1 - \\theta_1 L) \\epsilon_t\n$$\nNotice that $\\beta_0$ is, as described in example 1 above, not the same thing as an intercept specified by trend='c'. Whereas in the examples above we estimated the intercept of the model via the trend polynomial, here, we demonstrate how to estimate $\\beta_0$ itself by adding a constant to the exogenous dataset. In the output, the $beta_0$ is called const, whereas above the intercept $c$ was called intercept in the output.",
"# Dataset\nfriedman2 = requests.get('http://www.stata-press.com/data/r12/friedman2.dta').content\ndata = pd.read_stata(BytesIO(friedman2))\ndata.index = data.time\n\n# Variables\nendog = data.ix['1959':'1981', 'consump']\nexog = sm.add_constant(data.ix['1959':'1981', 'm2'])\n\n# Fit the model\nmod = sm.tsa.statespace.SARIMAX(endog, exog, order=(1,0,1))\nres = mod.fit()\nprint(res.summary())",
"ARIMA Postestimation: Example 1 - Dynamic Forecasting\nHere we describe some of the post-estimation capabilities of Statsmodels' SARIMAX.\nFirst, using the model from example, we estimate the parameters using data that excludes the last few observations (this is a little artificial as an example, but it allows considering performance of out-of-sample forecasting and facilitates comparison to Stata's documentation).",
"# Dataset\nraw = pd.read_stata(BytesIO(friedman2))\nraw.index = raw.time\ndata = raw.ix[:'1981']\n\n# Variables\nendog = data.ix['1959':, 'consump']\nexog = sm.add_constant(data.ix['1959':, 'm2'])\nnobs = endog.shape[0]\n\n# Fit the model\nmod = sm.tsa.statespace.SARIMAX(endog.ix[:'1978-01-01'], exog=exog.ix[:'1978-01-01'], order=(1,0,1))\nfit_res = mod.fit()\nprint(fit_res.summary())",
"Next, we want to get results for the full dataset but using the estimated parameters (on a subset of the data).",
"mod = sm.tsa.statespace.SARIMAX(endog, exog=exog, order=(1,0,1))\nres = mod.filter(fit_res.params)",
"The predict command is first applied here to get in-sample predictions. We use the full_results=True argument to allow us to calculate confidence intervals (the default output of predict is just the predicted values).\nWith no other arguments, predict returns the one-step-ahead in-sample predictions for the entire sample.",
"# In-sample one-step-ahead predictions\npredict = res.get_prediction()\npredict_ci = predict.conf_int()",
"We can also get dynamic predictions. One-step-ahead prediction uses the true values of the endogenous values at each step to predict the next in-sample value. Dynamic predictions use one-step-ahead prediction up to some point in the dataset (specified by the dynamic argument); after that, the previous predicted endogenous values are used in place of the true endogenous values for each new predicted element.\nThe dynamic argument is specified to be an offset relative to the start argument. If start is not specified, it is assumed to be 0.\nHere we perform dynamic prediction starting in the first quarter of 1978.",
"# Dynamic predictions\npredict_dy = res.get_prediction(dynamic='1978-01-01')\npredict_dy_ci = predict_dy.conf_int()",
"We can graph the one-step-ahead and dynamic predictions (and the corresponding confidence intervals) to see their relative performance. Notice that up to the point where dynamic prediction begins (1978:Q1), the two are the same.",
"# Graph\nfig, ax = plt.subplots(figsize=(9,4))\nnpre = 4\nax.set(title='Personal consumption', xlabel='Date', ylabel='Billions of dollars')\n\n# Plot data points\ndata.ix['1977-07-01':, 'consump'].plot(ax=ax, style='o', label='Observed')\n\n# Plot predictions\npredict.predicted_mean.ix['1977-07-01':].plot(ax=ax, style='r--', label='One-step-ahead forecast')\nci = predict_ci.ix['1977-07-01':]\nax.fill_between(ci.index, ci.ix[:,0], ci.ix[:,1], color='r', alpha=0.1)\npredict_dy.predicted_mean.ix['1977-07-01':].plot(ax=ax, style='g', label='Dynamic forecast (1978)')\nci = predict_dy_ci.ix['1977-07-01':]\nax.fill_between(ci.index, ci.ix[:,0], ci.ix[:,1], color='g', alpha=0.1)\n\nlegend = ax.legend(loc='lower right')",
"Finally, graph the prediction error. It is obvious that, as one would suspect, one-step-ahead prediction is considerably better.",
"# Prediction error\n\n# Graph\nfig, ax = plt.subplots(figsize=(9,4))\nnpre = 4\nax.set(title='Forecast error', xlabel='Date', ylabel='Forecast - Actual')\n\n# In-sample one-step-ahead predictions and 95% confidence intervals\npredict_error = predict.predicted_mean - endog\npredict_error.ix['1977-10-01':].plot(ax=ax, label='One-step-ahead forecast')\nci = predict_ci.ix['1977-10-01':].copy()\nci.iloc[:,0] -= endog.loc['1977-10-01':]\nci.iloc[:,1] -= endog.loc['1977-10-01':]\nax.fill_between(ci.index, ci.ix[:,0], ci.ix[:,1], alpha=0.1)\n\n# Dynamic predictions and 95% confidence intervals\npredict_dy_error = predict_dy.predicted_mean - endog\npredict_dy_error.ix['1977-10-01':].plot(ax=ax, style='r', label='Dynamic forecast (1978)')\nci = predict_dy_ci.ix['1977-10-01':].copy()\nci.iloc[:,0] -= endog.loc['1977-10-01':]\nci.iloc[:,1] -= endog.loc['1977-10-01':]\nax.fill_between(ci.index, ci.ix[:,0], ci.ix[:,1], color='r', alpha=0.1)\n\nlegend = ax.legend(loc='lower left');\nlegend.get_frame().set_facecolor('w')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
arcyfelix/Courses
|
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/11-Advanced-Quantopian-Topics/00-Pipeline-Example-Walkthrough.ipynb
|
apache-2.0
|
[
"Pipeline Example",
"from quantopian.pipeline import Pipeline\nfrom quantopian.research import run_pipeline\nfrom quantopian.pipeline.data.builtin import USEquityPricing",
"Getting the Securities we want.\nThe Q500US and Q1500US\nThese gropus of tradeable stocks are refered to as \"universes\", because all your trades will use these stocks as their \"Universe\" of available stock, they won't be trading with anything outside these groups.",
"from quantopian.pipeline.filters import Q1500US",
"There are two main benefits of the Q500US and Q1500US. Firstly, they greatly reduce the risk of an order not being filled. Secondly, they allow for more meaningful comparisons between strategies as now they will be used as the standard universes for algorithms.",
"universe = Q1500US()",
"Filtering the universe further with Classifiers\nLet's only grab stocks in the energy sector: https://www.quantopian.com/help/fundamentals#industry-sector",
"from quantopian.pipeline.data import morningstar\n\nsector = morningstar.asset_classification.morningstar_sector_code.latest",
"Alternative:",
"#from quantopian.pipeline.classifiers.morningstar import Sector\n#morningstar_sector = Sector()\n\nenergy_sector = sector.eq(309)",
"Masking Filters\nMasks can be also be applied to methods that return filters like top, bottom, and percentile_between.\nMasks are most useful when we want to apply a filter in the earlier steps of a combined computation. For example, suppose we want to get the 50 securities with the highest open price that are also in the top 10% of dollar volume. \nSuppose that we then want the 90th-100th percentile of these securities by close price. We can do this with the following:",
"from quantopian.pipeline.factors import SimpleMovingAverage, AverageDollarVolume\n\n# Dollar volume factor\ndollar_volume = AverageDollarVolume(window_length = 30)\n\n# High dollar volume filter\nhigh_dollar_volume = dollar_volume.percentile_between(90, 100)\n\n# Top open price filter (high dollar volume securities)\ntop_open_price = USEquityPricing.open.latest.top(50, \n mask = high_dollar_volume)\n\n# Top percentile close price filter (high dollar volume, top 50 open price)\nhigh_close_price = USEquityPricing.close.latest.percentile_between(90, 100, \n mask = top_open_price)",
"Applying Filters and Factors\nLet's apply our own filters, following along with some of the examples above. Let's select the following securities:\n\nStocks in Q1500US\nStocks that are in the energy Sector\nThey must be relatively highly traded stocks in the market (by dollar volume traded, need to be in the top 5% traded)\n\nThen we'll calculate the percent difference as we've done previously. Using this percent difference we'll create an unsophisticated strategy that shorts anything with negative percent difference (the difference between the 10 day mean and the 30 day mean).",
"def make_pipeline():\n \n # Base universe filter.\n base_universe = Q1500US()\n \n # Sector Classifier as Filter\n energy_sector = sector.eq(309)\n \n # Masking Base Energy Stocks\n base_energy = base_universe & energy_sector\n \n # Dollar volume factor\n dollar_volume = AverageDollarVolume(window_length = 30)\n\n # Top half of dollar volume filter\n high_dollar_volume = dollar_volume.percentile_between(95, 100)\n \n # Final Filter Mask\n top_half_base_energy = base_energy & high_dollar_volume\n \n # 10-day close price average.\n mean_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], \n window_length = 10, \n mask = top_half_base_energy)\n\n # 30-day close price average.\n mean_30 = SimpleMovingAverage(inputs=[USEquityPricing.close], \n window_length = 30, \n mask = top_half_base_energy)\n\n # Percent difference factor.\n percent_difference = (mean_10 - mean_30) / mean_30\n \n # Create a filter to select securities to short.\n shorts = percent_difference < 0\n \n # Create a filter to select securities to long.\n longs = percent_difference > 0\n \n # Filter for the securities that we want to trade.\n securities_to_trade = (shorts | longs)\n \n return Pipeline(\n columns = {\n 'longs': longs,\n 'shorts': shorts,\n 'percent_diff':percent_difference\n },\n screen=securities_to_trade\n )\n\nresult = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')\nresult\n\nresult.info()",
"Executing this Strategy in the IDE",
"from quantopian.algorithm import attach_pipeline,pipeline_output\nfrom quantopian.pipeline import Pipeline\nfrom quantopian.pipeline.data.builtin import USEquityPricing\nfrom quantopian.pipeline.factors import AverageDollarVolume,SimpleMovingAverage\nfrom quantopian.pipeline.filters.morningstar import Q1500US\nfrom quantopian.pipeline.data import morningstar\n\ndef initialize(context):\n \n schedule_function(my_rebalance,date_rules.week_start(),time_rules.market_open(hours = 1))\n \n my_pipe = make_pipeline()\n attach_pipeline(my_pipe, 'my_pipeline')\n \ndef my_rebalance(context,data):\n for security in context.portfolio.positions:\n if security not in context.longs and security not in context.shorts and data.can_trade(security):\n order_target_percent(security,0)\n \n for security in context.longs:\n if data.can_trade(security):\n order_target_percent(security,context.long_weight)\n\n for security in context.shorts:\n if data.can_trade(security):\n order_target_percent(security,context.short_weight)\n\n\n\n\ndef my_compute_weights(context):\n \n if len(context.longs) == 0:\n long_weight = 0\n else:\n long_weight = 0.5 / len(context.longs)\n \n if len(context.shorts) == 0:\n short_weight = 0\n else:\n short_weight = 0.5 / len(context.shorts)\n \n return (long_weight,short_weight)\n\n\n\n\n\n\ndef before_trading_start(context,data):\n context.output = pipeline_output('my_pipeline')\n \n # LONG\n context.longs = context.output[context.output['longs']].index.tolist()\n \n # SHORT\n context.shorts = context.output[context.output['shorts']].index.tolist()\n\n\n context.long_weight,context.short_weight = my_compute_weights(context)\n\n\n\ndef make_pipeline():\n \n # Universe Q1500US\n base_universe = Q1500US()\n \n # Energy Sector\n sector = morningstar.asset_classification.morningstar_sector_code.latest\n energy_sector = sector.eq(309)\n \n # Make Mask of 1500US and Energy\n base_energy = base_universe & energy_sector\n \n # Dollar Volume (30 Days) Grab the Info\n dollar_volume = AverageDollarVolume(window_length = 30)\n \n # Grab the top 5% in avg dollar volume\n high_dollar_volume = dollar_volume.percentile_between(95, 100)\n \n # Combine the filters\n top_five_base_energy = base_energy & high_dollar_volume\n \n # 10 day mean close\n mean_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length = 10, mask = top_five_base_energy)\n \n # 30 day mean close\n mean_30 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length = 30, mask = top_five_base_energy)\n \n # Percent Difference\n percent_difference = (mean_10-mean_30)/mean_30\n \n # List of Shorts\n shorts = percent_difference < 0\n \n # List of Longs\n longs = percent_difference > 0\n \n # Final Mask/Filter for anything in shorts or longs\n securities_to_trade = (shorts | longs)\n \n # Return Pipeline\n return Pipeline(columns={\n 'longs':longs,\n 'shorts':shorts,\n 'perc_diff':percent_difference\n },screen=securities_to_trade)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rflamary/POT
|
docs/source/auto_examples/plot_OT_L1_vs_L2.ipynb
|
mit
|
[
"%matplotlib inline",
"2D Optimal transport for different metrics\n2D OT on empirical distributio with different gound metric.\nStole the figure idea from Fig. 1 and 2 in\nhttps://arxiv.org/pdf/1706.07650.pdf",
"# Author: Remi Flamary <remi.flamary@unice.fr>\n#\n# License: MIT License\n\nimport numpy as np\nimport matplotlib.pylab as pl\nimport ot\nimport ot.plot",
"Dataset 1 : uniform sampling",
"n = 20 # nb samples\nxs = np.zeros((n, 2))\nxs[:, 0] = np.arange(n) + 1\nxs[:, 1] = (np.arange(n) + 1) * -0.001 # to make it strictly convex...\n\nxt = np.zeros((n, 2))\nxt[:, 1] = np.arange(n) + 1\n\na, b = ot.unif(n), ot.unif(n) # uniform distribution on samples\n\n# loss matrix\nM1 = ot.dist(xs, xt, metric='euclidean')\nM1 /= M1.max()\n\n# loss matrix\nM2 = ot.dist(xs, xt, metric='sqeuclidean')\nM2 /= M2.max()\n\n# loss matrix\nMp = np.sqrt(ot.dist(xs, xt, metric='euclidean'))\nMp /= Mp.max()\n\n# Data\npl.figure(1, figsize=(7, 3))\npl.clf()\npl.plot(xs[:, 0], xs[:, 1], '+b', label='Source samples')\npl.plot(xt[:, 0], xt[:, 1], 'xr', label='Target samples')\npl.axis('equal')\npl.title('Source and target distributions')\n\n\n# Cost matrices\npl.figure(2, figsize=(7, 3))\n\npl.subplot(1, 3, 1)\npl.imshow(M1, interpolation='nearest')\npl.title('Euclidean cost')\n\npl.subplot(1, 3, 2)\npl.imshow(M2, interpolation='nearest')\npl.title('Squared Euclidean cost')\n\npl.subplot(1, 3, 3)\npl.imshow(Mp, interpolation='nearest')\npl.title('Sqrt Euclidean cost')\npl.tight_layout()",
"Dataset 1 : Plot OT Matrices",
"#%% EMD\nG1 = ot.emd(a, b, M1)\nG2 = ot.emd(a, b, M2)\nGp = ot.emd(a, b, Mp)\n\n# OT matrices\npl.figure(3, figsize=(7, 3))\n\npl.subplot(1, 3, 1)\not.plot.plot2D_samples_mat(xs, xt, G1, c=[.5, .5, 1])\npl.plot(xs[:, 0], xs[:, 1], '+b', label='Source samples')\npl.plot(xt[:, 0], xt[:, 1], 'xr', label='Target samples')\npl.axis('equal')\n# pl.legend(loc=0)\npl.title('OT Euclidean')\n\npl.subplot(1, 3, 2)\not.plot.plot2D_samples_mat(xs, xt, G2, c=[.5, .5, 1])\npl.plot(xs[:, 0], xs[:, 1], '+b', label='Source samples')\npl.plot(xt[:, 0], xt[:, 1], 'xr', label='Target samples')\npl.axis('equal')\n# pl.legend(loc=0)\npl.title('OT squared Euclidean')\n\npl.subplot(1, 3, 3)\not.plot.plot2D_samples_mat(xs, xt, Gp, c=[.5, .5, 1])\npl.plot(xs[:, 0], xs[:, 1], '+b', label='Source samples')\npl.plot(xt[:, 0], xt[:, 1], 'xr', label='Target samples')\npl.axis('equal')\n# pl.legend(loc=0)\npl.title('OT sqrt Euclidean')\npl.tight_layout()\n\npl.show()",
"Dataset 2 : Partial circle",
"n = 50 # nb samples\nxtot = np.zeros((n + 1, 2))\nxtot[:, 0] = np.cos(\n (np.arange(n + 1) + 1.0) * 0.9 / (n + 2) * 2 * np.pi)\nxtot[:, 1] = np.sin(\n (np.arange(n + 1) + 1.0) * 0.9 / (n + 2) * 2 * np.pi)\n\nxs = xtot[:n, :]\nxt = xtot[1:, :]\n\na, b = ot.unif(n), ot.unif(n) # uniform distribution on samples\n\n# loss matrix\nM1 = ot.dist(xs, xt, metric='euclidean')\nM1 /= M1.max()\n\n# loss matrix\nM2 = ot.dist(xs, xt, metric='sqeuclidean')\nM2 /= M2.max()\n\n# loss matrix\nMp = np.sqrt(ot.dist(xs, xt, metric='euclidean'))\nMp /= Mp.max()\n\n\n# Data\npl.figure(4, figsize=(7, 3))\npl.clf()\npl.plot(xs[:, 0], xs[:, 1], '+b', label='Source samples')\npl.plot(xt[:, 0], xt[:, 1], 'xr', label='Target samples')\npl.axis('equal')\npl.title('Source and traget distributions')\n\n\n# Cost matrices\npl.figure(5, figsize=(7, 3))\n\npl.subplot(1, 3, 1)\npl.imshow(M1, interpolation='nearest')\npl.title('Euclidean cost')\n\npl.subplot(1, 3, 2)\npl.imshow(M2, interpolation='nearest')\npl.title('Squared Euclidean cost')\n\npl.subplot(1, 3, 3)\npl.imshow(Mp, interpolation='nearest')\npl.title('Sqrt Euclidean cost')\npl.tight_layout()",
"Dataset 2 : Plot OT Matrices",
"#%% EMD\nG1 = ot.emd(a, b, M1)\nG2 = ot.emd(a, b, M2)\nGp = ot.emd(a, b, Mp)\n\n# OT matrices\npl.figure(6, figsize=(7, 3))\n\npl.subplot(1, 3, 1)\not.plot.plot2D_samples_mat(xs, xt, G1, c=[.5, .5, 1])\npl.plot(xs[:, 0], xs[:, 1], '+b', label='Source samples')\npl.plot(xt[:, 0], xt[:, 1], 'xr', label='Target samples')\npl.axis('equal')\n# pl.legend(loc=0)\npl.title('OT Euclidean')\n\npl.subplot(1, 3, 2)\not.plot.plot2D_samples_mat(xs, xt, G2, c=[.5, .5, 1])\npl.plot(xs[:, 0], xs[:, 1], '+b', label='Source samples')\npl.plot(xt[:, 0], xt[:, 1], 'xr', label='Target samples')\npl.axis('equal')\n# pl.legend(loc=0)\npl.title('OT squared Euclidean')\n\npl.subplot(1, 3, 3)\not.plot.plot2D_samples_mat(xs, xt, Gp, c=[.5, .5, 1])\npl.plot(xs[:, 0], xs[:, 1], '+b', label='Source samples')\npl.plot(xt[:, 0], xt[:, 1], 'xr', label='Target samples')\npl.axis('equal')\n# pl.legend(loc=0)\npl.title('OT sqrt Euclidean')\npl.tight_layout()\n\npl.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
wikistat/Intro-Python
|
Cal3-PythonGraphes.ipynb
|
mit
|
[
"<center>\n<a href=\"http://www.insa-toulouse.fr/\" ><img src=\"http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg\" style=\"float:left; max-width: 120px; display: inline\" alt=\"INSA\"/></a> \n<a href=\"http://wikistat.fr/\" ><img src=\"http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg\" style=\"float:right; max-width: 250px; display: inline\" alt=\"Wikistat\"/></a>\n</center>\n<a href=\"https://www.python.org/\"><img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Python_logo_and_wordmark.svg/390px-Python_logo_and_wordmark.svg.png\" style=\"max-width: 200px; display: inline\" alt=\"Python\"/></a> pour Statistique et Science des Données\nGraphes 2D et 3D en <a href=\"https://www.python.org/\"><img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Python_logo_and_wordmark.svg/390px-Python_logo_and_wordmark.svg.png\" style=\"max-width: 200px; display: inline\" alt=\"Python\"/></a> <a href=\"http://matplotlib.org/\"><img src=\"http://matplotlib.org/_static/logo2.svg\" style=\"max-width: 200px; display: inline\" alt=\"matplotlib\"/>\nCe calepin est une version simplifée de celui développé par J.R. Johansson. D'autres calepins du même auteur sont accessibles ici.\nImportant la commande ci-dessous provoque l'insertion des gaphiques dans le calepin plutôt que l'ouverture de nouvelles fenêtres.",
"%matplotlib inline",
"1 Introduction\n1.1 Principe\nMatplotlib est une librairie pour des graphes 2D et 3D, \n* facile à utiliser,\n* intègrant des formats $\\LaTeX$ pour les libellés,\n* contrôlant tous les éléments d'une figure, \n* supportant tous le sformats png, pdf, eps..\nPlus d'information sur la page de Matplotlib\nPour démarrer avec Matplotlib dans un programme Python, inclure les objets du module pylab. Plus facile:",
"from pylab import * ",
"Ou importer le module matplotlib.pyplot avec l'identifiant plt. Plus correct pour éviter de charger tous les objets:",
"# import matplotlib\nimport matplotlib.pyplot as plt\n\nimport numpy as np",
"1.2 MATLAB-like API\nLa façon la plus simple d'utiliser matplotlib est de le faire par l'API de type MATLAB compatible avec les fonctions graphique de MATLAB.",
"from pylab import *",
"Example élémentaire d'utilisation de l'API.",
"x = np.linspace(0, 5, 10)\ny = x ** 2\n\nfigure()\nplot(x, y, 'r')\nxlabel('x')\nylabel('y')\ntitle('titre')\nshow()",
"La plupart des fonctions MATLAB sont incluses dans pylab.",
"subplot(1,2,1)\nplot(x, y, 'r--')\nsubplot(1,2,2)\nplot(y, x, 'g*-');\nshow()",
"Cette API est limitée à des graphes rudimentaires. Les fonctionalités orientées objet de Matplotlib sont à privilégier pour des graphes plus élaborées. \n2 Matplotlib orienté objet\n2.1 Syntaxe de base\nThe main idea with object-oriented programming is to have objects that one can apply functions and actions on, and no object or program states should be global (such as the MATLAB-like API). The real advantage of this approach becomes apparent when more than one figure is created, or when a figure contains more than one subplot. \nTo use the object-oriented API we start out very much like in the previous example, but instead of creating a new global figure instance we store a reference to the newly created figure instance in the fig variable, and from it we create a new axis instance axes using the add_axes method in the Figure class instance fig:",
"fig = plt.figure()\n\naxes = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # left, bottom, width, height (range 0 to 1)\n\naxes.plot(x, y, 'r')\n\naxes.set_xlabel('x')\naxes.set_ylabel('y')\naxes.set_title('title');\nshow()",
"Although a little bit more code is involved, the advantage is that we now have full control of where the plot axes are placed, and we can easily add more than one axis to the figure:",
"fig = plt.figure()\n\naxes1 = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # main axes\naxes2 = fig.add_axes([0.2, 0.5, 0.4, 0.3]) # inset axes\n\n# main figure\naxes1.plot(x, y, 'r')\naxes1.set_xlabel('x')\naxes1.set_ylabel('y')\naxes1.set_title('title')\n\n# insert\naxes2.plot(y, x, 'g')\naxes2.set_xlabel('y')\naxes2.set_ylabel('x')\naxes2.set_title('insert title');\nshow;",
"If we don't care about being explicit about where our plot axes are placed in the figure canvas, then we can use one of the many axis layout managers in matplotlib. My favorite is subplots, which can be used like this:",
"fig, axes = plt.subplots()\n\naxes.plot(x, y, 'r')\naxes.set_xlabel('x')\naxes.set_ylabel('y')\naxes.set_title('title');\nshow()\n\nfig, axes = plt.subplots(nrows=1, ncols=2)\n\nfor ax in axes:\n ax.plot(x, y, 'r')\n ax.set_xlabel('x')\n ax.set_ylabel('y')\n ax.set_title('title')\nshow()",
"That was easy, but it isn't so pretty with overlapping figure axes and labels, right?\nWe can deal with that by using the fig.tight_layout method, which automatically adjusts the positions of the axes on the figure canvas so that there is no overlapping content:",
"fig, axes = plt.subplots(nrows=1, ncols=2)\n\nfor ax in axes:\n ax.plot(x, y, 'r')\n ax.set_xlabel('x')\n ax.set_ylabel('y')\n ax.set_title('title')\n \nfig.tight_layout()\nshow()",
"2.2 Tailles et proportions\nMatplotlib allows the aspect ratio, DPI and figure size to be specified when the Figure object is created, using the figsize and dpi keyword arguments. figsize is a tuple of the width and height of the figure in inches, and dpi is the dots-per-inch (pixel per inch). To create an 800x400 pixel, 100 dots-per-inch figure, we can do:",
"fig = plt.figure(figsize=(8,4), dpi=100)",
"The same arguments can also be passed to layout managers, such as the subplots function:",
"fig, axes = plt.subplots(figsize=(12,3))\n\naxes.plot(x, y, 'r')\naxes.set_xlabel('x')\naxes.set_ylabel('y')\naxes.set_title('title');\nshow()",
"2.3 Sauver les figures\nTo save a figure to a file we can use the savefig method in the Figure class:",
"fig.savefig(\"filename.png\")",
"Here we can also optionally specify the DPI and choose between different output formats:",
"fig.savefig(\"filename.png\", dpi=200)",
"What formats are available and which ones should be used for best quality?\nMatplotlib can generate high-quality output in a number formats, including PNG, JPG, EPS, SVG, PGF and PDF. For scientific papers, I recommend using PDF whenever possible. (LaTeX documents compiled with pdflatex can include PDFs using the includegraphics command). In some cases, PGF can also be good alternative.\n2.4 Légendes, libellés et titres\nNow that we have covered the basics of how to create a figure canvas and add axes instances to the canvas, let's look at how decorate a figure with titles, axis labels, and legends.\nTitres\nA title can be added to each axis instance in a figure. To set the title, use the set_title method in the axes instance:",
"ax.set_title(\"title\");",
"Libellés des axes\nSimilarly, with the methods set_xlabel and set_ylabel, we can set the labels of the X and Y axes:",
"ax.set_xlabel(\"x\")\nax.set_ylabel(\"y\");",
"Légendes\nLegends for curves in a figure can be added in two ways. One method is to use the legend method of the axis object and pass a list/tuple of legend texts for the previously defined curves:",
"ax.legend([\"curve1\", \"curve2\", \"curve3\"]);",
"The method described above follows the MATLAB API. It is somewhat prone to errors and unflexible if curves are added to or removed from the figure (resulting in a wrongly labelled curve).\nA better method is to use the label=\"label text\" keyword argument when plots or other objects are added to the figure, and then using the legend method without arguments to add the legend to the figure:",
"ax.plot(x, x**2, label=\"curve1\")\nax.plot(x, x**3, label=\"curve2\")\nax.legend();",
"The advantage with this method is that if curves are added or removed from the figure, the legend is automatically updated accordingly.\nThe legend function takes an optional keyword argument loc that can be used to specify where in the figure the legend is to be drawn. The allowed values of loc are numerical codes for the various places the legend can be drawn. See http://matplotlib.org/users/legend_guide.html#legend-location for details. Some of the most common loc values are:",
"ax.legend(loc=0) # let matplotlib decide the optimal location\nax.legend(loc=1) # upper right corner\nax.legend(loc=2) # upper left corner\nax.legend(loc=3) # lower left corner\nax.legend(loc=4) # lower right corner\n# .. many more options are available",
"The following figure shows how to use the figure title, axis labels and legends described above:",
"fig, ax = plt.subplots()\n\nax.plot(x, x**2, label=\"y = x**2\")\nax.plot(x, x**3, label=\"y = x**3\")\nax.legend(loc=2); # upper left corner\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_title('title');\nshow()",
"2.5 Formattage des textes: LaTeX et fontes\nThe figure above is functional, but it does not (yet) satisfy the criteria for a figure used in a publication. First and foremost, we need to have LaTeX formatted text, and second, we need to be able to adjust the font size to appear right in a publication.\nMatplotlib has great support for LaTeX. All we need to do is to use dollar signs encapsulate LaTeX in any text (legend, title, label, etc.). For example, \"$y=x^3$\".\nBut here we can run into a slightly subtle problem with LaTeX code and Python text strings. In LaTeX, we frequently use the backslash in commands, for example \\alpha to produce the symbol $\\alpha$. But the backslash already has a meaning in Python strings (the escape code character). To avoid Python messing up our latex code, we need to use \"raw\" text strings. Raw text strings are prepended with an 'r', like r\"\\alpha\" or r'\\alpha' instead of \"\\alpha\" or '\\alpha':",
"fig, ax = plt.subplots()\n\nax.plot(x, x**2, label=r\"$y = \\alpha^2$\")\nax.plot(x, x**3, label=r\"$y = \\alpha^3$\")\nax.legend(loc=2) # upper left corner\nax.set_xlabel(r'$\\alpha$', fontsize=18)\nax.set_ylabel(r'$y$', fontsize=18)\nax.set_title('title');\nshow()",
"We can also change the global font size and font family, which applies to all text elements in a figure (tick labels, axis labels and titles, legends, etc.):",
"# Update the matplotlib configuration parameters:\nmatplotlib.rcParams.update({'font.size': 18, 'font.family': 'serif'})\n\nfig, ax = plt.subplots()\n\nax.plot(x, x**2, label=r\"$y = \\alpha^2$\")\nax.plot(x, x**3, label=r\"$y = \\alpha^3$\")\nax.legend(loc=2) # upper left corner\nax.set_xlabel(r'$\\alpha$')\nax.set_ylabel(r'$y$')\nax.set_title('title');\nshow()",
"A good choice of global fonts are the STIX fonts:",
"# Update the matplotlib configuration parameters:\nmatplotlib.rcParams.update({'font.size': 18, 'font.family': 'STIXGeneral', 'mathtext.fontset': 'stix'})\n\nfig, ax = plt.subplots()\n\nax.plot(x, x**2, label=r\"$y = \\alpha^2$\")\nax.plot(x, x**3, label=r\"$y = \\alpha^3$\")\nax.legend(loc=2) # upper left corner\nax.set_xlabel(r'$\\alpha$')\nax.set_ylabel(r'$y$')\nax.set_title('title');\nshow()",
"Or, alternatively, we can request that matplotlib uses LaTeX to render the text elements in the figure:",
"matplotlib.rcParams.update({'font.size': 18, 'text.usetex': True})\n\nfig, ax = plt.subplots()\n\nax.plot(x, x**2, label=r\"$y = \\alpha^2$\")\nax.plot(x, x**3, label=r\"$y = \\alpha^3$\")\nax.legend(loc=2) # upper left corner\nax.set_xlabel(r'$\\alpha$')\nax.set_ylabel(r'$y$')\nax.set_title('title');\nshow()\n\n# restore\nmatplotlib.rcParams.update({'font.size': 12, 'font.family': 'sans', 'text.usetex': False})",
"2.6 Couleurs, largeur et types de lignes\nColors\nWith matplotlib, we can define the colors of lines and other graphical elements in a number of ways. First of all, we can use the MATLAB-like syntax where 'b' means blue, 'g' means green, etc. The MATLAB API for selecting line styles are also supported: where, for example, 'b.-' means a blue line with dots:",
"# MATLAB style line color and style \nax.plot(x, x**2, 'b.-') # blue line with dots\nax.plot(x, x**3, 'g--') # green dashed line",
"We can also define colors by their names or RGB hex codes and optionally provide an alpha value using the color and alpha keyword arguments:",
"fig, ax = plt.subplots()\n\nax.plot(x, x+1, color=\"red\", alpha=0.5) # half-transparant red\nax.plot(x, x+2, color=\"#1155dd\") # RGB hex code for a bluish color\nax.plot(x, x+3, color=\"#15cc55\") # RGB hex code for a greenish color\nshow()",
"Line and marker styles\nTo change the line width, we can use the linewidth or lw keyword argument. The line style can be selected using the linestyle or ls keyword arguments:",
"fig, ax = plt.subplots(figsize=(12,6))\n\nax.plot(x, x+1, color=\"blue\", linewidth=0.25)\nax.plot(x, x+2, color=\"blue\", linewidth=0.50)\nax.plot(x, x+3, color=\"blue\", linewidth=1.00)\nax.plot(x, x+4, color=\"blue\", linewidth=2.00)\n\n# possible linestype options ‘-‘, ‘--’, ‘-.’, ‘:’, ‘steps’\nax.plot(x, x+5, color=\"red\", lw=2, linestyle='-')\nax.plot(x, x+6, color=\"red\", lw=2, ls='-.')\nax.plot(x, x+7, color=\"red\", lw=2, ls=':')\n\n# custom dash\nline, = ax.plot(x, x+8, color=\"black\", lw=1.50)\nline.set_dashes([5, 10, 15, 10]) # format: line length, space length, ...\n\n# possible marker symbols: marker = '+', 'o', '*', 's', ',', '.', '1', '2', '3', '4', ...\nax.plot(x, x+ 9, color=\"green\", lw=2, ls='--', marker='+')\nax.plot(x, x+10, color=\"green\", lw=2, ls='--', marker='o')\nax.plot(x, x+11, color=\"green\", lw=2, ls='--', marker='s')\nax.plot(x, x+12, color=\"green\", lw=2, ls='--', marker='1')\n\n# marker size and color\nax.plot(x, x+13, color=\"purple\", lw=1, ls='-', marker='o', markersize=2)\nax.plot(x, x+14, color=\"purple\", lw=1, ls='-', marker='o', markersize=4)\nax.plot(x, x+15, color=\"purple\", lw=1, ls='-', marker='o', markersize=8, markerfacecolor=\"red\")\nax.plot(x, x+16, color=\"purple\", lw=1, ls='-', marker='s', markersize=8, \n markerfacecolor=\"yellow\", markeredgewidth=2, markeredgecolor=\"blue\");\nshow()",
"2.7 Contrôle des axes\nThe appearance of the axes is an important aspect of a figure that we often need to modify to make a publication quality graphics. We need to be able to control where the ticks and labels are placed, modify the font size and possibly the labels used on the axes. In this section we will look at controling those properties in a matplotlib figure.\nPlot range\nThe first thing we might want to configure is the ranges of the axes. We can do this using the set_ylim and set_xlim methods in the axis object, or axis('tight') for automatrically getting \"tightly fitted\" axes ranges:",
"fig, axes = plt.subplots(1, 3, figsize=(12, 4))\n\naxes[0].plot(x, x**2, x, x**3)\naxes[0].set_title(\"default axes ranges\")\n\naxes[1].plot(x, x**2, x, x**3)\naxes[1].axis('tight')\naxes[1].set_title(\"tight axes\")\n\naxes[2].plot(x, x**2, x, x**3)\naxes[2].set_ylim([0, 60])\naxes[2].set_xlim([2, 5])\naxes[2].set_title(\"custom axes range\");\nshow()",
"Logarithmic scale\nIt is also possible to set a logarithmic scale for one or both axes. This functionality is in fact only one application of a more general transformation system in Matplotlib. Each of the axes' scales are set seperately using set_xscale and set_yscale methods which accept one parameter (with the value \"log\" in this case):",
"fig, axes = plt.subplots(1, 2, figsize=(10,4))\n \naxes[0].plot(x, x**2, x, np.exp(x))\naxes[0].set_title(\"Normal scale\")\n\naxes[1].plot(x, x**2, x, np.exp(x))\naxes[1].set_yscale(\"log\")\naxes[1].set_title(\"Logarithmic scale (y)\");\nshow()",
"2.8 Placement des échelles et libellés\nWe can explicitly determine where we want the axis ticks with set_xticks and set_yticks, which both take a list of values for where on the axis the ticks are to be placed. We can also use the set_xticklabels and set_yticklabels methods to provide a list of custom text labels for each tick location:",
"fig, ax = plt.subplots(figsize=(10, 4))\n\nax.plot(x, x**2, x, x**3, lw=2)\n\nax.set_xticks([1, 2, 3, 4, 5])\nax.set_xticklabels([r'$\\alpha$', r'$\\beta$', r'$\\gamma$', r'$\\delta$', r'$\\epsilon$'], fontsize=18)\n\nyticks = [0, 50, 100, 150]\nax.set_yticks(yticks)\nax.set_yticklabels([\"$%.1f$\" % y for y in yticks], fontsize=18); # use LaTeX formatted labels\nshow()",
"There are a number of more advanced methods for controlling major and minor tick placement in matplotlib figures, such as automatic placement according to different policies. See http://matplotlib.org/api/ticker_api.html for details.\nScientific notation\nWith large numbers on axes, it is often better use scientific notation:",
"fig, ax = plt.subplots(1, 1)\n \nax.plot(x, x**2, x, np.exp(x))\nax.set_title(\"scientific notation\")\n\nax.set_yticks([0, 50, 100, 150])\n\nfrom matplotlib import ticker\nformatter = ticker.ScalarFormatter(useMathText=True)\nformatter.set_scientific(True) \nformatter.set_powerlimits((-1,1)) \nax.yaxis.set_major_formatter(formatter) \nshow()",
"2.9 Formattage des espaces sur les axes",
"# distance between x and y axis and the numbers on the axes\nmatplotlib.rcParams['xtick.major.pad'] = 5\nmatplotlib.rcParams['ytick.major.pad'] = 5\n\nfig, ax = plt.subplots(1, 1)\n \nax.plot(x, x**2, x, np.exp(x))\nax.set_yticks([0, 50, 100, 150])\n\nax.set_title(\"label and axis spacing\")\n\n# padding between axis label and axis numbers\nax.xaxis.labelpad = 5\nax.yaxis.labelpad = 5\n\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\");\nshow()\n\n# restore defaults\nmatplotlib.rcParams['xtick.major.pad'] = 3\nmatplotlib.rcParams['ytick.major.pad'] = 3",
"Axis position adjustments\nUnfortunately, when saving figures the labels are sometimes clipped, and it can be necessary to adjust the positions of axes a little bit. This can be done using subplots_adjust:",
"fig, ax = plt.subplots(1, 1)\n \nax.plot(x, x**2, x, np.exp(x))\nax.set_yticks([0, 50, 100, 150])\n\nax.set_title(\"title\")\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\")\n\nfig.subplots_adjust(left=0.15, right=.9, bottom=0.1, top=0.9);\nshow()",
"2.10 Grille\nWith the grid method in the axis object, we can turn on and off grid lines. We can also customize the appearance of the grid lines using the same keyword arguments as the plot function:",
"fig, axes = plt.subplots(1, 2, figsize=(10,3))\n\n# default grid appearance\naxes[0].plot(x, x**2, x, x**3, lw=2)\naxes[0].grid(True)\n\n# custom grid appearance\naxes[1].plot(x, x**2, x, x**3, lw=2)\naxes[1].grid(color='b', alpha=0.5, linestyle='dashed', linewidth=0.5)\nshow()",
"2.11 Double graphique\nSometimes it is useful to have dual x or y axes in a figure; for example, when plotting curves with different units together. Matplotlib supports this with the twinx and twiny functions:",
"fig, ax1 = plt.subplots()\n\nax1.plot(x, x**2, lw=2, color=\"blue\")\nax1.set_ylabel(r\"area $(m^2)$\", fontsize=18, color=\"blue\")\nfor label in ax1.get_yticklabels():\n label.set_color(\"blue\")\n \nax2 = ax1.twinx()\nax2.plot(x, x**3, lw=2, color=\"red\")\nax2.set_ylabel(r\"volume $(m^3)$\", fontsize=18, color=\"red\")\nfor label in ax2.get_yticklabels():\n label.set_color(\"red\")\nshow()",
"2.12 Axes centrés",
"fig, ax = plt.subplots()\n\nax.spines['right'].set_color('none')\nax.spines['top'].set_color('none')\n\nax.xaxis.set_ticks_position('bottom')\nax.spines['bottom'].set_position(('data',0)) # set position of x spine to x=0\n\nax.yaxis.set_ticks_position('left')\nax.spines['left'].set_position(('data',0)) # set position of y spine to y=0\n\nxx = np.linspace(-0.75, 1., 100)\nax.plot(xx, xx**3);\nshow()",
"2.13 Autres graphes 2D\nIn addition to the regular plot method, there are a number of other functions for generating different kind of plots. See the matplotlib plot gallery for a complete list of available plot types: http://matplotlib.org/gallery.html. Some of the more useful ones are show below:",
"n = np.array([0,1,2,3,4,5])\n\nfig, axes = plt.subplots(1, 4, figsize=(12,3))\n\naxes[0].scatter(xx, xx + 0.25*np.random.randn(len(xx)))\naxes[0].set_title(\"scatter\")\n\naxes[1].step(n, n**2, lw=2)\naxes[1].set_title(\"step\")\n\naxes[2].bar(n, n**2, align=\"center\", width=0.5, alpha=0.5)\naxes[2].set_title(\"bar\")\n\naxes[3].fill_between(x, x**2, x**3, color=\"green\", alpha=0.5);\naxes[3].set_title(\"fill_between\");\nshow()\n\n# polar plot using add_axes and polar projection\nfig = plt.figure()\nax = fig.add_axes([0.0, 0.0, .6, .6], polar=True)\nt = np.linspace(0, 2 * np.pi, 100)\nax.plot(t, t, color='blue', lw=3);\nshow()\n\n# A histogram\nn = np.random.randn(100000)\nfig, axes = plt.subplots(1, 2, figsize=(12,4))\n\naxes[0].hist(n)\naxes[0].set_title(\"Default histogram\")\naxes[0].set_xlim((min(n), max(n)))\n\naxes[1].hist(n, cumulative=True, bins=50)\naxes[1].set_title(\"Cumulative detailed histogram\")\naxes[1].set_xlim((min(n), max(n)));\nshow()",
"2.14 Textes d'annotation\nAnnotating text in matplotlib figures can be done using the text function. It supports LaTeX formatting just like axis label texts and titles:",
"fig, ax = plt.subplots()\n\nax.plot(xx, xx**2, xx, xx**3)\n\nax.text(0.15, 0.2, r\"$y=x^2$\", fontsize=20, color=\"blue\")\nax.text(0.65, 0.1, r\"$y=x^3$\", fontsize=20, color=\"green\");\nshow()",
"2.15 Figures avec sous graphes\nAxes can be added to a matplotlib Figure canvas manually using fig.add_axes or using a sub-figure layout manager such as subplots, subplot2grid, or gridspec:\nsubplots",
"fig, ax = plt.subplots(2, 3)\nfig.tight_layout()\nshow()",
"subplot2grid",
"fig = plt.figure()\nax1 = plt.subplot2grid((3,3), (0,0), colspan=3)\nax2 = plt.subplot2grid((3,3), (1,0), colspan=2)\nax3 = plt.subplot2grid((3,3), (1,2), rowspan=2)\nax4 = plt.subplot2grid((3,3), (2,0))\nax5 = plt.subplot2grid((3,3), (2,1))\nfig.tight_layout()\nshow()",
"gridspec",
"import matplotlib.gridspec as gridspec\n\nfig = plt.figure()\n\ngs = gridspec.GridSpec(2, 3, height_ratios=[2,1], width_ratios=[1,2,1])\nfor g in gs:\n ax = fig.add_subplot(g)\n \nfig.tight_layout()\nshow()",
"add_axes\nManually adding axes with add_axes is useful for adding insets to figures:",
"fig, ax = plt.subplots()\n\nax.plot(xx, xx**2, xx, xx**3)\nfig.tight_layout()\n\n# inset\ninset_ax = fig.add_axes([0.2, 0.55, 0.35, 0.35]) # X, Y, width, height\n\ninset_ax.plot(xx, xx**2, xx, xx**3)\ninset_ax.set_title('zoom near origin')\n\n# set axis range\ninset_ax.set_xlim(-.2, .2)\ninset_ax.set_ylim(-.005, .01)\n\n# set axis tick locations\ninset_ax.set_yticks([0, 0.005, 0.01])\ninset_ax.set_xticks([-0.1,0,.1]);\nshow()",
"2.16 Graphes de contour\nColormaps and contour figures are useful for plotting functions of two variables. In most of these functions we will use a colormap to encode one dimension of the data. There are a number of predefined colormaps. It is relatively straightforward to define custom colormaps. For a list of pre-defined colormaps, see: http://www.scipy.org/Cookbook/Matplotlib/Show_colormaps",
"alpha = 0.7\nphi_ext = 2 * np.pi * 0.5\n\ndef flux_qubit_potential(phi_m, phi_p):\n return 2 + alpha - 2 * np.cos(phi_p) * np.cos(phi_m) - alpha * np.cos(phi_ext - 2*phi_p)\n\nphi_m = np.linspace(0, 2*np.pi, 100)\nphi_p = np.linspace(0, 2*np.pi, 100)\nX,Y = np.meshgrid(phi_p, phi_m)\nZ = flux_qubit_potential(X, Y).T",
"pcolor",
"fig, ax = plt.subplots()\n\np = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max())\ncb = fig.colorbar(p, ax=ax)\nshow()",
"imshow",
"fig, ax = plt.subplots()\n\nim = ax.imshow(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])\nim.set_interpolation('bilinear')\n\ncb = fig.colorbar(im, ax=ax)\nshow()",
"contour",
"fig, ax = plt.subplots()\n\ncnt = ax.contour(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])\nshow()",
"3 graphes 3D\nTo use 3D graphics in matplotlib, we first need to create an instance of the Axes3D class. 3D axes can be added to a matplotlib figure canvas in exactly the same way as 2D axes; or, more conveniently, by passing a projection='3d' keyword argument to the add_axes or add_subplot methods.",
"from mpl_toolkits.mplot3d.axes3d import Axes3D",
"3.1 Surface plots",
"fig = plt.figure(figsize=(14,6))\n\n# `ax` is a 3D-aware axis instance because of the projection='3d' keyword argument to add_subplot\nax = fig.add_subplot(1, 2, 1, projection='3d')\n\np = ax.plot_surface(X, Y, Z, rstride=4, cstride=4, linewidth=0)\n\n# surface_plot with color grading and color bar\nax = fig.add_subplot(1, 2, 2, projection='3d')\np = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=matplotlib.cm.coolwarm, linewidth=0, antialiased=False)\ncb = fig.colorbar(p, shrink=0.5)\nshow()",
"Wire-frame plot",
"fig = plt.figure(figsize=(8,6))\n\nax = fig.add_subplot(1, 1, 1, projection='3d')\n\np = ax.plot_wireframe(X, Y, Z, rstride=4, cstride=4)\nshow()",
"3.2 Coutour plots with projections",
"fig = plt.figure(figsize=(8,6))\n\nax = fig.add_subplot(1,1,1, projection='3d')\n\nax.plot_surface(X, Y, Z, rstride=4, cstride=4, alpha=0.25)\ncset = ax.contour(X, Y, Z, zdir='z', offset=-np.pi, cmap=matplotlib.cm.coolwarm)\ncset = ax.contour(X, Y, Z, zdir='x', offset=-np.pi, cmap=matplotlib.cm.coolwarm)\ncset = ax.contour(X, Y, Z, zdir='y', offset=3*np.pi, cmap=matplotlib.cm.coolwarm)\n\nax.set_xlim3d(-np.pi, 2*np.pi);\nax.set_ylim3d(0, 3*np.pi);\nax.set_zlim3d(-np.pi, 2*np.pi);\nshow()",
"Change the view angle\nWe can change the perspective of a 3D plot using the view_init method, which takes two arguments: elevation and azimuth angle (in degrees):",
"fig = plt.figure(figsize=(12,6))\n\nax = fig.add_subplot(1,2,1, projection='3d')\nax.plot_surface(X, Y, Z, rstride=4, cstride=4, alpha=0.25)\nax.view_init(30, 45)\n\nax = fig.add_subplot(1,2,2, projection='3d')\nax.plot_surface(X, Y, Z, rstride=4, cstride=4, alpha=0.25)\nax.view_init(70, 30)\n\nfig.tight_layout()\nshow()",
"4 Compléments\n\nhttp://www.matplotlib.org - The project web page for matplotlib.\nhttps://github.com/matplotlib/matplotlib - The source code for matplotlib.\nhttp://matplotlib.org/gallery.html - A large gallery showcaseing various types of plots matplotlib can create. Highly recommended! \nhttp://www.loria.fr/~rougier/teaching/matplotlib - A good matplotlib tutorial.\nhttp://scipy-lectures.github.io/matplotlib/matplotlib.html - Another good matplotlib reference."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bharat-b7/NN_glimpse
|
2.2.2 CNN HandsOn - MNIST & CN Nets.ipynb
|
unlicense
|
[
"Convolution Nets for MNIST\nDeep Learning models can take quite a bit of time to run, particularly if GPU isn't used. \nIn the interest of time, you could sample a subset of observations (e.g. $1000$) that are a particular number of your choice (e.g. $6$) and $1000$ observations that aren't that particular number (i.e. $\\neq 6$). \nWe will build a model using that and see how it performs on the test dataset",
"import os\nos.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\" # see issue #152\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"\"\n#os.environ['THEANO_FLAGS'] = \"device=gpu2\"\n\n#Import the required libraries\nimport numpy as np\nnp.random.seed(1338)\n\nfrom keras.datasets import mnist\nfrom keras.models import load_model\n\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Dropout, Activation, Flatten\n\nfrom keras.layers.convolutional import Conv2D\nfrom keras.layers.pooling import MaxPooling2D\n\nfrom keras.utils import np_utils\nfrom keras.optimizers import SGD",
"Loading Data",
"#Load the training and testing data\n(X_train, y_train), (X_test, y_test) = mnist.load_data()",
"Data Preparation\nVery Important:\nWhen dealing with images & convolutions, it is paramount to handle image_data_format properly",
"img_rows, img_cols = 28, 28\n'''\nif K.image_data_format() == 'channels_first':\n shape_ord = (1, img_rows, img_cols)\nelse: # channel_last\n shape_ord = (img_rows, img_cols, 1)\n'''\nshape_ord = (1, img_rows, img_cols)",
"Preprocess and Normalise Data",
"X_train = X_train.reshape((X_train.shape[0],) + shape_ord)\nX_test = X_test.reshape((X_test.shape[0],) + shape_ord)\n\nX_train = X_train.astype('float32')\nX_test = X_test.astype('float32')\n\nX_train /= 255\nX_test /= 255\n\n# Converting the classes to its binary categorical form\nnb_classes = 10\ny_train = np_utils.to_categorical(y_train, nb_classes)\ny_test = np_utils.to_categorical(y_test, nb_classes)",
"A simple CNN",
"# -- Initializing the values for the convolution neural network\n\nnb_epoch = 100 # kept very low! Please increase if you have GPU\n\nbatch_size = 30000\n# number of convolutional filters to use\nnb_filters = 32\n# size of pooling area for max pooling\nnb_pool = 2\n# convolution kernel size\nnb_conv = 3\n\nsgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)",
"Step 1: Model Definition",
"model = Sequential()\n\nmodel.add(Conv2D(nb_filters, nb_conv, nb_conv, \n input_shape=shape_ord)) # note: the very first layer **must** always specify the input_shape\nmodel.add(Activation('relu'))\n\nmodel.add(Flatten())\nmodel.add(Dense(nb_classes))\nmodel.add(Activation('softmax'))",
"Step 2: Compile",
"model.compile(loss='categorical_crossentropy',\n optimizer='sgd',\n metrics=['accuracy'])",
"Step 3: Fit",
"# Train or load! you choose!!\n'''\nhist = model.fit(X_train, y_train, batch_size=batch_size, \n nb_epoch=nb_epoch, verbose=1, \n validation_data=(X_test, y_test))\nmodel.save('example_MNIST_CNN_base.h5')\n'''\nmodel=load_model('example_MNIST_CNN_base.h5')\nmodel.summary()\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.figure()\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.plot(hist.history['loss'])\nplt.plot(hist.history['val_loss'])\nplt.legend(['Training', 'Validation'])\n\nplt.figure()\nplt.xlabel('Epochs')\nplt.ylabel('Accuracy')\nplt.plot(hist.history['acc'])\nplt.plot(hist.history['val_acc'])\nplt.legend(['Training', 'Validation'], loc='lower right')",
"Step 4: Evaluate",
"print('Available Metrics in Model: {}'.format(model.metrics_names))\n\n# Evaluating the model on the test data \nloss, accuracy = model.evaluate(X_test, y_test, verbose=0)\nprint('Test Loss:', loss)\nprint('Test Accuracy:', accuracy)",
"Let's plot our model Predictions!",
"import matplotlib.pyplot as plt\n\n%matplotlib inline\n\nslice = 15\npredicted = model.predict(X_test[:slice]).argmax(-1)\n\nplt.figure(figsize=(16,8))\nfor i in range(slice):\n plt.subplot(1, slice, i+1)\n plt.imshow(X_test[i,0], interpolation='nearest')\n plt.text(0, 0, predicted[i], color='black', \n bbox=dict(facecolor='white', alpha=1))\n plt.axis('off')",
"Adding more Dense Layers",
"model = Sequential()\nmodel.add(Conv2D(nb_filters, nb_conv, nb_conv, input_shape=shape_ord))\nmodel.add(Activation('relu'))\n\nmodel.add(Flatten())\nmodel.add(Dense(128))\nmodel.add(Activation('relu'))\n\nmodel.add(Dense(nb_classes))\nmodel.add(Activation('softmax'))\n\nmodel.compile(loss='categorical_crossentropy',\n optimizer='sgd',\n metrics=['accuracy'])\n\n# Ah, another path to choose, Train or load!!\n'''\nhist = model.fit(X_train, y_train, batch_size=batch_size, \n nb_epoch=nb_epoch, verbose=1, \n validation_data=(X_test, y_test))\nmodel.save('example_MNIST_CNN_more_dense.h5')\n'''\nmodel=load_model('example_MNIST_CNN_more_dense.h5')\nmodel.summary()\n\n#Evaluating the model on the test data \nscore, accuracy = model.evaluate(X_test, y_test, verbose=0)\nprint('Test score:', score)\nprint('Test accuracy:', accuracy)",
"Adding more Convolution Layers",
"model = Sequential()\nmodel.add(Conv2D(nb_filters, nb_conv, nb_conv, input_shape=shape_ord))\nmodel.add(Activation('relu'))\nmodel.add(Conv2D(nb_filters, nb_conv, nb_conv))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))\nmodel.add(Dropout(0.25))\n \nmodel.add(Flatten())\nmodel.add(Dense(128))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(nb_classes))\nmodel.add(Activation('softmax'))\n\nmodel.compile(loss='categorical_crossentropy',\n optimizer='sgd',\n metrics=['accuracy'])\n\n'''\nhist = model.fit(X_train, y_train, batch_size=batch_size/2, \n nb_epoch=nb_epoch, verbose=1, \n validation_data=(X_test, y_test))\nmodel.save('example_MNIST_CNN_more_conv.h5')\n'''\nmodel=load_model('example_MNIST_CNN_more_conv.h5')\nmodel.summary()\n\n#Evaluating the model on the test data \nscore, accuracy = model.evaluate(X_test, y_test, verbose=0)\nprint('Test score:', score)\nprint('Test accuracy:', accuracy)",
"Exercise\nThe above code has been written as a function. \nChange some of the hyperparameters and see what happens.",
"# Function for constructing the convolution neural network\n# Feel free to add parameters, if you want\n\ndef build_model():\n \"\"\"\"\"\"\n model = Sequential()\n model.add(Conv2D(nb_filters, nb_conv, nb_conv, \n padding='valid',\n input_shape=shape_ord))\n model.add(Activation('relu'))\n model.add(Conv2D(nb_filters, nb_conv, nb_conv))\n model.add(Activation('relu'))\n model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))\n model.add(Dropout(0.25))\n \n model.add(Flatten())\n model.add(Dense(128))\n model.add(Activation('relu'))\n model.add(Dropout(0.5))\n model.add(Dense(nb_classes))\n model.add(Activation('softmax'))\n \n model.compile(loss='categorical_crossentropy',\n optimizer='sgd',\n metrics=['accuracy'])\n\n model.fit(X_train, y_train, batch_size=batch_size, \n epochs=nb_epoch,verbose=1,\n validation_data=(X_test, y_test))\n \n\n #Evaluating the model on the test data \n score, accuracy = model.evaluate(X_test, y_test, verbose=0)\n print('Test score:', score)\n print('Test accuracy:', accuracy)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/mlops-on-gcp
|
on_demand/kfp-caip-sklearn/lab-02-kfp-pipeline/exercises/lab-02.ipynb
|
apache-2.0
|
[
"Continuous training pipeline with Kubeflow Pipeline and AI Platform\nLearning Objectives:\n1. Learn how to use Kubeflow Pipeline (KFP) pre-build components (BiqQuery, AI Platform training and predictions)\n1. Learn how to use KFP lightweight python components\n1. Learn how to build a KFP with these components\n1. Learn how to compile, upload, and run a KFP with the command line\nIn this lab, you will build, deploy, and run a KFP pipeline that orchestrates BigQuery and AI Platform services to train, tune, and deploy a scikit-learn model.\nUnderstanding the pipeline design\nThe workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the covertype_training_pipeline.py file that we will generate below.\nThe pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.",
"!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py",
"NOTE: Because there are no environment variables set, therefore covertype_training_pipeline.py file is missing; we will create it in the next step.\nThe pipeline uses a mix of custom and pre-build components.\n\nPre-build components. The pipeline uses the following pre-build components that are included with the KFP distribution:\nBigQuery query component\nAI Platform Training component\nAI Platform Deploy component\n\n\nCustom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-build components. The components are implemented using the KFP SDK's Lightweight Python Components mechanism. The code for the components is in the helper_components.py file:\nRetrieve Best Run. This component retrieves a tuning metric and hyperparameter values for the best run of a AI Platform Training hyperparameter tuning job.\nEvaluate Model. This component evaluates a sklearn trained model using a provided metric and a testing dataset.\n\n\n\nExercise\nComplete TO DOs the pipeline file below.\n<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-02-kfp-pipeline and opening lab-02.ipynb.\n</ql-infobox>",
"%%writefile ./pipeline/covertype_training_pipeline.py\n# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"KFP orchestrating BigQuery and Cloud AI Platform services.\"\"\"\n\nimport os\n\nfrom helper_components import evaluate_model\nfrom helper_components import retrieve_best_run\nfrom jinja2 import Template\nimport kfp\nfrom kfp.components import func_to_container_op\nfrom kfp.dsl.types import Dict\nfrom kfp.dsl.types import GCPProjectID\nfrom kfp.dsl.types import GCPRegion\nfrom kfp.dsl.types import GCSPath\nfrom kfp.dsl.types import String\nfrom kfp.gcp import use_gcp_secret\n\n# Defaults and environment settings\nBASE_IMAGE = os.getenv('BASE_IMAGE')\nTRAINER_IMAGE = os.getenv('TRAINER_IMAGE')\nRUNTIME_VERSION = os.getenv('RUNTIME_VERSION')\nPYTHON_VERSION = os.getenv('PYTHON_VERSION')\nCOMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')\nUSE_KFP_SA = os.getenv('USE_KFP_SA')\n\nTRAINING_FILE_PATH = 'datasets/training/data.csv'\nVALIDATION_FILE_PATH = 'datasets/validation/data.csv'\nTESTING_FILE_PATH = 'datasets/testing/data.csv'\n\n# Parameter defaults\nSPLITS_DATASET_ID = 'splits'\nHYPERTUNE_SETTINGS = \"\"\"\n{\n \"hyperparameters\": {\n \"goal\": \"MAXIMIZE\",\n \"maxTrials\": 6,\n \"maxParallelTrials\": 3,\n \"hyperparameterMetricTag\": \"accuracy\",\n \"enableTrialEarlyStopping\": True,\n \"params\": [\n {\n \"parameterName\": \"max_iter\",\n \"type\": \"DISCRETE\",\n \"discreteValues\": [500, 1000]\n },\n {\n \"parameterName\": \"alpha\",\n \"type\": \"DOUBLE\",\n \"minValue\": 0.0001,\n \"maxValue\": 0.001,\n \"scaleType\": \"UNIT_LINEAR_SCALE\"\n }\n ]\n }\n}\n\"\"\"\n\n\n# Helper functions\ndef generate_sampling_query(source_table_name, num_lots, lots):\n \"\"\"Prepares the data sampling query.\"\"\"\n\n sampling_query_template = \"\"\"\n SELECT *\n FROM \n `{{ source_table }}` AS cover\n WHERE \n MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), {{ num_lots }}) IN ({{ lots }})\n \"\"\"\n query = Template(sampling_query_template).render(\n source_table=source_table_name, num_lots=num_lots, lots=str(lots)[1:-1])\n\n return query\n\n\n# Create component factories\ncomponent_store = # TO DO: Complete the command\n\nbigquery_query_op = # TO DO: Use the pre-build bigquery/query component\nmlengine_train_op = # TO DO: Use the pre-build ml_engine/train\nmlengine_deploy_op = # TO DO: Use the pre-build ml_engine/deploy component\nretrieve_best_run_op = # TO DO: Package the retrieve_best_run function into a lightweight component\nevaluate_model_op = # TO DO: Package the evaluate_model function into a lightweight component\n\n\n@kfp.dsl.pipeline(\n name='Covertype Classifier Training',\n description='The pipeline training and deploying the Covertype classifierpipeline_yaml'\n)\ndef covertype_train(project_id,\n region,\n source_table_name,\n gcs_root,\n dataset_id,\n evaluation_metric_name,\n evaluation_metric_threshold,\n model_id,\n version_id,\n replace_existing_version,\n hypertune_settings=HYPERTUNE_SETTINGS,\n 
dataset_location='US'):\n \"\"\"Orchestrates training and deployment of an sklearn model.\"\"\"\n\n # Create the training split\n query = generate_sampling_query(\n source_table_name=source_table_name, num_lots=10, lots=[1, 2, 3, 4])\n\n training_file_path = '{}/{}'.format(gcs_root, TRAINING_FILE_PATH)\n\n create_training_split = bigquery_query_op(\n query=query,\n project_id=project_id,\n dataset_id=dataset_id,\n table_id='',\n output_gcs_path=training_file_path,\n dataset_location=dataset_location)\n\n # Create the validation split\n query = generate_sampling_query(\n source_table_name=source_table_name, num_lots=10, lots=[8])\n\n validation_file_path = '{}/{}'.format(gcs_root, VALIDATION_FILE_PATH)\n\n create_validation_split = # TODO - use the bigquery_query_op\n\n # Create the testing split\n query = generate_sampling_query(\n source_table_name=source_table_name, num_lots=10, lots=[9])\n\n testing_file_path = '{}/{}'.format(gcs_root, TESTING_FILE_PATH)\n\n create_testing_split = # TO DO: Use the bigquery_query_op\n \n\n # Tune hyperparameters\n tune_args = [\n '--training_dataset_path',\n create_training_split.outputs['output_gcs_path'],\n '--validation_dataset_path',\n create_validation_split.outputs['output_gcs_path'], '--hptune', 'True'\n ]\n\n job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir/hypertune',\n kfp.dsl.RUN_ID_PLACEHOLDER)\n\n hypertune = # TO DO: Use the mlengine_train_op\n\n # Retrieve the best trial\n get_best_trial = retrieve_best_run_op(\n project_id, hypertune.outputs['job_id'])\n\n # Train the model on a combined training and validation datasets\n job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir', kfp.dsl.RUN_ID_PLACEHOLDER)\n\n train_args = [\n '--training_dataset_path',\n create_training_split.outputs['output_gcs_path'],\n '--validation_dataset_path',\n create_validation_split.outputs['output_gcs_path'], '--alpha',\n get_best_trial.outputs['alpha'], '--max_iter',\n get_best_trial.outputs['max_iter'], '--hptune', 'False'\n ]\n\n train_model = # TO DO: Use the mlengine_train_op\n\n # Evaluate the model on the testing split\n eval_model = evaluate_model_op(\n dataset_path=str(create_testing_split.outputs['output_gcs_path']),\n model_path=str(train_model.outputs['job_dir']),\n metric_name=evaluation_metric_name)\n\n # Deploy the model if the primary metric is better than threshold\n with kfp.dsl.Condition(eval_model.outputs['metric_value'] > evaluation_metric_threshold):\n deploy_model = mlengine_deploy_op(\n model_uri=train_model.outputs['job_dir'],\n project_id=project_id,\n model_id=model_id,\n version_id=version_id,\n runtime_version=RUNTIME_VERSION,\n python_version=PYTHON_VERSION,\n replace_existing_version=replace_existing_version)\n\n # Configure the pipeline to run using the service account defined\n # in the user-gcp-sa k8s secret\n if USE_KFP_SA == 'True':\n kfp.dsl.get_pipeline_conf().add_op_transformer(\n use_gcp_secret('user-gcp-sa'))",
"The custom components execute in a container image defined in base_image/Dockerfile.",
"!cat base_image/Dockerfile",
"The training step in the pipeline employes the AI Platform Training component to schedule a AI Platform Training job in a custom training container. The custom training image is defined in trainer_image/Dockerfile.",
"!cat trainer_image/Dockerfile",
"Building and deploying the pipeline\nBefore deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also refered to as a pipeline package. The runtime format is based on Argo Workflow, which is expressed in YAML. \nConfigure environment settings\nUpdate the below constants with the settings reflecting your lab environment. \n\nREGION - the compute region for AI Platform Training and Prediction\nARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default.\n\nENDPOINT - set the ENDPOINT constant to the endpoint to your AI Platform Pipelines instance. Then endpoint to the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console.\n\n\nOpen the SETTINGS for your instance\n\nUse the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SKD section of the SETTINGS window.\n\nRun gsutil ls without URLs to list all of the Cloud Storage buckets under your default project ID.",
"!gsutil ls",
"HINT: \nFor ENDPOINT, use the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK section of the SETTINGS window.\nFor ARTIFACT_STORE_URI, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output. Your copied value should look like 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'",
"REGION = 'us-central1'\nENDPOINT = '337dd39580cbcbd2-dot-us-central2.pipelines.googleusercontent.com' # TO DO: REPLACE WITH YOUR ENDPOINT\nARTIFACT_STORE_URI = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME \nPROJECT_ID = !(gcloud config get-value core/project)\nPROJECT_ID = PROJECT_ID[0]",
"Build the trainer image",
"IMAGE_NAME='trainer_image'\nTAG='latest'\nTRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)",
"Note: Please ignore any incompatibility ERROR that may appear for the packages visions as it will not affect the lab's functionality.",
"!gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image",
"Build the base image for custom components",
"IMAGE_NAME='base_image'\nTAG='latest'\nBASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)\n\n!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image",
"Compile the pipeline\nYou can compile the DSL using an API from the KFP SDK or using the KFP compiler.\nTo compile the pipeline DSL using the KFP compiler.\nSet the pipeline's compile time settings\nThe pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the user-gcp-sa secret of the Kubernetes namespace hosting KFP. If you want to use the user-gcp-sa service account you change the value of USE_KFP_SA to True.\nNote that the default AI Platform Pipelines configuration does not define the user-gcp-sa secret.",
"USE_KFP_SA = False\n\nCOMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'\nRUNTIME_VERSION = '1.15'\nPYTHON_VERSION = '3.7'\n\n%env USE_KFP_SA={USE_KFP_SA}\n%env BASE_IMAGE={BASE_IMAGE}\n%env TRAINER_IMAGE={TRAINER_IMAGE}\n%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}\n%env RUNTIME_VERSION={RUNTIME_VERSION}\n%env PYTHON_VERSION={PYTHON_VERSION}",
"Use the CLI compiler to compile the pipeline\nExercise\nCompile the covertype_training_pipeline.py with the dsl-compile command line:\n<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-02-kfp-pipeline and opening lab-02.ipynb.\n</ql-infobox>",
"# TO DO: Your code goes here",
"The result is the covertype_training_pipeline.yaml file.",
"!head covertype_training_pipeline.yaml",
"Deploy the pipeline package\nExercise\nUpload the pipeline to the Kubeflow cluster using the kfp command line:\n<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-02-kfp-pipeline and opening lab-02.ipynb.\n</ql-infobox>",
"PIPELINE_NAME='covertype_continuous_training'\n\n# TO DO: Your code goes here",
"Submitting pipeline runs\nYou can trigger pipeline runs using an API from the KFP SDK or using KFP CLI. To submit the run using KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run.\nList the pipelines in AI Platform Pipelines",
"!kfp --endpoint $ENDPOINT pipeline list",
"Submit a run\nFind the ID of the covertype_continuous_training pipeline you uploaded in the previous step and update the value of PIPELINE_ID .",
"PIPELINE_ID='0918568d-758c-46cf-9752-e04a4403cd84' # TO DO: REPLACE WITH YOUR PIPELINE ID \n\nEXPERIMENT_NAME = 'Covertype_Classifier_Training'\nRUN_ID = 'Run_001'\nSOURCE_TABLE = 'covertype_dataset.covertype'\nDATASET_ID = 'splits'\nEVALUATION_METRIC = 'accuracy'\nEVALUATION_METRIC_THRESHOLD = '0.69'\nMODEL_ID = 'covertype_classifier'\nVERSION_ID = 'v01'\nREPLACE_EXISTING_VERSION = 'True'\n\nGCS_STAGING_PATH = '{}/staging'.format(ARTIFACT_STORE_URI)",
"Exercise\nRun the pipeline using the kfp command line. Here are some of the variable\nyou will have to use to pass to the pipeline:\n\nEXPERIMENT_NAME is set to the experiment used to run the pipeline. You can choose any name you want. If the experiment does not exist it will be created by the command\nRUN_ID is the name of the run. You can use an arbitrary name\nPIPELINE_ID is the id of your pipeline. Use the value retrieved by the kfp pipeline list command\nGCS_STAGING_PATH is the URI to the Cloud Storage location used by the pipeline to store intermediate files. By default, it is set to the staging folder in your artifact store.\nREGION is a compute region for AI Platform Training and Prediction.\n\n<ql-infobox><b>NOTE:</b> If you need help, you may take a look at the complete solution by navigating to mlops-on-gcp > workshops > kfp-caip-sklearn > lab-02-kfp-pipeline and opening lab-02.ipynb.\n</ql-infobox>",
"# TO DO: Your code goes here",
"Monitoring the run\nYou can monitor the run using KFP UI. Follow the instructor who will walk you through the KFP UI and monitoring techniques.\nTo access the KFP UI in your environment use the following URI:\nhttps://[ENDPOINT]\nNOTE that your pipeline run may fail due to the bug in a BigQuery component that does not handle certain race conditions. If you observe the pipeline failure, re-run the last cell of the notebook to submit another pipeline run or retry the run from the KFP UI\n<font size=-1>Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</font>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dereneaton/ipyrad
|
testdocs/analysis/dev-bpp-parallel.ipynb
|
gpl-3.0
|
[
"BPP parallelization without blocking",
"import ipcoal\nimport toytree\nimport ipyrad.analysis as ipa\nimport ipyparallel as ipp",
"Start an ipcluster instance\nHere I assume that you already started an ipcluster instance in a terminal using the command below, or by starting engines in the Ipython Clusters tab in Jupyter. Remember that when you pull in new updates and restart your kernel you also need to restart your cluster instance. \nbash\nipcluster start --n=4",
"# connect to a running client\nipyclient = ipp.Client()\n\n# show number of engines\nipyclient.ids",
"Simulate loci under a known scenario",
"# make a random tree\ntree = toytree.rtree.unittree(ntips=5, treeheight=5e5, seed=1243)\ntree.draw(ts='p');\n\n# simulate loci and write to HDF5\nmodel = ipcoal.Model(tree, Ne=1e5, nsamples=4)\nmodel.sim_loci(100, 500)\nmodel.write_loci_to_hdf5(name=\"test\", outdir=\"/tmp\", diploid=True)",
"Setup BPP",
"# create an IMAP \nIMAP = {\n 'r' + str(i): [j for j in model.alpha_ordered_names if int(j[1]) == i][:2] \n for i in range(5)\n}\nIMAP\n\n# init bpp tool.\nbpp1 = ipa.bpp(\n data=\"/tmp/test.seqs.hdf5\",\n name=\"test1\", \n workdir=\"/tmp\",\n guidetree=tree,\n imap=IMAP,\n maxloci=100,\n burnin=1000,\n nsample=5000,\n)\nbpp1.kwargs",
"Submit BPP jobs to run on cluster (using ._run())",
"# submit 2 jobs to ipyclient\nbpp1._run(nreps=2, ipyclient=ipyclient, force=True, block=False, dry_run=False)",
"Submit more jobs on the same ipyclient\nHere I use the .copy() function for convenience, but you could just create a new BPP object and call the ._run() command with the same ipyclient object.",
"# submit X other jobs to ipyclient (e.g., using diff job name)\nbpp2 = bpp1.copy(\"test2\")\nbpp2._run(nreps=4, ipyclient=ipyclient, force=True, block=False, dry_run=False)",
"The asynchronous job objects",
"# see the jobs that are submitted\nbpp1.asyncs\n\nbpp2.asyncs",
"Block until jobs finish (or don't)",
"# see outstanding jobs (optional, this does NOT BLOCK)\nipyclient.outstanding\n\n# BLOCK until all jobs on ipyclient are finished (returns True when done)\nipyclient.wait()",
"Summarize results (WHEN FINISHED)",
"res, mcmc = bpp1.summarize_results(\"00\")\nres\n\nres, mcmc = bpp2.summarize_results(\"00\")\nres"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Ad115/ICGC-data-parser
|
mutations_distribution_genes.ipynb
|
mit
|
[
"Plotting the mutations density in the genes\nAre there specific genes in which a significant portion of the mutations fall?\nWe want to answer this by finding the distribution of the number of mutations per gene.\nThat is, for each integer, we want to know how many genes have that number of mutations.",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nfrom scipy import optimize\n\nsns.set()",
"We first map genes to the number of mutations they harbor (read from a random sample of 100,000 mutations)",
"from collections import Counter\nfrom ICGC_data_parser import SSM_Reader\n\n\n\nmutations_per_gene = Counter()\n\nmutations = SSM_Reader(filename='/home/ad115/Downloads/simple_somatic_mutation.aggregated.vcf.gz')\n\n# Fix weird bug due to malformed description headers\nmutations.infos['studies'] = mutations.infos['studies']._replace(type='String')\n\nconsequences = mutations.subfield_parser('CONSEQUENCE')\n\n\n\nfor i, record in enumerate(mutations):\n if i % 100000 == 0:\n print(i)\n affected_genes = [c.gene_symbol for c in consequences(record) if c.gene_affected]\n mutations_per_gene.update(affected_genes)\n \nmutations_per_gene.most_common(5)\n\nlen(mutations_per_gene)",
"Now we want to group by number of mutations",
"distribution = Counter(mutations_per_gene.values())\ndistribution.most_common(10)",
"Now we plot the data...",
"x = sorted(distribution.keys())\ny = [distribution[i] for i in x]\n\nplt.figure(figsize=(10, 7))\n\nplt.plot(x, y)\nplt.yscale('log')\nplt.xscale('log')\nplt.title('Mutation distribution by gene')\nplt.xlabel('$n$')\nplt.ylabel('genes with $n$ mutations')\nplt.show()",
"We can see the data resembles a power law but does not quite fit. It looks like it has a bump in the middle, this may be because the genes have wildly varying lengths. In order to correct this we have to normalize the mutations per gene by the length of the gene. This is done as follows:",
"# In order to find out the length of the \n# genes, we will use the Ensembl REST API.\nimport ensembl_rest\nfrom itertools import islice\n\ndef chunks_of(iterable, size=10):\n \"\"\"A generator that yields chunks of fixed size from the iterable.\"\"\"\n iterator = iter(iterable)\n while True:\n next_ = list(islice(iterator, size))\n if next_:\n yield next_\n else:\n break\n# ---\n \n# Instantiate a client for communication with\n# the Ensembl REST API.\nclient = ensembl_rest.EnsemblClient()\n\n\nnormalized_counts = dict()\nlengths_distribution = Counter()\nfor i, gene_batch in enumerate(chunks_of(mutations_per_gene, size=1000)):\n # Get information of the genes\n gene_data = client.symbol_post('human',\n params={'symbols': gene_batch})\n gene_lengths = {gene: data['end'] - data['start'] + 1\n for gene, data in gene_data.items()}\n lengths_distribution.update(gene_lengths.values())\n \n # Get the normalization\n normalized_counts.update({\n gene: mutations_per_gene[gene] / gene_lengths[gene]\n for gene in gene_data\n })\n \n print((i+1)*1000)\n\nc = Counter()\nc.update(normalized_counts)\nc.most_common(10)\n\nnormalized_distribution = Counter(normalized_counts.values())\nnormalized_distribution.most_common(10)\n\nx = sorted(normalized_distribution.keys())\ny = [normalized_distribution[i] for i in x]\n\nplt.figure(figsize=(10, 7))\n\nplt.plot(x, y)\nplt.xscale('log')\nplt.title('Mutations per base distribution by gene (normalized)')\nplt.xlabel('$x$')\nplt.ylabel('genes with $x$ mutations per base pair')\nplt.show()\n\nmax(lengths_distribution)\n\nmin(lengths_distribution)\n\nlengths_distribution.most_common(5)\n\nx = sorted(lengths_distribution.keys())\ny = [lengths_distribution[i] for i in x]\n\nplt.figure(figsize=(10, 7))\n\nplt.plot(x, y)\nplt.xscale('log')\nplt.yscale('log')\nplt.title('Gene lengths distribution')\nplt.xlabel('$L$')\nplt.ylabel('genes with length $L$')\n\nplt.savefig('gene-lengths.png')\nplt.show()\n\nlengths_distribution"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
relopezbriega/mi-python-blog
|
content/notebooks/gA Tech Contest - Challenge 02.ipynb
|
gpl-2.0
|
[
"BIG DATA - Data Analysis\n\nNoteBook Created by Raul E. Lopez Briega\n<a href='mailto:relopezbriega@gmail.com?subject=hello neo&body=Hola Raul ' target='_blank'>relopezbriega@gmail.com</a>\n<a href='http://relopezbriega.com.ar' target='_blank'>relopezbriega.com.ar</a>\n\nSolution Title: Data Pythonisa\nGroup: dotCOM\n\nCaptain: Raul Lopez Briega\nMember 2:Stephanie Anglarill\nMember 3:Daniel Garac y Gojac\n\n\nLicensed under the Apache License, Version 2.0 (the \"License\"): you may\nnot use this file except in compliance with the License. You may obtain\na copy of the License at <a href='http://www.apache.org/licenses/LICENSE-2.0' target='_blank'>http://www.apache.org/licenses/LICENSE-2.0</a> Unless required by applicable law or agreed to in writing, software \ndistributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT \nWARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the \nLicense for the specific language governing permissions and limitations \nunder the License.\n\nChallenge business background\nA non-profitable organization (NGO – non government organization) supports its\noperation by organizing periodically some fund raising mailing campaigns. This\norganization has created a huge database with more than 3.5 million individuals\nthat at least once in the past was a donor.\nThe fund raising campaigns are performed by sending to the mailing list (or to a\nsubset of it) a symbolic gift and asking for a donation.\nOnce the campaign is planned, the campaign cost is automatically known:\n[number of potential donor contacted] x ([gift cost] + [mailing cost]).\nNevertheless, the fund raising result depends on both the number of donors that\nrespond to the campaign and the average amount of money that was donated.\nThe typical outcome of predictive modeling in database marketing is an estimate of\nthe expected response/return per customer in the database. A marketer will mail\nto a customer as long as the expected return from an order exceeds the cost\ninvested in generating the order, i.e., the cost of promotion. For our purpose, the\npackage cost (([gift cost] + [mailing cost]) of this new campaign is 0.75 per piece\nmailed.\nThe net revenue of the campaign is calculated as the SUM(the actual donation amount minus 0.75) over all records for which the expected revenue (or predicted value of the donation) is over $0.75. Our object is to help the NGO to select to which donors of its mailing list the campaign should address.\nSome business in-sights might be used in order to drive the exploratory phase of\nthe data analysis. For example:\n\n\nIt is quite difficult to get the attention of inactive donors (do not answer to the\nNGO appeals in the last 24 months).\n\n\nFrom a long term perspective and health of the mailing list, donors that did no\nanswer to campaigns in the last 13 to 24 months may become inactive.\n\n\nSo the provided data is the subset of donors that have not donated in the last 13 to 24 months (risk to become inactive).\n\n\nIs there any correlation (or inverse correlation) between likelihood to respond and the dollar amount of the gift?\n\n\nIf there is an inverse correlation, should the high dollar donors be invited? \n\n\nIf they are suppressed, the loss revenue would offset any gains due to the increased response rate of the low dollar donors?\n\n\n\nIntroduction\nThe rapid advance of infrastructure technologies have improved the ability to collect data throughout the enterprise. 
Virtually every aspect of business is now open to data collection and often even instrumented for data collection: operations, manufacturing, supply-chain management, customer behavior, marketing campaign performance, workflow procedures, and so on. At the same time, information is now widely available on external events such as market trends, industry news, and competitors' movements. This broad availability of data has led to increasing interest in methods for extracting useful information and knowledge from data: the realm of data science.\nWith vast amounts of data now available, companies in almost every industry are focused on exploiting data for competitive advantage. In the past, firms could employ teams of statisticians, modelers, and analysts to explore datasets manually, but the volume and variety of data have far outstripped the capacity of manual analysis. At the same time, computers have become far more powerful, networking has become ubiquitous, and algorithms have been developed that can connect datasets to enable broader and deeper analyses than previously possible. The convergence of these phenomena has given rise to the increasingly widespread business application of data science principles and data-mining techniques.\nData analysis is now critical to business strategy. Businesses increasingly are driven by data analytics, so there is great professional advantage in being able to interact competently with and within such businesses. Understanding the fundamental concepts, and having frameworks for organizing data-analytic thinking, not only will allow one to interact competently, but will help to envision opportunities for improving data-driven decision-making, or to see data-oriented competitive threats.\nFirms in many traditional industries are exploiting new and existing data resources for competitive advantage. They employ data science teams to bring advanced technologies to bear to increase revenue and to decrease costs. In addition, many new companies are being developed with data mining as a key strategic component. \nBut the data can only help you if you know how to read it; so in order to take advantage of its benefits, we propose the following framework to manipulate, analyze, visualize and share the data insights:\n\n\nThe principal technology of our framework is IPython Notebook. The IPython Notebook is a web-based interactive computational environment where you can combine code execution, text, mathematics, plots and rich media into a single document. These notebooks are normal files that can be shared with colleagues, converted to other formats such as HTML or PDF, etc. This makes it easy to give your colleagues a document they can read immediately without having to install anything. This document itself was made using this technology.\n\n\nFor Data Analysis, we will use the Python programming language, more precisely, its great data-driven modules pandas, scikit-learn, matplotlib and numpy. As we will show in this notebook, these are wonderful tools for data analysis. An alternative to Python was the popular statistical programming language R. R is a great tool for data analysis, with great libraries too; but we chose Python because it is easier to learn and understand than R. Moreover, if we need some specific functionality from R, we can call it from Python using the Rpy2 module and IPython magic. 
\n\n\nWith this framework, we are not only going to be able to perform our analysis, but we will also be able to create an easily shareable report as we do the analysis.\n\nProof of Concept for gA Tech Contest 2013 - Challenge 02\n\nTime to start the analysis; the first thing to do is to import the Python modules we will use.",
"import pandas as pd # importing pandas\nimport numpy as np # importing numpy\nfrom pandas import DataFrame, Series # importing DataFrame and Series objects from pandas\nimport matplotlib.pyplot as plt # importing matplotlib for plotting.\nfrom sklearn.ensemble import RandomForestRegressor # importing RandomForest; maching learning algorithm for classification.\nfrom IPython.display import Image, HTML, display # IPython rich display Image.\n# Ignoring deprecation warning messages.\nimport warnings\nwarnings.filterwarnings('ignore')\n\n# importing the R language iPython integration. Rmagic.\n%load_ext rmagic ",
"To manipulate the data we will use the DataFrame object from pandas library. A DataFrame represents a tabular, spreadsheet-like data structure containing an ordered collection of columns, each of which can be a different value type (numeric,\nstring, boolean, etc.). it is like a in-memory table.",
"# Creating the NGOData DataFrame from the LEARNING dataset.\nNGOData = pd.read_csv('/home/raul/Ga_Tech/gA Tech Contest 2013 - Challenge 02 - Datasets/LEARNING.csv',\n header=0)\n\n# Creating the donors subset from the NGOData.\nNGOData_donors = NGOData[NGOData.DONOR_AMOUNT > 0]",
"Starting the exploration.\nNow it is when the fun start. We are going to start exploring the dataset, looking for insights and useful information.\nAnalyzing the percent of donors.\nFirst we will check the response percent using the DONOR_AMOUNT field.",
"round((NGOData[NGOData.DONOR_AMOUNT > 0]['DONOR_AMOUNT'].count() * 1.0 / NGOData['DONOR_AMOUNT'].count()) * 100.0, 2)\n# percent of donors from the dataset.",
"As we can see the response percent is only 5.08. Sometimes, the data are easy to understand if they are presented graphically. For example, we can created a pie chart for visualize this data with the following code.",
"donors = NGOData.groupby('DONOR_FLAG').IDX.count() # Grouping by DONOR_FLAG\n# Creating the chart labels.\nlabels = [ 'Donors\\n' + str(round(x * 1.0 / donors.sum() * 100.0, 2)) + '%' for x in donors ]\nlabels[0] = 'No ' + labels[0]\n\n\n# Plotting the results using matplotlib.\nfig = plt.figure()\np1 = fig.add_subplot(1,1,1)\np1.pie(donors, labels=labels)\np1.set_title('Portion of Donors')\nplot = fig.show()",
"Here is clear the small portion of donors.\nAnalizing donations amounts\nNow we will explore the donations amounts, for this we will use the DataFrame object NGOData_donors that we created at the beginning.",
"donors_amounts = NGOData_donors.groupby('DONOR_AMOUNT').size() # Grouping by DONOR_FLAG\n\n# Plotting the grouped amounts.\nplot = donors_amounts.plot(kind='bar', title='Donation amounts')",
"This graphic is not clear because it has too many amounts; so we could make a segmentation; for this we will create a custom function in order to segment the amounts into categories.",
"def segment_amounts(serie):\n \"\"\"This function return a pandas Series object with the values segemented into categories \"\"\"\n \n # Create a Serie, with our segments as index.\n result = Series(index=['0-10', '10-20', '20-30', '30-40', '40-50', '50-60', '60-100', '100-200']).fillna(0)\n \n # Segmenting the amounts into the new category indexes.\n for index, amount in serie.iteritems():\n if index < 10.1:\n result['0-10'] += amount\n elif index < 20.1:\n result['10-20'] += amount\n elif index < 30.1:\n result['20-30'] += amount\n elif index < 40.1:\n result['30-40'] += amount\n elif index < 50.1:\n result['40-50'] += amount\n elif index < 60.1:\n result['50-60'] += amount\n elif index < 100.1:\n result['60-100'] += amount\n else:\n result['100-200'] += amount\n \n return result\n\n# Calling our segmentation function.\ndonors_amounts1 = segment_amounts(donors_amounts) \ndonors_amounts1.index.name='Donation amount' # Naming the index.\n\n\n# Plotting semented results.\nplot = donors_amounts1.plot(kind='bar', title='Donors amounts')",
"Now the plot is more clear. We can see that the major number of donations are for a small amount, less than $30.\nAnother way to get the same results is using the pandas built-in functions cut and value_counts.",
"# using pandas cut function to segment the Serie.\nbb = pd.cut(NGOData_donors['DONOR_AMOUNT'], [0, 10, 20, 30, 40, 50, 60, 100, 200])\n\n# Plotting the results using pandas value_counts function.\nplot = pd.value_counts(bb).plot(kind='bar', title='Donation amounts')",
"One of the most useful graphics in descriptive statitics is the boxplot. the Boxplot is a convenient way of graphically depicting groups of numerical data through their quartiles. Box plots may also have lines extending vertically from the boxes indicating variability outside the upper and lower quartiles. Outliers are plotted as individual points in this graphs.\nHere, we will use R language to create the boxplot graph, because in R it is much easier to create a boxplot.",
"# R programming language is better for boxplot graph, so we will use Rmagic to made a donation amount boxplot using R.\n# Passing python DataFrame to R.\n%R -i NGOData_donors \n\n# R boxplot of donation amounts.\n%R donation <- NGOData_donors$DONOR_AMOUNT\nplot = %R boxplot(donation)",
"Here we can see that donation amounts of 200 and 150 are outliers; the main distribution of donation amounts is between 0 and 50, with an average of 15.\nAnalyzing Total Cost and average Donation amounts.\nNow we are going to analyze the profits if we mailing every donor in the data set.",
"cost = 0.75 # the cost by donor mailed.\n\n# Calculating the profit of mailing every donor in the data set.\ntotal_cost_all = cost * NGOData['DONOR_AMOUNT'].count()\ntotal_donations_all = NGOData['DONOR_AMOUNT'].sum()\ntotal_profits_all = round(total_donations_all - total_cost_all, 2)\ntotal_profits_all\n\n# Average donation all dataset.\nround(NGOData['DONOR_AMOUNT'].mean(), 2)\n\n# Average donation only donators.\nround(NGOData_donors['DONOR_AMOUNT'].mean(), 2)\n\n# Average Profit\nround((NGOData_donors['DONOR_AMOUNT'].sum() - \\\n cost * NGOData['DONOR_AMOUNT'].count()) / NGOData['DONOR_AMOUNT'].count(), 2)",
"After our analysis we can see that the profit after mailing every donor in the dataset will be 2004.53, with an average donation amount of 0.79 and a average porfit of 0.04; not quite good numbers. We will try to improve this profits with our analysis.\nExploring the data\nIn this section we start with the exploration process, we will try to find out some insights from the dataset.",
"# useful describe statistics on the data.\ndescribe = NGOData.describe()\n\n# Collection of numeric columns.\nnumeric_columns = list(describe.columns)\n\n# Content of describe DataFrame for DONOR_AMOUNT column.\ndescribe['DONOR_AMOUNT']",
"Here we see that the describe method give us useful information that we can use to get some insights and could help us to filter the dataset.\nNow we are going to export the content of describe to a CSV file, so we can take a better look using Excel or any other spreasheet tool.",
"describe.to_csv('/home/raul/Ga_Tech/gA Tech Contest 2013 - Challenge 02 - Datasets/NGODescribe.csv')\n\n# Correlation DataFrame on Excel.\nImage(filename='/home/raul/Ga_Tech/gA Tech Contest 2013 - Challenge 02 - Datasets/descr_excel.png')",
"After taking a look to the dataset, we are ready to select only some columns. This way the amount of data we have to manage is reduced and our exploration functions and algorithms run faster.",
"columns = [\n # demographics\n \"ODATEDW\", \"OSOURCE\", \"STATE\", \"EC8\", \"PVASTATE\", \"DOB\", \"RECINHSE\",\n \"MDMAUD\", \"DOMAIN\", \"CLUSTER\", \"AGE\", \"HV2\", \"CHILD03\", \"CHILD07\",\"IC4\",\n \"CHILD12\", \"CHILD18\", \"NUMCHLD\", \"INCOME\", \"GENDER\", \"WEALTH1\", \"HIT\",\n # donor interests\n \"COLLECT1\", \"VETERANS\", \"BIBLE\", \"CATLG\", \"HOMEE\", \"PETS\", \"CDPLAY\",\n \"STEREO\", \"PCOWNERS\", \"PHOTO\", \"CRAFTS\", \"FISHER\", \"GARDENIN\", \"BOATS\",\n \"WALKER\", \"KIDSTUFF\", \"CARDS\", \"PLATES\",\n # PEP star RFA status\n \"PEPSTRFL\",\n # summary variables of promotion history\n \"CARDPROM\", \"MAXADATE\", \"NUMPROM\", \"CARDPM12\", \"NUMPRM12\",\n # summary variables of donation history\n \"RAMNTALL\", \"NGIFTALL\", \"CARDGIFT\", \"MINRAMNT\", \"MAXRAMNT\", \"LASTGIFT\",\n \"LASTDATE\", \"FISTDATE\", \"TIMELAG\", \"AVGGIFT\",\"RAMNT_3\",\n # ID & donor variables.\n \"IDX\", \"DONOR_FLAG\", \"DONOR_AMOUNT\", \n # RFA (Recency/Frequency/Donation Amount)\n \"RFA_2F\", \"RFA_2A\", \"MDMAUD_R\", \"MDMAUD_F\", \"MDMAUD_A\",\n #others\n \"CLUSTER2\", \"GEOCODE2\"]\n\n# Creating a new DataFrame with the columns subset.\nnew_NGOData = NGOData[columns]\n\n# Analysis of Age distribution.\nplot = new_NGOData['AGE'].hist().set_title('Age distribution')\n\n# Analysis of Number of childs.\nplot = new_NGOData['NUMCHLD'].hist().set_title('number of childs distribution')\n\n# exploring the HIT value. The number of responses of a donor.\nplot = boxplot(new_NGOData['HIT'])\n\nplot = boxplot(new_NGOData[new_NGOData.HIT < 200]['HIT'])",
"Here we can see that there are some values of the HIT variable that are separate from the majority of HIT distribution.",
"# Creating a new DataFrame of NGOData_donors with the columns subset.\nnew_NGOData_donors = NGOData_donors[columns]\n\nAGE2 = pd.cut(new_NGOData_donors['AGE'], range(0, 100, 5))\n\nplot = pd.value_counts(AGE2).plot(kind='bar', title='Donations amounts by age')\n\n# Adding the AGE2 segment column to our DataFrame.\nnew_NGOData_donors['AGE2'] = AGE2\n\n# Exploring the donors amounts by age.\nplot = new_NGOData_donors[['DONOR_AMOUNT', 'AGE2']].boxplot(by='AGE2')",
"This plot shows that people aged from 30 to 60 are of higher median amount donation than others.",
"plot = new_NGOData_donors[new_NGOData_donors.DONOR_AMOUNT < 41][['DONOR_AMOUNT', 'AGE2']].boxplot(by='AGE2')",
"Here we confirmed the same observations.",
"# Exploring the donors amounts by gender.\nplot = new_NGOData_donors[new_NGOData_donors.DONOR_AMOUNT <= 80][['DONOR_AMOUNT', 'GENDER']].boxplot(by='GENDER')",
"Here we can see that the join and the male are the genders with the higher media amount of donations.",
"plot = new_NGOData_donors.groupby('GENDER').size().plot(kind='bar').set_title('Gender distribution')",
"in this plot we can see the proportion between Males and Females. Females is a larger group of donors.",
"# Listing the state ranking.\nstates = new_NGOData_donors.groupby('STATE').size()\nstates.sort(ascending=False)\nstates[:5] # top 5 states.\n\n# Exploring the donors amounts by States.\nplot = new_NGOData_donors[new_NGOData_donors.STATE.isin(['CA', 'FL', 'TX', 'MI', 'IL', 'NC', 'WA'])] \\\n[['DONOR_AMOUNT', 'STATE']].boxplot(by='STATE')",
"Here we see that most donations came from CA and FL states and the media donation amount of this states is greater than the others states.\nChecking the Correlation between DONOR_AMOUNT and the others numeric variables\nNow we will check correlation to the donation amounts.",
"# numeric columns.\nix_numeric = list(NGOData.describe().columns)\n\n# creating a correlation Serie.\ncorrelation = NGOData[ix_numeric].corrwith(new_NGOData['DONOR_AMOUNT'])\n\n# Sorting the correlation Serie.\ncorrelation = abs(correlation)\ncorrelation.sort(ascending=False)\n\ncorrelation[:30]",
"Here we see the fields with the best correlation to the donor amount.",
"# Correlation between all columns.\ncorr_all = NGOData[ix_numeric].corr()\n\ncorr_all[ix_numeric[:5]][:5]\n\n# Export the correlation DataFrame to csv.\ncorr_all.to_csv('/home/raul/Ga_Tech/gA Tech Contest 2013 - Challenge 02 - Datasets/corr_all.csv')\ncorrelation.to_csv('/home/raul/Ga_Tech/gA Tech Contest 2013 - Challenge 02 - Datasets/corr_amounts.csv')\n\n# Correlation DataFrame on Excel.\nImage(filename='/home/raul/Ga_Tech/gA Tech Contest 2013 - Challenge 02 - Datasets/corr_excel.png')",
"Calculating donation probability\nHere we are going to calculate some porbabilities based on the insights we obtains with the correlation calculations and our exploration.",
"#Calculating overall donation probability.\naverage_prob = round((NGOData[NGOData.DONOR_AMOUNT > 0]['DONOR_AMOUNT'].count() * 1.0 \\\n / NGOData['DONOR_AMOUNT'].count()) * 100.0, 2)\naverage_prob \n\n#Calculating donation probability for donors with a lastgift less or equal to 10.\na = round((NGOData[(NGOData.DONOR_AMOUNT > 0) & (NGOData.LASTGIFT <= 10)]['DONOR_AMOUNT'].count() * 1.0 \\\n / NGOData[NGOData.LASTGIFT <= 10]['DONOR_AMOUNT'].count()) * 100.0, 2)\na\n\n# Plotting the comparison.\nlastgift = Series({'average': average_prob, 'lastgift<=10': a})\nplot=lastgift.plot(kind='barh', color=['blue', 'green']).set_title('Donation probabiliy')\n\n# Average donation.\naverage_donation = round(NGOData_donors['DONOR_AMOUNT'].mean(), 2)\naverage_donation\n\n# Average donation lastgift <= 10\na = round(NGOData_donors[NGOData_donors.LASTGIFT <= 10]['DONOR_AMOUNT'].mean(), 2)\na\n\n# Plotting the comparison.\nlastgift = Series({'average': average_donation, 'lastgift<=10': a})\nplot = lastgift.plot(kind='barh', color=['blue', 'green']).set_title('Average gross donations')\n\n#Calculating donation probability for donors with a lastgift greater than 35.\na = round((NGOData[(NGOData.DONOR_AMOUNT > 0) & (NGOData.LASTGIFT >35)]['DONOR_AMOUNT'].count() * 1.0 \\\n / NGOData[NGOData.LASTGIFT > 35]['DONOR_AMOUNT'].count()) * 100.0, 2)\na\n\n# Plotting the comparison.\nlastgift = Series({'average': average_prob, 'lastgift>35': a})\nplot=lastgift.plot(kind='barh', color=['blue', 'green']).set_title('Donation probabiliy lastgift >35')\n\n# Average donation lastgift > 35\na = round(NGOData_donors[NGOData_donors.LASTGIFT > 35]['DONOR_AMOUNT'].mean(), 2)\na\n\n# Plotting the comparison.\nlastgift = Series({'average': average_donation, 'lastgift>35': a})\nplot = lastgift.plot(kind='barh', color=['blue', 'green']).set_title('Average gross donations lastgift > 35')",
"Here we can see that the donation probability is better when the previous donation amount decrease. We can conclude that there is a inverse correlation between the donation amounts and the probability of donation.",
"# Total donation learning data set.\ntotal_donation = round(NGOData_donors['DONOR_AMOUNT'].sum(), 2)\ntotal_donation\n\n# donation amount for donors with lastgift > 35\na = round(NGOData_donors[NGOData_donors.LASTGIFT > 35]['DONOR_AMOUNT'].sum(), 2)\na\n\n# Donors with higher average donation.\nb = round(NGOData_donors[NGOData_donors.LASTGIFT > 35]['DONOR_AMOUNT'].count(), 2)\nb\n\n# percentage of total donation.\nround(a / total_donation * 100, 4)",
"Here we can see thath only 64 donors accounts for the 8% of the total donation amount.",
"# donation amount for donors with max donation over $30\na = round(NGOData_donors[NGOData_donors.MAXRAMNT > 30]['DONOR_AMOUNT'].sum(), 2)\na\n\n# Donors with max donation over $30\nb = round(NGOData_donors[NGOData_donors.MAXRAMNT > 30]['DONOR_AMOUNT'].count(), 2)\nb\n\n# percentage of total donation.\nround(a / total_donation * 100, 4)",
"Here we can see thath only 136 donors account for the 13% of the total donation amount.",
"# donation amount for donors with total past donations greater than $250\na = round(NGOData_donors[NGOData_donors.RAMNTALL > 250]['DONOR_AMOUNT'].sum(), 2)\na\n\n# Donors with total past donations greater than $250\nb = round(NGOData_donors[NGOData_donors.RAMNTALL > 250]['DONOR_AMOUNT'].count(), 2)\nb\n\n# percentage of total donation.\nround(a / total_donation * 100, 4)\n\n# overlap between the two previous segments\nb = round(NGOData_donors[(NGOData_donors.RAMNTALL > 250) & (NGOData_donors.MAXRAMNT >30) ]\\\n ['DONOR_AMOUNT'].count(), 2)\nb",
"only 54 donors in common between the two segments",
"#Calculating donation probability for donors who have donated in the 96NK campaign.\na = round((NGOData[(NGOData.DONOR_AMOUNT > 0) & (NGOData.RAMNT_3 > 3.5)]['DONOR_AMOUNT'].count() * 1.0 \\\n / NGOData[NGOData.RAMNT_3 > 3.5]['DONOR_AMOUNT'].count()) * 100.0, 2)\na",
"people who have donated over $3.50 in the 96NK campaign have a higher probability of donating than the average.",
"# Average donation for donors who have donated in the 96NK campaign.\nb = round(NGOData_donors[NGOData_donors.RAMNT_3 > 3.5]['DONOR_AMOUNT'].mean(), 2)\nb\n\n# Plotting the comparison.\ncomp = Series({'average': average_prob, '96NK campaign': a})\nplot=comp.plot(kind='barh', color=['blue', 'green']).set_title('Donation probabiliy 96NK campaign')\n\n# Plotting the comparison.\ncomp = Series({'average': average_donation, '96NK campaign': b})\nplot = comp.plot(kind='barh', color=['blue', 'green']).set_title('Average gross donations 96NK campaign')\n\n# IC4 Average family income in hundreds\nIC4 = round(NGOData_donors['IC4'].mean(), 2)\nIC4\n\n#Calculating donation probability for IC4\na = round((NGOData[(NGOData.DONOR_AMOUNT > 0) & (NGOData.IC4 > 800)]['IC4'].count() * 1.0 \\\n / NGOData[NGOData.IC4 > 800]['DONOR_AMOUNT'].count()) * 100.0, 2)\na\n\n# Average donation for IC4\nb = round(NGOData_donors[NGOData_donors.IC4 > 800]['DONOR_AMOUNT'].mean(), 2)\nb\n\n# Plotting the comparison.\ncomp = Series({'average': average_prob, 'family income': a})\nplot=comp.plot(kind='barh', color=['blue', 'green']).set_title('Donation probabiliy by family income')\n\n# Plotting the comparison.\ncomp = Series({'average': average_donation, 'family income': b})\nplot = comp.plot(kind='barh', color=['blue', 'green']).set_title('average donation by family income')\n\na = round(NGOData_donors[NGOData_donors.IC4 > 800]['DONOR_AMOUNT'].count())\na\n\nb = round(NGOData_donors[NGOData_donors.IC4 > 800]['DONOR_AMOUNT'].sum())\nb\n\n# percentage of total donation.\nround(b / total_donation * 100, 4) ",
"6% of total donation came from families with an average income greater than 80.000 a year.",
"#Calculating donation probability for HV2\na = round((NGOData[(NGOData.DONOR_AMOUNT > 0) & (NGOData.HV2 > 1600)]['HV2'].count() * 1.0 \\\n / NGOData[NGOData.HV2 > 1600]['DONOR_AMOUNT'].count()) * 100.0, 2)\na\n\n# Average donation for HV2\nb = round(NGOData_donors[NGOData_donors.HV2 > 1600]['DONOR_AMOUNT'].mean(), 2)\nb\n\n# Plotting the comparison.\ncomp = Series({'average': average_prob, 'average home value': a})\nplot=comp.plot(kind='barh', color=['blue', 'green']).set_title('Donation probabiliy by average home value')\n\n# Plotting the comparison.\ncomp = Series({'average': average_donation, 'average home value': b})\nplot = comp.plot(kind='barh', color=['blue', 'green']).set_title('average donation by average home value')\n\nb = round(NGOData_donors[NGOData_donors.HV2 > 1600]['DONOR_AMOUNT'].sum())\nb\n\na = round(NGOData_donors[NGOData_donors.HV2 > 1600]['DONOR_AMOUNT'].count())\na\n\n# percentage of total donation.\nround(b / total_donation * 100, 4) ",
"26% of total donation came from families with an average home value greater than 160.000.",
"#Calculating donation probability for EC8\na = round((NGOData[(NGOData.DONOR_AMOUNT > 0) & (NGOData.EC8 > 12)]['EC8'].count() * 1.0 \\\n / NGOData[NGOData.EC8 > 12]['DONOR_AMOUNT'].count()) * 100.0, 2)\na\n\n# Average donation for EC8\nb = round(NGOData_donors[NGOData_donors.EC8 > 12]['DONOR_AMOUNT'].mean(), 2)\nb\n\n# Plotting the comparison.\ncomp = Series({'average': average_prob, '% adults + 25 with a graduate degree ': a})\nplot=comp.plot(kind='barh', color=['blue', 'green']).set_title('Donation probabiliy \\\nby % adults + 25 with a graduate degree')\n\n# Plotting the comparison.\ncomp = Series({'average': average_donation, '% adults + 25 with a graduate degree ': b})\nplot=comp.plot(kind='barh', color=['blue', 'green']).set_title('Average Donation by \\\n% adults + 25 with a graduate degree')\n\n# Number of donors with a EC8 greater than 12.\na = round(NGOData_donors[NGOData_donors.EC8 > 12]['DONOR_AMOUNT'].count())\na\n\n# total donation of donors with a EC8 greater than 12.\nb = round(NGOData_donors[NGOData_donors.EC8 > 12]['DONOR_AMOUNT'].sum())\nb\n\n# percentage of total donation.\nround(b / total_donation * 100, 2) ",
"24% of total donation came from families with an % of adults +25 with graduate degree greater than 12\n\nConclutions from exploration phase\nFrom the exploration phase we can conclude that the most significant variables for predicting a customer’s donation behavior are the previous donation behavior summaries.\n\n\nBuilding our prediction model\nWith all the information we collected in the exploration phase, now we are ready to start building our prediction model. \nFirst we will start with a single model. To build this model, we have created 7 segments from the different insight we got from the exploration data analysis. These segments are:\n\nMAXRAMNT > 30 \nRAMNTALL > 250\nHV2 > 1600 and AGE between 30 and 60.\nEC8 > 12\nIC4 > 800\nRAMNT_3 > 3.5\nSTATE in ('CA', 'FL', 'MI')\n\nAs a first step, we build a function to apply our criteria selection to a dataframe.",
"def apply_model(df):\n \"\"\" This function applies our model sampling to a dataset.\n \n Criteria:\n 1. MAXRAMNT > 30\n 2. RAMNTALL > 250\n 3. HV2 > 1600 and AGE between 30 and 60.\n 4. EC8 > 12\n 5. IC4 > 800\n 6. RAMNT_3 > 3.5\n 7. STATE in ('CA', 'FL', 'MI')\n \n This function will return a python set object with the\n list of IDXs that are selected by our model criteria selection.\n \n \"\"\"\n \n #Building the model sample.\n # Segments samples.\n sample7 = df[df.STATE.isin(['CA', 'FL', 'MI'])]['IDX']\n sample3 = df[(df.HV2 > 1600) & (df.AGE >=30)& (df.AGE >=60)]['IDX']\n sample4 = df[df.EC8 > 12]['IDX']\n sample1 = df[df.MAXRAMNT > 30]['IDX']\n sample2 = df[df.RAMNTALL > 250]['IDX']\n sample5 = df[df.IC4 > 800]['IDX']\n sample6 = df[df.RAMNT_3 > 3.5]['IDX']\n\n # depurating the model sample.\n sample = set(sample7.values)\n # using sets difference propierty to depurate the sample.\n sample = sample ^ set(sample3.values)\n sample = sample ^ set(sample4.values)\n sample = sample ^ set(sample1.values)\n sample = sample ^ set(sample2.values)\n sample = sample ^ set(sample5.values)\n sample = sample ^ set(sample6.values)\n \n return sample\n ",
"Then we build another function to test our single model.",
"# Building our simple model.\ndef single_model(df, cost):\n \"\"\"\n This function apply the simple model to a DataFrame.\n The model is builded under the following segments:\n \n 1. MAXRAMNT > 30\n 2. RAMNTALL > 250\n 3. HV2 > 1600 and AGE between 30 and 60.\n 4. EC8 > 12\n 5. IC4 > 800\n 6. RAMNT_3 > 3.5\n 7. STATE in ('CA', 'FL', 'MI')\n \n Parameters:\n * df : DataFrame to apply the model\n * cost: Cost per piece mailed.\n \n print the dataset and model information\n plot the comparison between the given dataset and the model.\n \n Returns the DataFrame with the model subselection.\n \n \"\"\"\n # copy the Dataframe to a new object.\n df1 = df\n \n #Calculating profits for all DataFrame.\n total_donations_all = round(df['DONOR_AMOUNT'].sum(), 2)\n total_cost_all = round(cost * df['DONOR_AMOUNT'].count(), 2) \n total_profits_all = total_donations_all - total_cost_all\n mean_donation_all = df[df.DONOR_FLAG == 1]['DONOR_AMOUNT'].mean()\n donation_prob_all = round((df[df.DONOR_FLAG == 1]['DONOR_AMOUNT'].count() * 1.0 \\\n / df['DONOR_AMOUNT'].count()) * 100.0, 2)\n \n #Building the model sample with our apply_sample function.\n sample = apply_model(df)\n\n sample_all = list(sample) # sample size.\n \n # Applying our sample to the new dataframe.\n df1 = df1[df1.IDX.isin(sample_all)]\n\n # Calculating contribution profits of model\n total_donations = round(df1['DONOR_AMOUNT'].sum(), 2)\n total_cost = round(cost * len(sample_all), 2) \n model_profits = total_donations - total_cost\n profit_improvement = round(((model_profits - total_profits_all) / total_profits_all) * 100, 2)\n mean_donation = df1[NGOData.DONOR_FLAG == 1]['DONOR_AMOUNT'].mean()\n donation_prob = (float(df1[NGOData.DONOR_FLAG == 1]['DONOR_AMOUNT'].count()) \\\n / float(len(sample))) * 100\n donors_percent = (len(sample) * 1.0 /df['IDX'].count()) * 100.0\n\n # Printing the results\n # Printing all df values.\n print 'Original dataset values:\\n'\n print 'All dataset size: %d' % df['IDX'].count()\n print 'All dataset donation prob.: %.2f%%' % donation_prob_all\n print 'All dataset donations: $%.2f' % total_donations_all\n print 'All dataset cost: $%.2f' % total_cost_all\n print 'All dataset profits: $%.2f' % total_profits_all\n print 'All dataset mean donation: $%.2f' % mean_donation_all\n print '\\n'\n # Printing model values.\n print 'Model values:\\n'\n print 'Model sample size: %d' % len(sample)\n print 'Model sample donation prob.: %.2f%%' % donation_prob\n print 'Model total donations: $%.2f' % total_donations\n print 'Model total cost: $%.2f' % total_cost\n print 'Model total profits: $%.2f' % model_profits\n print 'Model mean donation: $%.2f' % mean_donation\n print 'Model profit improvement: %.2f %%' % profit_improvement\n print 'Model donors mailed percent: %.2f %%' % donors_percent\n \n # Plotting the comparison.\n # Average donation\n comp = Series({'All dataset average donation': mean_donation_all, 'Model average donation': mean_donation})\n comp2 = Series({'All dataset donation prob.': donation_prob_all, 'Model donation porb.': donation_prob})\n plt.figure()\n comp.plot(kind='barh', color=['blue', 'green']).set_title('Average Donation all dataset vs model')\n plt.figure()\n comp2.plot(kind='barh', color=['blue', 'green']).set_title('Donation probability all dataset vs model')\n \n return df1\n\n# Applying the simple model to the NGO dataset.\nx = single_model(NGOData, cost)",
"Applying this single model to the LEARNING dataset we can see a profit improvement of 51.11%; in our model we only need to mail 17,561 customers from the dataset to obtain a mean donation of 16.43 and a total profit of 3,020.03\n\nApplying the model to the validation dataset\nNow, we can test our simple model in the validation dataset. In this dataset we do not have any donation information, so we have to infer it from the learning dataset.",
"# Creating the NGOvalidation DataFrame from the VALIDATION dataset.\nNGOvalidation = pd.read_csv('/home/raul/Ga_Tech/gA Tech Contest 2013 - Challenge 02 - Datasets/VALIDATION.txt',\n header=0)",
"In order to test our model in the validation dataset; we need to build a custom function that predict the donation amounts for the validation dataset from the learning dataset.",
"def single_model_val(dfl, dfv, cost):\n \"\"\"\n This function apply the simple model to a DataFrame.\n The model is builded under the following segments:\n \n 1. MAXRAMNT > 30\n 2. RAMNTALL > 250\n 3. HV2 > 1600 and AGE between 30 and 60.\n 4. EC8 > 12\n 5. IC4 > 800\n 6. RAMNT_3 > 3.5\n 7. STATE in ('CA', 'FL', 'MI')\n \n Parameters:\n * dfl : the learning dataset.\n * dfv : the validation dataset.\n * cost: Cost per piece mailed.\n \n Prints the original dataset, the learning dataset and the validation dataset information.\n Plot the comparison between the given dataset and the model.\n \n Returns the DataFrame with the model subselection.\n \n \"\"\"\n \n learn = dfl # copy the learning dataset\n valid = dfv # copy the validation dataset\n \n learn_values = apply_model(learn) # applying our model to the learning dataset\n valid_values = apply_model(valid) # applying our model to the validation dataset\n\n learn = learn[learn.IDX.isin(learn_values)] # selecting the customers\n valid = valid[valid.IDX.isin(valid_values)] # selecting the customers\n \n # Calculating variables for learning dataset\n total_donations_learn = round(learn['DONOR_AMOUNT'].sum(), 2)\n total_cost_learn = round(cost * len(learn_values), 2) \n model_profits_learn = total_donations_learn - total_cost_learn\n mean_donation_learn = learn[learn.DONOR_FLAG == 1]['DONOR_AMOUNT'].mean()\n donation_prob_learn = (float(learn[learn.DONOR_FLAG == 1]['DONOR_AMOUNT'].count()) \\\n / float(len(learn_values))) \n \n #Calculating variables for all DataFrame. \n mean_donation_all = dfl[dfl.DONOR_FLAG == 1]['DONOR_AMOUNT'].mean()\n donation_prob_all = dfl[dfl.DONOR_FLAG == 1]['DONOR_AMOUNT'].count() * 1.0 \\\n / dfl['DONOR_AMOUNT'].count()\n total_donations_all = mean_donation_all * donation_prob_all * len(dfv)\n total_cost_all = cost * len(dfv)\n total_profits_all = total_donations_all - total_cost_all\n \n # Calculation varaibles for validation dataset.\n total_donations_valid = mean_donation_learn * donation_prob_learn * len(valid_values)\n total_cost_valid = round(cost * len(valid_values), 2)\n model_profits_valid = total_donations_valid - total_cost_valid\n donors_percent_valid = (len(valid_values) * 1.0 /dfv['IDX'].count()) * 100.0\n profit_improvement_valid = (model_profits_valid - total_profits_all) / total_profits_all \n\n # Printing the results\n # Printing all df values.\n print 'Original validation dataset values:\\n'\n print 'All dataset size: %d' % len(dfv)\n print 'All dataset donation prob.: %.2f%% (infer from learning)' % (donation_prob_all * 100)\n print 'All dataset donations: $%.2f (infer from learning)' % total_donations_all\n print 'All dataset cost: $%.2f' % total_cost_all\n print 'All dataset profits: $%.2f' % total_profits_all\n print 'All dataset mean donation: $%.2f (infer from learning)' % mean_donation_all\n print '\\n'\n # Printing learning df values.\n print 'Learning dataset values:\\n'\n print 'Learning dataset size: %d' % len(learn_values)\n print 'Learning dataset donation prob.: %.2f%%' % (donation_prob_learn * 100)\n print 'Learning dataset donations: $%.2f' % total_donations_learn\n print 'Learning dataset cost: $%.2f' % total_cost_learn\n print 'Learning dataset profits: $%.2f' % model_profits_learn\n print 'Learning dataset mean donation: $%.2f' % mean_donation_learn\n print '\\n'\n # Printing validation values.\n print 'Validation dataset values:\\n'\n print 'Validation sample size: %d' % len(valid_values)\n print 'Validation sample donation prob.: %.2f%%' % (donation_prob_learn * 100)\n 
print 'Validation total donations: $%.2f' % total_donations_valid\n print 'Validation total cost: $%.2f' % total_cost_valid\n print 'Validation total profits: $%.2f' % model_profits_valid\n print 'Validation mean donation: $%.2f' % mean_donation_learn\n print '\\n'\n print 'Model profit improvement: %.2f %%' % (profit_improvement_valid * 100)\n print 'Model donors mailed percent: %.2f %%' % donors_percent_valid\n \n # Plotting the comparison.\n # Average donation\n comp = Series({'All dataset average donation': mean_donation_all, \\\n 'Model average donation': mean_donation_learn})\n comp2 = Series({'All dataset donation prob.': donation_prob_all * 100, \\\n 'Model donation porb.': donation_prob_learn * 100.0})\n plt.figure()\n comp.plot(kind='barh', color=['blue', 'green']).set_title('Average Donation all dataset vs model')\n plt.figure()\n comp2.plot(kind='barh', color=['blue', 'green']).set_title('Donation probability all dataset vs model')\n \n \n return valid\n \n\naa = single_model_val(NGOData, NGOvalidation, cost)\n",
"Applying our single model to the VALIDATION dataset we can see a profit improvement of 52.39%; mailing 17,700 customers with a mean donation of 16.43 and a total profit of 3,053.01.\nWe can see similar results than the ones we saw when we applied our model to the LEARNING dataset.\nBuilding a prediction model\nIn our previous model, we infer the donation amounts for the validation dataset, from the learning dataset. Now, we are going to build a more complex prediction model using the machine learning algorithm Random Forest. Once we train our model to predict the donation amounts, then we will use it to select the customer to mail.",
"# Selecting the more statistically significant variables to predict the donor_amount.\ncolumns = ['DONOR_AMOUNT', 'IDX', 'HV2', 'SOLP3', 'MAXRAMNT', 'IC4', 'EC8', 'RAMNT_3', \\\n 'RDATE_3', 'RAMNT_21', 'RAMNTALL', 'LASTGIFT', 'RAMNT_14', 'RAMNT_22' ]\n\n# Feature selection\nfeatures = columns[2:]\n\n# Preparing the train dataset\ntrain = NGOData[columns]\n# Cleansing the dataset.\ntrain = train.fillna(0)\n\n# building our Random forest model.\nclf = RandomForestRegressor(n_estimators=50, n_jobs=2)\nclf.fit(train[features], train.DONOR_AMOUNT)\n\n# Predicting the results.\npreds = clf.predict(train[features])\n\n# Testing the results of our prediction model.\n\n# Adding the predicted column to the dataset.\ntrain['DONOR_PRED'] = preds\n\n# previewing the results.\naa = train [['DONOR_AMOUNT', 'DONOR_PRED']]\naa[aa.DONOR_AMOUNT > 0][:10]",
"Here we can see really impresive results, our model can predict very well the donation amounts.",
"# Total donations dataset.\naa['DONOR_AMOUNT'].sum()\n\n# Total donations predicted by model\naa['DONOR_PRED'].sum()\n\n# Value predicted but no actual donation\naa[ (aa.DONOR_AMOUNT == 0) & (aa.DONOR_PRED >0.75 )].count()\n\n# Total Donation amount wrongly predicted\naa[ (aa.DONOR_AMOUNT == 0) & (aa.DONOR_PRED >0.75 )]['DONOR_PRED'].sum()\n\n# mean donation wrongly predicted.\naa[ (aa.DONOR_AMOUNT == 0) & (aa.DONOR_PRED >0.75 )]['DONOR_PRED'].mean()\n\n# Error rate.\nerror_rate = aa[ (aa.DONOR_AMOUNT == 0) & (aa.DONOR_PRED >0.75 )]['DONOR_PRED'].sum() \\\n /aa['DONOR_PRED'].sum()\n\n# Model corrected rate.\ncorrected_rate = round(1.0 - error_rate, 2)\ncorrected_rate\n\n# Actual donations not predicted.\naa[ (aa.DONOR_AMOUNT > 0) & (aa.DONOR_PRED ==0 )].count()\n\n# Actual equals predicted.\naa[ (aa.DONOR_AMOUNT == aa.DONOR_PRED)].count()",
"Now, we could apply our new prediction model to the validation dataset.",
"columns = ['IDX', 'HV2', 'SOLP3', 'MAXRAMNT', 'IC4', 'EC8', 'RAMNT_3', \\\n 'RDATE_3', 'RAMNT_21', 'RAMNTALL', 'LASTGIFT', 'RAMNT_14', 'RAMNT_22' ]\n\n# subset of validation\nvalidation = NGOvalidation[columns]\n# Cleansing the dataset.\nvalidation = validation.fillna(0) \n\n# predicting the donation amounts.\nDONOR_AMOUNT = clf.predict(validation[features])\n\n# Adding predicted donation amounts to validations subset.\nvalidation['DONOR_AMOUNT'] = DONOR_AMOUNT\n\n# Selecting only the customers with a donation greater than cost.\nvalidation_mail = validation[validation.DONOR_AMOUNT > 0.75]\n\n# Calculating customer mailed\nmailed = len(validation_mail)\nprint 'Customer mailed: %d' % mailed\n\n# Calculating total cost.\ntotal_cost = round(len(validation_mail) * 0.75, 2)\nprint 'Total Cost: %2.f' % total_cost\n\n# Calculating total donation amounts.\ntotal_donations = round(validation_mail['DONOR_AMOUNT'].sum() * corrected_rate, 2)\ntotal_donations\n\n# Calculating net profits\nprofits = total_donations - total_cost\nprofits\n\n# Model profits improvement.\nmodel_improvement = round(((profits - total_profits_all)/ total_profits_all) * 100.0, 2)\nprint 'Model profits improvements of %.2f%%' % model_improvement",
"Applying this more complex model to the VALIDATION dataset we can see a profit improvement of 657,45%; mailing 22,124 customers and with a total profit of 15,183.38."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
MegaShow/college-programming
|
Homework/Principles of Artificial Neural Networks/Week 13 DMN/DL_week13_memory.ipynb
|
mit
|
[
"week 13: Memory networks\n0. Introduction\n本次实验的代码主要参考自github的开源代码,原文请点击这里。 如对本次课件的内容有任何疑惑的同学可以直接微信我或者邮件到cuizhiying.csu@gmail.com\n0.1 Experimental content and requirements\n本次实验内容主要实现和运行 Dynamic Memory Networks for Visual and Textual Question Answering (2016) ,使用的是数据集是bAbI数据集 The (20) QA bAbI tasks,具体要求如下:\n1. 体会Memory Network的基本框架结构,阅读实验代码,结合理论课的内容,加深对Memory Network的思考和理解\n2. 独立完成实验指导书中提出的问题(简要回答)\n3. 按照实验指导书的引导,填充缺失部分的代码,让程序顺利地运转起来\n4. 坚持独立完成,禁止抄袭\n5. 实验结束后,将整个文件夹下载下来(注意保留程序运行结果),打包上传到超算课堂网站中(统一使用zip格式压缩)。\n0.2 Recommended Reading\n今天进行的任务是一个典型的QA( Question answering )任务, 这是一个我们之前还没有接触过的任务,所以在进行实验之前,强烈建议同学们可以在课堂之余,看一下相关的文章。对模型结构先有一个清晰认识能够很好帮助我们理解代码。当然,一边看代码一边理解也是一个很好的选择。\n以下四篇论文,在Memory Networks的演变上有清晰的脉络,可供参考。实验中的代码针对的是第四篇文章。\n1. Memory Networks (2015)\n2. End-To-End Memory Networks (2015)\n3. Ask Me Anything: Dynamic Memory Networks for Natural Language Processing (2016)\n4. Dynamic Memory Networks for Visual and Textual Question Answering (2016)\n若果对网络的理解还不够透彻,又不想花太多时间看论文的同学,强烈建议阅读这篇论文总结, 这份文章总结的思路非常清晰,能够非常有效地帮助大家更好地理解和完成本次实验。\n1. Dataset Explore\n1.1 Intuition\n本次实验用到是数据集是bAbI数据集中的 The (20) QA bAbI tasks。在数据存放在./data/en-10k/中,以txt的文件形式存放着,建议先打开文件来看一下,有个感性的认识。\n官方说明引用如下: \n\nThis section presents the first set of 20 tasks for testing text understanding and reasoning in the bAbI project. \nThe aim is that each task tests a unique aspect of text and reasoning, and hence test different capabilities of learning models. \n\nThe file format for each task is as follows:\nID text\nID text\nID text\nID question[tab]answer[tab]supporting fact IDS.\n...\nThe IDs for a given “story” start at 1 and increase. When the IDs in a file reset back to 1 you can consider the following sentences as a new “story”. Supporting fact IDs only ever reference the sentences within a “story”.\nFor Example:\n1 Mary moved to the bathroom.\n2 John went to the hallway.\n3 Where is Mary? bathroom 1\n4 Daniel went back to the hallway.\n5 Sandra moved to the garden.\n6 Where is Daniel? hallway 4\n7 John moved to the office.\n8 Sandra journeyed to the bathroom.\n9 Where is Daniel? hallway 4\n10 Mary moved to the hallway.\n11 Daniel travelled to the office.\n12 Where is Daniel? office 11\n13 John went back to the garden.\n14 John moved to the bedroom.\n15 Where is Sandra? bathroom 8\n1 Sandra travelled to the office.\n2 Sandra went to the bathroom.\n3 Where is Sandra? bathroom 2\n以上应该也是你们在该文件中看到的数据集的样子。\n1.2 dataset prepare\n处理以上格式的数据跟我们以前的任务不太一样,要繁琐很多。数据的预处理自然很重要,但不是本次实验的重点。所以,我们在这次实验中以已经处理好的Dataset和Dataloader的形式直接提供给大家使用。 \n我们只需要知道的是在本次实验中,数据预处理的基本思路是将上述文本,编码成了了单词索引的形式,然后将长短不一的文段采用最大段数的形式,以类似于补0的形式,padding到了同样的长度。这样子,我们就可以以batch的形式进行输入了。如果还没有理解的话,没有关系,我们接下来的测试。(感兴趣的同学可以直接阅读源代码./babi_loader.py)",
"from babi_loader import BabiDataset, pad_collate\n\nfrom torch.utils.data import DataLoader\n# There are 20 tasks, we should control which task we would like to load\ntask_id = 1\ndataset = BabiDataset( task_id )\ndataloader = DataLoader(dataset, batch_size=4, shuffle=False, collate_fn=pad_collate)",
"好了,现在我们导入了dataloader之后,可以尝试将每一个batch的数据打印出来看看是什么样子的",
"contexts, questions, answers = next(iter(dataloader))\nprint(contexts.shape)\nprint(questions.shape)\nprint(answers.shape)\nprint(contexts)\nprint(questions)\nprint(answers)",
"可以看到,打印出来的都是数字索引,不是真实的文本,所以我们在输出之前,肯定需要对这些索引进行重新映射,找回原来的文本信息,我们先看一下查找表",
"references = dataset.QA.IVOCAB\nprint(references)",
"好了,根据这个查找表,我们就可以很顺利得将我们的数据还原出来了。只需要按照索引重新映射回来即可。",
"def interpret_indexed_tensor(contexts, questions, answers):\n for n, data in enumerate(zip(contexts, questions, answers)):\n context = data[0]\n question = data[1]\n answer = data[2]\n \n print(n)\n for i, sentence in enumerate(context):\n s = ' '.join([references[elem.item()] for elem in sentence])\n print(f'{i}th sentence: {s}')\n s = ' '.join([references[elem.item()] for elem in question])\n print(f'question: {s}')\n s = references[answer.item()]\n print(f'answer: {s}')\n\ninterpret_indexed_tensor(contexts, questions, answers)",
"从上面测试中,我们就清楚我们该如何使用这个Dataloader了。 \n同时,通过观察上面的输入和输出,我们应该看到:\n1. <PAD>符号:在同一个batch中,所以的句子都进行了padding的操作,将一个batch中的所有的句子都补到和最长的句子一样长。同时,每个问题也是,比如batchsize为8,则说明这个batch中共有8个问题,但是有个故事可能句子多些,有的故事句子少些,统一padding到最多句子的形式就好了。在rnn中,我们本不必统一句子和文段的长度,在这里使用padding的方式固定长度是出于训练的需要,定长才能将句子和文段以batch的形式输入进行处理,是文本处理的一个常用的方式\n2. <EOS>符号:end of sentence\n3. 有的句子可以同时被多个问题引用到\n4. 预测的答案是输入的sentences中所包含的单词中的一个。\n2. Dynamic Memory Networks\n在理解这个网络结构之前,一定要认真阅读上面提到的论文总结,上面的讲述很详细,很多很细节的地方,我在这里就不会跟那篇文章那样子对比得那么详细了,强烈建议先重新回顾一下上述论文的演变过程,对网络的运作有个清楚的认识,重温总结,点击这里\n2.1 Architecture\n网络的总体架构入下图所示。\n<img src=\"./images/example.png\" width = \"50%\" />\n从上图中可以看出要实现这个QA系统,我们需要实现4个模块:Input Module, Memory Module, Question Module and Answer Module. 我们在下来的实现中看细节。\n2.2 Input Module\n首先导入一些常用的包",
"import os\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.nn.init as init\nfrom torch.autograd import Variable",
"整个DMN的过程使用了很多的GRU作为编码器,在这里的话,我们可以稍作回顾,一般的情况下,GRU的更新过程如下公式所示: \n <img src=\"./images/gru.png\" width = \"50%\" />\n一般情况下,我们直接将GRU看做简化版的LSTM即可。\n首先,我们需要对输入的句子进行编码,这里使用的是bi-directional GRU,另外注意到在forward函数中,我们用到了一个参数word_embedding参数。这里面实际上传入的是embedding函数,如nn.Embedding。因为我们分成了不同的模块来编写这份代码,所以我们需要设置这样子一个参数,来使得所有的word_embedding保持一致。整个过程如下图所示:\n<img src=\"./images/input.png\" width = \"70%\" />\n在这里使用bi-directional GRU来更好得获取前后文信息。如果对GRU模块不清楚的,可以点击这里,快速回顾一下. 在这里,bi-directional GRU的更新方式如下公式所示:\n<img src=\"./images/bi-gru.png\" width = \"50%\" />",
"class InputModule(nn.Module):\n def __init__(self, vocab_size, hidden_size):\n super(InputModule, self).__init__()\n self.hidden_size = hidden_size\n \n ##################################################################################\n #\n # nn.GRU use the bidirectional parameter to control the type of GRU\n #\n ##################################################################################\n self.gru = nn.GRU(hidden_size, hidden_size, bidirectional=True, batch_first=True)\n self.dropout = nn.Dropout(0.1)\n\n # we should all the word_embedding the same, while we using the different module\n def forward(self, contexts, word_embedding):\n '''\n contexts.size() -> (#batch, #sentence, #token)\n word_embedding() -> (#batch, #sentence x #token, #embedding)\n position_encoding() -> (#batch, #sentence, #embedding)\n facts.size() -> (#batch, #sentence, #hidden = #embedding)\n '''\n batch_num, sen_num, token_num = contexts.size()\n\n contexts = contexts.view(batch_num, -1)\n contexts = word_embedding(contexts)\n\n contexts = contexts.view(batch_num, sen_num, token_num, -1)\n contexts = self.position_encoding(contexts)\n contexts = self.dropout(contexts)\n\n #########################################################################\n #\n # if you change the gru type, you should also change the initial hidden state\n # as bidirectional gru's shape is (2, *, *), while normal gru just need( 1, *, *)\n #\n #########################################################################\n h0 = Variable(torch.zeros(2, batch_num, self.hidden_size).cuda())\n facts, hdn = self.gru(contexts, h0)\n #########################################################################\n #\n # if you use bi-directional GRU, you should fusion the output,\n # if you use normal GRU, commont the following code. \n # acconding to the equation (6)\n #\n #########################################################################\n facts = facts[:, :, :self.hidden_size] + facts[:, :, self.hidden_size:]\n return facts\n \n def position_encoding(self, embedded_sentence):\n '''\n embedded_sentence.size() -> (#batch, #sentence, #token, #embedding)\n l.size() -> (#sentence, #embedding)\n output.size() -> (#batch, #sentence, #embedding)\n '''\n _, _, slen, elen = embedded_sentence.size()\n\n l = [[(1 - s/(slen-1)) - (e/(elen-1)) * (1 - 2*s/(slen-1)) for e in range(elen)] for s in range(slen)]\n l = torch.FloatTensor(l)\n l = l.unsqueeze(0) # for #batch\n l = l.unsqueeze(1) # for #sen\n l = l.expand_as(embedded_sentence)\n weighted = embedded_sentence * Variable(l.cuda())\n return torch.sum(weighted, dim=2).squeeze(2) # sum with tokens",
"在上面的模块中,你可能注意到了self.position_encoding()这个函数,它的作用在于给输入的句子加上位置信息。即,position_encoding的作用是将输入的句子{我, 爱, 你}转换成{我1, 爱2, 你3}的形式进行输出。这样子做的含义如下文引用所示。\n具体含义引用如下 \n\n词袋模型本身是无序的,句子“我爱你”和“你爱我”在BOW中都是{我,爱,你},模型本无法区分这两句话不同的含义,但如果给每个词加上position encoding,变成{我1,爱2, 你3}和{我3,爱2,你1},则变成不同的数据,所以就是位置编码就是一种特征。\n作者:HideOnBooks \n链接:https://www.zhihu.com/question/56476625/answer/416928811 \n来源:知乎 \n著作权归作者所有。商业转载请联系作者获得授权,非商业转载请注明出处。 \n\n2.3 Question Module\nQuestion Module实际上只是对问题文本信息(一个句子,比如where are you?),使用一个普通的GRU进行编码,然后,将编码信息输送到Memory Module中进行Attention操作。代码比较简单,不复赘言。",
"class QuestionModule(nn.Module):\n def __init__(self, vocab_size, hidden_size):\n super(QuestionModule, self).__init__()\n self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)\n\n def forward(self, questions, word_embedding):\n '''\n questions.size() -> (#batch, #token)\n word_embedding() -> (#batch, #token, #embedding)\n gru() -> (1, #batch, #hidden)\n '''\n questions = word_embedding(questions)\n _, questions = self.gru(questions)\n questions = questions.transpose(0, 1)\n return questions",
"2.4 Memory Module\nMemory 模块的示意图如下所示:\n \n在这里,我们先获取经过Input Module编码过后的信息$F$,然后进行输入Attention Mechanism中,迭代查找相关信息。Attention Mechanism中隐含了从Question Module中获得的对问题的文本描述进行GRU编码之后的信息输入。 \nAttention实际上是在比较和查找Question 和 Input之间的联系",
"class AttentionGRUCell(nn.Module):\n def __init__(self, input_size, hidden_size):\n super(AttentionGRUCell, self).__init__()\n self.hidden_size = hidden_size\n \n self.Wr = nn.Linear(input_size, hidden_size)\n self.Ur = nn.Linear(hidden_size, hidden_size)\n self.W = nn.Linear(input_size, hidden_size)\n self.U = nn.Linear(hidden_size, hidden_size)\n\n def forward(self, fact, C, g):\n r = torch.sigmoid(self.Wr(fact) + self.Ur(C))\n h_tilda = torch.tanh(self.W(fact) + r * self.U(C))\n g = g.unsqueeze(1).expand_as(h_tilda)\n h = g * h_tilda + (1 - g) * C\n return h\n\nclass AttentionGRU(nn.Module):\n def __init__(self, input_size, hidden_size):\n super(AttentionGRU, self).__init__()\n self.hidden_size = hidden_size\n self.AGRUCell = AttentionGRUCell(input_size, hidden_size)\n\n def forward(self, facts, G):\n batch_num, sen_num, embedding_size = facts.size()\n C = Variable(torch.zeros(self.hidden_size)).cuda()\n for sid in range(sen_num):\n fact = facts[:, sid, :]\n g = G[:, sid]\n if sid == 0:\n C = C.unsqueeze(0).expand_as(fact)\n C = self.AGRUCell(fact, C, g)\n return C",
"<img src=\"./images/memory_equation.png\" width = \"50%\" />",
"class EpisodicMemory(nn.Module):\n def __init__(self, hidden_size):\n super(EpisodicMemory, self).__init__()\n self.AGRU = AttentionGRU(hidden_size, hidden_size)\n self.z1 = nn.Linear(4 * hidden_size, hidden_size)\n self.z2 = nn.Linear(hidden_size, 1)\n self.next_mem = nn.Linear(3 * hidden_size, hidden_size)\n\n def make_interaction(self, facts, questions, prevM):\n '''\n facts.size() -> (#batch, #sentence, #hidden = #embedding)\n questions.size() -> (#batch, 1, #hidden)\n prevM.size() -> (#batch, #sentence = 1, #hidden = #embedding)\n z.size() -> (#batch, #sentence, 4 x #embedding)\n G.size() -> (#batch, #sentence)\n '''\n batch_num, sen_num, embedding_size = facts.size()\n questions = questions.expand_as(facts)\n prevM = prevM.expand_as(facts)\n\n z = torch.cat([\n facts * questions,\n facts * prevM,\n torch.abs(facts - questions),\n torch.abs(facts - prevM)\n ], dim=2)\n\n z = z.view(-1, 4 * embedding_size)\n\n G = torch.tanh(self.z1(z))\n G = self.z2(G)\n G = G.view(batch_num, -1)\n G = F.softmax(G, dim=1)\n\n return G\n \n def forward(self, facts, questions, prevM):\n '''\n facts.size() -> (#batch, #sentence, #hidden = #embedding)\n questions.size() -> (#batch, #sentence = 1, #hidden)\n prevM.size() -> (#batch, #sentence = 1, #hidden = #embedding)\n G.size() -> (#batch, #sentence)\n C.size() -> (#batch, #hidden)\n concat.size() -> (#batch, 3 x #embedding)\n '''\n G = self.make_interaction(facts, questions, prevM)\n C = self.AGRU(facts, G)\n concat = torch.cat([prevM.squeeze(1), C, questions.squeeze(1)], dim=1)\n next_mem = F.relu(self.next_mem(concat))\n next_mem = next_mem.unsqueeze(1)\n return next_mem\n",
"2.5 Answer Module\nAnswer Module将Memory Module和Question Module的信息使用全连接层结合到一起,然后输出一个文本中所有单词的对应着问题答案的可能性。",
"class AnswerModule(nn.Module):\n def __init__(self, vocab_size, hidden_size):\n super(AnswerModule, self).__init__()\n self.z = nn.Linear(2 * hidden_size, vocab_size)\n self.dropout = nn.Dropout(0.1)\n\n def forward(self, M, questions):\n M = self.dropout(M)\n concat = torch.cat([M, questions], dim=2).squeeze(1)\n z = self.z(concat)\n return z",
"2.6 Combine\n重新回顾一下整体的网络结构 \nNote: 虽然本文实现的网络结构不是这一幅图(这幅结构图是上述四篇论文中的第三篇论文所提供的结构图),但是在此还是借用了这幅图,因为,这两篇论文的总体结构都非常相似,两者只是在部分结构的细节上有所差异,因此,在此仍旧借用这幅图来表述总体的网络结构图。其中的信息流动几乎是一毛一样的。\n<img src=\"./images/network2.png\" width = \"60%\" />",
"class DMNPlus(nn.Module):\n '''\n This class combine all the module above. The data flow is showed as the above image.\n '''\n def __init__(self, hidden_size, vocab_size, num_hop=3, qa=None):\n super(DMNPlus, self).__init__()\n self.num_hop = num_hop\n self.qa = qa\n self.word_embedding = nn.Embedding(vocab_size, hidden_size, padding_idx=0, sparse=True).cuda()\n self.criterion = nn.CrossEntropyLoss(reduction='sum')\n\n self.input_module = InputModule(vocab_size, hidden_size)\n self.question_module = QuestionModule(vocab_size, hidden_size)\n self.memory = EpisodicMemory(hidden_size)\n self.answer_module = AnswerModule(vocab_size, hidden_size)\n\n def forward(self, contexts, questions):\n '''\n contexts.size() -> (#batch, #sentence, #token) -> (#batch, #sentence, #hidden = #embedding)\n questions.size() -> (#batch, #token) -> (#batch, 1, #hidden)\n '''\n facts = self.input_module(contexts, self.word_embedding)\n questions = self.question_module(questions, self.word_embedding)\n M = questions\n for hop in range(self.num_hop):\n M = self.memory(facts, questions, M)\n preds = self.answer_module(M, questions)\n return preds\n\n # train the index into a word, it's similar to the part 1 data explore\n def interpret_indexed_tensor(self, var):\n if len(var.size()) == 3:\n # var -> n x #sen x #token\n for n, sentences in enumerate(var):\n for i, sentence in enumerate(sentences):\n s = ' '.join([self.qa.IVOCAB[elem.data[0]] for elem in sentence])\n print(f'{n}th of batch, {i}th sentence, {s}')\n elif len(var.size()) == 2:\n # var -> n x #token\n for n, sentence in enumerate(var):\n s = ' '.join([self.qa.IVOCAB[elem.data[0]] for elem in sentence])\n print(f'{n}th of batch, {s}')\n elif len(var.size()) == 1:\n # var -> n (one token per batch)\n for n, token in enumerate(var):\n s = self.qa.IVOCAB[token.data[0]]\n print(f'{n}th of batch, {s}')\n\n # calculate the loss of the network\n def get_loss(self, contexts, questions, targets):\n output = self.forward(contexts, questions)\n loss = self.criterion(output, targets)\n reg_loss = 0\n for param in self.parameters():\n reg_loss += 0.001 * torch.sum(param * param)\n preds = F.softmax(output, dim=1)\n _, pred_ids = torch.max(preds, dim=1)\n corrects = (pred_ids.data == answers.cuda().data)\n acc = torch.mean(corrects.float())\n return loss + reg_loss, acc\n",
"3. Training and Test\n3.1 Train the Network\nbAbI数据集共有20个小的QA任务,为了训练简单,我们每次只训练一个任务,请将下面的一个 cell 中的 task_id 设置成你的学号的最后一个数字,尾号为 0 的同学将task_id设置为10",
"task_id = 10\n\nepochs = 3\n\nif task_id in range(1, 21):\n#def train(task_id):\n dset = BabiDataset(task_id)\n vocab_size = len(dset.QA.VOCAB)\n hidden_size = 80\n \n\n model = DMNPlus(hidden_size, vocab_size, num_hop=3, qa=dset.QA)\n model.cuda()\n \n best_acc = 0\n optim = torch.optim.Adam(model.parameters())\n\n for epoch in range(epochs):\n dset.set_mode('train')\n train_loader = DataLoader(\n dset, batch_size=32, shuffle=True, collate_fn=pad_collate\n )\n\n model.train()\n total_acc = 0\n cnt = 0\n for batch_idx, data in enumerate(train_loader):\n contexts, questions, answers = data\n batch_size = contexts.size()[0]\n contexts = Variable(contexts.long().cuda())\n questions = Variable(questions.long().cuda())\n answers = Variable(answers.cuda())\n\n loss, acc = model.get_loss(contexts, questions, answers)\n optim.zero_grad()\n loss.backward()\n optim.step()\n \n total_acc += acc * batch_size\n cnt += batch_size\n if batch_idx % 20 == 0:\n print(f'[Task {task_id}, Epoch {epoch}] [Training] loss : {loss.item(): {10}.{8}}, acc : {total_acc / cnt: {5}.{4}}, batch_idx : {batch_idx}')",
"3.2 Test\n测试集上,该任务的准确率是多少",
"dset.set_mode('test')\ntest_loader = DataLoader(\n dset, batch_size=100, shuffle=False, collate_fn=pad_collate\n )\ntest_acc = 0\ncnt = 0\n\nmodel.eval()\nfor batch_idx, data in enumerate(test_loader):\n contexts, questions, answers = data\n batch_size = contexts.size()[0]\n contexts = Variable(contexts.long().cuda())\n questions = Variable(questions.long().cuda())\n answers = Variable(answers.cuda())\n \n _, acc = model.get_loss(contexts, questions, answers)\n test_acc += acc * batch_size\n cnt += batch_size\nprint(f'Task {task_id}, Epoch {epoch}] [Test] Accuracy : {test_acc / cnt: {5}.{4}}')",
"3.3 Show example\n我们不妨将输出结果转换会文本信息,目测一下该模型的表现, 首先,我们先重新编辑一下上面提供的解释函数。",
"def interpret_indexed_tensor(references, contexts, questions, answers, predictions):\n for n, data in enumerate(zip(contexts, questions, answers, predictions)):\n context = data[0]\n question = data[1]\n answer = data[2]\n predict = data[3]\n \n print(n)\n for i, sentence in enumerate(context):\n s = ' '.join([references[elem.item()] for elem in sentence])\n print(f'{i}th sentence: {s}')\n q = ' '.join([references[elem.item()] for elem in question])\n print(f'question: {q}')\n a = references[answer.item()]\n print(f'answer: {a}')\n p = references[predict.argmax().item()]\n print(f'predict: {p}')\n\ndset.set_mode('test')\ndataloader = DataLoader(dset, batch_size=4, shuffle=False, collate_fn=pad_collate)\ncontexts, questions, answers = next(iter(dataloader))\n\ncontexts = Variable(contexts.long().cuda())\nquestions = Variable(questions.long().cuda())\n\n# prediction\nmodel.eval()\npredicts = model(contexts, questions)\n\nreferences = dset.QA.IVOCAB\n#contexts, questions, answers = next(iter(dataloader))\n#interpret_indexed_tensor(contexts, questions, answers)\ninterpret_indexed_tensor(references, contexts, questions, answers, predicts)",
"4. Exercise\n1、QA问题和关系型数据库(如对关系型数据库有所遗忘的话,请点击这里,可稍作回顾)的检索有什么区别?简要说说你的理解\n答:关系型数据库的检索是通过建立在相应字段上的索引和条件表达式来实现的,而QA问题的检索式根据记忆状态信息来实现的。关系型数据库可以在字段上建立索引,然后通过索引得到数据的存储位置,从而获取数据。QA问题将问答的知识以memory state的形式存储起来,训练时可以更新memory state,而检索时只需要将问题转换为相应的特征值,然后通过DMN产生输出,再转换为自然语言。\n2、bAbI数据集的qa问题是怎么划分为了20个任务,划分的依据是什么?请到根据上文的提示,打开数据集文件进行观察,然后回答\n答:bAbI数据集的QA问题是以答案类型或答案依据来划分的。比如任务1是单个事实(句子)支持的答案,任务2、3是两个、三个事实(句子)支持的答案。比如任务10是回答yes、no、maybe,而任务7是回答数量。\n3、编写几行代码,探索一下在这个数据集中一个Task中训练集有多少条,测试集有多少条(提示是下面一个cell的代码)",
"# default dataset is training set\ndataset = BabiDataset( task_id )\nprint('train set:', len(dataset))\n\n# change the mode into \"test\", now dataset is testset\ndataset.set_mode(\"test\")\nprint('test set:', len(dataset))",
"答:任务10的训练集为9000条,测试集为1000条。\n4、根据注释提示,将input module中的bi-directional GRU改成普通的GRU,重新运行网络。",
"class InputModule(nn.Module):\n def __init__(self, vocab_size, hidden_size):\n super(InputModule, self).__init__()\n self.hidden_size = hidden_size\n self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)\n self.dropout = nn.Dropout(0.1)\n\n def forward(self, contexts, word_embedding):\n '''\n contexts.size() -> (#batch, #sentence, #token)\n word_embedding() -> (#batch, #sentence x #token, #embedding)\n position_encoding() -> (#batch, #sentence, #embedding)\n facts.size() -> (#batch, #sentence, #hidden = #embedding)\n '''\n batch_num, sen_num, token_num = contexts.size()\n\n contexts = contexts.view(batch_num, -1)\n contexts = word_embedding(contexts)\n \n contexts = contexts.view(batch_num, sen_num, token_num, -1)\n contexts = self.position_encoding(contexts)\n contexts = self.dropout(contexts)\n \n facts, hdn = self.gru(contexts)\n return facts\n\n def position_encoding(self, embedded_sentence):\n '''\n embedded_sentence.size() -> (#batch, #sentence, #token, #embedding)\n l.size() -> (#sentence, #embedding)\n output.size() -> (#batch, #sentence, #embedding)\n '''\n _, _, slen, elen = embedded_sentence.size()\n\n l = [[(1 - s/(slen-1)) - (e/(elen-1)) * (1 - 2*s/(slen-1)) for e in range(elen)] for s in range(slen)]\n l = torch.FloatTensor(l)\n l = l.unsqueeze(0) # for #batch\n l = l.unsqueeze(1) # for #sen\n l = l.expand_as(embedded_sentence)\n weighted = embedded_sentence * Variable(l.cuda())\n return torch.sum(weighted, dim=2).squeeze(2) # sum with tokens\n\ntask_id = 10\n\nepochs = 3\n\nif task_id in range(1, 21):\n#def train(task_id):\n dset = BabiDataset(task_id)\n vocab_size = len(dset.QA.VOCAB)\n hidden_size = 80\n \n\n model = DMNPlus(hidden_size, vocab_size, num_hop=3, qa=dset.QA)\n model.cuda()\n \n best_acc = 0\n optim = torch.optim.Adam(model.parameters())\n\n for epoch in range(epochs):\n dset.set_mode('train')\n train_loader = DataLoader(\n dset, batch_size=32, shuffle=True, collate_fn=pad_collate\n )\n\n model.train()\n total_acc = 0\n cnt = 0\n for batch_idx, data in enumerate(train_loader):\n contexts, questions, answers = data\n batch_size = contexts.size()[0]\n contexts = Variable(contexts.long().cuda())\n questions = Variable(questions.long().cuda())\n answers = Variable(answers.cuda())\n\n loss, acc = model.get_loss(contexts, questions, answers)\n optim.zero_grad()\n loss.backward()\n optim.step()\n \n total_acc += acc * batch_size\n cnt += batch_size\n if batch_idx % 20 == 0:\n print(f'[Task {task_id}, Epoch {epoch}] [Training] loss : {loss.item(): {10}.{8}}, acc : {total_acc / cnt: {5}.{4}}, batch_idx : {batch_idx}')",
"5、根据代码和理解,用自己的语言简要描述Episodic Memory所进行的操作。\n答:Eposodic Memory由Attention Mechanism和Memory Update Mechanism组成,它接受Input Module和Question Module的输出作为输入。首先,它将句子的表达信息facts和问题的描述信息questions一并传递给Attention Mechanism,得到句子信息和问题信息的一个关系。然后将关系信息和句子表达信息facts一同传递给Attention GRU,得到context vector。然后Meomory Update Mechanism会根据context vector来更新Eposodic Memory内部的memory值,并作为输出值输出。\n在DMN中会指定参数num_hop,该参数用于指定Eposodic Memory的迭代次数。"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pdamodaran/yellowbrick
|
examples/gary-mayfield/testing.ipynb
|
apache-2.0
|
[
"<h1>Digit Classification</h1>",
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.image as mpimg\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.ensemble import RandomForestClassifier\n\nfrom yellowbrick.classifier import ClassificationReport, ConfusionMatrix, ROCAUC, ClassBalance",
"<h2>Loading the data</h2>\n\n<ul>\n<li>We use panda's read_csv to read train.csv into a dataframe</li>\n<li>Separate our images and labels for supervised learning.</li>\n<li>Use train_test_split to break data into sets for training and testing</li>\n</ul>",
"labeled_images = pd.read_csv('train.csv')\nimages = labeled_images.iloc[0:10000,1:].as_matrix()\nlabels = labeled_images.iloc[0:10000,:1].as_matrix()\nlabels = np.ravel(labels)\ntrain_images, test_images, train_labels, test_labels = train_test_split(images, labels, train_size=0.75, test_size=0.25, random_state=42)",
"<h4>Viewing an image</h4>",
"i = 5000\nimg = train_images[i]\nimg=img.reshape((28,28))\nplt.imshow(img, cmap='gray')\nplt.title(train_labels[i])\n\nbayes = GaussianNB()\nclasses = [0,1,2,3,4,5,6,7,8,9]\nvisualizer = ClassificationReport(bayes, classes=classes)\n\nvisualizer.fit(train_images, train_labels)\nvisualizer.score(test_images, test_labels)\ng = visualizer.poof()\n\nbayes = GaussianNB()\nvisualizer = ConfusionMatrix(bayes)\n\nvisualizer.fit(train_images, train_labels)\nvisualizer.score(test_images, test_labels)\ng = visualizer.poof()\n\nforest = RandomForestClassifier()\nvisualizer = ClassBalance(forest, classes=classes)\n\nvisualizer.fit(train_images, train_labels)\nvisualizer.score(test_images, test_labels)\ng = visualizer.poof()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/applied-machine-learning-intensive
|
content/xx_misc/dimensionality_reduction/colab.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/xx_misc/dimensionality_reduction/colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 Google LLC.",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Dimensionality Reduction\nPrincipal Component Analysis (PCA) is one of the most common ways to perform dimensionality reduction. PCA takes a set of independent and dependent variables (dimensions) and creates a representation of the variable, or group of variables, that explains the most variance. In a regression or classification problem, that would mean reducing the number of variables or features to the most important aggregate components and perhaps discarding those which add little value to our model's predictive power. This is known as feature extraction, and can help simplify your model. \nLoad Packages",
"%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler\nfrom sklearn.decomposition import PCA\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LogisticRegression",
"Why PCA?\nThe curse of dimensionality states that analyzing data with high dimensionality can lead to overly complex models that are inefficient, suffer from overfitting, and tend to have less predictive power. In machine learning, this often means that your feature space is too large. Maybe there are more features than columns of data, or perhaps your data is too sparse to draw any statistically significant inferences. PCA simplifies the feature set into a set of \"principal components,\" which are linear combinations of the original features and have low correlation between themselves. PCA may be undesirable in a case where you want your model to be interpretable using your original features, not the principal components.\nAs a rule of thumb, if your optimal number of components is greater than or equal to your original feature count, you probably shouldn't use PCA. It is all about finding the optimal component count, where the components explain the most variance in your model. In other words, you want to choose the best features for your model.\nPCA and other techniques for dimensionality reduction also help to visualize and analyze higher dimensional data either in 2D or 3D. Sometimes PCA is referred to as Singular Value Decomposition (SVD), but we will call it PCA for now. \nIf you'd like to take a deep dive into the math (and there is quite a bit of math!), read these helpful lecture notes from a statistics course at Carnegie Mellon University.\nData Preparation\nPCA works best when features are normally distributed and have low multicollinearity. Because PCA is performing rotations in N-dimensional space, we typically need to standardize our data. Essentially, we are reducing the space that our data occupies in higher dimensions by standardizing the distribution, or scaling the range of values down to $[0, 1]$. Each method of scaling has its own data requirements, and there are several flavors of scaling and standardization. Therefore, you should first conduct a thorough data analysis of your features in order to make this assessment.\nYou may see the terms \"scaling,\" \"standardizing,\" \"centering,\" and \"normalizing\" used interchangeably. This can be confusing, so let's break down these terms.\n\n\nScaling: Changes the range of the data but does not affect the distribution.\n\n\nStandardizing: Changes the distribution of the data by calculating the standard normal score.\n\n\nCentering: Shifts the distribution of the data so that the mean is zero.\n\n\nNormalizing: normalizes the rows of your dataset.\n\n\nWhen using PCA to build a predictive model, we typically want to standardize the data with standard scalers. But some cases (e.g., cluster analysis or NLP) may require normalization of rows, not columns. There also may be other cases outside of PCA where you will need to scale or standardize.\nHere is a helpful guide for choosing which method is right for your data.\nYou can always check out the documentation for the implementation library, and scikit-learn's website has another helpful guide to its preprocessing methods.\nDownload Wine Data",
"df_wine = pd.read_csv(\n 'http://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data',\n header=None)\ndisplay(df_wine.head())\ndisplay(df_wine.shape)",
"Split Into Training and Test Data, and Then Standardize",
"# Split into training and testing sets.\nX, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.3, stratify=y, random_state=0)\n\n# Standardize the features.\nsc = StandardScaler()\nX_train_std = sc.fit_transform(X_train)\nX_test_std = sc.transform(X_test)",
"Covariance\nThe covariance of features $M$ and $N$ is defined as follows:\n$$ \\sigma_{MN}^2 = \\frac{1}{n}\\sum_i {(m_i-\\mu_M)^2(n_i-\\mu_N)^2}$$\n\n$\\mu_M$ is the sample mean of feature $M$\n$\\mu_N$ is the sample mean of feature $N$.\n\nCovariance is an extension of variance; it is an indication of variability within a set of two features, just as variance is an indicator of variability within a feature. Don't worry too much about the math here. The implementation of PCA hides the details.\nEigenvectors and Values\nEigenvectors represent the directional vectors that we search for in the N-dimensional space. Eigenvalues represent the length of these vectors, and they inform us of how much variance is explained by the Nth principal component. An eigenvalue of 1 means there is no more information gained beyond the original feature, so it is desirable to have principal components with values greater than 1.",
"cov_mat = np.cov(X_train_std.T)\neigen_vals, eigen_vecs = np.linalg.eig(cov_mat)",
"Again, don't worry too much about how eigenvalues and eigenvectors are calculated; most of it is under the hood in scikit-learn.\nHow Does PCA Work?\nWe use the covariance defined above to search for a first component (a vector that minimizes the error or distance from that vector and the data). This process iterates until a n_components, or number of vectors to build n principal components is found. In scikit-learn, you can choose a number of components to solve for, or let scikit-learn automatically choose the optimal number of components.",
"# Calculate cumulative sum of explained variances.\ntot = sum(eigen_vals)\nvar_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]\ncum_var_exp = np.cumsum(var_exp)\n\nax, fig = plt.subplots(1,1,figsize=(8, 4))\n# Plot explained variances.\nplt.bar(range(1, 14), var_exp, alpha=0.5, align='center',\n label='individual explained variance')\nplt.step(range(1, 14), cum_var_exp, where='mid',\n label='cumulative explained variance')\nplt.ylabel('Explained variance ratio')\nplt.xlabel('Principal component index')\nplt.legend(loc='best')\nplt.show()",
"Using PCA, we can see the explained variance of each component. The most variance is explained by the first principal component and drops off around 4 PCs. We can also see that the cumulative explained variance hits approximately 90% with 8 PCs.\nPCA for Feature Extraction\nPCA is just one form of dimensionality reduction, and you will come across other related forms, as well as other types of dataset transformations. Transforming your dataset is a key technique in model-building, so don't get too attached to your original dataset.\nBy sorting the eigenpairs (vectors and their values), we can project that data into a lower dimensional space.",
"# Make a list of (eigenvalue, eigenvector) tuples.\neigen_pairs = [(np.abs(eigen_vals[i]),\n eigen_vecs[:, i]) for i in range(len(eigen_vals))]\n\n# Sort the (eigenvalue, eigenvector) tuples from high to low.\neigen_pairs.sort(key=lambda k: k[0], reverse=True)\n\nw = np.hstack((eigen_pairs[0][1][:, np.newaxis],\n eigen_pairs[1][1][:, np.newaxis]))\nprint('Matrix W:\\n', w)",
"The result is a 13x2 projection matrix that is created from the top-2 eigenvectors. We can now use this projection matrix, $W$, to map any sample, $x$, to its 2-dimensional sample vector $x'$.",
"# Project training data onto PC1 and PC2.\nX_train_pca = X_train_std.dot(w)\n\n# Visualize projection.\ncolors = ['r', 'b', 'g']\nmarkers = ['s', 'x', 'o']\nfor l, c, m in zip(np.unique(y_train), colors, markers):\n plt.scatter(X_train_pca[y_train==l, 0], \n X_train_pca[y_train==l, 1], \n c=c, label=l, marker=m) \nplt.xlabel('PC 1')\nplt.ylabel('PC 2')\nplt.legend(loc='lower left')\nplt.show()",
"That is how you can implement PCA from scratch using a covariance matrix.\nUsing PCA With scikit-learn\nWe can now use scikit-learn to implement PCA and to understand all of the explained variance per component. If we choose n_components to be None, then we will get a number of components equal to the number of features in our dataset.",
"pca = PCA(n_components=None)\nX_train_pca = pca.fit_transform(X_train_std)\npca.explained_variance_ratio_",
"Now we can use our PCA in a logistic regression.",
"# Initialize pca and logistic regression model.\npca = PCA(n_components=2)\nlr = LogisticRegression(multi_class='auto', solver='liblinear', random_state=0)\n\n# Fit and transform data.\nX_train_pca = pca.fit_transform(X_train_std)\nX_test_pca = pca.transform(X_test_std)\nlr.fit(X_train_pca, y_train)",
"This can be visualized using plot decision regions.",
"from matplotlib.colors import ListedColormap\n\ndef plot_decision_regions(X, y, classifier, resolution=0.02):\n # Setup marker generator and color map.\n markers = ('s', 'x', 'o', '^', 'v')\n colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')\n cmap = ListedColormap(colors[:len(np.unique(y))])\n\n # Plot the decision surface.\n x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1\n x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1\n xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),\n np.arange(x2_min, x2_max, resolution))\n Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)\n Z = Z.reshape(xx1.shape)\n plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)\n plt.xlim(xx1.min(), xx1.max())\n plt.ylim(xx2.min(), xx2.max())\n\n # Plot class samples.\n for idx, cl in enumerate(np.unique(y)):\n plt.scatter(x=X[y == cl, 0], \n y=X[y == cl, 1],\n alpha=0.6, \n c=[cmap(idx)],\n edgecolor='black',\n marker=markers[idx], \n label=cl) # Plot decision regions for training se.\n\nplot_decision_regions(X_train_pca, y_train, classifier=lr)\nplt.xlabel('PC 1')\nplt.ylabel('PC 2')\nplt.legend(loc='lower right')\nplt.show()",
"Now let's plot the decision regions of the classifier and see if the classes are separable by eye.",
"# Plot decision regions for test set.\nplot_decision_regions(X_test_pca, y_test, classifier=lr)\nplt.xlabel('PC1')\nplt.ylabel('PC2')\nplt.legend(loc='lower right')\nplt.show()",
"Resources\n\nExamples adapted from \nTDS\nTutorial\nMath\nFeature Selection\n\nExercises\nWatch this video from Siraj Raval.",
"from IPython.display import IFrame\n\nIFrame(src=\"https://www.youtube.com/embed/jPmV3j1dAv4\", width=\"560\",\n height=\"315\", frameborder=\"0\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
msampathkumar/datadriven_pumpit
|
blood_donation/blood_donation_predicitons.ipynb
|
apache-2.0
|
[
"Blood Transfusion Service Center\nData Set Characteristics: Multivariate\nNumber of Instances: 748\nArea: Business\nAttribute Characteristics: Real\nNumber of Attributes: 5\nDate Donated 2008-10-03\nAssociated Tasks: Classification\nNumber of Web Hits:\n140894\nSource:\nOriginal Owner and Donor \nProf. I-Cheng Yeh \nDepartment of Information Management \nChung-Hua University, \nHsin Chu, Taiwan 30067, R.O.C. \ne-mail:icyeh '@' chu.edu.tw \nTEL:886-3-5186511 \nDate Donated: October 3, 2008 \nData Set Information:\nTo demonstrate the RFMTC marketing model (a modified version of RFM), this study adopted the donor database of Blood Transfusion Service Center in Hsin-Chu City in Taiwan. The center passes their blood transfusion service bus to one university in Hsin-Chu City to gather blood donated about every three months. To build a FRMTC model, we selected 748 donors at random from the donor database. These 748 donor data, each one included R (Recency - months since last donation), F (Frequency - total number of donation), M (Monetary - total blood donated in c.c.), T (Time - months since first donation), and a binary variable representing whether he/she donated blood in March 2007 (1 stand for donating blood; 0 stands for not donating blood).\nAttribute Information:\nGiven is the variable name, variable type, the measurement unit and a brief description. The \"Blood Transfusion Service Center\" is a classification problem. The order of this listing corresponds to the order of numerals along the rows of the database. \nR (Recency - months since last donation), \nF (Frequency - total number of donation), \nM (Monetary - total blood donated in c.c.), \nT (Time - months since first donation), and \na binary variable representing whether he/she donated blood in March 2007 (1 stand for donating blood; 0 stands for not donating blood).\n\nTable 1 shows the descriptive statistics of the data. We selected 500 data at random as the training set, and the rest 248 as the testing set. \nTable 1. Descriptive statistics of the data \nVariable Data Type Measurement Description min max mean std \nRecency quantitative Months Input 0.03 74.4 9.74 8.07 \nFrequency quantitative Times Input 1 50 5.51 5.84 \nMonetary quantitative c.c. blood Input 250 12500 1378.68 1459.83 \nTime quantitative Months Input 2.27 98.3 34.42 24.32 \nWhether he/she donated blood in March 2007 binary 1=yes 0=no Output 0 1 1 (24%) 0 (76%)\n\nRelevant Papers:\nYeh, I-Cheng, Yang, King-Jang, and Ting, Tao-Ming, \"Knowledge discovery on RFM model using Bernoulli sequence,\" Expert Systems with Applications, 2008, [Web Link]\nResources:\n\nhttps://archive.ics.uci.edu/ml/datasets/Blood+Transfusion+Service+Center",
"import warnings\nimport itertools\nwarnings.filterwarnings('ignore')\nfrom functools import lru_cache\n\n# standard tools\nimport numpy as np\nimport pandas as pd\n\n# %load_ext autoreload\n\nseed = 7 * 9\nnp.random.seed(seed)\n\nimport xgboost\nimport sklearn.ensemble\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import log_loss\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.pipeline import Pipeline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n%matplotlib inline\nfrom matplotlib.pylab import rcParams\nrcParams['figure.figsize'] = 12, 4\n\nscale_cols = {}\n\ndef rename_cols(name):\n if '(' in name:\n name = name.split('(')[0]\n return ''.join(map(lambda x: x[0], name.lower().split()))\n\n@lru_cache(maxsize=128)\ndef get_data():\n df = pd.read_csv('data/BloodDonation.csv', index_col=0)\n test_df = pd.read_csv('data/BloodDonationTest.csv', index_col=0)\n\n df.drop(['Total Volume Donated (c.c.)'], inplace=True, axis=1)\n test_df.drop(['Total Volume Donated (c.c.)'], inplace=True, axis=1)\n \n # rename cols\n new_cols_names = df.columns.map(rename_cols)\n for old_name, new_name in zip(df.columns, new_cols_names):\n print('Rename:', old_name, '\\t\\tNewname:', new_name)\n df.columns = new_cols_names\n test_df.columns = test_df.columns.map(rename_cols)\n \n global scale_cols\n for col in df.columns[:-1]:\n scale_cols[col] = StandardScaler(copy=True, with_mean=True, with_std=True).fit(df[col])\n df[col] = scale_cols[col].transform(df[col])\n test_df[col] = scale_cols[col].transform(test_df[col])\n\n return (df, test_df)\n\n## Data Modelling\ndef get_test_train(df):\n X = df.drop('mdim2', axis=1)\n y = df['mdim2']\n X_train, X_validation, y_train, y_validation = train_test_split(X, y, train_size=0.75, random_state=1234)\n return (X_train, X_validation, y_train, y_validation)\n\n\ndef test_train_validation_splt(X, y):\n # https://stackoverflow.com/questions/40829137/stratified-train-validation-test-split-in-scikit-learn\n from sklearn.cross_validation import train_test_split as tts\n SEED = 2000\n x_train, x_validation_and_test, y_train, y_validation_and_test = tts(X, y, test_size=.4, random_state=SEED)\n x_validation, x_test, y_validation, y_test = tts(x_validation_and_test, y_validation_and_test, test_size=.5, random_state=SEED)\n return (x_train, x_test, x_validation,\n y_train, y_test, y_validation)\n\n\n## save preds\ndef save_preds(preds, filename='submit.csv'):\n pd.DataFrame(preds.astype(np.float64),\n index=test_df.index,\n columns=['Made Donation in March 2007']\n ).to_csv(filename)\n print('stored file as', filename)\n\ndf, test_df = get_data()\n\ndf.columns\n\ndf['nod_per_msfd'] = df['nod'] / df['msfd']\ndf['msfd_per_nod'] = 1/df['nod_per_msfd']\n\ntest_df['nod_per_msfd'] = test_df['nod'] / test_df['msfd']\ntest_df['msfd_per_nod'] = 1/test_df['nod_per_msfd']\n\ndf.columns",
"test-train",
"X_train, X_validation, y_train, y_validation = get_test_train(df)\n\nX_train.shape, X_validation.shape",
"Bernoli",
"from sklearn.naive_bayes import BernoulliNB\n\nclf = BernoulliNB(alpha=0.5, binarize=0.5)\nclf.fit(X_train, y_train)\nlog_loss(y_train, clf.predict(X_train)), log_loss(y_validation, clf.predict(X_validation))\n\nclf",
"Gradient Boosting Classifer",
"clf = sklearn.ensemble.GradientBoostingClassifier(\n warm_start=True, subsample=.8,\n n_estimators=500,\n# learning_rate=0.0001,\n presort=True, verbose=0).fit(X_train, y_train)\n# log_loss(y, clf.predict(X))\n\n# results = cross_val_score(clf, X, y, cv=kfold, scoring='log_loss')\nlog_loss(y_train, clf.predict(X_train)), log_loss(y_validation, clf.predict(X_validation))\n\nclf",
"XGBOOST",
"from xgboost import XGBClassifier\n\nclf = XGBClassifier(max_depth=4,\n learning_rate=0.05,\n reg_alpha=0.1,\n reg_lambda=0.5,\n seed=12,\n# eta=0.02,\n colsample_bylevel=0.5,\n objective= 'binary:logistic'\n# n_estimators=800\n )\n\nclf.fit(X_train, y_train)\nlog_loss(y_train, clf.predict(X_train)), log_loss(y_validation, clf.predict(X_validation))\n\nxgb = xgboost\n\nparams = {}\nparams['objective'] = 'binary:logistic'\nparams['eval_metric'] = 'logloss'\nparams['eta'] = 0.02\nparams['max_depth'] = 5\n\nd_train = xgb.DMatrix(X_train, label=y_train)\nd_test = xgb.DMatrix(X_validation, label=y_validation)\n\nwatchlist = [(d_train, 'train'),\n (d_test, 'train')]\n\nbst = xgb.train(params, d_train, 1000, watchlist, early_stopping_rounds=50, verbose_eval=20)\nlog_loss(y_train, clf.predict(X_train)), log_loss(y_validation, clf.predict(X_validation))",
"Neural nets",
"from sklearn.neural_network import MLPClassifier\n\nclf = MLPClassifier(hidden_layer_sizes=(5, 5, 5), max_iter=500)\n\nclf.fit(X_train,y_train)\nlog_loss(y_train, clf.predict(X_train)), log_loss(y_validation, clf.predict(X_validation))\n\n# %%time\nclf = MLPClassifier(hidden_layer_sizes=(30, 18, 12, 5),\n max_iter=1250,\n solver='lbfgs', # 'lbfgs', 'adam'\n learning_rate_init=0.01,\n learning_rate='adaptive',\n activation='tanh',\n alpha=0.4,\n validation_fraction=0.25,\n early_stopping=True,\n verbose=True,\n random_state=7)\n\nclf.fit(X_train, y_train)\nlog_loss(y_train, clf.predict(X_train)), log_loss(y_validation, clf.predict(X_validation))",
"catboost",
"from catboost import Pool, CatBoostClassifier, cv, CatboostIpythonWidget\n\nmodel = CatBoostClassifier(\n custom_loss=['Logloss'],\n random_seed=42\n)\n\ncategorical_features_indices = np.where(X_train.dtypes != np.float)[0]\n\nmodel.fit(\n X_train, y_train,\n cat_features=categorical_features_indices,\n eval_set=(X_validation, y_validation),\n# verbose=True, # you can uncomment this for text output\n# plot=True\n)\n\nlog_loss(y_train, model.predict(X_train)), log_loss(y_validation, model.predict(X_validation))",
"Keras 1",
"from keras.models import Sequential\nfrom keras.layers import Activation, Dense\nfrom keras.wrappers.scikit_learn import KerasClassifier\n\n# For a single-input model with 2 classes (binary classification):\n\nmodel = Sequential()\nmodel.add(Dense(5, activation='tanh', input_dim=5))\nmodel.add(Dense(5, activation='relu'))\nmodel.add(Dense(5, activation='tanh'))\nmodel.add(Dense(1, activation='relu'))\nmodel.compile(optimizer='rmsprop',\n loss='binary_crossentropy',\n metrics=['accuracy'])\n\n\n\n# Train the model, iterating on the data in batches of 32 samples\nmodel.fit(X_train.values, y_train.values, epochs=100, batch_size=32, verbose=0)\n\nlog_loss(y_train.values, model.predict(X_train.values)), log_loss(y_validation.values, model.predict(X_validation.values))",
"KERAS 2",
"model = Sequential([\n Dense(8, input_dim=(5)),\n Dense(6),\n Activation('tanh'),\n# Dense(6),\n# Activation('relu'),\n Dense(6),\n Activation('relu'),\n Dense(1),\n Activation('sigmoid'),\n])\n\nmodel.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy'])\n\nmodel.fit(X_train.values, y_train.values, epochs=1000, batch_size=32, verbose=0)\n\nlog_loss(y_train.values, model.predict(X_train.values)), log_loss(y_validation.values, model.predict(X_validation.values))",
"KERAS 3",
"# baseline model\ndef create_baseline():\n\t# create model\n\tmodel = Sequential()\n\tmodel.add(Dense(9, input_dim=5, kernel_initializer='normal', activation='relu'))\n# \tmodel.add(Dense(5))\n\tmodel.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))\n\t# Compile model\n\tmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\treturn model\n\nestimators = []\nestimators.append(('standardize', StandardScaler()))\nestimators.append(('mlp', KerasClassifier(build_fn=create_baseline, epochs=100, batch_size=5, verbose=0)))\npipeline = Pipeline(estimators)\n\nkfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)\nresults = cross_val_score(pipeline, X_train.values, y_train.values, cv=kfold)\n\nlog_loss(y_train.values, model.predict(X_train.values)), log_loss(y_validation.values, model.predict(X_validation.values))\n\nresults"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ucsd-ccbb/jupyter-genomics
|
notebooks/chipSeq/Omics_Pipe_ChIPseq_GUI.ipynb
|
mit
|
[
"Omics Pipe GUI -- ChIPseq Homer Pipeline\nAuthor: K. Fisch\nEmail: Kfisch@ucsd.edu\nDate: June 2016\nNote: Before editing this notebook, please make a copy (File --> Make a copy).\nTable of Contents\n<a id = \"toc\"></a>\n1. <a href = \"#intro\">Introduction</a>\n * <a href = \"#config\">Configuration</a>\n * <a href = \"#params\">Parameters</a>\n * <a href = \"#input\">User Input Required </a>\n2. <a href = \"#pipeline\">ChIPseq Homer Pipeline</a>\n3. <a href = \"#results\">ChIPseq QC Results</a>\n * <a href = \"#qc\">Raw Data Quality Control (FastQC)</a>\n * <a href = \"#alignment\">Alignment QC (Bowtie)</a>\n * <a href = \"#clonal\">Clonal Tag Distribution</a>\n * <a href = \"#autocorr\">Autocorrelation Analysis</a>\n4. <a href = \"#homer\">Homer Results</a>\n * <a href = \"#peaks\">Peak Summary</a>\n * <a href = \"#annot\">Annotation Summary</a>\n * <a href = \"#kegg\">KEGG enrichment</a>\n * <a href = \"#motif\">Motif Analysis</a>\n * <a href = \"#promoters\">Peaks in Promoters</a>\n * <a href = \"#genes\">Peaks Annotated to Genes of Interest</a>\n * <a href = \"#venn1\">Venn Analysis for Comparison of Peaks Annotated to Genes</a>\n * <a href = \"#browse\">View Peak Pileups in UCSC Genome Browser</a>\n<a id = \"intro\"></a>\nIntroduction\nOmics pipe is an open-source, modular computational platform that automates ‘best practice’ multi-omics data analysis pipelines.\nThis Jupyter notebook wraps the functionality of Omics Pipe into an easy-to-use interactive Jupyter notebook and parses\nthe output for genomic interpretation. Read more about Omics Pipe at https://pythonhosted.org/omics_pipe/.",
"#Omics Pipe Overview\nfrom IPython.display import Image\nImage(filename='/data/chip/2606129465-omics_pipe_overview.png', width=500, height=100)",
"<a id = \"config\"></a>\nSet up your Jupyter notebook to import Python modules needed",
"#Import Omics pipe and module dependencies\nimport yaml\nfrom omics_pipe.parameters.default_parameters import default_parameters \nfrom ruffus import *\nimport sys \nimport os\nimport time\nimport datetime \nimport drmaa\nimport csv\nfrom omics_pipe.utils import *\nfrom IPython.display import IFrame\nimport pandas\nimport glob\nimport os\nimport matplotlib.pyplot as plt\nfrom matplotlib_venn import venn2,venn3, venn3_circles\n%matplotlib inline\n#%matplotlib notebook\nimport qgrid\nqgrid.nbinstall(overwrite=True)\nqgrid.set_defaults(remote_js=True, precision=4)\nfrom IPython.display import HTML\nimport mygene\n#Download scripts from https://github.com/gdavidson/ChIPseq_tools\nsys.path.append('/data/chip/ChIPseq_tools-master') #append path to downloaded scripts\nimport compareGeneLists as compare\n\nnow = datetime.datetime.now()\ndate = now.strftime(\"%Y-%m-%d %H:%M\")\n\n#Change top directory to locate result files\nos.chdir(\"/data/chip\")",
"<a id = \"params\"></a>\nCustomize input parameters for Omics Pipe\nRequired: Sample names, condition for each sample\nOptional: genome build, gene annotation, output paths, tool parameters, etc. \nSee full Omics Pipe documentation for a description of the configurable parameters. \n<a id = \"input\"></a>\nUser Input Required Here",
"###Customize parameters: Specify sample names and conditions\nsample_names = [\"1_2percent_input_R1\",\n\"1_h3k4me3_R1\",\n\"1_h3k9ac_R1\",\n\"1_h3k9me3_R1\",\n\"2_2percent_input_R1\",\n\"2_h3k4me3_R1\",\n\"2_h3k9ac_R1\",\n\"2_h3k9me3_R1\",\n\"3_2percent_input_R1\",\n\"3_h3k4me3_R1\",\n\"3_h3k9ac_R1\",\n\"3_h3k9me3_R1\",\n\"4_2percent_input_R1\",\n\"4_h3k4me3_R1\",\n\"4_h3k9ac_R1\",\n\"4_h3k9me3_R1\",\n\"5_2percent_input_R1\",\n\"5_h3k4me3_R1\",\n\"5_h3k9ac_R1\",\n\"5_h3k9me3_R1\",\n\"6_2percent_input_R1\",\n\"6_h3k4me3_R1\",\n\"6_h3k9ac_R1\",\n\"6_h3k9me3_R1\"]\ncondition = [\"Control\",\n\"H3K4me3\",\n\"H3K4ac\",\n\"H3K9me3\",\n\"Control\",\n\"H3K4me3\",\n\"H3K4ac\",\n\"H3K9me3\",\n\"Control\",\n\"H3K4me3\",\n\"H3K4ac\",\n\"H3K9me3\",\n\"Control\",\n\"H3K4me3\",\n\"H3K4ac\",\n\"H3K9me3\",\n\"Control\",\n\"H3K4me3\",\n\"H3K4ac\",\n\"H3K9me3\",\n\"Control\",\n\"H3K4me3\",\n\"H3K4ac\",\n\"H3K9me3\"\n ]\nlib_type = [\"single_end\"]*len(condition)\n\n#Update Metadata File\nmeta = {'Sample': pandas.Series(sample_names), 'condition': pandas.Series(condition) , 'libType': pandas.Series(lib_type)}\nmeta_df = pandas.DataFrame(data = meta)\ndeseq_meta_new = \"/data/chip/new_meta.csv\"\nmeta_df.to_csv(deseq_meta_new,index=False)\nprint meta_df\n\n#Define pairs for differential peak calling (ChIP-input or Treatment-Control)\npairs = '1_h3k4me3_R1-1_2percent_input_R1 1_h3k9ac_R1-1_2percent_input_R1 1_h3k9me3_R1-1_2percent_input_R1 2_h3k4me3_R1-2_2percent_input_R1 2_h3k9ac_R1-2_2percent_input_R1 2_h3k9me3_R1-2_2percent_input_R1 3_h3k4me3_R1-3_2percent_input_R1 3_h3k9ac_R1-3_2percent_input_R1 3_h3k9me3_R1-3_2percent_input_R1 4_h3k4me3_R1-4_2percent_input_R1 4_h3k9ac_R1-4_2percent_input_R1 4_h3k9me3_R1-4_2percent_input_R1 5_h3k4me3_R1-5_2percent_input_R1 5_h3k9ac_R1-5_2percent_input_R1 5_h3k9me3_R1-5_2percent_input_R1 6_h3k4me3_R1-6_2percent_input_R1 6_h3k9ac_R1-6_2percent_input_R1 6_h3k9me3_R1-6_2percent_input_R1 6_2percent_input_R1-4_2percent_input_R1 6_h3k4me3_R1-4_h3k4me3_R1 6_h3k9ac_R1-4_h3k9ac_R1 6_h3k9me3_R1-4_h3k9me3_R1 5_2percent_input_R1-4_2percent_input_R1 5_h3k4me3_R1-4_h3k4me3_R1 5_h3k9ac_R1-4_h3k9ac_R1 5_h3k9me3_R1-4_h3k9me3_R1 6_2percent_input_R1-3_2percent_input_R1 6_h3k4me3_R1-3_h3k4me3_R1 6_h3k9ac_R1-3_h3k9ac_R1 6_h3k9me3_R1-3_h3k9me3_R1 5_2percent_input_R1-3_2percent_input_R1 5_h3k4me3_R1-3_h3k4me3_R1 5_h3k9ac_R1-3_h3k9ac_R1 5_h3k9me3_R1-3_h3k9me3_R1 1_2percent_input_R1-3_2percent_input_R1 1_h3k4me3_R1-3_h3k4me3_R1 1_h3k9ac_R1-3_h3k9ac_R1 1_h3k9me3_R1-3_h3k9me3_R1 2_2percent_input_R1-3_2percent_input_R1 2_h3k4me3_R1-3_h3k4me3_R1 2_h3k9ac_R1-3_h3k9ac_R1 2_h3k9me3_R1-3_h3k9me3_R1 1_2percent_input_R1-4_2percent_input_R1 1_h3k4me3_R1-4_h3k4me3_R1 1_h3k9ac_R1-4_h3k9ac_R1 1_h3k9me3_R1-4_h3k9me3_R1 2_2percent_input_R1-4_2percent_input_R1 2_h3k4me3_R1-4_h3k4me3_R1 2_h3k9ac_R1-4_h3k9ac_R1 2_h3k9me3_R1-4_h3k9me3_R1'\n\n#Define pairs of peaks to compare\npairs_to_compare = 
['5_h3k4me3_R1_vs_5_2percent_input_R1-3_h3k4me3_R1_vs_3_2percent_input_R1','5_h3k9ac_R1_vs_5_2percent_input_R1-3_h3k9ac_R1_vs_3_2percent_input_R1','5_h3k9me3_R1_vs_5_2percent_input_R1-3_h3k9me3_R1_vs_3_2percent_input_R1','5_h3k4me3_R1_vs_5_2percent_input_R1-4_h3k4me3_R1_vs_4_2percent_input_R1','5_h3k9ac_R1_vs_5_2percent_input_R1-4_h3k9ac_R1_vs_4_2percent_input_R1','5_h3k9me3_R1_vs_5_2percent_input_R1-4_h3k9me3_R1_vs_4_2percent_input_R1','6_h3k4me3_R1_vs_6_2percent_input_R1-3_h3k4me3_R1_vs_3_2percent_input_R1','6_h3k9ac_R1_vs_6_2percent_input_R1-3_h3k9ac_R1_vs_3_2percent_input_R1','6_h3k9me3_R1_vs_6_2percent_input_R1-3_h3k9me3_R1_vs_3_2percent_input_R1','6_h3k4me3_R1_vs_6_2percent_input_R1-4_h3k4me3_R1_vs_4_2percent_input_R1','6_h3k9ac_R1_vs_6_2percent_input_R1-4_h3k9ac_R1_vs_4_2percent_input_R1','6_h3k9me3_R1_vs_6_2percent_input_R1-4_h3k9me3_R1_vs_4_2percent_input_R1','1_h3k4me3_R1_vs_1_2percent_input_R1-3_h3k4me3_R1_vs_3_2percent_input_R1','1_h3k9ac_R1_vs_1_2percent_input_R1-3_h3k9ac_R1_vs_3_2percent_input_R1','1_h3k9me3_R1_vs_1_2percent_input_R1-3_h3k9me3_R1_vs_3_2percent_input_R1','2_h3k4me3_R1_vs_2_2percent_input_R1-4_h3k4me3_R1_vs_4_2percent_input_R1','2_h3k9ac_R1_vs_2_2percent_input_R1-4_h3k9ac_R1_vs_4_2percent_input_R1','2_h3k9me3_R1_vs_2_2percent_input_R1-4_h3k9me3_R1_vs_4_2percent_input_R1','1_h3k4me3_R1_vs_1_2percent_input_R1-3_h3k4me3_R1_vs_3_2percent_input_R1','1_h3k9ac_R1_vs_1_2percent_input_R1-3_h3k9ac_R1_vs_3_2percent_input_R1','1_h3k9me3_R1_vs_1_2percent_input_R1-3_h3k9me3_R1_vs_3_2percent_input_R1','2_h3k4me3_R1_vs_2_2percent_input_R1-4_h3k4me3_R1_vs_4_2percent_input_R1','2_h3k9ac_R1_vs_2_2percent_input_R1-4_h3k9ac_R1_vs_4_2percent_input_R1','2_h3k9me3_R1_vs_2_2percent_input_R1-4_h3k9me3_R1_vs_4_2percent_input_R1','6_h3k4me3_R1_vs_4_h3k4me3_R1-6_h3k9me3_R1_vs_4_h3k9me3_R1','5_h3k4me3_R1_vs_4_h3k4me3_R1-5_h3k9me3_R1_vs_4_h3k9me3_R1','6_h3k4me3_R1_vs_3_h3k4me3_R1-6_h3k9me3_R1_vs_3_h3k9me3_R1','5_h3k4me3_R1_vs_3_h3k4me3_R1-5_h3k9me3_R1_vs_3_h3k9me3_R1','6_h3k9ac_R1_vs_4_h3k9ac_R1-6_h3k9me3_R1_vs_4_h3k9me3_R1','5_h3k9ac_R1_vs_4_h3k9ac_R1-5_h3k9me3_R1_vs_4_h3k9me3_R1','6_h3k9ac_R1_vs_3_h3k9ac_R1-6_h3k9me3_R1_vs_3_h3k9me3_R1','5_h3k9ac_R1_vs_3_h3k9ac_R1-5_h3k9me3_R1_vs_3_h3k9me3_R1','6_h3k4me3_R1_vs_4_h3k4me3_R1-6_h3k9ac_R1_vs_4_h3k9ac_R1','5_h3k4me3_R1_vs_4_h3k4me3_R1-5_h3k9ac_R1_vs_4_h3k9ac_R1','6_h3k4me3_R1_vs_3_h3k4me3_R1-6_h3k9ac_R1_vs_3_h3k9ac_R1','5_h3k4me3_R1_vs_3_h3k4me3_R1-5_h3k9ac_R1_vs_3_h3k9ac_R1']\n\n###Update parameters, such as GENOME, GTF_FILE, paths, etc\nparameters = \"/root/src/omics-pipe/tests/test_params_ChIPseq_HOMER_AWS.yaml\"\nstream = file(parameters, 'r')\nparams = yaml.load(stream)\nparams.update({\"SAMPLE_LIST\": sample_names})\nparams.update({\"PAIR_LIST\": pairs})\nparams.update({\"R_VERSION\": '3.2.3'})\nparams.update({\"GENOME\": '/database/Homo_sapiens/UCSC/hg19/Sequence/WholeGenomeFasta/genome.fa'})\nparams.update({\"REF_GENES\": '/database/Homo_sapiens/UCSC/hg19/Annotation/Genes/genes.gtf'})\nparams.update({\"RAW_DATA_DIR\": '/data/data'})\nparams.update({\"TEMP_DIR\": '/data/data/tmp'})\nparams.update({\"PIPE_MULTIPROCESS\": 100})\nparams.update({\"STAR_VERSION\": '2.4.5a'})\nparams.update({\"PARAMS_FILE\": '/data/results/updated_params.yaml'})\nparams.update({\"LOG_PATH\": ':/data/results/logs'})\nparams.update({\"QC_PATH\": \"/data/results/QC\"})\nparams.update({\"FLAG_PATH\": \"/data/results/flags\"})\nparams.update({\"BOWTIE_RESULTS\": \"/data/results/bowtie\"})\nparams.update({\"HOMER_RESULTS\": 
\"/data/results/homer\"})\nparams.update({\"BOWTIE_INDEX\": \"/data/database/Homo_sapiens/UCSC/hg19/Sequence/BowtieIndex/genome\"})\nparams.update({\"ENDS\": 'SE'})\nparams.update({\"HOMER_VERSION\": '4.6'})\nparams.update({\"TRIMMED_DATA_PATH\": \"/data/results/trimmed\"})\nparams.update({\"HOMER_TRIM_OPTIONS\": \"-3 GATCGGAAGAGCACACGTCT -mis 1 -minMatchLength 6 -min 45\"})\nparams.update({\"HOMER_PEAKS_OPTIONS\": \"-o auto -region -size 1000 -minDist 2500\"})\nparams.update({\"HOMER_MOTIFS_OPTIONS\": \"-start -1000 -end 100 -len 8,10 -p 4\"})\nparams.update({\"HOMER_ANNOTATE_OPTIONS\":\"\"})\nparams.update({\"HOMER_GENOME\": \"hg19\"})\n#update params\ndefault_parameters.update(params)\n\n#write yaml file\nstream = file('updated_params.yaml', 'w')\nyaml.dump(params,stream)\np = Bunch(default_parameters)\n#View Parameters\nprint \"Run Parameters: \\n\" + str(params)",
"<a id = \"pipeline\"></a>\nOmics Pipe ChIPseq HOMER Pipeline\nThe following commands execute the Omics Pipe ChIPseq HOMER pipeline http://homer.salk.edu/homer/index.html",
"### Omics Pipe Pipelines\nfrom IPython.display import Image\nImage(filename='/data/chip/2365251253-omics_pipe_pipelines_20140402.png', width=700, height=250)\n\n###Run Omics Pipe from the command line\n!omics_pipe ChIPseq_HOMER /data/chip/updated_params.yaml",
"<a id = \"results\"></a>\nChIPseq Results\nOmics Pipe produces output files for each of the steps in the pipeline, as well as log files and run information (for reproducibility). \nSummarized output for each of the steps is displayed below for biological interpretation.",
"#Change top directory to locate result files\nos.chdir(\"/data/chip\")\n\n#Display Omics Pipe Pipeline Run Status\n#pipeline = './flags/pipeline_combined_%s.pdf' % date\npipeline = './flags/pipeline_combined_2016-05-16 17:41.pdf'\nIFrame(pipeline, width=700, height=500)",
"<a id = \"qc\"></a>\nQuality Control of Raw Data -- FastQC\nQuality control of the raw data (fastq files) was assessed using the tool FastQC (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/). \nThe results for all samples are summarized below, and samples are given a PASS/FAIL rating.",
"###Summarize FastQC raw data QC results per sample\nresults_dir = './QC/'\n# Below is the complete list of labels in the summary file\nsummary_labels = [\"Basic Statistics\", \"Per base sequence quality\", \"Per tile sequence quality\", \n \"Per sequence quality scores\", \"Per base sequence content\", \"Per sequence GC content\", \n \"Per base N content\", \"Sequence Length Distribution\", \"Sequence Duplication Levels\", \n \"Overrepresented sequences\", \"Adapter Content\", \"Kmer Content\"]\n\n# Below is the list I anticipate caring about; I leave the full list above in case it turns out later\n# I anticipated wrong and need to update this one.\nlabels_of_interest = [\"Basic Statistics\", \"Per base sequence quality\"]\n\n# Look for each file named summary.txt in each subdirectory named *_fastqc in the results directory\nsummary_wildpath = os.path.join(results_dir, '*/*_fastqc', \"summary.txt\")\nsummary_filepaths = [x for x in glob.glob(summary_wildpath)]\n#print os.getcwd()\n# Examine each of these files to find lines starting with \"FAIL\" or \"WARN\"\nfor curr_summary_path in summary_filepaths:\n has_error = False\n #print(divider) \n with open(curr_summary_path, 'r') as f:\n for line in f:\n if line.startswith(\"FAIL\") or line.startswith(\"WARN\"):\n fields = line.split(\"\\t\")\n if not has_error:\n print(fields[2].strip() + \": PASS\") # the file name\n has_error = True \n if fields[1] in labels_of_interest:\n print(fields[0] + \"\\t\" + fields[1])\n\n#Display QC results for individual samples\nsample = \"6_h3k9me3_R1\"\nname = '/data/chip/QC/%s_fastqc/fastqc_report.html' % (sample)\n#name = './QC/%s/%s_fastqc/fastqc_report.html' % (sample,sample)\nIFrame(name, width=1000, height=600)",
"<a id = \"alignment\"></a>\nAlignment Summary Statistics -- Bowtie\nThe samples were aligned to the genome with the Bowtie aligner (http://bowtie-bio.sourceforge.net/index.shtml). \nThe alignment statistics for all samples are summarized and displayed below. Samples that do not pass the alignment quality filter \n(Good quality = # aligned reads > 10 million and % aligned > 60%) are excluded from downstream analyses.",
"#Run samstat to produce summary statistics from Bowtie output\n!samstat ./bowtie/*/*.bam\n\n##Summarize Alignment QC Statistics\nimport sys\nfrom io import StringIO\nalign_dir = './bowtie/'\n# Look for each file named summary.txt in each subdirectory named *_fastqc in the results directory\nsummary_wildpath = os.path.join(align_dir, '*/', \"*.bam.samstat.html\")\n#summary_wildpath = os.path.join(star_dir, \"*Log.final.out\")\nsummary_filepaths = [x for x in glob.glob(summary_wildpath)]\n#print summary_filepaths\n\nalignment_stats = pandas.DataFrame()\nfor curr_summary_path in summary_filepaths: \n #with open(curr_summary_path, 'r') as f:\n filename = curr_summary_path.replace(\"./bowtie/\",\"\")\n filename2 = filename.replace(\".bam.samstat.html\",\"\")\n filename3 = filename2.replace(\"/*\",\"\")\n dfs = pandas.read_html(curr_summary_path, header =0)\n df = dfs[0]\n raw_reads1 = df[\"Number\"]\n raw_reads = raw_reads1[6]\n aligned_reads1 = df[\"Number\"]\n aligned_reads = aligned_reads1[0]\n percent_aligned1 = df[\"Percentage\"]\n percent_aligned = percent_aligned1[0]\n d = {\"Sample\": pandas.Series(filename3), \"Raw_Reads\": pandas.Series(float(raw_reads)),\n \"Aligned_Reads\": pandas.Series(float(aligned_reads)),\n \"Percent_Uniquely_Aligned\": pandas.Series(percent_aligned)}\n p = pandas.DataFrame(data=d)\n alignment_stats = alignment_stats.append(p)\n#print alignment_stats\nalignment_stats.to_csv(\"alignment_stats_summary.csv\",index=False)\n#View interactive table \nqgrid.show_grid(alignment_stats, grid_options={'forceFitColumns': False, 'defaultColumnWidth': 200})\n\n#Barplot of number of aligned reads per sample\nplt.figure(figsize=(10,10))\nax = plt.subplot(111)\nalignment_stats.plot(ax=ax, kind='barh', title='# of Reads')\nax.axis(x='off')\nax.axvline(x=10000000, linewidth=2, color='Red', zorder=0)\n#plt.xlabel('# Aligned Reads',fontsize=16)\nfor i, x in enumerate(alignment_stats.Sample):\n ax.text(0, i + 0, x, ha='right', va= \"bottom\", fontsize='medium')\nplt.savefig('./alignment_stats_%s' %date ,dpi=300) # save figure\n\n###Flag samples with poor alignment or low numbers of reads\ndf = alignment_stats\nfailed_samples = df.loc[(df.Aligned_Reads < 10000000) | (df.Percent_Uniquely_Aligned < 40), ['Sample','Raw_Reads', 'Aligned_Reads', 'Percent_Uniquely_Aligned']]\nprint failed_samples\n#View interactive table \n#qgrid.show_grid(failed_samples, grid_options={'forceFitColumns': False, 'defaultColumnWidth': 200})\n\n#View Alignment Statistics for failed samples\nfor failed in failed_samples[\"Sample\"]:\n #fname = \"/data/results/star/%s/Log.final.out\" % failed\n fname = \"./bowtie/%s.bam.samstat.html\" % failed\n print fname\n IFrame(fname, width=1000, height=600)\n\n###Samples that passed QC for alignment \npassed_samples = df.loc[(df.Aligned_Reads > 10000000) | (df.Percent_Uniquely_Aligned > 40), ['Sample','Raw_Reads', 'Aligned_Reads', 'Percent_Uniquely_Aligned']]\n\nprint \"Number of samples that passed alignment QC = \" + str(len(passed_samples))\n#View interactive table \n#qgrid.show_grid(passed_samples, grid_options={'forceFitColumns': False, 'defaultColumnWidth': 200})\n\n#View Alignment Statistics for passed samples\nfor passed in passed_samples[\"Sample\"]:\n #fname = \"/data/results/star/%s/Log.final.out\" % passed\n fname = \"./bowtie/%s.bam.samstat.html\" % passed\n print passed\n IFrame(fname, width=1000, height=600)",
"<a id = \"clonal\"></a>\nClonal Tag Distribution\ntagCountDistribution.txt - File contains a histogram of clonal read depth, showing the number of reads per unique position. If an experiment is \"over-sequenced\", you start seeing the same reads over and over instead of unique reads.",
"for sample in sample_names:\n fi = \"./%s/tagCountDistribution.txt\" % sample\n counts1 = pandas.read_csv(fi, sep=\"\\t\")\n counts = counts1.head(10)\n counts.set_index = 0\n counts[[1]].plot.bar().set_title(sample)\n plt.savefig('./clonal_distribution_plot_%s' %sample ,dpi=300) # save figure",
"<a id = \"autocorr\"></a>\nAutocorrelation Analysis\ntagAutocorrelation.txt - The autocorrelation routine creates a distribution of distances between adjacent reads in the genome. If reads are mapped to the same strand, they are added to the first column. If adjacent reads map to different strands, they are added to the 2nd column. The results from autocorrelation analysis are very useful for troubleshooting problems with the experiment, and are used to estimate the fragment length for ChIP-Seq and MNase-Seq.",
"for sample in sample_names:\n fi = \"./%s/tagAutocorrelation.txt\" % sample\n tags = pandas.read_csv(fi, sep=\"\\t\")\n #Distance in bp(Fragment Length Estimate: 164)(Peak Width Estimate: 164)\tSame Strand (+ for Watson strand, - for Crick)\tOpposite Strand\n tags.columns = ['Relative_Distance_Between_Reads(bp)', 'Same_Strand', 'Opposite_Strand']\n ax1 = tags.plot(x='Relative_Distance_Between_Reads(bp)', y=['Same_Strand','Opposite_Strand'])\n ax1.set_ylim(10000,250000)\n ax1.set_xlim(-1000,1000)\n ax1.set_title(sample)\n plt.savefig('./autocorrelation_plot_%s' %sample ,dpi=300) # save figure",
"<a id = \"homer\"></a>\nHomer Results\nHOMER (Hypergeometric Optimization of Motif EnRichment) is a suite of tools for Motif Discovery and next-gen sequencing analysis. It is a collection of command line programs for unix-style operating systems written in Perl and C++. HOMER was primarily written as a de novo motif discovery algorithm and is well suited for finding 8-20 bp motifs in large scale genomics data. http://homer.salk.edu/homer/index.html\n<a id = \"peaks\"></a>\nPeak Summary\nView top of peaks file for peak calling statistics",
"pairs1 = pairs.replace(\" \", \",\")\npairs2 = pairs1.replace(\"-\", \"_vs_\")\npairs3 = pairs2.split(\",\")\npeak_stats = pandas.DataFrame()\nfor pair in pairs3: \n fname = \"./%s/regions.txt\" % pair\n with open(fname, 'r') as fin:\n head = [next(fin) for x in xrange(40)]\n df = pandas.DataFrame(head)\n df.columns=[\"col\"]\n df['col'] = df['col'].str.replace('\\n','')\n df = pandas.DataFrame(df.col.str.split('=',1).tolist(),columns = ['sample',pair])\n df_items = df[['sample']]\n df_values = df[[pair]]\n peak_stats = pandas.concat([peak_stats, df_values],axis=1)\n #print pair\npeak_stats = pandas.concat([df_items,peak_stats],axis=1)\npeak_stats =peak_stats.transpose()\npeak_stats =peak_stats.dropna(axis=1)\npeak_stats.columns = peak_stats.iloc[0]\npeak_stats = peak_stats[1:]\npeak_stats.to_csv(\"peak_stats_summary.csv\",index=False)\n\n#View interactive table \nqgrid.show_grid(peak_stats, grid_options={'forceFitColumns': False, 'defaultColumnWidth': 200})\n",
"Number of Peaks Per Sample",
"#Display peak summary graphs\n\n#Barplot of number of peaks per sample\nnum_peaks = peak_stats.iloc[:,[1]]\nnum_peaks.columns = [\"Number of Peaks\"]\nnum_peaks =num_peaks.convert_objects(convert_numeric=True)\nnum_peaks = num_peaks.sort_values([\"Number of Peaks\"],axis=0,ascending=False)\n\nnum_peaks.plot.bar(figsize=(15, 5))\nplt.savefig('./peaks_summary.png' ,dpi=300) # save figure\n\n",
"IP Efficiency\nApproximate IP effeciency describes the fraction of tags found in peaks versus. genomic background. This provides an estimate of how well the ChIP worked. Certain antibodies like H3K4me3, ERa, or PU.1 will yield very high IP efficiencies (>20%), while most rand in the 1-20% range. Once this number dips below 1% it's a good sign the ChIP didn't work very well and should probably be optimized. http://homer.salk.edu/homer/ngs/peaks.html",
"#Display IP efficiency summary graphs, with horizontal line at y=1\nIP = peak_stats.iloc[:,[8]]\nIP.columns = [\"IP_Efficiency\"]\nIP['IP_Efficiency'] = IP['IP_Efficiency'].replace('%','',regex=True).astype('float')\nIP =IP.sort_values(['IP_Efficiency'],axis=0,ascending=False)\nIP.plot.bar(figsize=(15, 5))\nplt.axhline(y=1, color = \"red\", linewidth = 2)\nplt.savefig('./ipefficiency_summary.png' ,dpi=300) # save figure\n",
"<a id = \"annot\"></a>\nAnnotation Summary\nVisualize pie chart/bar graph of annotated peaks\nGene Type",
"#Summarize annotation stats\nannot_stats = pandas.DataFrame()\nfor pair in pairs3: \n fname = \"./%s/regions.annotate.txt\" % pair\n fi = pandas.read_csv(fname, sep=\"\\t\")\n fi.columns = [c.replace(' ', '_') for c in fi.columns]\n fi.Gene_Type.value_counts().plot(kind=\"pie\",figsize=(6, 6))\n plt.axis('equal')\n plt.title(pair)\n plt.savefig('./Peaks_Gene_Type_pie_%s.png' %pair ,dpi=300) # save figure\n plt.show()\n #qgrid.show_grid(fi, grid_options={'forceFitColumns': False, 'defaultColumnWidth': 200})\n\n#View interactive table \n#qgrid.show_grid(peak_stats, grid_options={'forceFitColumns': False, 'defaultColumnWidth': 200})\n",
"Annotation",
"#Summarize annotation stats\nannot_stats = pandas.DataFrame()\nfor pair in pairs3: \n fname = \"./%s/regions.annotate.txt\" % pair\n fi = pandas.read_csv(fname, sep=\"\\t\")\n fi.columns = [c.replace(' ', '_') for c in fi.columns]\n fi['Annotation'] = fi['Annotation'].replace('\\(.*?\\)','',regex=True)\n fi['Annotation'] = fi['Annotation'].replace(' \\.*?','',regex=True) \n fi['Annotation'] = fi['Annotation'].replace('\\..*$','',regex=True) \n\n fi.Annotation.value_counts().plot(kind=\"pie\", figsize=(8, 8))\n plt.axis('equal')\n plt.title(pair)\n plt.savefig('./Peaks_Gene_Type_pie_%s' %pair ,dpi=300) # save figure\n plt.show()\n \n\n#Download scripts from https://github.com/gdavidson/ChIPseq_tools\nimport sys\nsys.path.append('/data/chip/ChIPseq_tools-master') #append path to downloaded scripts\nimport getFromAnnotations as gfa\n\nfor pair in pairs3:\n annotationList = gfa.getAnnotationList('%s/regions.annotate.txt' %pair)\n #plot distances\n try:\n #pie chart \n pieChartMap = gfa.getPieChartMap(annotationList)\n gfa.pieChart(pieChartMap, pair)\n plt.show()\n plt.savefig('./Pie_Chart_with_numbers_%s' %pair ,dpi=300) # save figure\n \n except ValueError: \n next\n\n#qgrid.show_grid(fi.sample(200), grid_options={'forceFitColumns': False, 'defaultColumnWidth': 200})",
"<a id = \"kegg\"></a>\nKEGG Enrichment for peaks\nVisualize KEGG gene set enrichment for peaks annotated to genes",
"#Summarize annotation stats\nkegg_stats = pandas.DataFrame()\nfor pair in pairs3: \n fname = \"./%s_GO/kegg.txt\" % pair\n fi = pandas.read_csv(fname, sep=\"\\t\")\n fi.columns = [c.replace(' ', '_') for c in fi.columns]\n fi = fi.loc[fi[\"Enrichment\"] < 0.05]\n fi[\"comparison\"] = pair\n kegg_stats = kegg_stats.append(fi)\n\n#write summary to file\nkegg_stats.to_csv(\"kegg_stats_summary.csv\",index=False)\n\n#View interactive table \nqgrid.show_grid(kegg_stats, grid_options={'forceFitColumns': False, 'defaultColumnWidth': 200})",
"<a id = \"motif\"></a>\nMotif Analysis\nView enriched motifs http://homer.salk.edu/homer/ngs/peakMotifs.html\nIn general, when analyzing ChIP-Seq / ChIP-Chip peaks you should expect to see strong enrichment for a motif resembling the site recognized by the DNA binding domain of the factor you are studying. Enrichment p-values reported by HOMER should be very very significant (i.e. << 1e-50). If this is not the case, there is a strong possibility that the experiment may have failed in one way or another. For example, the peaks could be of low quality because the factor is not expressed very high.",
"#Summarize enriched motifs stats\nmotif_stats = pandas.DataFrame()\nfor pair in pairs3: \n fname = \"./%s/MotifOutput/knownResults.txt\" % pair\n fi = pandas.read_csv(fname, sep=\"\\t\")\n fi.columns = [\"Motif_Name\", \"Consensus\", \"P-value\", \"Log_P-value\", \"q-value_Benjamini\", \"#TargetSequenceswithMotif\", \n \"%TargetSequenceswithMotif\",\"#BackgroundSequenceswithMotif\", \"%BackgroundSequenceswithMotif\",]\n fi = fi.loc[fi[\"P-value\"] < 1e-50]\n fi[\"comparison\"] = pair\n motif_stats = motif_stats.append(fi)\n\n#write summary to file\nmotif_stats.to_csv(\"motif_stats_summary.csv\",index=False)\n\n\n#View interactive table \nqgrid.show_grid(motif_stats, grid_options={'forceFitColumns': False, 'defaultColumnWidth': 200})",
"<a id = \"promoters\"></a>\nPeaks in Promoters\nExtract peaks in promoter regions",
"#Summarize peaks in promoters\npromoter_stats = pandas.DataFrame()\nfor pair in pairs3: \n fname = \"./%s/regions.annotate.txt\" % pair\n fi = pandas.read_csv(fname, sep=\"\\t\")\n fi.columns = [c.replace(' ', '_') for c in fi.columns]\n fi['Annotation'] = fi['Annotation'].replace('\\(.*?\\)','',regex=True)\n fi['Annotation'] = fi['Annotation'].replace(' \\.*?','',regex=True) \n fi['Annotation'] = fi['Annotation'].replace('\\..*$','',regex=True) \n fi = fi.loc[fi[\"Annotation\"] == \"promoter-TSS\"]\n fi[\"comparison\"] = pair\n fi.Gene_Type.value_counts().plot(kind=\"bar\", figsize=(8, 8))\n plt.title(\"Peaks in Promoters by Gene Type -\" + pair)\n plt.show()\n plt.xlabel('Gene Type', fontsize=12)\n plt.ylabel('# of Peaks', fontsize=12)\n plt.savefig('./Promoter_Peaks_Gene_Type_bar_%s' %pair ,dpi=300) # save figure\n promoter_stats = promoter_stats.append(fi)\n#write summary to file\npromoter_stats.to_csv(\"promoter_stats_summary.csv\",index=False)",
"<a id = \"genes\"></a>\nPeaks Annotated to Genes of Interest\nExtract peaks annotated to genes of interest",
"genes_df = pandas.read_csv(\"./genes_of_interest_validated_junctions.csv\")\ngene_names = genes_df[\"gene\"]\ngenes_stats = pandas.DataFrame()\nfor pair in pairs3: \n fname = \"./%s/regions.annotate.txt\" % pair\n fi = pandas.read_csv(fname, sep=\"\\t\")\n fi.columns = [c.replace(' ', '_') for c in fi.columns]\n fi['Annotation'] = fi['Annotation'].replace('\\(.*?\\)','',regex=True)\n fi['Annotation'] = fi['Annotation'].replace(' \\.*?','',regex=True) \n fi['Annotation'] = fi['Annotation'].replace('\\..*$','',regex=True) \n fi = fi.loc[fi[\"Gene_Name\"].isin(gene_names)]\n fi[\"comparison\"] = pair\n \n if fi.Annotation.empty:\n next\n else: \n fi.Annotation.value_counts().plot(kind=\"bar\", figsize=(8, 8))\n plt.title(\"Peaks in Promoters by Annotation -\" + pair)\n plt.show()\n plt.xlabel('Annotation', fontsize=12)\n plt.ylabel('# of Peaks', fontsize=12)\n plt.savefig('./Genes_of_Interest_Peaks_Annotation_bar_%s' %pair ,dpi=300) # save figure\n genes_stats = genes_stats.append(fi)\n#write summary to file\ngenes_stats.to_csv(\"genes_stats_summary.csv\",index=False)\n\n#Download scripts from https://github.com/gdavidson/ChIPseq_tools\nimport sys\nsys.path.append('/data/chip/ChIPseq_tools-master') #append path to downloaded scripts\nimport getFromAnnotations as gfa\n\nfor pair in pairs3:\n annotationList = gfa.getAnnotationList('%s/regions.annotate.txt' %pair)\n #plot distances\n try:\n distanceList,countMap = gfa.getDistanceList(annotationList)\n gfa.histDistances(distanceList, pair)\n plt.show()\n plt.savefig('./TSS_distance_%s' %pair ,dpi=300) # save figure\n gfa.plotDistances(countMap)\n plt.show()\n plt.savefig('./TSS_distance_bp_%s' %pair ,dpi=300) # save figure \n except ValueError: \n next\n \n\ngenes_stats2 = genes_stats[['Gene_Name','comparison']]\ngenes_stats2.Gene_Name.value_counts().plot(kind=\"bar\", figsize=(15, 8), stacked=True)\nplt.xlabel('Genes of Interest', fontsize=12)\nplt.ylabel('# of Peaks', fontsize=12)\nplt.savefig('./Genes_of_Interest_Peaks_all.png' ,dpi=300) # save figure\n\ngenes_stats2 = genes_stats[['Gene_Name','comparison']]\ngenes_stats2.comparison.value_counts().plot(kind=\"barh\", figsize=(15, 8), stacked=True)\nplt.xlabel('# Peaks', fontsize=12)\nplt.ylabel('Comparison', fontsize=12)\nplt.savefig('./Genes_of_Interest_Comparison_Peaks_all.png' ,dpi=300) # save figure\n\nsub_df = genes_stats2.groupby(['Gene_Name']).comparison.value_counts().unstack()\nsub_df.plot(kind='bar',stacked=True, figsize=(15, 8)).legend(loc='center left', bbox_to_anchor=(1.0, 0.5) )\nplt.xlabel('Genes of Interest', fontsize=12)\nplt.ylabel('# of Peaks', fontsize=12)\nplt.savefig('./Genes_of_Interest_Peaks_by_comparison.png' ,dpi=300) # save figure\n\nsub_df = genes_stats2.groupby(['comparison']).Gene_Name.value_counts().unstack()\nsub_df.plot(kind='barh',stacked=True, figsize=(15, 8)).legend(loc='center left', bbox_to_anchor=(1.0, 0.5) )\nplt.xlabel('# Peaks', fontsize=12)\nplt.ylabel('Comparison', fontsize=12)\nplt.savefig('./Comparison_by_genes_of_interest.png' ,dpi=300) # save figure",
"<a id = \"venn2\"></a>\nVenn Analysis for Comparison of Peaks",
"for pairs in pairs_to_compare: \n #print pairs\n pairs_split = pairs.split(\"-\")\n pair1 = pairs_split[0]\n pair2= pairs_split[1]\n peaks1 = pandas.read_csv('./%s/regions.annotate.txt' %pair1, sep=\"\\t\")\n peaks2 = pandas.read_csv('./%s/regions.annotate.txt' %pair2, sep=\"\\t\")\n peaks1.columns = [c.replace(' ', '_') for c in peaks1.columns]\n peaks1.columns.values[0] = \"Peak_ID\"\n peaks2.columns = [c.replace(' ', '_') for c in peaks2.columns]\n peaks2.columns.values[0] = \"Peak_ID\"\n peaks1_list = peaks1['Gene_Name'].tolist()\n peaks2_list = peaks2['Gene_Name'].tolist()\n venn2([set(peaks1_list), set(peaks2_list)], (pair1,pair2))\n plt.show()\n plt.savefig('./Venn_Analysis_Genes_with_Peaks_%s.png' %pairs ,dpi=300) # save figure\n commonGenes, uniqueL1, uniqueL2 = compare.compareLists(peaks1_list, peaks2_list)\n commonGenes_df = pandas.DataFrame(commonGenes, columns = [\"commonGenes\"])\n commonGenes_df.to_csv(\"CommonGenes_%s.csv\" %pairs)\n uniqueL1_df = pandas.DataFrame(uniqueL1, columns = [\"uniqueL1\"])\n uniqueL1_df.to_csv(\"uniqueL1_%s_%s.csv\" %(pair1, pairs))\n uniqueL2_df = pandas.DataFrame(uniqueL2, columns = [\"uniqueL2\"])\n uniqueL2_df.to_csv(\"uniqueL2_%s_%s.csv\" %(pair2, pairs))",
"<a id = \"browse\"></a>\nView Pileups on Genome Browser\nCut and copy the following URLs to create custom tracks for your samples in UCSC Genome Browser",
"for sample in sample_names:\n url = \"http://ccbb-analysis.s3.amazonaws.com/%s/%s.ucsc.bedGraph.gz\" %(sample,sample)\n print url\n\nIFrame(\"https://genome.ucsc.edu/cgi-bin/hgCustom?hgsid=504023239_5efJ2ONTkgrqUm6AcaAkNGcyXKmn\", width=900, height=500)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
zzsza/Datascience_School
|
30. 딥러닝/04. CNN.ipynb
|
mit
|
[
"Convolutional Neural Network\nCNN\n\n\n이미지 분류를 위한 특별한 구조의 Deep Neural Network\n\n\nlocal receptive fields\n\nshared weights\npooling\n\nLocal Receptive Field\n\nInput Layer의 일부 Input에 대해서만 다음 Hidden Layer로 weight 연결\n예: 28x28 Input Layer에서 5x5 영역에 대해서만 weight 연결 \n=> 다음 Hidden Layer의 크기는 (28-5+1)x(28-5+1) = 24x24\nSparse Connectivity\n\n<img src=\"http://neuralnetworksanddeeplearning.com/images/tikz44.png\">\n<img src=\"http://neuralnetworksanddeeplearning.com/images/tikz45.png\">\n\nhttp://cs231n.github.io/assets/conv-demo/index.html\n\nShared weights and biases\n\n모든 연결에 대해 공통 weight & bias 계수 사용\n위 예에서 parameter의 수는 26개 (5x5+1)\n\n$$\n\\begin{eqnarray} \n \\sigma\\left(b + \\sum_{l=0}^4 \\sum_{m=0}^4 w_{l,m} a_{j+l, k+m} \\right).\n\\end{eqnarray}\n$$\n\n이 연산은 2-D image filter의 convolution연산과 동일 \n=> Convolution NN\n공통 weight: image kernel, image filter\n\nImage Filter\n<img src=\"http://i.stack.imgur.com/GvsBA.jpg\">",
"import scipy.ndimage\nimg = 255 - sp.misc.face(gray=True).astype(float)\nk = np.zeros((2,2))\nk[:,0] = 1; k[:,1] = -1\nimg2 = np.maximum(0, sp.ndimage.filters.convolve(img, k))\nplt.figure(figsize=(10,5))\nplt.subplot(121)\nplt.imshow(img)\nplt.grid(False)\nplt.subplot(122)\nplt.imshow(img2)\nplt.grid(False)",
"Feature Map\n\n만약 weight가 특정 image patter에 대해 a=1인 출력을 내도록 training 되었다면 \nhidden layer는 feature가 존재하는 위치를 표시\n=> feature map\n여기에서의 feature는 input data를 의미하는 것이 아니라 image 분류에 사용되는 input data의 특정한 pattern을 뜻함\n\n<img src=\"http://www.kdnuggets.com/wp-content/uploads/computer-vision-filters.jpg\">\nMultiple Feature Maps\n\n하나의 공통 weight set은 한 종류의 image feature만 발견 가능\n복수의 feature map (weight set) 필요\n\n<img src=\"http://neuralnetworksanddeeplearning.com/images/tikz46.png\"> \n\nMNIST digit image 에 대해 training이 완료된 20개 feature map의 예\n\n<img src=\"http://neuralnetworksanddeeplearning.com/images/net_full_layer_0.png\" style=\"width:50%;\"> \n<img src=\"http://i.ytimg.com/vi/n6hpQwq7Inw/maxresdefault.jpg\">\nMax Pooling Layer\n\n영역내에서 가장 최대값 출력\n영역내에 feature가 존재하는지의 여부\n전체 영역이 축소 \n\n<img src=\"http://cs231n.github.io/assets/cnn/maxpool.jpeg\" style=\"width:50%;\"> \n<img src=\"http://neuralnetworksanddeeplearning.com/images/tikz48.png\">\nL2 pooling\n\nmaximum 값 대신에 영역내의 값의 sum of square 사용\n\nOutput Layer\n\nsoftmax \n\n<img src=\"http://neuralnetworksanddeeplearning.com/images/tikz49.png\">\nDemo\n\nhttp://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html\n\nPython Implementation\n\nhttps://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network3.py\n\n```python\nclass FullyConnectedLayer(object):\ndef __init__(self, n_in, n_out, activation_fn=sigmoid, p_dropout=0.0):\n self.n_in = n_in\n self.n_out = n_out\n self.activation_fn = activation_fn\n self.p_dropout = p_dropout\n # Initialize weights and biases\n self.w = theano.shared(\n np.asarray(\n np.random.normal(\n loc=0.0, scale=np.sqrt(1.0/n_out), size=(n_in, n_out)),\n dtype=theano.config.floatX),\n name='w', borrow=True)\n self.b = theano.shared(\n np.asarray(np.random.normal(loc=0.0, scale=1.0, size=(n_out,)),\n dtype=theano.config.floatX),\n name='b', borrow=True)\n self.params = [self.w, self.b]\n\ndef set_inpt(self, inpt, inpt_dropout, mini_batch_size):\n self.inpt = inpt.reshape((mini_batch_size, self.n_in))\n self.output = self.activation_fn(\n (1-self.p_dropout)*T.dot(self.inpt, self.w) + self.b)\n self.y_out = T.argmax(self.output, axis=1)\n self.inpt_dropout = dropout_layer(\n inpt_dropout.reshape((mini_batch_size, self.n_in)), self.p_dropout)\n self.output_dropout = self.activation_fn(\n T.dot(self.inpt_dropout, self.w) + self.b)\n\ndef accuracy(self, y):\n \"Return the accuracy for the mini-batch.\"\n return T.mean(T.eq(y, self.y_out))\n\n``` \n```python\nclass ConvPoolLayer(object):\n \"\"\"Used to create a combination of a convolutional and a max-pooling\n layer. 
A more sophisticated implementation would separate the\n two, but for our purposes we'll always use them together, and it\n simplifies the code, so it makes sense to combine them.\n\"\"\"\n\ndef __init__(self, filter_shape, image_shape, poolsize=(2, 2),\n activation_fn=sigmoid):\n \"\"\"`filter_shape` is a tuple of length 4, whose entries are the number\n of filters, the number of input feature maps, the filter height, and the\n filter width.\n\n `image_shape` is a tuple of length 4, whose entries are the\n mini-batch size, the number of input feature maps, the image\n height, and the image width.\n\n `poolsize` is a tuple of length 2, whose entries are the y and\n x pooling sizes.\n\n \"\"\"\n self.filter_shape = filter_shape\n self.image_shape = image_shape\n self.poolsize = poolsize\n self.activation_fn=activation_fn\n # initialize weights and biases\n n_out = (filter_shape[0]*np.prod(filter_shape[2:])/np.prod(poolsize))\n self.w = theano.shared(\n np.asarray(\n np.random.normal(loc=0, scale=np.sqrt(1.0/n_out), size=filter_shape),\n dtype=theano.config.floatX),\n borrow=True)\n self.b = theano.shared(\n np.asarray(\n np.random.normal(loc=0, scale=1.0, size=(filter_shape[0],)),\n dtype=theano.config.floatX),\n borrow=True)\n self.params = [self.w, self.b]\n\ndef set_inpt(self, inpt, inpt_dropout, mini_batch_size):\n self.inpt = inpt.reshape(self.image_shape)\n conv_out = conv.conv2d(\n input=self.inpt, filters=self.w, filter_shape=self.filter_shape,\n image_shape=self.image_shape)\n pooled_out = downsample.max_pool_2d(\n input=conv_out, ds=self.poolsize, ignore_border=True)\n self.output = self.activation_fn(\n pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))\n self.output_dropout = self.output # no dropout in the convolutional layers\n\n```\n```python\nclass SoftmaxLayer(object):\ndef __init__(self, n_in, n_out, p_dropout=0.0):\n self.n_in = n_in\n self.n_out = n_out\n self.p_dropout = p_dropout\n # Initialize weights and biases\n self.w = theano.shared(\n np.zeros((n_in, n_out), dtype=theano.config.floatX),\n name='w', borrow=True)\n self.b = theano.shared(\n np.zeros((n_out,), dtype=theano.config.floatX),\n name='b', borrow=True)\n self.params = [self.w, self.b]\n\ndef set_inpt(self, inpt, inpt_dropout, mini_batch_size):\n self.inpt = inpt.reshape((mini_batch_size, self.n_in))\n self.output = softmax((1-self.p_dropout)*T.dot(self.inpt, self.w) + self.b)\n self.y_out = T.argmax(self.output, axis=1)\n self.inpt_dropout = dropout_layer(\n inpt_dropout.reshape((mini_batch_size, self.n_in)), self.p_dropout)\n self.output_dropout = softmax(T.dot(self.inpt_dropout, self.w) + self.b)\n\ndef cost(self, net):\n \"Return the log-likelihood cost.\"\n return -T.mean(T.log(self.output_dropout)[T.arange(net.y.shape[0]), net.y])\n\ndef accuracy(self, y):\n \"Return the accuracy for the mini-batch.\"\n return T.mean(T.eq(y, self.y_out))\n\n```\n```python\nclass Network(object):\ndef __init__(self, layers, mini_batch_size):\n \"\"\"Takes a list of `layers`, describing the network architecture, and\n a value for the `mini_batch_size` to be used during training\n by stochastic gradient descent.\n\n \"\"\"\n self.layers = layers\n self.mini_batch_size = mini_batch_size\n self.params = [param for layer in self.layers for param in layer.params]\n self.x = T.matrix(\"x\") \n self.y = T.ivector(\"y\")\n init_layer = self.layers[0]\n init_layer.set_inpt(self.x, self.x, self.mini_batch_size)\n for j in xrange(1, len(self.layers)):\n prev_layer, layer = self.layers[j-1], self.layers[j]\n layer.set_inpt(\n 
prev_layer.output, prev_layer.output_dropout, self.mini_batch_size)\n self.output = self.layers[-1].output\n self.output_dropout = self.layers[-1].output_dropout\n\n\ndef SGD(self, training_data, epochs, mini_batch_size, eta,\n validation_data, test_data, lmbda=0.0):\n \"\"\"Train the network using mini-batch stochastic gradient descent.\"\"\"\n training_x, training_y = training_data\n validation_x, validation_y = validation_data\n test_x, test_y = test_data\n\n # compute number of minibatches for training, validation and testing\n num_training_batches = size(training_data)/mini_batch_size\n num_validation_batches = size(validation_data)/mini_batch_size\n num_test_batches = size(test_data)/mini_batch_size\n\n # define the (regularized) cost function, symbolic gradients, and updates\n l2_norm_squared = sum([(layer.w**2).sum() for layer in self.layers])\n cost = self.layers[-1].cost(self)+\\\n 0.5*lmbda*l2_norm_squared/num_training_batches\n grads = T.grad(cost, self.params)\n updates = [(param, param-eta*grad)\n for param, grad in zip(self.params, grads)]\n\n # define functions to train a mini-batch, and to compute the\n # accuracy in validation and test mini-batches.\n i = T.lscalar() # mini-batch index\n train_mb = theano.function(\n [i], cost, updates=updates,\n givens={\n self.x:\n training_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size],\n self.y:\n training_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size]\n })\n validate_mb_accuracy = theano.function(\n [i], self.layers[-1].accuracy(self.y),\n givens={\n self.x:\n validation_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size],\n self.y:\n validation_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size]\n })\n test_mb_accuracy = theano.function(\n [i], self.layers[-1].accuracy(self.y),\n givens={\n self.x:\n test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size],\n self.y:\n test_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size]\n })\n self.test_mb_predictions = theano.function(\n [i], self.layers[-1].y_out,\n givens={\n self.x:\n test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size]\n })\n # Do the actual training\n best_validation_accuracy = 0.0\n for epoch in xrange(epochs):\n for minibatch_index in xrange(num_training_batches):\n iteration = num_training_batches*epoch+minibatch_index\n if iteration % 1000 == 0:\n print(\"Training mini-batch number {0}\".format(iteration))\n cost_ij = train_mb(minibatch_index)\n if (iteration+1) % num_training_batches == 0:\n validation_accuracy = np.mean(\n [validate_mb_accuracy(j) for j in xrange(num_validation_batches)])\n print(\"Epoch {0}: validation accuracy {1:.2%}\".format(\n epoch, validation_accuracy))\n if validation_accuracy >= best_validation_accuracy:\n print(\"This is the best validation accuracy to date.\")\n best_validation_accuracy = validation_accuracy\n best_iteration = iteration\n if test_data:\n test_accuracy = np.mean(\n [test_mb_accuracy(j) for j in xrange(num_test_batches)])\n print('The corresponding test accuracy is {0:.2%}'.format(\n test_accuracy))\n print(\"Finished training network.\")\n print(\"Best validation accuracy of {0:.2%} obtained at iteration {1}\".format(\n best_validation_accuracy, best_iteration))\n print(\"Corresponding test accuracy of {0:.2%}\".format(test_accuracy))\n\n``` \nPerformance Test",
"%cd /home/dockeruser/neural-networks-and-deep-learning/src",
"Normal MLP",
"import network3\nfrom network3 import Network\nfrom network3 import ConvPoolLayer, FullyConnectedLayer, SoftmaxLayer\n\ntraining_data, validation_data, test_data = network3.load_data_shared()\nmini_batch_size = 10\n\nnet = Network([\n FullyConnectedLayer(n_in=784, n_out=100),\n SoftmaxLayer(n_in=100, n_out=10)], \n mini_batch_size)\n\nnet.SGD(training_data, 10, mini_batch_size, 0.1, validation_data, test_data)",
"Add Convolutional + Pooling Layer",
"net = Network([\n ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28), \n filter_shape=(20, 1, 5, 5), \n poolsize=(2, 2)),\n FullyConnectedLayer(n_in=20*12*12, n_out=100),\n SoftmaxLayer(n_in=100, n_out=10)], \n mini_batch_size)\n\nnet.SGD(training_data, 10, mini_batch_size, 0.1, validation_data, test_data) ",
"Add Additional Convolution + Pool Layer\n\n두번째 convolutional-pooling layer의 역할\nfeature map에서 feature가 나타나는 pattern의 포착\nfeature of feature map",
"net = Network([\n ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28), \n filter_shape=(20, 1, 5, 5), \n poolsize=(2, 2)),\n ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12), \n filter_shape=(40, 20, 5, 5), \n poolsize=(2, 2)),\n FullyConnectedLayer(n_in=40*4*4, n_out=100),\n SoftmaxLayer(n_in=100, n_out=10)], \n mini_batch_size)\n\nnet.SGD(training_data, 10, mini_batch_size, 0.1, validation_data, test_data)",
"Apply ReLu\n\nsigmoid activation functions 보다 성능 향상",
"from network3 import ReLU\n\nnet = Network([\n ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28), \n filter_shape=(20, 1, 5, 5), \n poolsize=(2, 2), \n activation_fn=ReLU),\n ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12), \n filter_shape=(40, 20, 5, 5), \n poolsize=(2, 2), \n activation_fn=ReLU),\n FullyConnectedLayer(n_in=40*4*4, n_out=100, activation_fn=ReLU),\n SoftmaxLayer(n_in=100, n_out=10)], \n mini_batch_size)\n\nnet.SGD(training_data, 60, mini_batch_size, 0.03, validation_data, test_data, lmbda=0.1)",
"History of CNN\n1998 LeNet-5 paper\n\n\"Gradient-based learning applied to document recognition\"\nby Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner\nLeNet-5\nMNIST digit image classification\n\n2012 LRMD paper\n\n\"Building high-level features using large scale unsupervised learning\"\nby Quoc Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg Corrado, Jeff Dean, and Andrew Ng (2012). \nStanford and Google\nclassify images from ImageNet\n\naccuracy 9.3% -> 15.8%\n\n\nImage-Net\n\nhttp://image-net.org/\n16 million full color images in 20 thousand categories\nclassified by Amazon's Mechanical Turk service\n\n2012 KSH paper\n\n\"ImageNet classification with deep convolutional neural networks\"\nby Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton (2012).\nImageNet Large-Scale Visual Recognition Challenge (ILSVRC)\ntraining set: 1.2 million ImageNet images, drawn from 1,000 categories\nvalidation and test sets: 50,000 and 150,000 images from the same 1,000 categories\nsome contain multiple objects\naccuracy 84.7%\nAlexNet\nInput Layer: 3×224×224 neurons, (RGB values for a 224×224 image)\n7 hidden layers of neurons\nfirst 5 hidden layers are convolutional layers (some with max-pooling), \nnext 2 layers are fully-connected layers\n\n\nThe ouput layer is a 1,000-unit softmax layer\nReLU (rectified linear units)\nparameters: 60 million\nl2 regularization and dropout\nmomentum-based mini-batch stochastic gradient descent\n\n<img src=\"http://neuralnetworksanddeeplearning.com/images/KSH.jpg\">\n2014 ILSVRC competition\n\ntraining set of 1.2 million images, in 1,000 categories\nGoogLeNet\n22 layers Deep CNN\n93.33%"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
yingchi/fastai-notes
|
deeplearning1/nbs/convolution-intro.ipynb
|
apache-2.0
|
[
"Convolution Explained Using MNIST Data\nDownlaod Data",
"%matplotlib inline\nimport math,sys,os,numpy as np\nimport pandas as pd\nfrom numpy.linalg import norm\nfrom PIL import Image\nfrom matplotlib import pyplot as plt, rcParams, rc\nfrom scipy.ndimage import imread\nfrom skimage.measure import block_reduce\nimport six.moves.cPickle as pickle\nfrom scipy.ndimage.filters import correlate, convolve\nfrom ipywidgets import interact, interactive, fixed\nfrom ipywidgets.widgets import *\nrc('animation', html='html5')\nrcParams['figure.figsize'] = 3, 6\n%precision 4\nnp.set_printoptions(precision=4, linewidth=100)\n\nWORK_DIR = '/Users/PeiYingchi/Documents/fastai-notes/deeplearning1/nbs'\nDATA_DIR = WORK_DIR + '/data/'\n\ndef plots(ims, interp=False, titles=None):\n ims=np.array(ims)\n mn,mx=ims.min(),ims.max()\n f = plt.figure(figsize=(12,24))\n for i in range(len(ims)):\n sp=f.add_subplot(1, len(ims), i+1)\n if not titles is None: sp.set_title(titles[i], fontsize=18)\n plt.imshow(ims[i], interpolation=None if interp else 'none', vmin=mn,vmax=mx)\n\ndef plot(im, interp=False):\n f = plt.figure(figsize=(3,6), frameon=True)\n plt.imshow(im, interpolation=None if interp else 'none')\n\nplt.gray()\nplt.close()",
"Now, we have 2 choices for downloading the MNIST data\n\nDownload from Kaggle kg download -c digit-recognizer ==> train.csv test.csv \nDownload using code from tensorlow mnist tutorial ==> train.npz\n\nI have tried both methods. The data downloaded are slightly different, i.e. the number of rows are different. \nData From Kaggle - ETL Needed",
"df = pd.read_csv(DATA_DIR+'MNIST_kg/train.csv')\nprint(data.shape)\n\narr_2d = np.arange(24).reshape(2,-1)\nprint(arr_2d)\narr_3d = arr_2d.reshape(2,3,4,order='C')\nprint(arr_3d)\n\nlabels=df['label'].as_matrix()\ndf_images = df.drop('label', axis=1)\nimages = df_images.as_matrix()\n\nimages = images.reshape(len(labels), 28, 28)\nnp.savez_compressed(DATA_DIR+'MNIST_kg/'+'train.npz', labels=labels, images=images)",
"Data From Tensorflow Tutorial",
"from tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data/')\nimages, labels = mnist.train.images, mnist.train.labels\nimages = images.reshape((55000,28,28))\nnp.savez_compressed(DATA_DIR+'MNIST_tf/train.npz', images=images, labels=labels)",
"Read In Saved Data",
"# data = np.load(DATA_DIR+'MNIST_data/train.npz')\ndata = np.load(DATA_DIR+'MNIST_tf/train.npz')\nprint(data.keys())\nlabels = data['labels']\nimages = data['images']\nn = len(images)\nimages.shape\n\nplot(images[0])\n\nlabels[0]\n\nplots(images[:5], titles=labels[:5])\n\ntop=[[-1,-1,-1],\n [ 1, 1, 1],\n [ 0, 0, 0]]\n\nplot(top)",
"This matrix can serve as a top edge filter because for example, for a 3x3 area (black-wigh image), if this area does not include an edge, then it will be something like\n[[10, 8, 9] <br>\n [8, 8, 8] <br> \n [10, 7, 5]] <br>\n\nHere let's suppose the color code range from (0 to 10) with 10 is the most black.\n\nThen after multiplying the filter, the sum value is -3\nIf the area includes a top edge, it will be somthing like\n[[0, 0, 0] <br>\n [8, 8, 8] <br>\n [9, 7, 5]] <br>\nThen after multiplying the filter, the sum value is 24\nFor the MNIST case, the bachground color is black, so the edge will have a matrix multiplication value -ve and very small.",
"r=(0,28)\ndef zoomim(x1=0,x2=28,y1=0,y2=28):\n plot(images[0,y1:y2,x1:x2])\nw=interactive(zoomim, x1=r,x2=r,y1=r,y2=r)\nw\n\n# recommand: x1=1, x2=9, y1=6, y2=14\nk=w.kwargs\ndims = np.index_exp[k['y1']:k['y2']:1,k['x1']:k['x2']]\nimages[0][dims]\n\ncorrtop = correlate(images[0], top)\n\ncorrtop[dims]\n\nplot(corrtop[dims])\n\nplot(corrtop)\n\nnp.rot90(top, 1)\n\nconvtop = convolve(images[0], np.rot90(top,2))\nplot(convtop)\nnp.allclose(convtop, corrtop)\n\nstraights=[np.rot90(top,i) for i in range(4)]\nplots(straights)",
"How to come out with these filters?\nThere are some pre-defined filters for certain patterns. But that is not the approach for deep learning. In deep learning, we do not use pre-defined filters, instead, we start with filters with random numbers. \nFilters in Deep Learning\nNow we start with 4 randomly generated filters.\n\nYou can think about convolutions as the weight matrix in the excel example. Actually, that excel example is the simplest, small illustration of convolutions.",
"br=[[ 0, 0, 1],\n [ 0, 1,-1.5],\n [ 1,-1.5, 0]]\n\ndiags = [np.rot90(br,i) for i in range(4)]\nplots(diags)\n\n# add the previous 4 filters to get 8 filters for our use\nrots = straights + diags\ncorrs = [correlate(images[0], rot) for rot in rots]\nplots(corrs)",
"Max Pooling",
"# Maxpooling\ndef pool(im): return block_reduce(im, (7,7), np.max)\n\nplots([pool(im) for im in corrs])",
"Position Invariance\nNow, it's a good time to look back at our Vgg model.",
"from vgg16 import Vgg16\nvgg = Vgg16()\nvgg.model.summary()",
"We know that the filters are position invariant. That means, the filters are able to pick up the pattern wherever it is inside the picture.\nHowever, we need to be able to identify position to some extent, because if there are 4 eyes in the picture and are far apart, somthing is wrong. So how our deep learning network care about this?\nBecause it has many layers. As we go down through layers in our model, deeper layers will make sure that there is an eye here, a nose there etc...\nMoving on",
"eights=[images[i] for i in xrange(n) if labels[i]==8]\nones=[images[i] for i in xrange(n) if labels[i]==1]\n\nplots(eights[:5])\nplots(ones[:5])\n\npool8 = [np.array([pool(correlate(im, rot)) for im in eights]) for rot in rots]\n\nlen(pool8), pool8[0].shape\n\nplots(pool8[0][0:5])\n\ndef normalize(arr): return (arr-arr.mean())/arr.std()\n\nfilts8 = np.array([ims.mean(axis=0) for ims in pool8])\nfilts8 = normalize(filts8)\n\nplots(filts8)\n\npool1 = [np.array([pool(correlate(im, rot)) for im in ones]) for rot in rots]\nfilts1 = np.array([ims.mean(axis=0) for ims in pool1])\nfilts1 = normalize(filts1)\n\nplots(filts1)\n\ndef pool_corr(im): return np.array([pool(correlate(im, rot)) for rot in rots])\n\nplots(pool_corr(eights[0]))\n\ndef sse(a,b): return ((a-b)**2).sum()\ndef is8_n2(im): return 1 if sse(pool_corr(im),filts1) > sse(pool_corr(im),filts8) else 0\n\nsse(pool_corr(eights[0]), filts8), sse(pool_corr(eights[0]), filts1)\n\n[np.array([is8_n2(im) for im in ims]).sum() for ims in [eights,ones]]\n\n[np.array([(1-is8_n2(im)) for im in ims]).sum() for ims in [eights,ones]]\n\ndef n1(a,b): return (np.fabs(a-b)).sum()\ndef is8_n1(im): return 1 if n1(pool_corr(im),filts1) > n1(pool_corr(im),filts8) else 0\n\n[np.array([is8_n1(im) for im in ims]).sum() for ims in [eights,ones]]\n\n[np.array([(1-is8_n1(im)) for im in ims]).sum() for ims in [eights,ones]]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kazzz24/deep-learning
|
seq2seq/sequence_to_sequence_implementation.ipynb
|
mit
|
[
"Character Sequence to Sequence\nIn this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models.\n<img src=\"images/sequence-to-sequence.jpg\"/>\nDataset\nThe dataset lives in the /data/ folder. At the moment, it is made up of the following files:\n * letters_source.txt: The list of input letter sequences. Each sequence is its own line. \n * letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number.",
"import helper\n\nsource_path = 'data/letters_source.txt'\ntarget_path = 'data/letters_target.txt'\n\nsource_sentences = helper.load_data(source_path)\ntarget_sentences = helper.load_data(target_path)",
"Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.",
"source_sentences[:50].split('\\n')",
"target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line from source_sentences. target_sentences contains a sorted characters of the line.",
"target_sentences[:50].split('\\n')",
"Preprocess\nTo do anything useful with it, we'll need to turn the characters into a list of integers:",
"def extract_character_vocab(data):\n special_words = ['<pad>', '<unk>', '<s>', '<\\s>']\n\n set_words = set([character for line in data.split('\\n') for character in line])\n int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}\n vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}\n\n return int_to_vocab, vocab_to_int\n\n# Build int2letter and letter2int dicts\nsource_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)\ntarget_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)\n\n# Convert characters to ids\nsource_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<unk>']) for letter in line] for line in source_sentences.split('\\n')]\ntarget_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<unk>']) for letter in line] for line in target_sentences.split('\\n')]\n\nprint(\"Example source sequence\")\nprint(source_letter_ids[:3])\nprint(\"\\n\")\nprint(\"Example target sequence\")\nprint(target_letter_ids[:3])",
"The last step in the preprocessing stage is to determine the the longest sequence size in the dataset we'll be using, then pad all the sequences to that length.",
"def pad_id_sequences(source_ids, source_letter_to_int, target_ids, target_letter_to_int, sequence_length):\n new_source_ids = [sentence + [source_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \\\n for sentence in source_ids]\n new_target_ids = [sentence + [target_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \\\n for sentence in target_ids]\n\n return new_source_ids, new_target_ids\n\n\n# Use the longest sequence as sequence length\nsequence_length = max(\n [len(sentence) for sentence in source_letter_ids] + [len(sentence) for sentence in target_letter_ids])\n\n# Pad all sequences up to sequence length\nsource_ids, target_ids = pad_id_sequences(source_letter_ids, source_letter_to_int, \n target_letter_ids, target_letter_to_int, sequence_length)\n\nprint(\"Sequence Length\")\nprint(sequence_length)\nprint(\"\\n\")\nprint(\"Input sequence example\")\nprint(source_ids[:3])\nprint(\"\\n\")\nprint(\"Target sequence example\")\nprint(target_ids[:3])",
"This is the final shape we need them to be in. We can now proceed to building the model.\nModel\nCheck the Version of TensorFlow\nThis will check to make sure you have the correct version of TensorFlow",
"from distutils.version import LooseVersion\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))",
"Hyperparameters",
"# Number of Epochs\nepochs = 60\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 50\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 13\ndecoding_embedding_size = 13\n# Learning Rate\nlearning_rate = 0.001",
"Input",
"input_data = tf.placeholder(tf.int32, [batch_size, sequence_length])\ntargets = tf.placeholder(tf.int32, [batch_size, sequence_length])\nlr = tf.placeholder(tf.float32)",
"Sequence to Sequence\nThe decoder is probably the most complex part of this model. We need to declare a decoder for the training phase, and a decoder for the inference/prediction phase. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).\nFirst, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.\nThen, we'll need to hookup a fully connected layer to the output of decoder. The output of this layer tells us which word the RNN is choosing to output at each time step.\nLet's first look at the inference/prediction decoder. It is the one we'll use when we deploy our chatbot to the wild (even though it comes second in the actual code).\n<img src=\"images/sequence-to-sequence-inference-decoder.png\"/>\nWe'll hand our encoder hidden state to the inference decoder and have it process its output. TensorFlow handles most of the logic for us. We just have to use tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder and supply them with the appropriate inputs.\nNotice that the inference decoder feeds the output of each time step as an input to the next.\nAs for the training decoder, we can think of it as looking like this:\n<img src=\"images/sequence-to-sequence-training-decoder.png\"/>\nThe training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).\nEncoding\n\nEmbed the input data using tf.contrib.layers.embed_sequence\nPass the embedded input into a stack of RNNs. Save the RNN state and ignore the output.",
"source_vocab_size = len(source_letter_to_int)\n\n# Encoder embedding\nenc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)\n\n# Encoder\nenc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)\n_, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, dtype=tf.float32)",
"Process Decoding Input",
"import numpy as np\n\n# Process the input we'll feed to the decoder\nending = tf.strided_slice(targets, [0, 0], [batch_size, -1], [1, 1])\ndec_input = tf.concat([tf.fill([batch_size, 1], target_letter_to_int['<s>']), ending], 1)\n\ndemonstration_outputs = np.reshape(range(batch_size * sequence_length), (batch_size, sequence_length))\n\nsess = tf.InteractiveSession()\nprint(\"Targets\")\nprint(demonstration_outputs[:2])\nprint(\"\\n\")\nprint(\"Processed Decoding Input\")\nprint(sess.run(dec_input, {targets: demonstration_outputs})[:2])",
"Decoding\n\nEmbed the decoding input\nBuild the decoding RNNs\nBuild the output layer in the decoding scope, so the weight and bias can be shared between the training and inference decoders.",
"target_vocab_size = len(target_letter_to_int)\n\n# Decoder Embedding\ndec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\ndec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n\n# Decoder RNNs\ndec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)\n\nwith tf.variable_scope(\"decoding\") as decoding_scope:\n # Output Layer\n output_fn = lambda x: tf.contrib.layers.fully_connected(x, target_vocab_size, None, scope=decoding_scope)",
"Decoder During Training\n\nBuild the training decoder using tf.contrib.seq2seq.simple_decoder_fn_train and tf.contrib.seq2seq.dynamic_rnn_decoder.\nApply the output layer to the output of the training decoder",
"with tf.variable_scope(\"decoding\") as decoding_scope:\n # Training Decoder\n train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(enc_state)\n train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(\n dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)\n \n # Apply output function\n train_logits = output_fn(train_pred)",
"Decoder During Inference\n\nReuse the weights the biases from the training decoder using tf.variable_scope(\"decoding\", reuse=True)\nBuild the inference decoder using tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder.\nThe output function is applied to the output in this step",
"with tf.variable_scope(\"decoding\", reuse=True) as decoding_scope:\n # Inference Decoder\n infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(\n output_fn, enc_state, dec_embeddings, target_letter_to_int['<s>'], target_letter_to_int['<\\s>'], \n sequence_length - 1, target_vocab_size)\n inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)",
"Optimization\nOur loss function is tf.contrib.seq2seq.sequence_loss provided by the tensor flow seq2seq module. It calculates a weighted cross-entropy loss for the output logits.",
"# Loss function\ncost = tf.contrib.seq2seq.sequence_loss(\n train_logits,\n targets,\n tf.ones([batch_size, sequence_length]))\n\n# Optimizer\noptimizer = tf.train.AdamOptimizer(lr)\n\n# Gradient Clipping\ngradients = optimizer.compute_gradients(cost)\ncapped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\ntrain_op = optimizer.apply_gradients(capped_gradients)",
"Train\nWe're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.",
"import numpy as np\n\ntrain_source = source_ids[batch_size:]\ntrain_target = target_ids[batch_size:]\n\nvalid_source = source_ids[:batch_size]\nvalid_target = target_ids[:batch_size]\n\nsess.run(tf.global_variables_initializer())\n\nfor epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch) in enumerate(\n helper.batch_data(train_source, train_target, batch_size)):\n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch, targets: target_batch, lr: learning_rate})\n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch})\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_source})\n\n train_acc = np.mean(np.equal(target_batch, np.argmax(batch_train_logits, 2)))\n valid_acc = np.mean(np.equal(valid_target, np.argmax(batch_valid_logits, 2)))\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'\n .format(epoch_i, batch_i, len(source_ids) // batch_size, train_acc, valid_acc, loss))",
"Prediction",
"input_sentence = 'hello'\n\n\ninput_sentence = [source_letter_to_int.get(word, source_letter_to_int['<unk>']) for word in input_sentence.lower()]\ninput_sentence = input_sentence + [0] * (sequence_length - len(input_sentence))\nbatch_shell = np.zeros((batch_size, sequence_length))\nbatch_shell[0] = input_sentence\nchatbot_logits = sess.run(inference_logits, {input_data: batch_shell})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in input_sentence]))\nprint(' Input Words: {}'.format([source_int_to_letter[i] for i in input_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in np.argmax(chatbot_logits, 1)]))\nprint(' Chatbot Answer Words: {}'.format([target_int_to_letter[i] for i in np.argmax(chatbot_logits, 1)]))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
martinjrobins/hobo
|
examples/toy/distribution-german-credit-hierarchical.ipynb
|
bsd-3-clause
|
[
"Fitting a hierarchical logistic model to German credit data\nThis notebook explains how to run the toy hierarchical logistic regression model example using the German credit data from [1]. In this example, we have predictors for 1000 individuals and an outcome variable indicating whether or not each individual should be given credit.\n[1] \"UCI machine learning repository\", 2010. A. Frank and A. Asuncion. https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data)",
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pints\nimport pints.toy",
"To run this example, we need to first get the data from [1] and process it so we have dichtonomous $y\\in{-1,1}$ outputs and the matrix of predictors has been standardised. In addition, we also add a column of 1s corresponding to a constant term in the regression.\nIf you are connected to the internet, by instantiating with x=None, Pints will fetch the data from the repo for you. If, instead, you have local copies of the x and y matrices, these can be supplied as arguments.",
"logpdf = pints.toy.GermanCreditHierarchicalLogPDF(download=True)",
"Let's look at the data: x is a matrix of predictors and y is a vector of credit recommendations for 1000 individuals. Pints also handles processing of x into a design matrix z of all interactions between variables (including with themselves).",
"x, y, z = logpdf.data()\nprint(z.shape)",
"Now we run HMC to fit the parameters of the model.",
"xs = [\n np.random.uniform(0, 1, size=(logpdf.n_parameters())),\n np.random.uniform(0, 1, size=(logpdf.n_parameters())),\n np.random.uniform(0, 1, size=(logpdf.n_parameters())),\n]\n\nmcmc = pints.MCMCController(logpdf, len(xs), xs, method=pints.HamiltonianMCMC)\nmcmc.set_max_iterations(400)\n\n# Set up modest logging\nmcmc.set_log_to_screen(True)\nmcmc.set_log_interval(10)\n\nfor sampler in mcmc.samplers():\n sampler.set_leapfrog_step_size(0.02)\n sampler.set_leapfrog_steps(1)\n\n# Run!\nprint('Running...')\nchains = mcmc.run()\nprint('Done!')",
"This is clearly a much harder problem than the non-hierarchical version!",
"results = pints.MCMCSummary(chains=chains, time=mcmc.time())\nprint(results)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nest/nest-simulator
|
doc/htmldoc/neurons/model_details/HillTononiModels.ipynb
|
gpl-2.0
|
[
"The Hill-Tononi Neuron and Synapse Models\nHans Ekkehard Plesser, NMBU/FZ Jülich/U Oslo, 2016-12-01, 2021-02-22\nBackground\nThis notebook describes the neuron and synapse model proposed by Hill and Tononi in J Neurophysiol 93:1671-1698, 2005 (doi:10.1152/jn.00915.2004) and their implementation in NEST. The notebook also contains some tests.\nThis description is based on the original publication and publications cited therein, an analysis of the source code of the original Synthesis implementation kindly provided by Sean Hill, and plausiblity arguments.\nIn what follows, we will refer to the original paper as [HT05].\nFor a NEST implementation of the full network model by Hill and Tononi see the implementation by Ricardo Muprhy and colleagues.\nThe Neuron Model\nIntegration\nThe original Synthesis implementation of the model uses Runge-Kutta integration with fixed 0.25 ms step size, and integrates channels dynamics first, followed by integration of membrane potential and threshold.\nNEST, in contrast, integrates the complete 16-dimensional state using a single adaptive-stepsize Runge-Kutta-Fehlberg-4(5) solver from the GNU Science Library (gsl_odeiv_step_rkf45).\nMembrane potential\nMembrane potential evolution is governed by [HT05, p 1677]\n\\begin{equation}\n\\frac{\\text{d}V}{\\text{d}t} = \\frac{-g_{\\text{NaL}}(V-E_{\\text{Na}})\n-g_{\\text{KL}}(V-E_{\\text{K}})+I_{\\text{syn}}+I_{\\text{int}}}{\\tau_{\\text{m}}}\n-\\frac{g_{\\text{spike}}(V-E_{\\text{K}})}{\\tau_{\\text{spike}}}\n\\end{equation}\n\nThe equation does not contain membrane capacitance. As a side-effect, all conductances are dimensionless.\nNa and K leak conductances $g_{\\text{NaL}}$ and $g_{\\text{KL}}$ are constant, although $g_{\\text{KL}}$ may be adjusted on slow time scales to mimic neuromodulatory effects.\nReversal potentials $E_{\\text{Na}}$ and $E_{\\text{K}}$ are assumed constant.\nSynaptic currents $I_{\\text{syn}}$ and intrinsic currents $I_{\\text{int}}$ are discussed below. In contrast to the paper, they are shown with positive sign here (just change in notation).\nThe last term is a re-polarizing current only active during the refractory period, see below. Note that it has a different (faster) time constant than the other currents. It might have been more natural to use the same time constant for all currents and instead adjust $g_{\\text{spike}}$. We follow the original approach here.\n\nThreshold, Spike generation and refractory effects\nThe threshold evolves according to [HT05, p 1677]\n\\begin{equation}\n\\frac{\\text{d}\\theta}{\\text{d}t} = -\\frac{\\theta-\\theta_{\\text{eq}}}{\\tau_{\\theta}}\n\\end{equation}\nThe neuron emits a single spike if \n- it is not refractory\n- membrane potential crosses the threshold, $V\\geq\\theta$\nUpon spike emission,\n- $V \\leftarrow E_{\\text{Na}}$\n- $\\theta \\leftarrow E_{\\text{Na}}$\n- the neuron becomes refractory for time $t_{\\text{spike}}$ (t_ref in NEST)\nThe repolarizing current is active during, and only during the refractory period:\n\\begin{equation}\ng_{\\text{spike}} = \\begin{cases} 1 & \\text{neuron is refractory}\\\n 0 & \\text{else} \\end{cases}\n\\end{equation}\nDuring the refractory period, the neuron cannot fire new spikes, but all state variables evolve freely, nothing is clamped. 
\nThe model of spiking and refractoriness is based on Synthesis model PulseIntegrateAndFire.\nIntrinsic currents\nNote that not all intrinsic currents are active in all populations of the network model presented in [HT05, p1678f].\nIntrinsic currents are based on the Hodgkin-Huxley description, i.e.,\n\\begin{align}\nI_X &= g_{\\text{peak}, X} m_X(V, t)^N_X h_X(V, t)(V-E_X) \\\n\\frac{\\text{d}m_X}{\\text{d}t} &= \\frac{m_X^{\\infty}-m_X}{\\tau_{m,X}(V)}\\\n\\frac{\\text{d}h_X}{\\text{d}t} &= \\frac{h_X^{\\infty}-h_X}{\\tau_{h,X}(V)}\n\\end{align}\nwhere $I_X$ is the current through channel $X$ and $m_X$ and $h_X$ the activation and inactivation variables for channel $X$.\nPacemaker current $I_h$\nSynthesis: IhChannel\n\\begin{align}\nN_h & = 1 \\\nm_h^{\\infty}(V) &= \\frac{1}{1+\\exp\\left(\\frac{V+75\\text{mV}}{5.5\\text{mV}}\\right)} \\\n\\tau_{m,h}(V) &= \\frac{1}{\\exp(-14.59-0.086V) + \\exp(-1.87 + 0.0701V)} \\\nh_h(V, t) &\\equiv 1 \n\\end{align}\nNote that subscript $h$ in some cases above marks the $I_h$ channel.\nLow-threshold calcium current $I_T$\nSynthesis: ItChannel\nEquations given in paper\n\\begin{align}\nN_T & \\quad \\text{not given} \\\nm_T^{\\infty}(V) &= 1/{1 + \\exp[ -(V + 59.0)/6.2]} \\\n\\tau_{m,T}(V) &= {0.22/\\exp[ -(V + 132.0)/ 16.7]} + \\exp[(V + 16.8)/18.2] + 0.13\\\nh_T^{\\infty}(V) &= 1/{1 + \\exp[(V + 83.0)/4.0]} \\\n\\tau_{h,T}(V) &= \\langle 8.2 + {56.6 + 0.27 \\exp[(V + 115.2)/5.0]}\\rangle / {1.0 + \\exp[(V + 86.0)/3.2]}\n\\end{align}\nNote the following:\n- The channel model is based on Destexhe et al, J Neurophysiol 76:2049 (1996).\n- In the equation for $\\tau_{m,T}$, the second exponential term must be added to the first (in the denominator) to make dimensional sense; 0.13 and 0.22 have unit ms.\n- In the equation for $\\tau_{h,T}$, the $\\langle \\rangle$ brackets should be dropped, so that $8.2$ is not divided by the $1+\\exp$ term. Otherwise, it could have been combined with the $56.6$.\n- This analysis is confirmed by code analysis and comparison with Destexhe et al, J Neurophysiol 76:2049 (1996), Eq 5.\n- From Destexhe et al we also find $N_T=2$.\nCorrected equations\nThis leads to the following equations, which are implemented in Synthesis and NEST.\n\\begin{align}\nN_T &= 2 \\\nm_T^{\\infty}(V) &= \\frac{1}{1+\\exp\\left(-\\frac{V+59\\text{mV}}{6.2\\text{mV}}\\right)}\\\n\\tau_{m,T}(V) &= 0.13\\text{ms} \n + \\frac{0.22\\text{ms}}{\\exp\\left(-\\frac{V + 132\\text{mV}}{16.7\\text{mV}}\\right) + \\exp\\left(\\frac{V + 16.8\\text{mV}}{18.2\\text{mV}}\\right)} \\ \nh_T^{\\infty}(V) &= \\frac{1}{1+\\exp\\left(\\frac{V+83\\text{mV}}{4\\text{mV}}\\right)}\\\n\\tau_{h,T}(V) &= 8.2\\text{ms} + \\frac{56.6\\text{ms} + 0.27\\text{ms} \\exp\\left(\\frac{V + 115.2\\text{mV}}{5\\text{mV}}\\right)}{1 + \\exp\\left(\\frac{V + 86\\text{mV}}{3.2\\text{mV}}\\right)}\n\\end{align}\nNote: $N_T$ is a settable parameter in NEST.\nPersistent Sodium Current $I_{\\text{NaP}}$\nSynthesis: INaPChannel\nThis model has only activation ($m$) and uses the steady-state value, so the only relevant equation is that for $m$. 
In the paper, it is given as\n\\begin{equation}\nm_{\\text{NaP}}^{\\infty}(V) = 1/[1+\\exp(-V+55.7)/7.7]\n\\end{equation}\nDimensional analysis indicates that the division by $7.7$ should be in the argument of the exponential, and the minus sign needs to be moved so that the current activates as the neuron depolarizes, leading to the corrected equation\n\\begin{equation}\nm_{\\text{NaP}}^{\\infty}(V) = \\frac{1}{1+\\exp\\left(-\\frac{V+55.7\\text{mV}}{7.7\\text{mV}}\\right)}\n\\end{equation}\nThis equation is implemented in NEST and Synthesis and is the one found in Compte et al (2003), cited by [HT05, p 1679].\nCorrected exponent\nAccording to Compte et al (2003), $N_{\\text{NaP}}=3$, i.e.,\n\\begin{equation}\nI_{\\text{NaP}} = g_{\\text{peak,NaP}}(m_{\\text{NaP}}^{\\infty}(V))^3(V-E_{\\text{NaP}})\n\\end{equation}\nThis equation is also given in a comment in Synthesis, but is missing from the implementation.\nNote: NEST implements the equation according to Compte et al (2003) with $N_{\\text{NaP}}=3$ by default, while Synthesis uses $N_{\\text{NaP}}=1$. $N_{\\text{NaP}}$ is a settable parameter in NEST.\nDepolarization-activated Potassium Current $I_{DK}$\nSynthesis: IKNaChannel\nThis model also only has a single activation variable $m$, following more complicated dynamics expressed by $D$.\nEquations in paper\n\\begin{align}\n dD/dt &= D_{\\text{influx}} - D(1-D_{\\text{eq}})/\\tau_D \\\n D_{\\text{influx}} &= 1/{1+ \\exp[-(V-D_{\\theta})/\\sigma_D]} \\\n m_{DK}^{\\infty} &= 1/1 + (d_{1/2}D)^{3.5}\n\\end{align}\nThere are several problems with these equations.\nIn the steady state the first equation becomes\n\\begin{equation}\n 0 = - D(1-D_{\\text{eq}})/\\tau_D \n \\end{equation}\n with solution\n \\begin{equation}\n D = 0\n\\end{equation}\nThis contradicts both the statement [HT05, p. 1679] that $D\\to D_{\\text{eq}}$ in this case, and the requirement that $D>0$ to avoid a singularity in the equation for $m_{DK}^{\\infty}$. 
The most plausible correction is\n\\begin{equation}\n dD/dt = D_{\\text{influx}} - (D-D_{\\text{eq}})/\\tau_D \n\\end{equation}\nThe third equation appears incorrect and logic as well as Wang et al, J Neurophysiol 89:3279–3293, 2003, Eq 9, cited in [HT05, p 1679], indicate that the correct equation is\n\\begin{equation}\n m_{DK}^{\\infty} = 1/(1 + (d_{1/2} / D)^{3.5})\n\\end{equation}\nCorrected equations\nThe equations for this channel implemented in NEST are thus\n\\begin{align}\nI_{DK} &= - g_{\\text{peak},DK} m_{DK}(V,t) (V - E_{DK})\\\n m_{DK} &= \\frac{1}{1 + \\left(\\frac{d_{1/2}}{D}\\right)^{3.5}}\\\n \\frac{dD}{dt} &= D_{\\text{influx}}(V) - \\frac{D-D_{\\text{eq}}}{\\tau_D} = \\frac{D_{\\infty}(V)-D}{\\tau_D} \\\n D_{\\infty}(V) &= \\tau_D D_{\\text{influx}}(V) + {D_{\\text{eq}}}\\\n D_{\\text{influx}} &= \\frac{D_{\\text{influx,peak}}}{1+ \\exp\\left(-\\frac{V-D_{\\theta}}{\\sigma_D}\\right)} \n\\end{align}\nwith \n|$D_{\\text{influx,peak}}$|$D_{\\text{eq}}$|$\\tau_D$|$D_{\\theta}$|$\\sigma_D$|$d_{1/2}$|\n| --: | --: | --: | --: | --: | --: |\n|$0.025\\text{ms}^{-1}$ |$0.001$|$1250\\text{ms}$|$-10\\text{mV}$|$5\\text{mV}$|$0.25$|\nNote the following:\n- $D_{eq}$ is the equilibrium value only for $D_{\\text{influx}}(V)=0$, i.e., in the limit $V\\to -\\infty$ and $t\\to\\infty$.\n- The actual steady-state value is $D_{\\infty}$.\n- $d_{1/2}$, $D$, $D_{\\infty}$, and $D_{\\text{eq}}$ have identical, but arbitrary units, so we can assume them dimensionless ($D$ is a \"factor\" that in an abstract way represents concentrations).\n- $D_{\\text{influx}}$ and $D_{\\text{influx,peak}}$ are rates of change of $D_{\\infty}$ and thus have units of inverse time.\n- $m_{DK}$ is a steep sigmoid which is almost 0 or 1 except for a narrow window around $d_{1/2}$.\n- To the left of this window, $I_{DK}\\approx 0$.\n- To the right of this window, $I_{DK}\\sim -(V-E_{DK})$.\n- $m_{DK}$ is not integrated over time, instead it is an instantaneous transform of $D$, which is integrated over time.\nNote: The differential equation for $dD/dt$ differs from the one implemented in Synthesis.\nSynaptic channels\nThese are described in [HT05, p 1678]. Synaptic channels are conductance-based with double-exponential time course (beta functions) and normalized for peak conductance. NMDA channels are additionally voltage gated, as described below.\nLet $\\{t_{(j, X)}\\}$ be the set of all spike arrival times, where $X$ indicates the synapse model and $j$ enumerates spikes. Then the total synaptic input is given by\n\\begin{equation}\nI_{\\text{syn}}(t) = - \\sum_{\\{t_{(j, X)}\\}} \\bar{g}_X(t-t_{(j, X)}) (V-E_X)\n\\end{equation}\nStandard Channels\nSynthesis: SynChannel\nThe conductance change due to a single input spike at time $t=0$ through a channel of type $X$ is given by (see below for exceptions)\n\\begin{align}\n \\bar{g}_X(t) &= g_X(t)\\\n g_X(t) &= g_{\\text{peak}, X}\\frac{\\exp(-t/\\tau_1) - \\exp(-t/\\tau_2)}{\n \\exp(-t_{\\text{peak}}/\\tau_1) - \\exp(-t_{\\text{peak}}/\\tau_2)} \\Theta(t)\\\n t_{\\text{peak}} &= \\frac{\\tau_2 \\tau_1}{\\tau_2 - \\tau_1} \\ln\\frac{ \\tau_2}{\\tau_1}\n\\end{align} \nwhere $t_{\\text{peak}}$ is the time of the conductance maximum and $\\tau_1$ and $\\tau_2$ are synaptic rise- and decay-time, respectively; $\\Theta(t)$ is the Heaviside step function. 
The equation is integrated using exact integration in Synthesis; in NEST, it is included in the ODE-system integrated using the Runge-Kutta-Fehlberg 4(5) solver from GSL.\nThe \"indirection\" from $g$ to $\\bar{g}$ is required for consistent notation for NMDA channels below.\nThese channels are used for AMPA, GABA_A and GABA_B channels.\nNMDA Channels\nSynthesis: SynNMDAChannel\nFor the NMDA channel we have\n\\begin{equation}\n\\bar{g}_{\\text{NMDA}}(t) = m(V, t) g_{\\text{NMDA}}(t)\n\\end{equation}\nwith $g_{\\text{NMDA}}(t)$ from above. \nThe voltage-dependent gating $m(V, t)$ is defined as follows (based on textual description, Vargas-Caballero and Robinson J Neurophysiol 89:2778–2783, 2003, doi:10.1152/jn.01038.2002, and code inspection):\n\\begin{align}\n m(V, t) &= a(V) m_{\\text{fast}}^*(V, t) + ( 1 - a(V) ) m_{\\text{slow}}^*(V, t)\\\n a(V) &= 0.51 - 0.0028 V \\\n m^{\\infty}(V) &= \\frac{1}{ 1 + \\exp\\left( -S_{\\text{act}} ( V - V_{\\text{act}} ) \\right) } \\\n m_X^*(V, t) &= \\min(m^{\\infty}(V), m_X(V, t))\\\n \\frac{\\text{d}m_X}{\\text{d}t} &= \\frac{m^{\\infty}(V) - m_X }{ \\tau_{\\text{Mg}, X}}\n\\end{align} \nwhere $X$ is \"slow\" or \"fast\". $a(V)$ expresses voltage-dependent weighting between slow and fast unblocking, $m^{\\infty}(V)$ the steady-state value of the proportion of unblocked NMDA-channels, the minimum condition in $m_X^*(V,t)$ the instantaneous blocking and the differential equation for $m_X(V,t)$ the unblocking dynamics.\nSynthesis uses tabulated values for $m^{\\infty}$. NEST uses the best fit of $V_{\\text{act}}$ and $S_{\\text{act}}$ to the tabulated data for conductance table fNMDA.\nNote: NEST also supports instantaneous NMDA dynamics using a boolean switch. In that case $m(V, t)=m^{\\infty}(V)$. \nDetailed relation between NMDA channel model in NEST and previous work\nThe NMDA channel dynamics are not very clearly described in the original paper by Hill and Tononi. The model implemented in NEST is at this point mostly based on inspecting the code of Sean Hill's Synthesis simulator. \nThe equations for $m(V,t)$ and $a(V)$ above are based on Eq 2 of Vargas-Caballero and Robinson (2003) [VCR in the following] and text below that equation, from which the slow and fast NMDA unblocking time constants are also taken. The exponential time courses in VCR Eq 2 are in NEST modeled by the differential equation above. \nFurther, the logic is as follows: $m \\to 0$ corresponds to blocking of NMDA channels. This happens for small values of $V$, i.e., $V < V_{\\text{act}}$. Blocking is assumed to be instantaneous, and this is implemented by the minimum operation in the equation for $m_X^*(V, t)$ above. What remains is the equation for $m^{\\infty}(V)$ above, giving the steady-state blocking of the NMDA channel for a given voltage. The equation here is based on VCR Eq 1, with our $V_{\\text{act}}$ corresponding to their $V_{0.5}$ and our $S_{\\text{act}}$ corresponding to their $\\frac{z \\delta F}{RT}$ (I believe some parentheses in VCR Eq 1 are incorrectly placed), see also VCR Fig 1B. So $V_{\\text{act}}$ is the voltage for which $m^{\\infty}(V) = \\frac{1}{2}$ and $S_{\\text{act}}$ determines the slope of the sigmoidal activation curve, with large $S_{\\text{act}}$ corresponding to a steeper curve. \nSynthesis implemented this equation using a look-up table. I obtained the parameter values for $V_{\\text{act}}$ and $S_{\\text{act}}$ in ht_neuron by fitting the equation for $m^{\\infty}(V)$ to this lookup table. 
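To make the gating concrete, here is a small sketch (illustration only) of $m^{\\infty}(V)$ and the weighting $a(V)$; the values of $V_{\\text{act}}$ and $S_{\\text{act}}$ below are placeholders, the actual defaults should be read from nest.GetDefaults('ht_neuron').\n```python\nimport numpy as np\n\ndef m_inf(V, V_act, S_act):\n    # steady-state proportion of unblocked NMDA channels\n    return 1.0 / (1.0 + np.exp(-S_act * (V - V_act)))\n\ndef a(V):\n    # voltage-dependent weighting between fast and slow unblocking\n    return 0.51 - 0.0028 * V\n\nV_act, S_act = -58.0, 2.5  # placeholder values, not the fitted NEST defaults\nV = np.linspace(-90.0, 0.0, 5)\nprint(m_inf(V, V_act, S_act))\nprint(a(V))\n```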
\nNo synaptic \"minis\"\nSynaptic \"minis\" due to spontaneous release of neurotransmitter quanta [HT05, p 1679] are not included in the NEST implementation of the Hill-Tononi model, because the total mini input rate for a cell was just 2 Hz and they cause PSP changes of only $0.5 \\pm 0.25$ mV, and thus should have minimal effect.\nThe Synapse Depression Model\nThe synapse depression model is implemented in NEST as ht_synapse, in Synthesis in SynChannel and VesiclePool.\n$P\\in[0, 1]$ describes the state of the presynaptic vesicle pool. Spikes are transmitted with an effective weight\n\\begin{equation}\nw_{\\text{eff}} = P w\n\\end{equation}\nwhere $w$ is the nominal weight of the synapse.\nEvolution of $P$ in paper and Synthesis implementation\nAccording to [HT05, p 1678], the pool $P$ evolves according to\n\\begin{equation}\n\\frac{\\text{d}P}{\\text{d}t} = -\\:\\text{spike}\\:\\delta_P P+\\frac{P_{\\text{peak}}-P}{\\tau_P}\n\\end{equation}\nwhere\n- $\\text{spike}=1$ while the neuron is in spiking state, 0 otherwise\n- $P_{\\text{peak}}=1$ \n- $\\delta_P = 0.5$ by default\n- $\\tau_P = 500\\text{ms}$ by default\nSince neurons are in spiking state for one integration time step $\\Delta t$, this suggests that the effect of a spike on the vesicle pool is approximately\n\\begin{equation}\nP \\leftarrow ( 1 - \\Delta t \\delta_P ) P\n\\end{equation}\nFor default parameters $\\Delta t=0.25\\text{ms}$ and $\\delta_P=0.5$, this means that a single spike reduces the pool by 1/8 of its current size.\nEvolution of $P$ in the NEST implementation\nIn NEST, we modify the equations above to obtain a definite jump in pool size on transmission of a spike, without any dependence on the integration time step (fixing explicitly $P_{\\text{peak}}$):\n\\begin{align}\n\\frac{\\text{d}P}{\\text{d}t} &= \\frac{1-P}{\\tau_P} \\\nP &\\leftarrow ( 1 - \\delta_P^*) P \n\\end{align}\n$P$ is only updated when a spike passes the synapse, in the following way (where $\\Delta$ is the time since the last spike through the same synapse):\n\nRecuperation: $P\\leftarrow 1 - ( 1 - P ) \\exp( -\\Delta / \\tau_P )$\nSpike transmission with $w_{\\text{eff}} = P w$\nDepletion: $P \\leftarrow ( 1 - \\delta_P^*) P$\n\nTo achieve approximately the same depletion as in Synthesis, use $\\delta_P^*=\\Delta t\\delta_P$; a pure-Python sketch of this update scheme follows below.\nTests of the Models",
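"# Minimal pure-Python sketch of the ht_synapse update rule described above\n# (added for illustration; this is not NEST code). It tracks the vesicle pool P\n# across a train of spike times; delta_P_star = dt * delta_P, e.g. 0.25 ms * 0.5 = 0.125,\n# approximates the Synthesis depletion.\nimport math\n\ndef depression_weights(spike_times, w=1.0, tau_P=500.0, delta_P_star=0.125):\n    P, t_last, weights = 1.0, None, []\n    for t in spike_times:\n        if t_last is not None:\n            P = 1.0 - (1.0 - P) * math.exp(-(t - t_last) / tau_P)  # recuperation\n        weights.append(P * w)  # spike transmitted with effective weight P*w\n        P *= 1.0 - delta_P_star  # depletion\n        t_last = t\n    return weights\n\nprint(depression_weights([10.0, 20.0, 30.0, 500.0]))",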
"import sys\nimport math\nimport numpy as np\nimport pandas as pd\nimport scipy.optimize as so\nimport scipy.integrate as si\nimport matplotlib.pyplot as plt\nimport nest\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (12, 3)",
"Neuron Model\nPassive properties\nTest relaxation of neuron and threshold to equilibrium values in absence of intrinsic currents and input. We then have\n\\begin{align}\n\\tau_m \\dot{V}&= \\left[-g_{NaL}(V-E_{Na})-g_{KL}(V-E_K)\\right] = -(g_{NaL}+g_{KL})V+(g_{NaL}E_{Na}+g_{KL}E_K)\\\n\\Leftrightarrow\\quad \\tau_{\\text{eff}}\\dot{V} &= -V+V_{\\infty}\\\nV_{\\infty} &= \\frac{g_{NaL}E_{Na}+g_{KL}E_K}{g_{NaL}+g_{KL}}\\\n\\tau_{\\text{eff}}&=\\frac{\\tau_m}{g_{NaL}+g_{KL}}\n\\end{align}\nwith solution\n\\begin{equation}\nV(t) = V_0 e^{-\\frac{t}{\\tau_{\\text{eff}}}} + V_{\\infty}\\left(1-e^{-\\frac{t}{\\tau_{\\text{eff}}}} \\right)\n\\end{equation}\nand for the threshold\n\\begin{equation}\n\\theta(t) = \\theta_0 e^{-\\frac{t}{\\tau_{\\theta}}} + \\theta_{eq}\\left(1-e^{-\\frac{t}{\\tau_{\\theta}}} \\right)\n\\end{equation}",
"def Vpass(t, V0, gNaL, ENa, gKL, EK, taum, I=0):\n tau_eff = taum/(gNaL + gKL)\n Vinf = (gNaL*ENa + gKL*EK + I)/(gNaL + gKL)\n return V0*np.exp(-t/tau_eff) + Vinf*(1-np.exp(-t/tau_eff))\n\ndef theta(t, th0, theq, tauth):\n return th0*np.exp(-t/tauth) + theq*(1-np.exp(-t/tauth))\n\nnest.ResetKernel()\nnest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0.,\n 'g_peak_T': 0., 'g_peak_h': 0.,\n 'tau_theta': 10.})\nhp = nest.GetDefaults('ht_neuron')\n\nV_0 = [-100., -70., -55.]\nth_0 = [-65., -51., -10.]\nT_sim = 20.\n\nnrns = nest.Create('ht_neuron', n=len(V_0), params={'V_m': V_0, 'theta': th_0}) \n\nnest.Simulate(T_sim)\nV_th_sim = nrns.get(['V_m', 'theta'])\n\nfor (V0, th0, Vsim, thsim) in zip(V_0, th_0, V_th_sim['V_m'], V_th_sim['theta']):\n Vex = Vpass(T_sim, V0, hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'], hp['tau_m'])\n thex = theta(T_sim, th0, hp['theta_eq'], hp['tau_theta'])\n print('Vex = {:.3f}, Vsim = {:.3f}, Vex-Vsim = {:.3e}'.format(Vex, Vsim, Vex-Vsim))\n print('thex = {:.3f}, thsim = {:.3f}, thex-thsim = {:.3e}'.format(thex, thsim, thex-thsim))",
"Agreement is excellent.\nSpiking without intrinsic currents or synaptic input\nThe equations above hold for input current $I(t)$, but with\n\\begin{equation}\nV_{\\infty}(I) = \\frac{g_{NaL}E_{Na}+g_{KL}E_K}{g_{NaL}+g_{KL}} + \\frac{I}{g_{NaL}+g_{KL}}\n\\end{equation}\nIn NEST, we need to inject input current into the ht_neuron with a dc_generator, whence the current will set on only at a later time and we need to take this into account. For simplicity, we assume that $V$ is initialized to $V_{\\infty}(I=0)$ and that current onset is at $t_I$. We then have for $t\\geq t_I$\n\\begin{equation}\nV(t) = V_{\\infty}(0) e^{-\\frac{t-t_I}{\\tau_{\\text{eff}}}} + V_{\\infty}(I)\\left(1-e^{-\\frac{t-t_I}{\\tau_{\\text{eff}}}} \\right)\n\\end{equation}\nIf we also initialize $\\theta=\\theta_{\\text{eq}}$, the threshold is constant and we have the first spike at\n\\begin{align}\nV(t) &= \\theta_{\\text{eq}}\\\n\\Leftrightarrow\\quad t &= t_I -\\tau_{\\text{eff}} \\ln \\frac{\\theta_{\\text{eq}}-V_{\\infty}(I)}{V_{\\infty}(0)-V_{\\infty}(I)}\n\\end{align}",
"def t_first_spike(gNaL, ENa, gKL, EK, taum, theq, tI, I):\n tau_eff = taum/(gNaL + gKL)\n Vinf0 = (gNaL*ENa + gKL*EK)/(gNaL + gKL)\n VinfI = (gNaL*ENa + gKL*EK + I)/(gNaL + gKL)\n return tI - tau_eff * np.log((theq-VinfI) / (Vinf0-VinfI))\n\nnest.ResetKernel()\nnest.resolution = 0.001\nnest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0.,\n 'g_peak_T': 0., 'g_peak_h': 0.})\nhp = nest.GetDefaults('ht_neuron')\n\nI = [25., 50., 100.]\ntI = 1.\ndelay = 1.\nT_sim = 40.\n\nnrns = nest.Create('ht_neuron', n=len(I))\ndcgens = nest.Create('dc_generator', n=len(I), params={'amplitude': I, 'start': tI})\nsrs = nest.Create('spike_recorder', n=len(I))\nnest.Connect(dcgens, nrns, 'one_to_one', {'delay': delay})\nnest.Connect(nrns, srs, 'one_to_one')\nnest.Simulate(T_sim)\n\nt_first_sim = [t[0] for t in srs.get('events', 'times')]\n\nfor dc, tf_sim in zip(I, t_first_sim):\n tf_ex = t_first_spike(hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'], \n hp['tau_m'], hp['theta_eq'], tI+delay, dc)\n print('tex = {:.4f}, tsim = {:.4f}, tex-tsim = {:.4f}'.format(tf_ex, \n tf_sim, \n tf_ex-tf_sim))\n",
"Agreement is as good as possible: All spikes occur in NEST at then end of the time step containing the expected spike time.\nInter-spike interval\nAfter each spike, $V_m = \\theta = E_{Na}$, i.e., all memory is erased. We can thus treat ISIs independently. $\\theta$ relaxes according to the equation above. For $V_m$, we have during $t_{\\text{spike}}$ after a spike\n\\begin{align}\n\\tau_m\\dot{V} &= {-g_{\\text{NaL}}(V-E_{\\text{Na}})\n-g_{\\text{KL}}(V-E_{\\text{K}})+I}\n-\\frac{\\tau_m}{\\tau_{\\text{spike}}}({V-E_{\\text{K}}})\\\n&= -(g_{NaL}+g_{KL}+\\frac{\\tau_m}{\\tau_{\\text{spike}}})V+(g_{NaL}E_{Na}+g_{KL}E_K+\\frac{\\tau_m}{\\tau_{\\text{spike}}}E_K)\n\\end{align}\nthus recovering the same for for the solution but with\n\\begin{align}\n\\tau^{\\text{eff}} &= \\frac{\\tau_m}{g{NaL}+g_{KL}+\\frac{\\tau_m}{\\tau_{\\text{spike}}}}\\\nV^{\\infty} &= \\frac{g{NaL}E_{Na}+g_{KL}E_K+I+\\frac{\\tau_m}{\\tau_{\\text{spike}}}E_K}{g_{NaL}+g_{KL}+\\frac{\\tau_m}{\\tau_{\\text{spike}}}}\n\\end{align}\nAssuming that the ISI is longer than the refractory period $t_{\\text{spike}}$, and we had a spike at time $t_s$, then we have at $t_s+t_{\\text{spike}}$\n\\begin{align}\nV^ &= V(t_s+t_{\\text{spike}}) = E_{Na} e^{-\\frac{t_{\\text{spike}}}{\\tau^{\\text{eff}}}} + V^{\\infty}(I)\\left(1-e^{-\\frac{t{\\text{spike}}}{\\tau^{\\text{eff}}}} \\right)\\\n\\theta^ &= \\theta(t_s+t_{\\text{spike}}) = E_{Na} e^{-\\frac{t_{\\text{spike}}}{\\tau_{\\theta}}} + \\theta_{eq}\\left(1-e^{-\\frac{t_{\\text{spike}}}{\\tau_{\\theta}}} \\right)\\\nt^ &= t_s+t_{\\text{spike}}\n\\end{align}\nFor $t>t^$, the normal equations apply again, i.e.,\n\\begin{align}\nV(t) &= V^ e^{-\\frac{t-t^}{\\tau_{\\text{eff}}}} + V_{\\infty}(I)\\left(1-e^{-\\frac{t-t^}{\\tau_{\\text{eff}}}} \\right)\\\n\\theta(t) &= \\theta^ e^{-\\frac{t-t^}{\\tau_{\\theta}}} + \\theta_{\\infty}\\left(1-e^{-\\frac{t-t^*}{\\tau_{\\theta}}}\\right)\n\\end{align}\nThe time of the next spike is then given by\n\\begin{equation}\nV(\\hat{t}) = \\theta(\\hat{t})\n\\end{equation}\nwhich can only be solved numerically. The ISI is then obtained as $\\hat{t}-t_s$.",
"def Vspike(tspk, gNaL, ENa, gKL, EK, taum, tauspk, I=0):\n tau_eff = taum/(gNaL + gKL + taum/tauspk)\n Vinf = (gNaL*ENa + gKL*EK + I + taum/tauspk*EK)/(gNaL + gKL + taum/tauspk)\n return ENa*np.exp(-tspk/tau_eff) + Vinf*(1-np.exp(-tspk/tau_eff))\n\ndef thetaspike(tspk, ENa, theq, tauth):\n return ENa*np.exp(-tspk/tauth) + theq*(1-np.exp(-tspk/tauth))\n\ndef Vpost(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I=0):\n Vsp = Vspike(tspk, gNaL, ENa, gKL, EK, taum, tauspk, I)\n return Vpass(t-tspk, Vsp, gNaL, ENa, gKL, EK, taum, I)\n\ndef thetapost(t, tspk, ENa, theq, tauth):\n thsp = thetaspike(tspk, ENa, theq, tauth)\n return theta(t-tspk, thsp, theq, tauth)\n\ndef threshold(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I, theq, tauth):\n return Vpost(t, tspk, gNaL, ENa, gKL, EK, taum, tauspk, I) - thetapost(t, tspk, ENa, theq, tauth)\n\nnest.ResetKernel()\nnest.resolution = 0.001\nnest.SetDefaults('ht_neuron', {'g_peak_NaP': 0., 'g_peak_KNa': 0.,\n 'g_peak_T': 0., 'g_peak_h': 0.})\nhp = nest.GetDefaults('ht_neuron')\n\nI = [25., 50., 100.]\ntI = 1.\ndelay = 1.\nT_sim = 1000.\n\nnrns = nest.Create('ht_neuron', n=len(I))\ndcgens = nest.Create('dc_generator', n=len(I), params={'amplitude': I, 'start': tI})\nsrs = nest.Create('spike_recorder', n=len(I))\nnest.Connect(dcgens, nrns, 'one_to_one', {'delay': delay})\nnest.Connect(nrns, srs, 'one_to_one')\nnest.Simulate(T_sim)\n\nisi_sim = []\nfor ev in srs.events:\n t_spk = ev['times']\n isi = np.diff(t_spk)\n isi_sim.append((np.min(isi), np.mean(isi), np.max(isi)))\n\nfor dc, (isi_min, isi_mean, isi_max) in zip(I, isi_sim):\n isi_ex = so.bisect(threshold, hp['t_ref'], 50, \n args=(hp['t_ref'], hp['g_NaL'], hp['E_Na'], hp['g_KL'], hp['E_K'], \n hp['tau_m'], hp['tau_spike'], dc, hp['theta_eq'], hp['tau_theta']))\n print('isi_ex = {:.4f}, isi_sim (min, mean, max) = ({:.4f}, {:.4f}, {:.4f})'.format(\n isi_ex, isi_min, isi_mean, isi_max))",
"ISIs are as predicted: measured ISI is predicted rounded up to next time step\nISIs are perfectly regular as expected\n\nIntrinsic Currents\nPreparations",
"nest.ResetKernel()\nclass Channel:\n \"\"\"\n Base class for channel models in Python.\n \"\"\"\n def tau_m(self, V):\n raise NotImplementedError()\n def tau_h(self, V):\n raise NotImplementedError()\n def m_inf(self, V):\n raise NotImplementedError()\n def h_inf(self, V):\n raise NotImplementedError()\n def D_inf(self, V):\n raise NotImplementedError()\n def dh(self, h, t, V):\n return (self.h_inf(V)-h)/self.tau_h(V)\n def dm(self, m, t, V):\n return (self.m_inf(V)-m)/self.tau_m(V)\n\ndef voltage_clamp(channel, DT_V_seq, nest_dt=0.1):\n \"Run voltage clamp with voltage V through intervals DT.\"\n\n # NEST part\n nest_g_0 = {'g_peak_h': 0., 'g_peak_T': 0., 'g_peak_NaP': 0., 'g_peak_KNa': 0.}\n nest_g_0[channel.nest_g] = 1.\n \n nest.ResetKernel()\n nest.resolution = nest_dt\n nrn = nest.Create('ht_neuron', params=nest_g_0)\n mm = nest.Create('multimeter', params={'record_from': ['V_m', 'theta', channel.nest_I],\n 'interval': nest_dt})\n nest.Connect(mm, nrn)\n\n # ensure we start from equilibrated state\n nrn.set(V_m=DT_V_seq[0][1], equilibrate=True, voltage_clamp=True)\n for DT, V in DT_V_seq:\n nrn.set(V_m=V, voltage_clamp=True)\n nest.Simulate(DT)\n t_end = nest.biological_time\n \n # simulate a little more so we get all data up to t_end to multimeter\n nest.Simulate(2 * nest.min_delay)\n \n tmp = pd.DataFrame(mm.events)\n nest_res = tmp[tmp.times <= t_end]\n \n # Control part\n t_old = 0.\n try:\n m_old = channel.m_inf(DT_V_seq[0][1])\n except NotImplementedError:\n m_old = None\n try:\n h_old = channel.h_inf(DT_V_seq[0][1])\n except NotImplementedError:\n h_old = None\n try:\n D_old = channel.D_inf(DT_V_seq[0][1])\n except NotImplementedError:\n D_old = None\n \n t_all, I_all = [], []\n if D_old is not None:\n D_all = []\n \n for DT, V in DT_V_seq:\n t_loc = np.arange(0., DT+0.1*nest_dt, nest_dt)\n I_loc = channel.compute_I(t_loc, V, m_old, h_old, D_old)\n t_all.extend(t_old + t_loc[1:])\n I_all.extend(I_loc[1:])\n if D_old is not None:\n D_all.extend(channel.D[1:])\n m_old = channel.m[-1] if m_old is not None else None\n h_old = channel.h[-1] if h_old is not None else None\n D_old = channel.D[-1] if D_old is not None else None\n t_old = t_all[-1]\n \n if D_old is None:\n ctrl_res = pd.DataFrame({'times': t_all, channel.nest_I: I_all})\n else:\n ctrl_res = pd.DataFrame({'times': t_all, channel.nest_I: I_all, 'D': D_all})\n\n return nest_res, ctrl_res",
"I_h channel\nThe $I_h$ current is governed by\n\\begin{align}\nI_h &= g_{\\text{peak}, h} m_h(V, t) (V-E_h) \\\n\\frac{\\text{d}m_h}{\\text{d}t} &= \\frac{m_h^{\\infty}-m_h}{\\tau_{m,h}(V)}\\\nm_h^{\\infty}(V) &= \\frac{1}{1+\\exp\\left(\\frac{V+75\\text{mV}}{5.5\\text{mV}}\\right)} \\\n\\tau_{m,h}(V) &= \\frac{1}{\\exp(-14.59-0.086V) + \\exp(-1.87 + 0.0701V)}\n\\end{align}\nWe first inspect $m_h^{\\infty}(V)$ and $\\tau_{m,h}(V)$ to prepare for testing",
"nest.ResetKernel()\nclass Ih(Channel):\n \n nest_g = 'g_peak_h'\n nest_I = 'I_h'\n \n def __init__(self, ht_params):\n self.hp = ht_params\n \n def tau_m(self, V):\n return 1/(np.exp(-14.59-0.086*V) + np.exp(-1.87 + 0.0701*V))\n \n def m_inf(self, V):\n return 1/(1+np.exp((V+75)/5.5))\n\n def compute_I(self, t, V, m0, h0, D0):\n self.m = si.odeint(self.dm, m0, t, args=(V,))\n return - self.hp['g_peak_h'] * self.m * (V - self.hp['E_rev_h'])\n\nih = Ih(nest.GetDefaults('ht_neuron'))\n\nV = np.linspace(-110, 30, 100)\nplt.plot(V, ih.tau_m(V));\nax = plt.gca();\nax.set_xlabel('Voltage V [mV]');\nax.set_ylabel('Time constant tau_m [ms]', color='b');\nax2 = ax.twinx()\nax2.plot(V, ih.m_inf(V), 'g');\nax2.set_ylabel('Steady-state m_h^inf', color='g');",
"The time constant is extremely long, up to 1s, for relevant voltages where $I_h$ is perceptible. We thus need long test runs.\nCurves are in good agreement with Fig 5 of Huguenard and McCormick, J Neurophysiol 68:1373, 1992, cited in [HT05]. I_h data there was from guinea pig slices at 35.5 C and needed no temperature adjustment.\n\nWe now run a voltage clamp experiment starting from the equilibrium value.",
"ih = Ih(nest.GetDefaults('ht_neuron'))\nnr, cr = voltage_clamp(ih, [(500, -65.), (500, -80.), (500, -100.), (500, -90.), (500, -55.)]) \n\nplt.subplot(1, 2, 1)\nplt.plot(nr.times, nr.I_h, label='NEST');\nplt.plot(cr.times, cr.I_h, label='Control');\nplt.legend(loc='upper left');\nplt.xlabel('Time [ms]');\nplt.ylabel('I_h [mV]');\nplt.title('I_h current')\n\nplt.subplot(1, 2, 2)\nplt.plot(nr.times, (nr.I_h-cr.I_h)/np.abs(cr.I_h));\nplt.title('Relative I_h error')\nplt.xlabel('Time [ms]');\nplt.ylabel('Rel. error (NEST-Control)/|Control|');",
"Agreement is very good\nNote that currents have units of $mV$ due to choice of dimensionless conductances.\n\nI_T Channel\nThe corrected equations used for the $I_T$ channel in NEST are\n\\begin{align}\nI_T &= g_{\\text{peak}, T} m_T^2(V, t) h_T(V,t) (V-E_T) \\\nm_T^{\\infty}(V) &= \\frac{1}{1+\\exp\\left(-\\frac{V+59\\text{mV}}{6.2\\text{mV}}\\right)}\\\n\\tau_{m,T}(V) &= 0.13\\text{ms} \n + \\frac{0.22\\text{ms}}{\\exp\\left(-\\frac{V + 132\\text{mV}}{16.7\\text{mV}}\\right) + \\exp\\left(\\frac{V + 16.8\\text{mV}}{18.2\\text{mV}}\\right)} \\ \nh_T^{\\infty}(V) &= \\frac{1}{1+\\exp\\left(\\frac{V+83\\text{mV}}{4\\text{mV}}\\right)}\\\n\\tau_{h,T}(V) &= 8.2\\text{ms} + \\frac{56.6\\text{ms} + 0.27\\text{ms} \\exp\\left(\\frac{V + 115.2\\text{mV}}{5\\text{mV}}\\right)}{1 + \\exp\\left(\\frac{V + 86\\text{mV}}{3.2\\text{mV}}\\right)}\n\\end{align}",
"nest.ResetKernel()\nclass IT(Channel):\n \n nest_g = 'g_peak_T'\n nest_I = 'I_T'\n \n def __init__(self, ht_params):\n self.hp = ht_params\n \n def tau_m(self, V):\n return 0.13 + 0.22/(np.exp(-(V+132)/16.7) + np.exp((V+16.8)/18.2))\n\n def tau_h(self, V):\n return 8.2 + (56.6 + 0.27 * np.exp((V+115.2)/5.0)) /(1 + np.exp((V+86.0)/3.2))\n\n def m_inf(self, V):\n return 1/(1+np.exp(-(V+59.0)/6.2))\n\n def h_inf(self, V):\n return 1/(1+np.exp((V+83.0)/4.0))\n\n def compute_I(self, t, V, m0, h0, D0):\n self.m = si.odeint(self.dm, m0, t, args=(V,))\n self.h = si.odeint(self.dh, h0, t, args=(V,))\n return - self.hp['g_peak_T'] * self.m**2 * self.h * (V - self.hp['E_rev_T'])\n\niT = IT(nest.GetDefaults('ht_neuron'))\n\nV = np.linspace(-110, 30, 100)\nplt.plot(V, 10 * iT.tau_m(V), 'b-', label='10 * tau_m');\nplt.plot(V, iT.tau_h(V), 'b--', label='tau_h');\nax1 = plt.gca();\nax1.set_xlabel('Voltage V [mV]');\nax1.set_ylabel('Time constants [ms]', color='b');\nax2 = ax1.twinx()\nax2.plot(V, iT.m_inf(V), 'g-', label='m_inf');\nax2.plot(V, iT.h_inf(V), 'g--', label='h_inf');\nax2.set_ylabel('Steady-state', color='g');\nln1, lb1 = ax1.get_legend_handles_labels()\nln2, lb2 = ax2.get_legend_handles_labels()\nplt.legend(ln1+ln2, lb1+lb2, loc='upper right');",
"Time constants here are much shorter than for I_h\nTime constants are about five times shorter than in Fig 1 of Huguenard and McCormick, J Neurophysiol 68:1373, 1992, cited in [HT05], but that may be due to the fact that the original data was collected at 23-25C and parameters have been adjusted to 36C.\nSteady-state activation and inactivation look much like in Huguenard and McCormick.\nNote: Most detailed paper on data is Huguenard and Prince, J Neurosci 12:3804-3817, 1992. The parameters given for h_inf here are for VB cells, not nRT cells in that paper (Fig 5B), parameters for m_inf are similar to but not exactly those of Fig 4B for either VB or nRT.",
"iT = IT(nest.GetDefaults('ht_neuron'))\nnr, cr = voltage_clamp(iT, [(200, -65.), (200, -80.), (200, -100.), (200, -90.), (200, -70.),\n (200, -55.)],\n nest_dt=0.1) \n\nplt.subplot(1, 2, 1)\nplt.plot(nr.times, nr.I_T, label='NEST');\nplt.plot(cr.times, cr.I_T, label='Control');\nplt.legend(loc='upper left');\nplt.xlabel('Time [ms]');\nplt.ylabel('I_T [mV]');\nplt.title('I_T current')\n\nplt.subplot(1, 2, 2)\nplt.plot(nr.times, (nr.I_T-cr.I_T)/np.abs(cr.I_T));\nplt.title('Relative I_T error')\nplt.xlabel('Time [ms]');\nplt.ylabel('Rel. error (NEST-Control)/|Control|');",
"Also here the results are in good agreement and the error appears acceptable.\n\nI_NaP channel\nThis channel adapts instantaneously to changes in membrane potential:\n\\begin{align}\nI_{NaP} &= - g_{\\text{peak}, NaP} (m_{NaP}^{\\infty}(V, t))^3 (V-E_{NaP}) \\\nm_{NaP}^{\\infty}(V) &= \\frac{1}{1+\\exp\\left(-\\frac{V+55.7\\text{mV}}{7.7\\text{mV}}\\right)}\n\\end{align}",
"nest.ResetKernel()\nclass INaP(Channel):\n \n nest_g = 'g_peak_NaP'\n nest_I = 'I_NaP'\n \n def __init__(self, ht_params):\n self.hp = ht_params\n \n def m_inf(self, V):\n return 1/(1+np.exp(-(V+55.7)/7.7))\n \n def compute_I(self, t, V, m0, h0, D0):\n return self.I_V_curve(V * np.ones_like(t)) \n\n def I_V_curve(self, V):\n self.m = self.m_inf(V)\n return - self.hp['g_peak_NaP'] * self.m**3 * (V - self.hp['E_rev_NaP'])\n\niNaP = INaP(nest.GetDefaults('ht_neuron'))\nV = np.arange(-110., 30., 1.)\nnr, cr = voltage_clamp(iNaP, [(1, v) for v in V], nest_dt=0.1)\n\nplt.subplot(1, 2, 1)\nplt.plot(nr.times, nr.I_NaP, label='NEST');\nplt.plot(cr.times, cr.I_NaP, label='Control');\nplt.legend(loc='upper left');\nplt.xlabel('Time [ms]');\nplt.ylabel('I_NaP [mV]');\nplt.title('I_NaP current')\n\nplt.subplot(1, 2, 2)\nplt.plot(nr.times, (nr.I_NaP-cr.I_NaP));\nplt.title('I_NaP error')\nplt.xlabel('Time [ms]');\nplt.ylabel('Error (NEST-Control)');",
"Perfect agreement\nStep structure is because $V$ changes only every second.\n\nI_KNa channel (aka I_DK)\nEquations for this channel are\n\\begin{align}\nI_{DK} &= - g_{\\text{peak},DK} m_{DK}(V,t) (V - E_{DK})\\\n m_{DK} &= \\frac{1}{1 + \\left(\\frac{d_{1/2}}{D}\\right)^{3.5}}\\\n \\frac{dD}{dt} &= D_{\\text{influx}}(V) - \\frac{D-D_{\\text{eq}}}{\\tau_D} = \\frac{D_{\\infty}(V)-D}{\\tau_D} \\\n D_{\\infty}(V) &= \\tau_D D_{\\text{influx}}(V) + {D_{\\text{eq}}}\\\n D_{\\text{influx}} &= \\frac{D_{\\text{influx,peak}}}{1+ \\exp\\left(-\\frac{V-D_{\\theta}}{\\sigma_D}\\right)} \n\\end{align}\nwith \n|$D_{\\text{influx,peak}}$|$D_{\\text{eq}}$|$\\tau_D$|$D_{\\theta}$|$\\sigma_D$|$d_{1/2}$|\n| --: | --: | --: | --: | --: | --: |\n|$0.025\\text{ms}^{-1}$ |$0.001$|$1250\\text{ms}$|$-10\\text{mV}$|$5\\text{mV}$|$0.25$|\nNote the following:\n- $D_{eq}$ is the equilibrium value only for $D_{\\text{influx}}(V)=0$, i.e., in the limit $V\\to -\\infty$ and $t\\to\\infty$.\n- The actual steady-state value is $D_{\\infty}$.\n- $m_{DK}$ is a steep sigmoid which is almost 0 or 1 except for a narrow window around $d_{1/2}$.\n- To the left of this window, $I_{DK}\\approx 0$.\n- To the right of this window, $I_{DK}\\sim -(V-E_{DK})$.\n- $m_{DK}$ is not integrated over time, instead it is an instantaneous transform of $D$, which is integrated over time.",
"nest.ResetKernel()\nclass IDK(Channel):\n \n nest_g = 'g_peak_KNa'\n nest_I = 'I_KNa'\n \n def __init__(self, ht_params):\n self.hp = ht_params\n \n def m_DK(self, D):\n return 1/(1+(0.25/D)**3.5)\n\n def D_inf(self, V):\n return 1250. * self.D_influx(V) + 0.001\n \n def D_influx(self, V):\n return 0.025 / ( 1 + np.exp(-(V+10)/5.) )\n \n def dD(self, D, t, V):\n return (self.D_inf(V) - D)/1250.\n \n def compute_I(self, t, V, m0, h0, D0):\n self.D = si.odeint(self.dD, D0, t, args=(V,))\n self.m = self.m_DK(self.D)\n return - self.hp['g_peak_KNa'] * self.m * (V - self.hp['E_rev_KNa'])",
"Properties of I_DK",
"iDK = IDK(nest.GetDefaults('ht_neuron'))\n\nD=np.linspace(0.01, 1.5,num=200);\nV=np.linspace(-110, 30, num=200);\n\nax1 = plt.subplot2grid((1, 9), (0, 0), colspan=4);\nax2 = ax1.twinx()\nax3 = plt.subplot2grid((1, 9), (0, 6), colspan=3);\n\nax1.plot(V, -iDK.m_DK(iDK.D_inf(V))*(V - iDK.hp['E_rev_KNa']), 'g');\nax1.set_ylabel('Current I_inf(V)', color='g');\nax2.plot(V, iDK.m_DK(iDK.D_inf(V)), 'b');\nax2.set_ylabel('Activation m_inf(D_inf(V))', color='b');\nax1.set_xlabel('Membrane potential V [mV]');\nax2.set_title('Steady-state activation and current');\n\nax3.plot(D, iDK.m_DK(D), 'b');\nax3.set_xlabel('D');\nax3.set_ylabel('Activation m_inf(D)', color='b');\nax3.set_title('Activation as function of D');",
"Note that current in steady state is \n$\\approx 0$ for $V < -40$mV\n$\\sim -(V-E_{DK})$ for $V> -30$mV\n\n\n\nVoltage clamp",
"nr, cr = voltage_clamp(iDK, [(500, -65.), (500, -35.), (500, -25.), (500, 0.), (5000, -70.)],\n nest_dt=1.) \n\nax1 = plt.subplot2grid((1, 9), (0, 0), colspan=4);\nax2 = plt.subplot2grid((1, 9), (0, 6), colspan=3);\n\nax1.plot(nr.times, nr.I_KNa, label='NEST');\nax1.plot(cr.times, cr.I_KNa, label='Control');\nax1.legend(loc='lower right');\nax1.set_xlabel('Time [ms]');\nax1.set_ylabel('I_DK [mV]');\nax1.set_title('I_DK current');\n\nax2.plot(nr.times, (nr.I_KNa-cr.I_KNa)/np.abs(cr.I_KNa));\nax2.set_title('Relative I_DK error')\nax2.set_xlabel('Time [ms]');\nax2.set_ylabel('Rel. error (NEST-Control)/|Control|');",
"Looks very fine.\nNote that the current gets appreviable only when $V>-35$ mV\nOnce that threshold is crossed, the current adjust instantaneously to changes in $V$, since it is in the linear regime.\nWhen returning from $V=0$ to $V=-70$ mV, the current remains large for a long time since $D$ has to drop below 1 before $m_{\\infty}$ changes appreciably\n\nSynaptic channels\nFor synaptic channels, NEST allows recording of conductances, so we test conductances directly. Due to the voltage-dependence of the NMDA channels, we still do this in voltage clamp.",
"nest.ResetKernel()\nclass SynChannel:\n \"\"\"\n Base class for synapse channel models in Python.\n \"\"\"\n\n def t_peak(self):\n return self.tau_1 * self.tau_2 / (self.tau_2 - self.tau_1) * np.log(self.tau_2/self.tau_1)\n \n def beta(self, t):\n val = ( ( np.exp(-t/self.tau_1) - np.exp(-t/self.tau_2) ) /\n ( np.exp(-self.t_peak()/self.tau_1) - np.exp(-self.t_peak()/self.tau_2) ) )\n val[t < 0] = 0\n return val\n\ndef syn_voltage_clamp(channel, DT_V_seq, nest_dt=0.1):\n \"Run voltage clamp with voltage V through intervals DT with single spike at time 1\"\n\n spike_time = 1.0\n delay = 1.0\n \n nest.ResetKernel()\n nest.resolution = nest_dt\n try:\n nrn = nest.Create('ht_neuron', params={'theta': 1e6, 'theta_eq': 1e6,\n 'instant_unblock_NMDA': channel.instantaneous})\n except:\n nrn = nest.Create('ht_neuron', params={'theta': 1e6, 'theta_eq': 1e6})\n\n mm = nest.Create('multimeter', \n params={'record_from': ['g_'+channel.receptor],\n 'interval': nest_dt})\n sg = nest.Create('spike_generator', params={'spike_times': [spike_time]})\n nest.Connect(mm, nrn)\n nest.Connect(sg, nrn, syn_spec={'weight': 1.0, 'delay': delay,\n 'receptor_type': channel.rec_code})\n\n # ensure we start from equilibrated state\n nrn.set(V_m=DT_V_seq[0][1], equilibrate=True, voltage_clamp=True)\n for DT, V in DT_V_seq:\n nrn.set(V_m=V, voltage_clamp=True)\n nest.Simulate(DT)\n t_end = nest.biological_time\n \n # simulate a little more so we get all data up to t_end to multimeter\n nest.Simulate(2 * nest.min_delay)\n \n tmp = pd.DataFrame(mm.get('events'))\n nest_res = tmp[tmp.times <= t_end]\n \n # Control part\n t_old = 0.\n t_all, g_all = [], []\n \n m_fast_old = (channel.m_inf(DT_V_seq[0][1]) \n if channel.receptor == 'NMDA' and not channel.instantaneous else None) \n m_slow_old = (channel.m_inf(DT_V_seq[0][1]) \n if channel.receptor == 'NMDA' and not channel.instantaneous else None) \n\n for DT, V in DT_V_seq:\n t_loc = np.arange(0., DT+0.1*nest_dt, nest_dt)\n g_loc = channel.g(t_old+t_loc-(spike_time+delay), V, m_fast_old, m_slow_old)\n t_all.extend(t_old + t_loc[1:])\n g_all.extend(g_loc[1:])\n m_fast_old = channel.m_fast[-1] if m_fast_old is not None else None\n m_slow_old = channel.m_slow[-1] if m_slow_old is not None else None\n t_old = t_all[-1]\n \n ctrl_res = pd.DataFrame({'times': t_all, 'g_'+channel.receptor: g_all})\n\n return nest_res, ctrl_res",
"AMPA, GABA_A, GABA_B channels",
"nest.ResetKernel()\nclass PlainChannel(SynChannel):\n def __init__(self, hp, receptor):\n self.hp = hp\n self.receptor = receptor\n self.rec_code = hp['receptor_types'][receptor]\n self.tau_1 = hp['tau_rise_'+receptor]\n self.tau_2 = hp['tau_decay_'+receptor]\n self.g_peak = hp['g_peak_'+receptor]\n self.E_rev = hp['E_rev_'+receptor]\n \n def g(self, t, V, mf0, ms0):\n return self.g_peak * self.beta(t)\n \n def I(self, t, V):\n return - self.g(t) * (V-self.E_rev)\n\nampa = PlainChannel(nest.GetDefaults('ht_neuron'), 'AMPA')\nam_n, am_c = syn_voltage_clamp(ampa, [(25, -70.)], nest_dt=0.1)\nplt.subplot(1, 2, 1);\nplt.plot(am_n.times, am_n.g_AMPA, label='NEST');\nplt.plot(am_c.times, am_c.g_AMPA, label='Control');\nplt.xlabel('Time [ms]');\nplt.ylabel('g_AMPA');\nplt.title('AMPA Channel');\nplt.subplot(1, 2, 2);\nplt.plot(am_n.times, (am_n.g_AMPA-am_c.g_AMPA)/am_c.g_AMPA);\nplt.xlabel('Time [ms]');\nplt.ylabel('Rel error');\nplt.title('AMPA rel error');",
"Looks quite good, but the error is maybe a bit larger than one would hope.\nBut the synaptic rise time is short (0.5 ms) compared to the integration step in NEST (0.1 ms), which may explain the error.\nReducing the time step reduces the error:",
"ampa = PlainChannel(nest.GetDefaults('ht_neuron'), 'AMPA')\nam_n, am_c = syn_voltage_clamp(ampa, [(25, -70.)], nest_dt=0.001)\nplt.subplot(1, 2, 1);\nplt.plot(am_n.times, am_n.g_AMPA, label='NEST');\nplt.plot(am_c.times, am_c.g_AMPA, label='Control');\nplt.xlabel('Time [ms]');\nplt.ylabel('g_AMPA');\nplt.title('AMPA Channel');\nplt.subplot(1, 2, 2);\nplt.plot(am_n.times, (am_n.g_AMPA-am_c.g_AMPA)/am_c.g_AMPA);\nplt.xlabel('Time [ms]');\nplt.ylabel('Rel error');\nplt.title('AMPA rel error');\n\ngaba_a = PlainChannel(nest.GetDefaults('ht_neuron'), 'GABA_A')\nga_n, ga_c = syn_voltage_clamp(gaba_a, [(50, -70.)])\nplt.subplot(1, 2, 1);\nplt.plot(ga_n.times, ga_n.g_GABA_A, label='NEST');\nplt.plot(ga_c.times, ga_c.g_GABA_A, label='Control');\nplt.xlabel('Time [ms]');\nplt.ylabel('g_GABA_A');\nplt.title('GABA_A Channel');\nplt.subplot(1, 2, 2);\nplt.plot(ga_n.times, (ga_n.g_GABA_A-ga_c.g_GABA_A)/ga_c.g_GABA_A);\nplt.xlabel('Time [ms]');\nplt.ylabel('Rel error');\nplt.title('GABA_A rel error');\n\ngaba_b = PlainChannel(nest.GetDefaults('ht_neuron'), 'GABA_B')\ngb_n, gb_c = syn_voltage_clamp(gaba_b, [(750, -70.)])\nplt.subplot(1, 2, 1);\nplt.plot(gb_n.times, gb_n.g_GABA_B, label='NEST');\nplt.plot(gb_c.times, gb_c.g_GABA_B, label='Control');\nplt.xlabel('Time [ms]');\nplt.ylabel('g_GABA_B');\nplt.title('GABA_B Channel');\nplt.subplot(1, 2, 2);\nplt.plot(gb_n.times, (gb_n.g_GABA_B-gb_c.g_GABA_B)/gb_c.g_GABA_B);\nplt.xlabel('Time [ms]');\nplt.ylabel('Rel error');\nplt.title('GABA_B rel error');",
"Looks good for all\nFor GABA_B the error is negligible even for dt = 0.1, since the time constants are large.\n\nNMDA Channel\nThe equations for this channel are\n\\begin{align}\n \\bar{g}{\\text{NMDA}}(t) &= m(V, t) g{\\text{NMDA}}(t) m(V, t)\\ &= a(V) m_{\\text{fast}}^(V, t) + ( 1 - a(V) ) m_{\\text{slow}}^(V, t)\\\n a(V) &= 0.51 - 0.0028 V \\\n m^{\\infty}(V) &= \\frac{1}{ 1 + \\exp\\left( -S_{\\text{act}} ( V - V_{\\text{act}} ) \\right) } \\\n m_X^*(V, t) &= \\min(m^{\\infty}(V), m_X(V, t))\\\n \\frac{\\text{d}m_X}{\\text{d}t} &= \\frac{m^{\\infty}(V) - m_X }{ \\tau_{\\text{Mg}, X}}\n\\end{align} \nwhere $g_{\\text{NMDA}}(t)$ is the beta functions as for the other channels. In case of instantaneous unblocking, $m=m^{\\infty}$.\nNMDA with instantaneous unblocking",
"class NMDAInstantChannel(SynChannel):\n def __init__(self, hp, receptor):\n self.hp = hp\n self.receptor = receptor\n self.rec_code = hp['receptor_types'][receptor]\n self.tau_1 = hp['tau_rise_'+receptor]\n self.tau_2 = hp['tau_decay_'+receptor]\n self.g_peak = hp['g_peak_'+receptor]\n self.E_rev = hp['E_rev_'+receptor]\n self.S_act = hp['S_act_NMDA']\n self.V_act = hp['V_act_NMDA']\n self.instantaneous = True\n \n def m_inf(self, V):\n return 1. / ( 1. + np.exp(-self.S_act*(V-self.V_act)))\n \n def g(self, t, V, mf0, ms0):\n return self.g_peak * self.m_inf(V) * self.beta(t)\n \n def I(self, t, V):\n return - self.g(t) * (V-self.E_rev)\n\nnmdai = NMDAInstantChannel(nest.GetDefaults('ht_neuron'), 'NMDA')\nni_n, ni_c = syn_voltage_clamp(nmdai, [(50, -60.), (50, -50.), (50, -20.), (50, 0.), (50, -60.)])\nplt.subplot(1, 2, 1);\nplt.plot(ni_n.times, ni_n.g_NMDA, label='NEST');\nplt.plot(ni_c.times, ni_c.g_NMDA, label='Control');\nplt.xlabel('Time [ms]');\nplt.ylabel('g_NMDA');\nplt.title('NMDA Channel (instant unblock)');\nplt.subplot(1, 2, 2);\nplt.plot(ni_n.times, (ni_n.g_NMDA-ni_c.g_NMDA)/ni_c.g_NMDA);\nplt.xlabel('Time [ms]');\nplt.ylabel('Rel error');\nplt.title('NMDA (inst) rel error');",
"Looks good\nJumps are due to blocking/unblocking of Mg channels with changes in $V$\n\nNMDA with unblocking over time",
"class NMDAChannel(SynChannel):\n def __init__(self, hp, receptor):\n self.hp = hp\n self.receptor = receptor\n self.rec_code = hp['receptor_types'][receptor]\n self.tau_1 = hp['tau_rise_'+receptor]\n self.tau_2 = hp['tau_decay_'+receptor]\n self.g_peak = hp['g_peak_'+receptor]\n self.E_rev = hp['E_rev_'+receptor]\n self.S_act = hp['S_act_NMDA']\n self.V_act = hp['V_act_NMDA']\n self.tau_fast = hp['tau_Mg_fast_NMDA']\n self.tau_slow = hp['tau_Mg_slow_NMDA']\n self.instantaneous = False\n \n def m_inf(self, V):\n return 1. / ( 1. + np.exp(-self.S_act*(V-self.V_act)) )\n \n def dm(self, m, t, V, tau):\n return ( self.m_inf(V) - m ) / tau\n\n def g(self, t, V, mf0, ms0):\n self.m_fast = si.odeint(self.dm, mf0, t, args=(V, self.tau_fast))\n self.m_slow = si.odeint(self.dm, ms0, t, args=(V, self.tau_slow))\n a = 0.51 - 0.0028 * V\n m_inf = self.m_inf(V)\n mfs = self.m_fast[:]\n mfs[mfs > m_inf] = m_inf\n mss = self.m_slow[:]\n mss[mss > m_inf] = m_inf\n m = np.squeeze(a * mfs + ( 1 - a ) * mss)\n return self.g_peak * m * self.beta(t)\n \n def I(self, t, V):\n raise NotImplementedError()\n\nnmda = NMDAChannel(nest.GetDefaults('ht_neuron'), 'NMDA')\nnm_n, nm_c = syn_voltage_clamp(nmda, [(50, -70.), (50, -50.), (50, -20.), (50, 0.), (50, -60.)])\nplt.subplot(1, 2, 1);\nplt.plot(nm_n.times, nm_n.g_NMDA, label='NEST');\nplt.plot(nm_c.times, nm_c.g_NMDA, label='Control');\nplt.xlabel('Time [ms]');\nplt.ylabel('g_NMDA');\nplt.title('NMDA Channel');\nplt.subplot(1, 2, 2);\nplt.plot(nm_n.times, (nm_n.g_NMDA-nm_c.g_NMDA)/nm_c.g_NMDA);\nplt.xlabel('Time [ms]');\nplt.ylabel('Rel error');\nplt.title('NMDA rel error');",
"Looks fine, too.\n\nSynapse Model\nWe test the synapse model by placing it between two parrot neurons, sending spikes with differing intervals and compare to expected weights.",
"nest.ResetKernel()\nsp = nest.GetDefaults('ht_synapse')\nP0 = sp['P']\ndP = sp['delta_P']\ntP = sp['tau_P']\nspike_times = [10., 12., 20., 20.5, 100., 200., 1000.]\nexpected = [(0., P0, P0)]\nfor idx, t in enumerate(spike_times):\n tlast, Psend, Ppost = expected[idx]\n Psend = 1 - (1-Ppost)*math.exp(-(t-tlast)/tP)\n expected.append((t, Psend, (1-dP)*Psend))\nexpected_weights = list(zip(*expected[1:]))[1]\n\nsg = nest.Create('spike_generator', params={'spike_times': spike_times})\nn = nest.Create('parrot_neuron', 2)\nwr = nest.Create('weight_recorder')\n\nnest.SetDefaults('ht_synapse', {'weight_recorder': wr, 'weight': 1.0})\nnest.Connect(sg, n[:1])\nnest.Connect(n[:1], n[1:], syn_spec='ht_synapse')\nnest.Simulate(1200)\n\nrec_weights = wr.get('events', 'weights')\n\nprint('Recorded weights:', rec_weights)\nprint('Expected weights:', expected_weights)\nprint('Difference :', np.array(rec_weights) - np.array(expected_weights))",
"Perfect agreement, synapse model looks fine.\nIntegration test: Neuron driven through all synapses\nWe drive a Hill-Tononi neuron through pulse packets arriving at 1 second intervals, impinging through all synapse types. Compare this to Fig 5 of [HT05].",
"nest.ResetKernel()\nnrn = nest.Create('ht_neuron')\nppg = nest.Create('pulsepacket_generator', n=4,\n params={'pulse_times': [700., 1700., 2700., 3700.],\n 'activity': 700, 'sdev': 50.})\npr = nest.Create('parrot_neuron', n=4)\nmm = nest.Create('multimeter', \n params={'interval': 0.1,\n 'record_from': ['V_m', 'theta',\n 'g_AMPA', 'g_NMDA',\n 'g_GABA_A', 'g_GABA_B',\n 'I_NaP', 'I_KNa', 'I_T', 'I_h']})\n\nweights = {'AMPA': 25., 'NMDA': 20., 'GABA_A': 10., 'GABA_B': 1.}\nreceptors = nest.GetDefaults('ht_neuron')['receptor_types']\n\nnest.Connect(ppg, pr, 'one_to_one')\nfor p, (rec_name, rec_wgt) in zip(pr, weights.items()):\n nest.Connect(p, nrn, syn_spec={'synapse_model': 'ht_synapse',\n 'receptor_type': receptors[rec_name],\n 'weight': rec_wgt})\nnest.Connect(mm, nrn)\n\nnest.Simulate(5000)\n\ndata = nest.GetStatus(mm)[0]['events']\nt = data['times']\ndef texify_name(name):\n return r'${}_{{\\mathrm{{{}}}}}$'.format(*name.split('_'))\n\nfig = plt.figure(figsize=(12,10))\n\nVax = fig.add_subplot(311)\nVax.plot(t, data['V_m'], 'k', lw=1, label=r'$V_m$')\nVax.plot(t, data['theta'], 'r', alpha=0.5, lw=1, label=r'$\\Theta$')\nVax.set_ylabel('Potential [mV]')\nVax.legend(fontsize='small')\nVax.set_title('ht_neuron driven by sinousiodal Poisson processes')\n\nIax = fig.add_subplot(312)\nfor iname, color in (('I_h', 'blue'), ('I_KNa', 'green'),\n ('I_NaP', 'red'), ('I_T', 'cyan')):\n Iax.plot(t, data[iname], color=color, lw=1, label=texify_name(iname))\n#Iax.set_ylim(-60, 60)\nIax.legend(fontsize='small')\nIax.set_ylabel('Current [mV]')\n\nGax = fig.add_subplot(313)\nfor gname, sgn, color in (('g_AMPA', 1, 'green'), ('g_GABA_A', -1, 'red'), \n ('g_GABA_B', -1, 'cyan'), ('g_NMDA', 1, 'magenta')):\n Gax.plot(t, sgn*data[gname], lw=1, label=texify_name(gname), color=color)\n#Gax.set_ylim(-150, 150)\nGax.legend(fontsize='small')\nGax.set_ylabel('Conductance')\nGax.set_xlabel('Time [ms]');",
"License\nThis file is part of NEST. Copyright (C) 2004 The NEST Initiative\nNEST is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.\nNEST is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
grigorisg9gr/menpo-notebooks
|
menpo/Transforms/Piecewise_Affine.ipynb
|
bsd-3-clause
|
[
"Piecewise Affine Transforms",
"import numpy as np\nfrom menpo.transform import PiecewiseAffine",
"We build a PiecewiseAffine by supplying two sets of points and a shared triangle list",
"from menpo.shape import TriMesh, PointCloud\na = np.array([[0, 0], [1, 0], [0, 1], [1, 1],\n [-0.5, -0.7], [0.8, -0.4], [0.9, -2.1]])\nb = np.array([[0,0], [2, 0], [-1, 3], [2, 6],\n [-1.0, -0.01], [1.0, -0.4], [0.8, -1.6]])\ntl = np.array([[0,2,1], [1,3,2]])\n\nsrc = TriMesh(a, tl)\nsrc_points = PointCloud(a)\ntgt = PointCloud(b)\n\npwa = PiecewiseAffine(src_points, tgt)",
"Lets make a random 5000 point PointCloud in the unit square and view it",
"%matplotlib inline\n# points_s = PointCloud(np.random.rand(10000).reshape([-1,2]))\npoints_f = PointCloud(np.random.rand(10000).reshape([-1,2]))\npoints_f.view()",
"Now lets see the effect having warped",
"t_points_f = pwa.apply(points_f);\nt_points_f.view()\n\ntest = np.array([[0.1,0.1], [0.7, 0.9], \n [0.2,0.3], [0.5, 0.6]])\n\npwa.index_alpha_beta(test)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
henchc/Data-on-the-Mind-2017-scraping-apis
|
02-Scraping/solutions/02-Selenium_solutions.ipynb
|
mit
|
[
"Webscraping with Selenium\n\nWhen the data that you want exists on a website with heavy JavaScript and requires interaction from the user, BeautifulSoup will not be enough. This is when you need a webdriver. One of the most popular webdrivers is Selenium. Selenium is commonly used in industry to automate testing of the user experience, but it can also interact with content to collect data that are difficult to get otherwise.\nThis lesson is a short introduction to the Selenium webdriver. It includes:\n\nLaunching the webdriver\nNavigating the browser\nCollecting generated data\nExporting data to CSV\n\nLet's first import the necessary Python libraries:",
"from selenium import webdriver # powers the browser interaction\nfrom selenium.webdriver.support.ui import Select # selects menu options\nfrom pyvirtualdisplay import Display # for JHub environment\nfrom bs4 import BeautifulSoup # to parse HTML\nimport csv # to write CSV\nimport pandas # to see CSV",
"Selenium actually uses our web browser, and since the JupyterHub doesn't come with Firefox, we'll download the binaries:",
"# download firefox binaries\n!wget http://ftp.mozilla.org/pub/firefox/releases/54.0/linux-x86_64/en-US/firefox-54.0.tar.bz2\n \n# untar binaries\n!tar xvjf firefox-54.0.tar.bz2",
"We also need the webdriver for Firefox that allows Selenium to interact directly with the browser through the code we write. We can download the geckodriver for Firefox from the github page:",
"# download geckodriver\n!wget https://github.com/mozilla/geckodriver/releases/download/v0.17.0/geckodriver-v0.17.0-linux64.tar.gz\n \n# untar geckdriver\n!tar xzvf geckodriver-v0.17.0-linux64.tar.gz",
"1. Launching the webdriver\nSince we are in different environment and we can't use our regular graphical desktop, we need to tell Python to start a virutal display, onto which Selenium can project the Firefox web browser (though we won't actually see it).",
"display = Display(visible=0, size=(1024, 768))\ndisplay.start()",
"Now we can initialize the Selenium web driver, giving it the path to the Firefox binary code and the driver:",
"# setup driver\ndriver = webdriver.Firefox(firefox_binary='./firefox/firefox', executable_path=\"./geckodriver\")",
"You can navigate Selenium to a URL by using the get method, exactly the same way we used the requests.get before:",
"driver.get(\"http://www.google.com\")\nprint(driver.page_source)",
"Cool, right? You can see Google in your browser now. Let's go look at some West Bengal State election results:\n2. Navigating the browser\nTo follow along as Selenium navigates the website, try opening the <a href=\"http://wbsec.gov.in/(S(eoxjutirydhdvx550untivvu))/DetailedResult/Detailed_gp_2013.aspx\">site</a> in another tab. You'll notice if you select options from the menu, it calls a script to generate a custom table. The URL doesn't change, and so we can't just call for the HTML of the page, it needs to be generated. That's where Selenium shines. It can choose these menu options and wait for the generated table before grabbing the new HTML for the data.",
"# go results page\ndriver.get(\"http://wbsec.gov.in/(S(eoxjutirydhdvx550untivvu))/DetailedResult/Detailed_gp_2013.aspx\")",
"Zilla Parishad\nSimilar to BeautifulSoup, Selenium has methods to find elements on a webpage. We can use the method find_element_by_name to find an element on the page by its name.",
"# find \"district\" drop down menu\ndistrict = driver.find_element_by_name(\"ddldistrict\")\n\ndistrict",
"Now if we want to get the different options in this drop down, we can do the same. You'll notice that each name is associated with a unique value. Since we're getting multiple elements here, we'll use find_elements_by_tag_name",
"# find options in \"disrict\" drop down\ndistrict_options = district.find_elements_by_tag_name(\"option\")\n\nprint(district_options[1].get_attribute(\"value\"))\nprint(district_options[1].text)",
"Now we'll make a dictionary associating each name with its value.",
"d_options = {option.text.strip(): option.get_attribute(\"value\") for option in district_options if option.get_attribute(\"value\").isdigit()}\nprint(d_options)",
"We can then select a district by using its name and our dictionary. First we'll make our own function using Selenium's Select, and then we'll call it on \"Bankura\".",
"district_select = Select(district)\ndistrict_select.select_by_value(d_options[\"Bankura\"])",
"You should have seen the dropdown menu select 'Bankura' by running the previous cell.\nPanchayat Samity\nWe can do the same as we did above to find the different blocks.",
"# find the \"block\" drop down\nblock = driver.find_element_by_name(\"ddlblock\")\n\n# get options\nblock_options = block.find_elements_by_tag_name(\"option\")\n\nprint(block_options[1].get_attribute(\"value\"))\nprint(block_options[1].text)\n\nb_options = {option.text.strip(): option.get_attribute(\"value\") for option in block_options if option.get_attribute(\"value\").isdigit()}\nprint(b_options)\n\npanchayat_select = Select(block)\npanchayat_select.select_by_value(b_options[\"BANKURA-I\"])",
"Great! One dropdown menu to go.\nGram Panchayat",
"# get options\ngp = driver.find_element_by_name(\"ddlgp\")\ngp_options = gp.find_elements_by_tag_name(\"option\")\n\nprint(gp_options[1].get_attribute(\"value\"))\nprint(gp_options[1].text)\n\ngp_options = {option.text.strip(): option.get_attribute(\"value\") for option in gp_options if option.get_attribute(\"value\").isdigit()}\nprint(gp_options)\n\ngram_select = Select(gp)\ngram_select.select_by_value(gp_options[\"ANCHURI\"])",
"Once we selected the last dropdown menu parameter, the website automatically generate a table below. This table could not have been called up by a URL, as you can see that the URL in the browser did not change. This is why Selenium is so helpful.\n3. Collecting generated data\nNow that the table has been rendered, it exists as HTML in our page source. If we wanted to, we could send this to BeautifulSoup using the driver.page_source method to get the text. But we can also use Selenium's parsing methods.\nFirst we'll identify it by its CSS selector, and then use the get_attribute method.",
"soup = BeautifulSoup(driver.page_source, 'html5lib')\n\n# get the html for the table\ntable = soup.select('#DataGrid1')[0]",
"First we'll get all the rows of the table using the tr selector.",
"# get list of rows\nrows = [row for row in table.select(\"tr\")]",
"But the first row is the header so we don't want that.",
"rows = rows[1:]",
"Each cell in the row corresponds to the data we want.",
"rows[0].select('td')",
"Now it's just a matter of looping through the rows and getting the information we want from each one.",
"data = []\nfor row in rows:\n d = {}\n seat_names = row.select('td')[0].find_all(\"span\")\n d['seat'] = ' '.join([x.text for x in seat_names])\n d['electors'] = row.select('td')[1].text.strip()\n d['polled'] = row.select('td')[2].text.strip()\n d['rejected'] = row.select('td')[3].text.strip()\n d['osn'] = row.select('td')[4].text.strip()\n d['candidate'] = row.select('td')[5].text.strip()\n d['party'] = row.select('td')[6].text.strip()\n d['secured'] = row.select('td')[7].text.strip()\n data.append(d)\n\nprint(data[1])",
"You'll notice that some of the information, such as total electors, is not supplied for each canddiate. This code will add that information for the candidates who don't have it.",
"i = 0\nwhile i < len(data):\n if data[i]['seat']:\n seat = data[i]['seat']\n electors = data[i]['electors']\n polled = data[i]['polled']\n rejected = data[i]['rejected']\n i = i+1\n else:\n data[i]['seat'] = seat\n data[i]['electors'] = electors\n data[i]['polled'] = polled\n data[i]['rejected'] = rejected\n i = i+1\n\ndata",
"4. Exporting data to CSV\nWe can then loop through all the combinations of the dropdown menu we want, collect the information from the generated table, and append it to the data list. Once we're done, we can write it to a CSV.",
"header = data[0].keys()\n\nwith open('WBS-table.csv', 'w') as output_file:\n dict_writer = csv.DictWriter(output_file, header)\n dict_writer.writeheader()\n dict_writer.writerows(data)\n \npandas.read_csv('WBS-table.csv')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sdiazpier/nest-simulator
|
doc/userdoc/model_details/aeif_models_implementation.ipynb
|
gpl-2.0
|
[
"NEST implementation of the aeif models\nHans Ekkehard Plesser and Tanguy Fardet, 2016-09-09\nThis notebook provides a reference solution for the Adaptive Exponential Integrate and Fire\n(AEIF) neuronal model and compares it with several numerical implementations using simpler solvers.\nIn particular this justifies the change of implementation in September 2016 to make the simulation\ncloser to the reference solution.\nPosition of the problem\nBasics\nThe equations governing the evolution of the AEIF model are\n$$\\left\\lbrace\\begin{array}{rcl}\n C_m\\dot{V} &=& -g_L(V-E_L) + g_L \\Delta_T e^{\\frac{V-V_T}{\\Delta_T}} + I_e + I_s(t) -w\\\n \\tau_s\\dot{w} &=& a(V-E_L) - w\n\\end{array}\\right.$$\nwhen $V < V_{peak}$ (threshold/spike detection).\nOnce a spike occurs, we apply the reset conditions:\n$$V = V_r \\quad \\text{and} \\quad w = w + b$$\nDivergence\nIn the AEIF model, the spike is generated by the exponential divergence. In practice, this means that just before threshold crossing (threshpassing), the argument of the exponential can become very large.\nThis can lead to numerical overflow or numerical instabilities in the solver, all the more if $V_{peak}$ is large, or if $\\Delta_T$ is small.\nTested solutions\nOld implementation (before September 2016)\nThe orginal solution was to bind the exponential argument to be smaller than 10 (ad hoc value to be close to the original implementation in BRIAN).\nAs will be shown in the notebook, this solution does not converge to the reference LSODAR solution.\nNew implementation\nThe new implementation does not bind the argument of the exponential, but the potential itself, since according to the theoretical model, $V$ should never get larger than $V_{peak}$.\nWe will show that this solution is not only closer to the reference solution in general, but also converges towards it as the timestep gets smaller.\nReference solution\nThe reference solution is implemented using the LSODAR solver which is described and compared in the following references:\n\nhttp://www.radford.edu/~thompson/RP/eventlocation.pdf (papers citing this one)\nhttp://www.sciencedirect.com/science/article/pii/S0377042712000684\nhttp://www.radford.edu/~thompson/RP/rootfinding.pdf\nhttps://computation.llnl.gov/casc/nsde/pubs/u88007.pdf\nhttp://www.cs.ucsb.edu/~cse/Files/SCE000136.pdf\nhttp://www.sciencedirect.com/science/article/pii/0377042789903348\nhttp://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.455.2976&rep=rep1&type=pdf\nhttps://theses.lib.vt.edu/theses/available/etd-12092002-105032/unrestricted/etd.pdf\n\nTechnical details and requirements\nImplementation of the functions\n\nThe old and new implementations are reproduced using Scipy and are called by the scipy_aeif function\nThe NEST implementations are not shown here, but keep in mind that for a given time resolution, they are closer to the reference result than the scipy implementation since the GSL implementation uses a RK45 adaptive solver.\nThe reference solution using LSODAR, called reference_aeif, is implemented through the assimulo package.\n\nRequirements\nTo run this notebook, you need:\n\nnumpy and scipy\nassimulo\nmatplotlib",
"# Install assimulo package in the current Jupyter kernel\nimport sys\n!{sys.executable} -m pip install assimulo\n\nimport numpy as np\nfrom scipy.integrate import odeint\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (15, 6)",
"Scipy functions mimicking the NEST code\nRight hand side functions",
"def rhs_aeif_new(y, _, p):\n '''\n New implementation bounding V < V_peak\n \n Parameters\n ----------\n y : list\n Vector containing the state variables [V, w]\n _ : unused var\n p : Params instance\n Object containing the neuronal parameters.\n \n Returns\n -------\n dv : double\n Derivative of V\n dw : double\n Derivative of w\n '''\n v = min(y[0], p.Vpeak)\n w = y[1]\n Ispike = 0.\n \n if p.DeltaT != 0.:\n Ispike = p.gL * p.DeltaT * np.exp((v-p.vT)/p.DeltaT)\n \n dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm\n dw = (p.a * (v-p.EL) - w) / p.tau_w\n \n return dv, dw\n\n\ndef rhs_aeif_old(y, _, p):\n '''\n Old implementation bounding the argument of the\n exponential function (e_arg < 10.).\n \n Parameters\n ----------\n y : list\n Vector containing the state variables [V, w]\n _ : unused var\n p : Params instance\n Object containing the neuronal parameters.\n \n Returns\n -------\n dv : double\n Derivative of V\n dw : double\n Derivative of w\n '''\n v = y[0]\n w = y[1]\n Ispike = 0.\n \n if p.DeltaT != 0.:\n e_arg = min((v-p.vT)/p.DeltaT, 10.)\n Ispike = p.gL * p.DeltaT * np.exp(e_arg)\n \n dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm\n dw = (p.a * (v-p.EL) - w) / p.tau_w\n \n return dv, dw",
"Complete model",
"def scipy_aeif(p, f, simtime, dt):\n '''\n Complete aeif model using scipy `odeint` solver.\n \n Parameters\n ----------\n p : Params instance\n Object containing the neuronal parameters.\n f : function\n Right-hand side function (either `rhs_aeif_old`\n or `rhs_aeif_new`)\n simtime : double\n Duration of the simulation (will run between\n 0 and tmax)\n dt : double\n Time increment.\n \n Returns\n -------\n t : list\n Times at which the neuronal state was evaluated.\n y : list\n State values associated to the times in `t`\n s : list\n Spike times.\n vs : list\n Values of `V` just before the spike.\n ws : list\n Values of `w` just before the spike\n fos : list\n List of dictionaries containing additional output\n information from `odeint`\n '''\n t = np.arange(0, simtime, dt) # time axis\n n = len(t) \n y = np.zeros((n, 2)) # V, w\n y[0, 0] = p.EL # Initial: (V_0, w_0) = (E_L, 5.)\n y[0, 1] = 5. # Initial: (V_0, w_0) = (E_L, 5.)\n s = [] # spike times \n vs = [] # membrane potential at spike before reset\n ws = [] # w at spike before step\n fos = [] # full output dict from odeint()\n \n # imitate NEST: update time-step by time-step\n for k in range(1, n):\n \n # solve ODE from t_k-1 to t_k\n d, fo = odeint(f, y[k-1, :], t[k-1:k+1], (p, ), full_output=True)\n y[k, :] = d[1, :]\n fos.append(fo)\n \n # check for threshold crossing\n if y[k, 0] >= p.Vpeak:\n s.append(t[k])\n vs.append(y[k, 0])\n ws.append(y[k, 1])\n \n y[k, 0] = p.Vreset # reset\n y[k, 1] += p.b # step\n \n return t, y, s, vs, ws, fos",
"LSODAR reference solution\nSetting assimulo class",
"from assimulo.solvers import LSODAR\nfrom assimulo.problem import Explicit_Problem\n\nclass Extended_Problem(Explicit_Problem):\n\n # need variables here for access\n sw0 = [ False ]\n ts_spikes = []\n ws_spikes = []\n Vs_spikes = []\n \n def __init__(self, p):\n self.p = p\n self.y0 = [self.p.EL, 5.] # V, w\n # reset variables\n self.ts_spikes = []\n self.ws_spikes = []\n self.Vs_spikes = []\n\n #The right-hand-side function (rhs)\n\n def rhs(self, t, y, sw):\n \"\"\"\n This is the function we are trying to simulate (aeif model).\n \"\"\"\n V, w = y[0], y[1]\n Ispike = 0.\n \n if self.p.DeltaT != 0.:\n Ispike = self.p.gL * self.p.DeltaT * np.exp((V-self.p.vT)/self.p.DeltaT)\n dotV = ( -self.p.gL*(V-self.p.EL) + Ispike + self.p.Ie - w ) / self.p.Cm\n dotW = ( self.p.a*(V-self.p.EL) - w ) / self.p.tau_w\n return np.array([dotV, dotW])\n\n # Sets a name to our function\n name = 'AEIF_nosyn'\n\n # The event function\n def state_events(self, t, y, sw):\n \"\"\"\n This is our function that keeps track of our events. When the sign\n of any of the events has changed, we have an event.\n \"\"\"\n event_0 = -5 if y[0] >= self.p.Vpeak else 5 # spike\n if event_0 < 0:\n if not self.ts_spikes:\n self.ts_spikes.append(t)\n self.Vs_spikes.append(y[0])\n self.ws_spikes.append(y[1])\n elif self.ts_spikes and not np.isclose(t, self.ts_spikes[-1], 0.01):\n self.ts_spikes.append(t)\n self.Vs_spikes.append(y[0])\n self.ws_spikes.append(y[1])\n return np.array([event_0])\n\n #Responsible for handling the events.\n def handle_event(self, solver, event_info):\n \"\"\"\n Event handling. This functions is called when Assimulo finds an event as\n specified by the event functions.\n \"\"\"\n ev = event_info\n event_info = event_info[0] # only look at the state events information.\n if event_info[0] > 0:\n solver.sw[0] = True\n solver.y[0] = self.p.Vreset\n solver.y[1] += self.p.b\n else:\n solver.sw[0] = False\n\n def initialize(self, solver):\n solver.h_sol=[]\n solver.nq_sol=[]\n\n def handle_result(self, solver, t, y):\n Explicit_Problem.handle_result(self, solver, t, y)\n # Extra output for algorithm analysis\n if solver.report_continuously:\n h, nq = solver.get_algorithm_data()\n solver.h_sol.extend([h])\n solver.nq_sol.extend([nq])",
"LSODAR reference model",
"def reference_aeif(p, simtime):\n '''\n Reference aeif model using LSODAR.\n \n Parameters\n ----------\n p : Params instance\n Object containing the neuronal parameters.\n f : function\n Right-hand side function (either `rhs_aeif_old`\n or `rhs_aeif_new`)\n simtime : double\n Duration of the simulation (will run between\n 0 and tmax)\n dt : double\n Time increment.\n \n Returns\n -------\n t : list\n Times at which the neuronal state was evaluated.\n y : list\n State values associated to the times in `t`\n s : list\n Spike times.\n vs : list\n Values of `V` just before the spike.\n ws : list\n Values of `w` just before the spike\n h : list\n List of the minimal time increment at each step.\n '''\n #Create an instance of the problem\n exp_mod = Extended_Problem(p) #Create the problem\n exp_sim = LSODAR(exp_mod) #Create the solver\n\n exp_sim.atol=1.e-8\n exp_sim.report_continuously = True\n exp_sim.store_event_points = True\n\n exp_sim.verbosity = 30\n\n #Simulate\n t, y = exp_sim.simulate(simtime) #Simulate 10 seconds\n \n return t, y, exp_mod.ts_spikes, exp_mod.Vs_spikes, exp_mod.ws_spikes, exp_sim.h_sol",
"Set the parameters and simulate the models\nParams (chose a dictionary)",
"# Regular spiking\naeif_param = {\n 'V_reset': -58.,\n 'V_peak': 0.0,\n 'V_th': -50.,\n 'I_e': 420.,\n 'g_L': 11.,\n 'tau_w': 300.,\n 'E_L': -70.,\n 'Delta_T': 2.,\n 'a': 3.,\n 'b': 0.,\n 'C_m': 200.,\n 'V_m': -70., #! must be equal to E_L\n 'w': 5., #! must be equal to 5.\n 'tau_syn_ex': 0.2\n}\n\n# Bursting\naeif_param2 = {\n 'V_reset': -46.,\n 'V_peak': 0.0,\n 'V_th': -50.,\n 'I_e': 500.0,\n 'g_L': 10.,\n 'tau_w': 120.,\n 'E_L': -58.,\n 'Delta_T': 2.,\n 'a': 2.,\n 'b': 100.,\n 'C_m': 200.,\n 'V_m': -58., #! must be equal to E_L\n 'w': 5., #! must be equal to 5.\n}\n\n# Close to chaos (use resolution < 0.005 and simtime = 200)\naeif_param3 = {\n 'V_reset': -48.,\n 'V_peak': 0.0,\n 'V_th': -50.,\n 'I_e': 160.,\n 'g_L': 12.,\n 'tau_w': 130.,\n 'E_L': -60.,\n 'Delta_T': 2.,\n 'a': -11.,\n 'b': 30.,\n 'C_m': 100.,\n 'V_m': -60., #! must be equal to E_L\n 'w': 5., #! must be equal to 5.\n}\n\nclass Params(object):\n '''\n Class giving access to the neuronal\n parameters.\n '''\n def __init__(self):\n self.params = aeif_param\n self.Vpeak = aeif_param[\"V_peak\"]\n self.Vreset = aeif_param[\"V_reset\"]\n self.gL = aeif_param[\"g_L\"]\n self.Cm = aeif_param[\"C_m\"]\n self.EL = aeif_param[\"E_L\"]\n self.DeltaT = aeif_param[\"Delta_T\"]\n self.tau_w = aeif_param[\"tau_w\"]\n self.a = aeif_param[\"a\"]\n self.b = aeif_param[\"b\"]\n self.vT = aeif_param[\"V_th\"]\n self.Ie = aeif_param[\"I_e\"]\n \np = Params()",
"Simulate the 3 implementations",
"# Parameters of the simulation\nsimtime = 100.\nresolution = 0.01\n\nt_old, y_old, s_old, vs_old, ws_old, fo_old = scipy_aeif(p, rhs_aeif_old, simtime, resolution)\nt_new, y_new, s_new, vs_new, ws_new, fo_new = scipy_aeif(p, rhs_aeif_new, simtime, resolution)\nt_ref, y_ref, s_ref, vs_ref, ws_ref, h_ref = reference_aeif(p, simtime)",
"Plot the results\nZoom out",
"fig, ax = plt.subplots()\nax2 = ax.twinx()\n\n# Plot the potentials\nax.plot(t_ref, y_ref[:,0], linestyle=\"-\", label=\"V ref.\")\nax.plot(t_old, y_old[:,0], linestyle=\"-.\", label=\"V old\")\nax.plot(t_new, y_new[:,0], linestyle=\"--\", label=\"V new\")\n\n# Plot the adaptation variables\nax2.plot(t_ref, y_ref[:,1], linestyle=\"-\", c=\"k\", label=\"w ref.\")\nax2.plot(t_old, y_old[:,1], linestyle=\"-.\", c=\"m\", label=\"w old\")\nax2.plot(t_new, y_new[:,1], linestyle=\"--\", c=\"y\", label=\"w new\")\n\n# Show\nax.set_xlim([0., simtime])\nax.set_ylim([-65., 40.])\nax.set_xlabel(\"Time (ms)\")\nax.set_ylabel(\"V (mV)\")\nax2.set_ylim([-20., 20.])\nax2.set_ylabel(\"w (pA)\")\nax.legend(loc=6)\nax2.legend(loc=2)\nplt.show()",
"Zoom in",
"fig, ax = plt.subplots()\nax2 = ax.twinx()\n\n# Plot the potentials\nax.plot(t_ref, y_ref[:,0], linestyle=\"-\", label=\"V ref.\")\nax.plot(t_old, y_old[:,0], linestyle=\"-.\", label=\"V old\")\nax.plot(t_new, y_new[:,0], linestyle=\"--\", label=\"V new\")\n\n# Plot the adaptation variables\nax2.plot(t_ref, y_ref[:,1], linestyle=\"-\", c=\"k\", label=\"w ref.\")\nax2.plot(t_old, y_old[:,1], linestyle=\"-.\", c=\"y\", label=\"w old\")\nax2.plot(t_new, y_new[:,1], linestyle=\"--\", c=\"m\", label=\"w new\")\n\nax.set_xlim([90., 92.])\nax.set_ylim([-65., 40.])\nax.set_xlabel(\"Time (ms)\")\nax.set_ylabel(\"V (mV)\")\nax2.set_ylim([17.5, 18.5])\nax2.set_ylabel(\"w (pA)\")\nax.legend(loc=5)\nax2.legend(loc=2)\nplt.show()",
"Compare properties at spike times",
"print(\"spike times:\\n-----------\")\nprint(\"ref\", np.around(s_ref, 3)) # ref lsodar\nprint(\"old\", np.around(s_old, 3))\nprint(\"new\", np.around(s_new, 3))\n\nprint(\"\\nV at spike time:\\n---------------\")\nprint(\"ref\", np.around(vs_ref, 3)) # ref lsodar\nprint(\"old\", np.around(vs_old, 3))\nprint(\"new\", np.around(vs_new, 3))\n\nprint(\"\\nw at spike time:\\n---------------\")\nprint(\"ref\", np.around(ws_ref, 3)) # ref lsodar\nprint(\"old\", np.around(ws_old, 3))\nprint(\"new\", np.around(ws_new, 3))",
"Size of minimal integration timestep",
"plt.semilogy(t_ref, h_ref, label='Reference')\nplt.semilogy(t_old[1:], [d['hu'] for d in fo_old], linewidth=2, label='Old')\nplt.semilogy(t_new[1:], [d['hu'] for d in fo_new], label='New')\n\nplt.legend(loc=6)\nplt.show();",
"Convergence towards LSODAR reference with step size\nZoom out",
"plt.plot(t_ref, y_ref[:,0], label=\"V ref.\")\nresolutions = (0.1, 0.01, 0.001)\ndi_res = {}\n\nfor resolution in resolutions:\n t_old, y_old, _, _, _, _ = scipy_aeif(p, rhs_aeif_old, simtime, resolution)\n t_new, y_new, _, _, _, _ = scipy_aeif(p, rhs_aeif_new, simtime, resolution)\n di_res[resolution] = (t_old, y_old, t_new, y_new)\n plt.plot(t_old, y_old[:,0], linestyle=\":\", label=\"V old, r={}\".format(resolution))\n plt.plot(t_new, y_new[:,0], linestyle=\"--\", linewidth=1.5, label=\"V new, r={}\".format(resolution))\nplt.xlim(0., simtime)\nplt.xlabel(\"Time (ms)\")\nplt.ylabel(\"V (mV)\")\nplt.legend(loc=2)\nplt.show();",
"Zoom in",
"plt.plot(t_ref, y_ref[:,0], label=\"V ref.\")\nfor resolution in resolutions:\n t_old, y_old = di_res[resolution][:2]\n t_new, y_new = di_res[resolution][2:]\n plt.plot(t_old, y_old[:,0], linestyle=\"--\", label=\"V old, r={}\".format(resolution))\n plt.plot(t_new, y_new[:,0], linestyle=\"-.\", linewidth=2., label=\"V new, r={}\".format(resolution))\nplt.xlim(90., 92.)\nplt.ylim([-62., 2.])\nplt.xlabel(\"Time (ms)\")\nplt.ylabel(\"V (mV)\")\nplt.legend(loc=2)\nplt.show();",
"License\nThis file is part of NEST. Copyright (C) 2004 The NEST Initiative\nNEST is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.\nNEST is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs
|
site/en/tutorials/generative/cyclegan.ipynb
|
apache-2.0
|
[
"Copyright 2019 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"CycleGAN\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/generative/cyclegan\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/cyclegan.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/cyclegan.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/generative/cyclegan.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nThis notebook demonstrates unpaired image to image translation using conditional GAN's, as described in Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, also known as CycleGAN. The paper proposes a method that can capture the characteristics of one image domain and figure out how these characteristics could be translated into another image domain, all in the absence of any paired training examples. \nThis notebook assumes you are familiar with Pix2Pix, which you can learn about in the Pix2Pix tutorial. The code for CycleGAN is similar, the main difference is an additional loss function, and the use of unpaired training data.\nCycleGAN uses a cycle consistency loss to enable training without the need for paired data. In other words, it can translate from one domain to another without a one-to-one mapping between the source and target domain. \nThis opens up the possibility to do a lot of interesting tasks like photo-enhancement, image colorization, style transfer, etc. All you need is the source and the target dataset (which is simply a directory of images).\n\n\nSet up the input pipeline\nInstall the tensorflow_examples package that enables importing of the generator and the discriminator.",
"!pip install git+https://github.com/tensorflow/examples.git\n\nimport tensorflow as tf\n\nimport tensorflow_datasets as tfds\nfrom tensorflow_examples.models.pix2pix import pix2pix\n\nimport os\nimport time\nimport matplotlib.pyplot as plt\nfrom IPython.display import clear_output\n\nAUTOTUNE = tf.data.AUTOTUNE",
"Input Pipeline\nThis tutorial trains a model to translate from images of horses, to images of zebras. You can find this dataset and similar ones here. \nAs mentioned in the paper, apply random jittering and mirroring to the training dataset. These are some of the image augmentation techniques that avoids overfitting.\nThis is similar to what was done in pix2pix\n\nIn random jittering, the image is resized to 286 x 286 and then randomly cropped to 256 x 256.\nIn random mirroring, the image is randomly flipped horizontally i.e. left to right.",
"dataset, metadata = tfds.load('cycle_gan/horse2zebra',\n with_info=True, as_supervised=True)\n\ntrain_horses, train_zebras = dataset['trainA'], dataset['trainB']\ntest_horses, test_zebras = dataset['testA'], dataset['testB']\n\nBUFFER_SIZE = 1000\nBATCH_SIZE = 1\nIMG_WIDTH = 256\nIMG_HEIGHT = 256\n\ndef random_crop(image):\n cropped_image = tf.image.random_crop(\n image, size=[IMG_HEIGHT, IMG_WIDTH, 3])\n\n return cropped_image\n\n# normalizing the images to [-1, 1]\ndef normalize(image):\n image = tf.cast(image, tf.float32)\n image = (image / 127.5) - 1\n return image\n\ndef random_jitter(image):\n # resizing to 286 x 286 x 3\n image = tf.image.resize(image, [286, 286],\n method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)\n\n # randomly cropping to 256 x 256 x 3\n image = random_crop(image)\n\n # random mirroring\n image = tf.image.random_flip_left_right(image)\n\n return image\n\ndef preprocess_image_train(image, label):\n image = random_jitter(image)\n image = normalize(image)\n return image\n\ndef preprocess_image_test(image, label):\n image = normalize(image)\n return image\n\ntrain_horses = train_horses.cache().map(\n preprocess_image_train, num_parallel_calls=AUTOTUNE).shuffle(\n BUFFER_SIZE).batch(BATCH_SIZE)\n\ntrain_zebras = train_zebras.cache().map(\n preprocess_image_train, num_parallel_calls=AUTOTUNE).shuffle(\n BUFFER_SIZE).batch(BATCH_SIZE)\n\ntest_horses = test_horses.map(\n preprocess_image_test, num_parallel_calls=AUTOTUNE).cache().shuffle(\n BUFFER_SIZE).batch(BATCH_SIZE)\n\ntest_zebras = test_zebras.map(\n preprocess_image_test, num_parallel_calls=AUTOTUNE).cache().shuffle(\n BUFFER_SIZE).batch(BATCH_SIZE)\n\nsample_horse = next(iter(train_horses))\nsample_zebra = next(iter(train_zebras))\n\nplt.subplot(121)\nplt.title('Horse')\nplt.imshow(sample_horse[0] * 0.5 + 0.5)\n\nplt.subplot(122)\nplt.title('Horse with random jitter')\nplt.imshow(random_jitter(sample_horse[0]) * 0.5 + 0.5)\n\nplt.subplot(121)\nplt.title('Zebra')\nplt.imshow(sample_zebra[0] * 0.5 + 0.5)\n\nplt.subplot(122)\nplt.title('Zebra with random jitter')\nplt.imshow(random_jitter(sample_zebra[0]) * 0.5 + 0.5)",
"Import and reuse the Pix2Pix models\nImport the generator and the discriminator used in Pix2Pix via the installed tensorflow_examples package.\nThe model architecture used in this tutorial is very similar to what was used in pix2pix. Some of the differences are:\n\nCyclegan uses instance normalization instead of batch normalization.\nThe CycleGAN paper uses a modified resnet based generator. This tutorial is using a modified unet generator for simplicity.\n\nThere are 2 generators (G and F) and 2 discriminators (X and Y) being trained here. \n\nGenerator G learns to transform image X to image Y. $(G: X -> Y)$\nGenerator F learns to transform image Y to image X. $(F: Y -> X)$\nDiscriminator D_X learns to differentiate between image X and generated image X (F(Y)).\nDiscriminator D_Y learns to differentiate between image Y and generated image Y (G(X)).",
"OUTPUT_CHANNELS = 3\n\ngenerator_g = pix2pix.unet_generator(OUTPUT_CHANNELS, norm_type='instancenorm')\ngenerator_f = pix2pix.unet_generator(OUTPUT_CHANNELS, norm_type='instancenorm')\n\ndiscriminator_x = pix2pix.discriminator(norm_type='instancenorm', target=False)\ndiscriminator_y = pix2pix.discriminator(norm_type='instancenorm', target=False)\n\nto_zebra = generator_g(sample_horse)\nto_horse = generator_f(sample_zebra)\nplt.figure(figsize=(8, 8))\ncontrast = 8\n\nimgs = [sample_horse, to_zebra, sample_zebra, to_horse]\ntitle = ['Horse', 'To Zebra', 'Zebra', 'To Horse']\n\nfor i in range(len(imgs)):\n plt.subplot(2, 2, i+1)\n plt.title(title[i])\n if i % 2 == 0:\n plt.imshow(imgs[i][0] * 0.5 + 0.5)\n else:\n plt.imshow(imgs[i][0] * 0.5 * contrast + 0.5)\nplt.show()\n\nplt.figure(figsize=(8, 8))\n\nplt.subplot(121)\nplt.title('Is a real zebra?')\nplt.imshow(discriminator_y(sample_zebra)[0, ..., -1], cmap='RdBu_r')\n\nplt.subplot(122)\nplt.title('Is a real horse?')\nplt.imshow(discriminator_x(sample_horse)[0, ..., -1], cmap='RdBu_r')\n\nplt.show()",
"Loss functions\nIn CycleGAN, there is no paired data to train on, hence there is no guarantee that the input x and the target y pair are meaningful during training. Thus in order to enforce that the network learns the correct mapping, the authors propose the cycle consistency loss.\nThe discriminator loss and the generator loss are similar to the ones used in pix2pix.",
"LAMBDA = 10\n\nloss_obj = tf.keras.losses.BinaryCrossentropy(from_logits=True)\n\ndef discriminator_loss(real, generated):\n real_loss = loss_obj(tf.ones_like(real), real)\n\n generated_loss = loss_obj(tf.zeros_like(generated), generated)\n\n total_disc_loss = real_loss + generated_loss\n\n return total_disc_loss * 0.5\n\ndef generator_loss(generated):\n return loss_obj(tf.ones_like(generated), generated)",
"Cycle consistency means the result should be close to the original input. For example, if one translates a sentence from English to French, and then translates it back from French to English, then the resulting sentence should be the same as the original sentence.\nIn cycle consistency loss, \n\nImage $X$ is passed via generator $G$ that yields generated image $\\hat{Y}$.\nGenerated image $\\hat{Y}$ is passed via generator $F$ that yields cycled image $\\hat{X}$.\nMean absolute error is calculated between $X$ and $\\hat{X}$.\n\n$$forward\\ cycle\\ consistency\\ loss: X -> G(X) -> F(G(X)) \\sim \\hat{X}$$\n$$backward\\ cycle\\ consistency\\ loss: Y -> F(Y) -> G(F(Y)) \\sim \\hat{Y}$$",
"def calc_cycle_loss(real_image, cycled_image):\n loss1 = tf.reduce_mean(tf.abs(real_image - cycled_image))\n \n return LAMBDA * loss1",
"As shown above, generator $G$ is responsible for translating image $X$ to image $Y$. Identity loss says that, if you fed image $Y$ to generator $G$, it should yield the real image $Y$ or something close to image $Y$.\nIf you run the zebra-to-horse model on a horse or the horse-to-zebra model on a zebra, it should not modify the image much since the image already contains the target class.\n$$Identity\\ loss = |G(Y) - Y| + |F(X) - X|$$",
"def identity_loss(real_image, same_image):\n loss = tf.reduce_mean(tf.abs(real_image - same_image))\n return LAMBDA * 0.5 * loss",
"Initialize the optimizers for all the generators and the discriminators.",
"generator_g_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)\ngenerator_f_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)\n\ndiscriminator_x_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)\ndiscriminator_y_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)",
"Checkpoints",
"checkpoint_path = \"./checkpoints/train\"\n\nckpt = tf.train.Checkpoint(generator_g=generator_g,\n generator_f=generator_f,\n discriminator_x=discriminator_x,\n discriminator_y=discriminator_y,\n generator_g_optimizer=generator_g_optimizer,\n generator_f_optimizer=generator_f_optimizer,\n discriminator_x_optimizer=discriminator_x_optimizer,\n discriminator_y_optimizer=discriminator_y_optimizer)\n\nckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)\n\n# if a checkpoint exists, restore the latest checkpoint.\nif ckpt_manager.latest_checkpoint:\n ckpt.restore(ckpt_manager.latest_checkpoint)\n print ('Latest checkpoint restored!!')",
"Training\nNote: This example model is trained for fewer epochs (40) than the paper (200) to keep training time reasonable for this tutorial. Predictions may be less accurate.",
"EPOCHS = 40\n\ndef generate_images(model, test_input):\n prediction = model(test_input)\n \n plt.figure(figsize=(12, 12))\n\n display_list = [test_input[0], prediction[0]]\n title = ['Input Image', 'Predicted Image']\n\n for i in range(2):\n plt.subplot(1, 2, i+1)\n plt.title(title[i])\n # getting the pixel values between [0, 1] to plot it.\n plt.imshow(display_list[i] * 0.5 + 0.5)\n plt.axis('off')\n plt.show()",
"Even though the training loop looks complicated, it consists of four basic steps:\n\nGet the predictions.\nCalculate the loss.\nCalculate the gradients using backpropagation.\nApply the gradients to the optimizer.",
"@tf.function\ndef train_step(real_x, real_y):\n # persistent is set to True because the tape is used more than\n # once to calculate the gradients.\n with tf.GradientTape(persistent=True) as tape:\n # Generator G translates X -> Y\n # Generator F translates Y -> X.\n \n fake_y = generator_g(real_x, training=True)\n cycled_x = generator_f(fake_y, training=True)\n\n fake_x = generator_f(real_y, training=True)\n cycled_y = generator_g(fake_x, training=True)\n\n # same_x and same_y are used for identity loss.\n same_x = generator_f(real_x, training=True)\n same_y = generator_g(real_y, training=True)\n\n disc_real_x = discriminator_x(real_x, training=True)\n disc_real_y = discriminator_y(real_y, training=True)\n\n disc_fake_x = discriminator_x(fake_x, training=True)\n disc_fake_y = discriminator_y(fake_y, training=True)\n\n # calculate the loss\n gen_g_loss = generator_loss(disc_fake_y)\n gen_f_loss = generator_loss(disc_fake_x)\n \n total_cycle_loss = calc_cycle_loss(real_x, cycled_x) + calc_cycle_loss(real_y, cycled_y)\n \n # Total generator loss = adversarial loss + cycle loss\n total_gen_g_loss = gen_g_loss + total_cycle_loss + identity_loss(real_y, same_y)\n total_gen_f_loss = gen_f_loss + total_cycle_loss + identity_loss(real_x, same_x)\n\n disc_x_loss = discriminator_loss(disc_real_x, disc_fake_x)\n disc_y_loss = discriminator_loss(disc_real_y, disc_fake_y)\n \n # Calculate the gradients for generator and discriminator\n generator_g_gradients = tape.gradient(total_gen_g_loss, \n generator_g.trainable_variables)\n generator_f_gradients = tape.gradient(total_gen_f_loss, \n generator_f.trainable_variables)\n \n discriminator_x_gradients = tape.gradient(disc_x_loss, \n discriminator_x.trainable_variables)\n discriminator_y_gradients = tape.gradient(disc_y_loss, \n discriminator_y.trainable_variables)\n \n # Apply the gradients to the optimizer\n generator_g_optimizer.apply_gradients(zip(generator_g_gradients, \n generator_g.trainable_variables))\n\n generator_f_optimizer.apply_gradients(zip(generator_f_gradients, \n generator_f.trainable_variables))\n \n discriminator_x_optimizer.apply_gradients(zip(discriminator_x_gradients,\n discriminator_x.trainable_variables))\n \n discriminator_y_optimizer.apply_gradients(zip(discriminator_y_gradients,\n discriminator_y.trainable_variables))\n\nfor epoch in range(EPOCHS):\n start = time.time()\n\n n = 0\n for image_x, image_y in tf.data.Dataset.zip((train_horses, train_zebras)):\n train_step(image_x, image_y)\n if n % 10 == 0:\n print ('.', end='')\n n += 1\n\n clear_output(wait=True)\n # Using a consistent image (sample_horse) so that the progress of the model\n # is clearly visible.\n generate_images(generator_g, sample_horse)\n\n if (epoch + 1) % 5 == 0:\n ckpt_save_path = ckpt_manager.save()\n print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,\n ckpt_save_path))\n\n print ('Time taken for epoch {} is {} sec\\n'.format(epoch + 1,\n time.time()-start))",
"Generate using test dataset",
"# Run the trained model on the test dataset\nfor inp in test_horses.take(5):\n generate_images(generator_g, inp)",
"Next steps\nThis tutorial has shown how to implement CycleGAN starting from the generator and discriminator implemented in the Pix2Pix tutorial. As a next step, you could try using a different dataset from TensorFlow Datasets. \nYou could also train for a larger number of epochs to improve the results, or you could implement the modified ResNet generator used in the paper instead of the U-Net generator used here."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/cmcc/cmip6/models/sandbox-2/ocnbgchem.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: CMCC\nSource ID: SANDBOX-2\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:50\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-2', 'ocnbgchem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\n3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\n4. Key Properties --> Transport Scheme\n5. Key Properties --> Boundary Forcing\n6. Key Properties --> Gas Exchange\n7. Key Properties --> Carbon Chemistry\n8. Tracers\n9. Tracers --> Ecosystem\n10. Tracers --> Ecosystem --> Phytoplankton\n11. Tracers --> Ecosystem --> Zooplankton\n12. Tracers --> Disolved Organic Matter\n13. Tracers --> Particules\n14. Tracers --> Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Elemental Stoichiometry\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n",
"1.5. Elemental Stoichiometry Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.7. Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Damping\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for passive tracers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"2.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for passive tracers (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for biology sources and sinks",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"3.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transport scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"4.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTransport scheme used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4.3. Use Different Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how atmospheric deposition is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n",
"5.2. River Input\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how river input is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n",
"5.3. Sediments From Boundary Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Sediments From Explicit Model\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from explicit sediment model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.2. CO2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe CO2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.3. O2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs O2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.4. O2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe O2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. DMS Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs DMS gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.6. DMS Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify DMS gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.7. N2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.8. N2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.9. N2O Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2O gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.10. N2O Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2O gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.11. CFC11 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC11 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.12. CFC11 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.13. CFC12 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC12 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.14. CFC12 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.15. SF6 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs SF6 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.16. SF6 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify SF6 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.17. 13CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.18. 13CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.19. 14CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.20. 14CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.21. Other Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any other gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how carbon chemistry is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n",
"7.2. PH Scale\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.3. Constants If Not OMIP\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Sulfur Cycle Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sulfur cycle modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Nutrients Present\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Nitrous Species If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous species.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.5. Nitrous Processes If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous processes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Tracers --> Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Upper Trophic Levels Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefine how upper trophic level are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Tracers --> Ecosystem --> Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of phytoplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n",
"10.2. Pft\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Tracers --> Ecosystem --> Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of zooplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nZooplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Tracers --> Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there bacteria representation ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Lability\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Tracers --> Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Types If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Size If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n",
"13.4. Size If Discrete\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.5. Sinking Speed If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Tracers --> Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n",
"14.2. Abiotic Carbon\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs abiotic carbon modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.3. Alkalinity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is alkalinity modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ghvn7777/ghvn7777.github.io
|
content/fluent_python/1_1_python_card.ipynb
|
apache-2.0
|
[
"__getitem__ AND __len__ 方法\n下面看一个生成扑克牌以及对其进行操作的例子",
"import collections\n\nCard = collections.namedtuple('Card', ['rank', 'suit']) #'Card' 是 namedtuple 名字, 后面是元素\n\nclass FrenchDeck:\n ranks = [str(n) for n in range(2, 11)] + list('JQKA')\n suits = 'spades diamonds clubs hearts'.split() # 黑桃 钻石 梅花 红心\n \n def __init__(self):\n self._cards = [Card(rank, suit) for suit in self.suits\n for rank in self.ranks]\n \n def __len__(self):\n return len(self._cards)\n \n def __getitem__(self, position): #代表了 self._cards 的 []运算符\n return self._cards[position]\n\n\nbeer_card = Card('7', 'diamonds')\nbeer_card\n\ndeck = FrenchDeck()\nlen(deck) #默认调用了 __len__()\n\ndeck[0] #调用了 __getitem__()\n\ndeck[-1]\n\nfrom random import choice\nchoice(deck)\n\ndeck[:3] #因为 __getitem__() 方法把 [] 操作交给 slef._cards 列表,我们的 desk 自动支持切片操作\n\ndeck[12::13]",
"迭代",
"# 我们只要写了 __getitem__() 方法,就可以将类变成可迭代的\nfor card in deck[:10]:\n print(card)\n\n# 也可以反向迭代\nfor card in reversed(deck):\n print(card)",
"in 运算符\n迭代通常是隐式的,如果集合中没有 __contains__ 方法, in 操作符就会按顺序进行一次迭代搜索,于是 in 可以在 FrenchDeck 类中使用,因为它是可迭代的",
"Card('Q', 'hearts') in deck\n\nCard('Q', 'beasts') in deck",
"排序\n扑克牌一般按照数字大小( A 最大)来排列,花色按照黑桃(最大),红心,方块,梅花(最小)来排序,我们实现一下此功能",
"suit_values = dict(spades = 3, hearts = 2, diamonds = 1, clubs = 0)\nsuit_values\n\ndef spades_high(card):\n rank_value = FrenchDeck.ranks.index(card.rank)\n return rank_value * len(suit_values) + suit_values[card.suit]\n\nfor card in sorted(deck, key=spades_high):\n print(card)",
"现在还无法洗牌,因为牌的顺序是不可变的,除非我们破坏这个类的封装性,直接对 _cards 进行操作,在以后我们会讲到,其实只需要一行代码实现 __setitem__ 方法就可以了"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
geektoni/shogun
|
doc/ipython-notebooks/ica/bss_image.ipynb
|
bsd-3-clause
|
[
"Blind Source Separation on Images with Shogun\nby Kevin Hughes\nThis notebook illustrates <a href=\"http://en.wikipedia.org/wiki/Blind_signal_separation\">Blind Source Seperation</a>(BSS) on images using <a href=\"http://en.wikipedia.org/wiki/Independent_component_analysis\">Independent Component Analysis</a> (ICA) in Shogun. This is very similar to the <a href=\"http://www.shogun-toolbox.org/static/notebook/current/bss_audio.html\">BSS audio notebook</a> except that here we have used images instead of audio signals.\nThe first step is to load 2 images from the Shogun data repository:",
"# change to the shogun-data directory\nimport os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\nos.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))\n\nfrom PIL import Image\nimport numpy as np\n\n# Load Images as grayscale images and convert to numpy arrays\ns1 = np.asarray(Image.open(\"lena.jpg\").convert('L'))\ns2 = np.asarray(Image.open(\"monalisa.jpg\").convert('L'))\n\n# Save Image Dimensions\n# we'll need these later for reshaping the images\nrows = s1.shape[0]\ncols = s1.shape[1]",
"Displaying the images using pylab:",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# Show Images\nf,(ax1,ax2) = plt.subplots(1,2)\nax1.imshow(s1, cmap=plt.gray()) # set the color map to gray, only needs to be done once!\nax2.imshow(s2)",
"In our previous ICA examples the input data or source signals were already 1D but these images are obviously 2D. One common way to handle this case is to simply \"flatten\" the 2D image matrix into a 1D row vector. The same idea can also be applied to 3D data, for example a 3 channel RGB image can be converted a row vector by reshaping each 2D channel into a row vector and then placing them after each other length wise.\nLets prep the data:",
"# Convert Images to row vectors\n# and stack into a Data Matrix\nS = np.c_[s1.flatten(), s2.flatten()].T",
"It is pretty easy using a nice library like numpy.\nNext we need to mix our source signals together. We do this exactly the same way we handled the audio data - take a look!",
"# Mixing Matrix\nA = np.array([[1, 0.5], [0.5, 1]])\n\n# Mix Signals\nX = np.dot(A,S)\n\n# Show Images\nf,(ax1,ax2) = plt.subplots(1,2)\nax1.imshow(X[0,:].reshape(rows,cols))\nax2.imshow(X[1,:].reshape(rows,cols))",
"Notice how we had to reshape from a 1D row vector back into a 2D matrix of the correct shape. There is also another nuance that I would like to mention here: pylab is actually doing quite a lot for us here that you might not be aware of. It does a pretty good job determining the value range of the image to be shown and then it applies the color map. Many other libraries (for example OpenCV's highgui) won't be this helpful and you'll need to remember to scale the image appropriately on your own before trying to display it. \nNow onto the exciting step, unmixing the images using ICA! Again this step is the same as when using Audio data. Again we need to reshape the images before viewing them and an additional nuance was to add the *-1 to the first separated signal. I did this after viewing the result the first time as the image was clearly inversed, this can happen because ICA can't necessarily capture the correct phase.",
"import shogun as sg\n\nmixed_signals = sg.create_features(X)\n\n# Separating\njade = sg.create_transformer('Jade')\njade.fit(mixed_signals)\nsignals = jade.transform(mixed_signals)\nS_ = signals.get('feature_matrix')\n\n# Show Images\nf,(ax1,ax2) = plt.subplots(1,2)\nax1.imshow(S_[0,:].reshape(rows,cols) *-1)\nax2.imshow(S_[1,:].reshape(rows,cols))",
"And that's all there is to it!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kimkipyo/dss_git_kkp
|
통계, 머신러닝 복습/160502월_1일차_분석 환경, 소개/13.pandas 패키지의 소개.ipynb
|
mit
|
[
"pandas 패키지의 소개\npandas 패키지\n\n\nIndex를 가진 자료형인 R의 data.frame 자료형을 Python에서 구현\n\n\n참고 자료\n\nhttp://pandas.pydata.org/\nhttp://pandas.pydata.org/pandas-docs/stable/10min.html\nhttp://pandas.pydata.org/pandas-docs/stable/tutorials.html\n\npandas 자료형\n\nSeries\n시계열 데이터\n\nIndex를 가지는 1차원 NumPy Array\n\n\nDataFrame\n\n복수 필드 시계열 데이터 또는 테이블 데이터\n\nIndex를 가지는 2차원 NumPy Array\n\n\nIndex\n\nLabel: 각각의 Row/Column에 대한 이름 \nName: 인덱스 자체에 대한 이름\n\n<img src=\"https://docs.google.com/drawings/d/12FKb94RlpNp7hZNndpnLxmdMJn3FoLfGwkUAh33OmOw/pub?w=602&h=446\" style=\"width:60%; margin:0 auto 0 auto;\">\nSeries\n\n\nRow Index를 가지는 자료열\n\n\n생성\n\n추가/삭제\nIndexing\n\n명시적인 Index를 가지지 않는 Series",
"s = pd.Series([4, 7, -5, 3])\ns\n\ns.values\n\ntype(s.values)\n\ns.index\n\ntype(s.index)",
"Vectorized Operation",
"s * 2\n\nnp.exp(s)",
"명시적인 Index를 가지는 Series\n\n생성시 index 인수로 Index 지정\nIndex 원소는 각 데이터에 대한 key 역할을 하는 Label\ndict",
"s2 = pd.Series([4, 7, -5, 3], index=[\"d\", \"b\", \"a\", \"c\"])\ns2\n\ns2.index",
"Series Indexing 1: Label Indexing\n\nSingle Label\nLabel Slicing\n마지막 원소 포함\nLabel을 원소로 가지는 Label (Label을 사용한 List Fancy Indexing)\n주어진 순서대로 재배열",
"s2['a']\n\ns2['b':'c']\n\ns2[[\"a\", \"b\"]]",
"Series Indexing 2: Integer Indexing\n\nSingle Integer\nInteger Slicing\n마지막 원소를 포함하지 않는 일반적인 Slicing\nInteger List Indexing (List Fancy Indexing)\nBoolearn Fancy Indexing",
"s2[2]\n\ns2[1:4]\n\ns2[[2, 1]]\n\ns2[s2 > 0]",
"dict 연산",
"\"a\" in s2, \"e\" in s2\n\nfor i, j in s2.iteritems():\n print(i, j)\n\ns2[\"d\":\"a\"]",
"dict 데이터를 이용한 Series 생성\n\n별도의 index를 지정하면 지정한 자료만으로 생성",
"sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000}\ns3 = pd.Series(sdata)\ns3\n\nstates = ['Califonia', 'Ohio', 'Oregon', 'Texas']\ns4 = pd.Series(sdata, index=states)\ns4\n\npd.isnull(s)\n\npd.notnull(s4)\n\ns4.isnull()\n\ns4.notnull()",
"Index 기준 연산",
"print(s3.values, s4.values)\ns3.values + s4.values\n\ns3 + s4 #Utah가 NaN인 것을 보아하니 값이 둘 다 있을 때만 연산이 되고 하나라도 없으면 NaN으로 처리되나보네",
"Index 이름",
"s4\n\ns4.name = \"population\"\ns4\n\ns4.index.name = \"state\"\ns4",
"Index 변경",
"s\n\ns.index\n\ns.index = ['Bob', 'Steve', 'Jeff', 'Ryan']\ns\n\ns.index",
"DataFrame\n\nMulti-Series\n동일한 Row 인덱스를 사용하는 복수 Series\n\nSeries를 value로 가지는 dict\n\n\n2차원 행렬\n\n\nDataFrame을 행렬로 생각하면 각 Series는 행렬의 Column의 역할\n\n\nNumPy Array와 차이점 \n\n\n각 Column(Series)마다 type이 달라도 된다.\n\n\nColumn Index\n\n(Row) Index와 Column Index를 가진다.\n각 Column(Series)에 Label 지정 가능\n(Row) Index와 Column Label을 동시에 사용하여 자료 접근 가능",
"data = {\n 'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],\n 'year': [2001, 2001, 2002, 2001, 2002],\n 'pop': [1.5, 1.7, 3.6, 2.4, 2.9]\n}\ndf = pd.DataFrame(data)\ndf\n\npd.DataFrame(data, columns=['year', 'state', 'pop'])\n\ndf.dtypes",
"명시적인 Column/Row Index를 가지는 DataFrame",
"df2 = pd.DataFrame(data,\n columns=['year', 'state', 'pop', 'debt'],\n index=['one', 'two', 'three', 'four', 'five'])\ndf2",
"Single Column Access",
"df[\"state\"]\n\ntype(df[\"state\"]), type([df[\"state\"]])\n\n[df[\"state\"]]\n\ndf.state",
"Cloumn Data Update",
"df2['debt'] = 16.5, 16.2, 16.3, 16.7, 16.2\ndf2\n\ndf2['debt'] = 16.5\ndf2\n\ndf2['debt'] = np.arange(5)\ndf2\n\ndf2['debt'] = pd.DataFrame([-1.2, -1.5, -1.7], index=['two', 'four', 'five'])\ndf2",
"Add Column",
"df2['eastern'] = df2.state == 'Ohio'\ndf2",
"Delete Column",
"del df2[\"eastern\"]\ndf2",
"inplace 옵션\n\n함수/메소드는 두 가지 종류\n그 객체 자체를 변형\n\n해당 객체는 그대로 두고 변형된 새로운 객체를 출력\n\n\nDataFrame 메소드 대부분은 inplace 옵션을 가짐\n\ninplace=True이면 출력을 None으로 하고 객체 자체를 변형\ninplace=False이면 객체 자체는 보존하고 변형된 새로운 객체를 출력",
"x = [3, 6, 1, 4]\nsorted(x)\n\nx\n\nx.sort()\nx",
"drop 메소드를 사용한 Row/Column 삭제\n\ndel 함수 \ninplace 연산\ndrop 메소드 \n삭제된 Series/DataFrame 출력\nSeries는 Row 삭제\nDataFrame은 axis 인수로 Row/Column 선택\naxis=0(디폴트): Row\naxis=1: Column",
"s = pd.Series(np.arange(5.), index=['a', 'b', 'c', 'd', 'e'])\ns\n\ns2 = s.drop('c')\ns2\n\ns\n\ns.drop([\"b\", \"c\"])\n\ndf = pd.DataFrame(np.arange(16).reshape((4, 4)),\n index=['Ohio', 'Colorado', 'Utah', 'New York'],\n columns=['one', 'two', 'three', 'four'])\ndf\n\ndf.drop(['Colorado', 'Ohio'])\n\ndf.drop('two', axis=1)\n\ndf.drop(['two', 'four'], axis=1)",
"Nested dict를 사용한 DataFrame 생성",
"pop = {\n 'Nevada': {\n 2001: 2.4,\n 2002: 2.9\n },\n 'Ohio': {\n 2000: 1.5,\n 2001: 1.7,\n 2002: 3.6\n }\n}\n\ndf3 = pd.DataFrame(pop)\ndf3",
"Series dict를 사용한 DataFrame 생성",
"pdata = {\n 'Ohio': df3['Ohio'][:-1],\n 'Nevada': df3['Nevada'][:3]\n}\npd.DataFrame(pdata)",
"NumPy array로 변환",
"df3.values\n\ndf2.values\n\ndf3.values\n\ndf2.values",
"DataFrame의 Column Indexing\n\nSingle Label key\nSingle Label attribute\nLabel List Fancy Indexing",
"df2\n\ndf2[\"year\"]\n\ndf2.year\n\ndf2[[\"state\", \"debt\", \"year\"]]\n\ndf2[[\"year\"]]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/ec-earth-consortium/cmip6/models/sandbox-1/toplevel.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: EC-EARTH-CONSORTIUM\nSource ID: SANDBOX-1\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:59\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-1', 'toplevel')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Flux Correction\n3. Key Properties --> Genealogy\n4. Key Properties --> Software Properties\n5. Key Properties --> Coupling\n6. Key Properties --> Tuning Applied\n7. Key Properties --> Conservation --> Heat\n8. Key Properties --> Conservation --> Fresh Water\n9. Key Properties --> Conservation --> Salt\n10. Key Properties --> Conservation --> Momentum\n11. Radiative Forcings\n12. Radiative Forcings --> Greenhouse Gases --> CO2\n13. Radiative Forcings --> Greenhouse Gases --> CH4\n14. Radiative Forcings --> Greenhouse Gases --> N2O\n15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\n16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\n17. Radiative Forcings --> Greenhouse Gases --> CFC\n18. Radiative Forcings --> Aerosols --> SO4\n19. Radiative Forcings --> Aerosols --> Black Carbon\n20. Radiative Forcings --> Aerosols --> Organic Carbon\n21. Radiative Forcings --> Aerosols --> Nitrate\n22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\n23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\n24. Radiative Forcings --> Aerosols --> Dust\n25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\n26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\n27. Radiative Forcings --> Aerosols --> Sea Salt\n28. Radiative Forcings --> Other --> Land Use\n29. Radiative Forcings --> Other --> Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop level overview of coupled model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of coupled model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Genealogy\nGenealogy and history of the model\n3.1. Year Released\nIs Required: TRUE Type: STRING Cardinality: 1.1\nYear the model was released",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. CMIP3 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP3 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. CMIP5 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP5 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Previous Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPreviously known as",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.4. Components Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.5. Coupler\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nOverarching coupling framework for model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5. Key Properties --> Coupling\n**\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of coupling in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Atmosphere Double Flux\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nWhere are the air-sea fluxes calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.4. Atmosphere Relative Winds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.5. Energy Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.6. Fresh Water Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Conservation --> Heat\nGlobal heat convervation properties of the model\n7.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.6. Land Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation --> Fresh Water\nGlobal fresh water convervation properties of the model\n8.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh_water is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh_water is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Runoff\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how runoff is distributed and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Iceberg Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Endoreic Basins\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Snow Accumulation\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Key Properties --> Conservation --> Salt\nGlobal salt convervation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Key Properties --> Conservation --> Momentum\nGlobal momentum convervation properties of the model\n10.1. Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how momentum is conserved in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Radiative Forcings --> Greenhouse Gases --> CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Radiative Forcings --> Greenhouse Gases --> CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14. Radiative Forcings --> Greenhouse Gases --> N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\nTroposheric ozone forcing\n15.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Radiative Forcings --> Greenhouse Gases --> CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Equivalence Concentration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDetails of any equivalence concentrations used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Radiative Forcings --> Aerosols --> SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Radiative Forcings --> Aerosols --> Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Radiative Forcings --> Aerosols --> Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Radiative Forcings --> Aerosols --> Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.3. RFaci From Sulfate Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"24. Radiative Forcings --> Aerosols --> Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Radiative Forcings --> Aerosols --> Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Radiative Forcings --> Other --> Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28.2. Crop Change Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nLand use change represented via crop change only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Radiative Forcings --> Other --> Solar\nSolar forcing\n29.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow solar forcing is provided",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pligor/predicting-future-product-prices
|
04_time_series_prediction/14_price_history_seq2seq-native.ipynb
|
agpl-3.0
|
[
"# -*- coding: UTF-8 -*-\n#%load_ext autoreload\n%reload_ext autoreload\n%autoreload 2",
"https://www.youtube.com/watch?v=ElmBrKyMXxs\nhttps://github.com/hans/ipython-notebooks/blob/master/tf/TF%20tutorial.ipynb\nhttps://github.com/ematvey/tensorflow-seq2seq-tutorials",
"from __future__ import division\nimport tensorflow as tf\nfrom os import path\nimport numpy as np\nimport pandas as pd\nimport csv\nfrom sklearn.model_selection import StratifiedShuffleSplit\nfrom time import time\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nfrom mylibs.jupyter_notebook_helper import show_graph\nfrom tensorflow.contrib import rnn\nfrom tensorflow.contrib import learn\nimport shutil\nfrom tensorflow.contrib.learn.python.learn import learn_runner\nfrom mylibs.tf_helper import getDefaultGPUconfig\nfrom sklearn.metrics import r2_score\nfrom mylibs.py_helper import factors\nfrom fastdtw import fastdtw\nfrom scipy.spatial.distance import euclidean\nfrom statsmodels.tsa.stattools import coint\nfrom common import get_or_run_nn\nfrom data_providers.price_history_seq2seq_data_provider import PriceHistorySeq2SeqDataProvider\nfrom models.price_history_seq2seq_native import PriceHistorySeq2SeqNative\n\ndtype = tf.float32\nseed = 16011984\nrandom_state = np.random.RandomState(seed=seed)\nconfig = getDefaultGPUconfig()\n%matplotlib inline",
"Step 0 - hyperparams\nvocab_size is all the potential words you could have (classification for translation case)\nand max sequence length are the SAME thing\ndecoder RNN hidden units are usually same size as encoder RNN hidden units in translation but for our case it does not seem really to be a relationship there but we can experiment and find out later, not a priority thing right now",
"num_epochs = 10\n\nnum_features = 1\nnum_units = 400 #state size\n\ninput_len = 60\ntarget_len = 30\n\nbatch_size = 47\n#batch_size = 50\n#trunc_backprop_len = ??",
"Step 1 - collect data (and/or generate them)",
"npz_path = '../price_history_03_dp_60to30_from_fixed_len.npz'\n#npz_path = '../data/price_history/price_history_03_dp_60to30_6400_train.npz'\n\ndp = PriceHistorySeq2SeqDataProvider(npz_path=npz_path, batch_size=batch_size)\ndp.inputs.shape, dp.targets.shape\n\naa, bb = dp.next()\naa.shape, bb.shape",
"Step 2 - Build model",
"model = PriceHistorySeq2SeqNative(rng=random_state, dtype=dtype, config=config)\n\ngraph = model.getGraph(batch_size=batch_size,\n num_units=num_units,\n input_len=input_len,\n target_len=target_len)\n\n#show_graph(graph)",
"Conclusion\nThere is no way this graph makes much sense but let's give it a try to see how bad really is\nStep 3 training the network\nRECALL: baseline is around 4 for huber loss for current problem, anything above 4 should be considered as major errors\nBasic RNN cell (EOS 1000)",
"rnn_cell = PriceHistorySeq2SeqNative.RNN_CELLS.BASIC_RNN\nnum_epochs = 10\neos_token = float(1e3)\nnum_epochs, num_units, batch_size\n\ndef experiment():\n return model.run(\n npz_path=npz_path,\n epochs=num_epochs,\n batch_size=batch_size,\n num_units=num_units,\n input_len = input_len,\n target_len = target_len,\n rnn_cell=rnn_cell,\n eos_token=eos_token,\n )\n\ndyn_stats, preds_dict = get_or_run_nn(experiment, filename='007_rnn_seq2seq_native_EOS1000_60to30_10epochs')\n\ndyn_stats.plotStats()\nplt.show()\n\nr2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])\n for ind in range(len(dp.targets))]\n\nind = np.argmin(r2_scores)\nind\n\nreals = dp.targets[ind]\npreds = preds_dict[ind]\n\nr2_score(y_true=reals, y_pred=preds)\n\nsns.tsplot(data=dp.inputs[ind].flatten())\n\nfig = plt.figure(figsize=(15,6))\nplt.plot(reals, 'b')\nplt.plot(preds, 'g')\nplt.legend(['reals','preds'])\nplt.show()\n\n%%time\ndtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]\n for ind in range(len(dp.targets))]\n\nnp.mean(dtw_scores)\n\ncoint(preds, reals)\n\ncur_ind = np.random.randint(len(dp.targets))\nreals = dp.targets[cur_ind]\npreds = preds_dict[cur_ind]\nfig = plt.figure(figsize=(15,6))\nplt.plot(reals, 'b')\nplt.plot(preds, 'g')\nplt.legend(['reals','preds'])\nplt.show()",
"Conclusion\nThe initial price difference of the predictions is still not as good as we would expect, perhaps using an EOS as they do in machine translation models is not the best architecture for our case\nGRU cell - with EOS = 1000",
"rnn_cell = PriceHistorySeq2SeqNative.RNN_CELLS.GRU\nnum_epochs = 30\neos_token = float(1e3)\nnum_epochs, num_units, batch_size\n\ndef experiment():\n return model.run(\n npz_path=npz_path,\n epochs=num_epochs,\n batch_size=batch_size,\n num_units=num_units,\n input_len = input_len,\n target_len = target_len,\n rnn_cell=rnn_cell,\n eos_token=eos_token,\n )\n\nexperiment()\n\ndyn_stats, preds_dict = get_or_run_nn(experiment, filename='007_gru_seq2seq_native_EOS1000_60to30_30epochs')\n\ndyn_stats.plotStats()\nplt.show()",
"TODO autocorrelation",
"r2_scores = [r2_score(y_true=dp.targets[ind], y_pred=preds_dict[ind])\n for ind in range(len(dp.targets))]\n\nind = np.argmin(r2_scores)\nind\n\nreals = dp.targets[ind]\npreds = preds_dict[ind]\n\nr2_score(y_true=reals, y_pred=preds)\n\nsns.tsplot(data=dp.inputs[ind].flatten())\n\nfig = plt.figure(figsize=(15,6))\nplt.plot(reals, 'b')\nplt.plot(preds, 'g')\nplt.legend(['reals','preds'])\nplt.show()\n\n%%time\ndtw_scores = [fastdtw(dp.targets[ind], preds_dict[ind])[0]\n for ind in range(len(dp.targets))]\n\nnp.mean(dtw_scores)\n\ncoint(preds, reals)\n\ncur_ind = np.random.randint(len(dp.targets))\nreals = dp.targets[cur_ind]\npreds = preds_dict[cur_ind]\nfig = plt.figure(figsize=(15,6))\nplt.plot(reals, 'b')\nplt.plot(preds, 'g')\nplt.legend(['reals','preds'])\nplt.show()",
"Conclusion\n???"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
OceanPARCELS/parcels
|
parcels/examples/tutorial_timestamps.ipynb
|
mit
|
[
"Tutorial on how to use timestaps in Field construction",
"from parcels import Field\nfrom glob import glob\nimport numpy as np",
"Some NetCDF files, such as for example those from the World Ocean Atlas, have time calendars that can't be parsed by xarray. These result in a ValueError: unable to decode time units, for example when the calendar is in 'months since' a particular date.\nIn these cases, a workaround in Parcels is to use the timestamps argument in Field (or FieldSet) creation. Here, we show how this works for example temperature data from the World Ocean Atlas in the Pacific Ocean\nThe following cell would give an error, since the calendar of the World Ocean Atlas data is in \"months since 1955-01-01 00:00:00\"",
"# tempfield = Field.from_netcdf(glob('WOA_data/woa18_decav_*_04.nc'), 't_an', \n# {'lon': 'lon', 'lat': 'lat', 'time': 'time'})",
"However, we can create our own numpy array of timestamps associated with each of the 12 snapshots in the netcdf file",
"timestamps = np.expand_dims(np.array([np.datetime64('2001-%.2d-15' %m) for m in range(1,13)]), axis=1)",
"And then we can add the timestamps as an extra argument",
"tempfield = Field.from_netcdf(glob('WOA_data/woa18_decav_*_04.nc'), 't_an', \n {'lon': 'lon', 'lat': 'lat', 'time': 'time'}, \n timestamps=timestamps)",
"Note, by the way, that adding the time_periodic=True argument to Field.from_netcdf() will also mean that the climatology can be cycled for multiple years."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
CAChemE/curso-python-datos
|
notebooks/050-Pandas-Intro.ipynb
|
bsd-3-clause
|
[
"Carga y manipulación de datos con pandas\n_ pandas es una biblioteca de análisis de datos en Python que nos provee de las estructuras de datos y herramientas para realizar análisis de manera rápida. Se articula sobre la biblioteca NumPy y nos permite enfrentarnos a situaciones en las que tenemos que manejar datos reales que requieren seguir un proceso de carga, limpieza, filtrado, reduccióń y análisis. _\nEn esta clase veremos como cargar y guardar datos, las características de las pricipales estructuras de pandas y las aplicaremos a algunos problemas.\nSe trata de una biblioteca muy extensa y que sigue evolucionando, por lo que lleva tiempo conocer todas las posibilidades que ofrece. La mejor forma de aprender pandas es usándolo, por lo que ¡nos ahorraremos la introducción e iremos directos al grano!",
"# Importamos pandas\n%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"Cargando los datos y explorándolos\nTrabajaremos sobre un fichero de datos metereológicos de la Consejeria Agricultura Pesca y Desarrollo Rural Andalucía.",
"from IPython.display import HTML\nHTML('<iframe src=\"http://www.juntadeandalucia.es/agriculturaypesca/ifapa/ria/servlet/FrontController?action=Static&url=coordenadas.jsp&c_provincia=4&c_estacion=4\" width=\"700\" height=\"400\"></iframe>')\n\n# Vemos qué pinta tiene el fichero\n# (esto es un comando de la terminal, no de python\n# y solo funcionará en Linux o MAC)\n!head ../data/tabernas_meteo_data.txt",
"Vemos que los datos no están en formato CSV, sino que la delimitación son espacios. Si intentamos cargarlos con pandas no tendremos mucho éxito:",
"# Tratamos de cargarlo en pandas\npd.read_csv(\"../data/tabernas_meteo_data.txt\").head(5)",
"Tenemos que hacer los siguientes cambios:\n\nSeparar los campos por un número arbitrario de espacios en blanco.\nSaltar las primeras líneas.\nDar nombres nuevos a las columnas.\nDescartar la columna del día del año (podemos calcularla luego).\nParsear las fechas en el formato correcto.\n\nLa aproximación clásica a este tipo de problemas es hacer una lectura línea a línea en la que vayamos \"parseando\" los datos del fichero, de acuerdo al formato que esperamos recibir y nos protejamos cuando, no se cumpla la estructura.\nSin embargo, gracias a pandas podremos reducir nuestro esfuerzo drásticamente, dando a la función read_csv algunas indicaciones sobre el formato de nuestros datos",
"data = pd.read_csv(\n \"../data/tabernas_meteo_data.txt\",\n delim_whitespace=True, # delimitado por espacios en blanco\n usecols=(0, 2, 3, 4, 5), # columnas que queremos usar\n skiprows=2, # saltar las dos primeras líneas\n names=['DATE', 'TMAX', 'TMIN', 'TMED', 'PRECIP'],\n parse_dates=['DATE'],\n# date_parser=lambda x: pd.datetime.strptime(x, '%d-%m-%y'), # Parseo manual\n dayfirst=True, # ¡Importante\n index_col=[\"DATE\"] # Si queremos indexar por fechas\n)\n\n# Ordenando de más antigua a más moderna\ndata.sort_index(inplace=True)\n\n# Mostrando sólo las primeras o las últimas líneas\ndata.head()",
"Las fechas también se pueden parsear de manera manual con el argumento:\ndate_parser=lambda x: pd.datetime.strptime(x, '%d-%m-%y'), # Parseo manual\n<div class=\"alert alert-info\">Para acordarnos de cómo parsear las fechas: http://strftime.org/</div>",
"# Comprobamos los tipos de datos de la columnas\ndata.dtypes\n\n# Pedomos información general del dataset\ndata.info()",
"En una dataframe pueden convivir datos de tipo diferente en diferentes columnas: en nuestro caso, fechas (en el índice) y (flotantes en las columnas). El que un dato sea de tipo fecha y no un string u otra cosa, nos permite obtener información como el día de la semana de manera directa:",
"data.index.dayofweek",
"Una vez hemos cargado los datos, estamos preparados para analizarlos utilizando toda la artillería de pandas. Por ejemplo, puede que queramos una descripción estadística rápida:",
"# Descripción estadística\ndata.describe()",
"Accediendo a los datos\nColumnas\nTenemos dos formas de acceder a las columnas: por nombre o por atributo (si no contienen espacios ni caracteres especiales).",
"# Accediendo como clave\ndata['TMAX'].head()\n\n# Accediendo como atributo\ndata.TMIN.head()\n\n# Accediendo a varias columnas a la vez\ndata[['TMAX', 'TMIN']].head()",
"Del mismo modo que accedmos, podemos operar con ellos:",
"# Modificando valores de columnas\ndata[['TMAX', 'TMIN']] / 10",
"e introducirlos en funciones:",
"# Aplicando una función a una columna entera (ej. media numpy)\nimport numpy as np\nnp.mean(data.TMAX)\n\n# Calculando la media con pandas\ndata.TMAX.mean()",
"Filas\nPara acceder a las filas tenemos dos métodos: .loc (basado en etiquetas), .iloc (basado en posiciones enteras) ~~y .ix (que combina ambos)~~ (.ix ha desaparecido en la versión 0.20).",
"# Accediendo a una fila por índice\ndata.iloc[1]\n\n# Accediendo a una fila por etiqueta\ndata.loc[\"2016-09-02\"]",
"Puedo incluso hacer secciones basadas en fechas:",
"data.loc[\"2016-12-01\":]",
"Filtrando los datos\nTambién puedo indexar utilizando arrays de valores booleanos, por ejemplo procedentes de la comprobación de una condición:",
"# Comprobando que registros carecen de datos válidos\ndata.TMIN.isnull().head()\n\n# Accediendo a los registros que cumplen una condición\ndata.loc[data.TMIN.isnull()]\n\n# Valores de precipitación por encima de la media:\nprint(data.PRECIP.mean())\ndata[data.PRECIP > data.PRECIP.mean()]",
"Funciones \"rolling\"\nPor último, pandas proporciona métodos para calcular magnitudes como medias móviles usando el método rolling:",
"# Calcular la media de la columna TMAX\ndata.TMAX.head(15)\n\n# Media trimensual centrada\ndata.TMAX.rolling(5, center=True).mean().head(15)",
"Creación de nuevas columnas",
"# Agruparemos por año y día: creemos dos columnas nuevas\ndata['year'] = data.index.year\ndata['month'] = data.index.month",
"Creando agrupaciones\nEn muchas ocasiones queremos realizar agrupaciones de datos en base a determinados valores como son fechas, o etiquetas (por ejemplo, datos que pertenecen a un mismo ensayo o lugar)\nPodemos agrupar nuestros datos utilizando groupby:",
"# Creamos la agrupación\nmonthly = data.groupby(by=['year', 'month'])\n\n# Podemos ver los grupos que se han creado\nmonthly.groups.keys()",
"Con estos grupos podemos hacer hacer varias cosas:\n\nAcceder a sus datos individualmente (por ejemplo, comprobar qué pasó cada día de marzo de 2016) \nRealizar una reducción de datos, para comparar diversos grupos (por, ejemplo caracterizar el tiempo de cada mes a lo largo de los años)",
"# Accedemos a un grupo\nmonthly.get_group((2016,3)).head()\n\n# Hhacemos una agregación de los datos:\nmonthly_mean = monthly.mean()\nmonthly_mean.head(24)",
"Creando agrupaciones\nEn ocasiones podemos querer ver nuestros datos de forma diferente o necesitamos organizarlos así para utilizar determinadas funciones de pandas. Una necesidad típica es la de pivotar una tabla.\nImagina que queremos acceder a los mismos datos que en el caso anterior, pero que ahora queremos ver los años en las filas y para cada variable (TMAX, TMED...) los calores de cada mes en una columna. ¿Cómo lo harías?",
"# Dejar los años como índices y ver la media mensual en cada columna\nmonthly_mean.reset_index().pivot(index='year', columns='month')",
"La línea anterior no es sencilla y no se escribe de una sola vez sin errores (sobre todo si estás empezando). Esto es una ejemplo de que pandas es una librería potente, pero que lleva tiempo aprender. Pasarás muchas horas peleando contra problemas de este tipo, pero afortunadamente mucha gente lo ha pasado mal antes y su experiencia ha quedado plasmada en cientos de preguntas de stack overflow y en la documentación de pandas\nPlotting\nLíneas",
"# Pintar la temperatura máx, min, med\ndata.plot(y=[\"TMAX\", \"TMIN\", \"TMED\"])\nplt.title('Temperaturas')",
"Cajas",
"data.loc[:, 'TMAX':'PRECIP'].plot.box()",
"Pintando los datos de un \"típíco día d del mes m del año a\nPintando la temperatura máxima de las máximas, mínima de las mínimas, media de las medias para cada día del año de los años disponnibles",
"group_daily = data.groupby(['month', data.index.day])\n\ndaily_agg = group_daily.agg({'TMED': 'mean', 'TMAX': 'max', 'TMIN': 'min', 'PRECIP': 'mean'})\ndaily_agg.head()\n\ndaily_agg.plot(y=['TMED', 'TMAX', 'TMIN'])",
"Visualizaciones especiales",
"# scatter_matrix\nfrom pandas.tools.plotting import scatter_matrix\naxes = scatter_matrix(data.loc[:, \"TMAX\":\"TMED\"])",
"Algunos enlaces:\n\nA visual guide to pandas: https://www.youtube.com/watch?v=9d5-Ti6onew\nPandas cheatsheet (oficial): https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf\nPandas Datacamp cheatsheet: https://www.datacamp.com/community/blog/python-pandas-cheat-sheet#gs.eyIEEEg\nConsejos de rendimiento: http://slides.com/jeffreback/pfq-performance-pandas#/\n\n\n<br/>\n<h4 align=\"right\">¡Síguenos en Twitter!\n<br/>\n<p align=\"right\"> <a href=\"https://twitter.com/AeroPython\" class=\"twitter-follow-button\" data-show-count=\"false\">@AeroPython</a> <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script> </p>\n<p align=\"right\"><a href=\"https://twitter.com/CAChemEorg\" class=\"twitter-follow-button\" data-show-count=\"false\">@CAChemEorg</a> <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script> </p>\n<br/></h4>\n\n<br/>\n<a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/deed.es\"><img alt=\"Licencia Creative Commons\" style=\"border-width:0\" src=\"http://i.creativecommons.org/l/by/4.0/88x31.png\" /></a><br /><span xmlns:dct=\"http://purl.org/dc/terms/\" property=\"dct:title\">Curso de introducción a Python: procesamiento y análisis de datos</span> por <span xmlns:cc=\"http://creativecommons.org/ns#\" property=\"cc:attributionName\">Juan Luis Cano Rodriguez, Alejandro Sáez Mollejo y Francisco J. Navarro Brull</span> se distribuye bajo una <a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/deed.es\">Licencia Creative Commons Atribución 4.0 Internacional</a>.\nLa mayor parte de material de este curso es un resumen adaptado del magnífico Curso de AeroPython realizado por: Juan Luis Cano y Álex Sáez"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
deculler/TableDemos
|
BerkeleySalary.ipynb
|
bsd-2-clause
|
[
"Illustration of datascience Tables on Open Data from Berkeley\nDavid E. Culler\nDatascience Table provides a simple, yet powerful data structure for a range of analyses. The basic concept is an ordered set of named columns. \n\nIt builds on the intuition many develop with excel - data is represented as rectangular regions. But, rather than labeling cells, the column labels really mean something. \nTables embed smoothly in jupyter notebooks, so the user experience is that of a computational document, rather than a spreadsheet. This provides a much clearer sequence of steps from raw data to finished product, at least if they are well constructed. There is no need to break out into visual basic or the like when you need more than the basics.\nTables draws heavily from relational database tables, but there is no separate language (e.g., SQL) required to do relational operations in them.\nTables provide also the concepts associated with pivot tables in Excel, which are closely related to relational operations, but often more natural.\nTables can be viewed as simple variants of the dataframes in R or Pandas. The key is simplicity. They are simple rectangular structures. The cells of a table can hold arbitrary values, although columns are homogeneous, so the additional power (and complexity) of higher dimensions are rarely needed.\nTables builds directly on the scipy ecosystem. Almost any sequence can go in, numpy.arrays come out. Thus, it is natural to manipulate data directly from Tables. Basic visualization is provide directly in terms of Tables, so you can go a long ways before reading matplotlib documentation. However, if you need more, a lot of it can be gained through keyword args - and if that is not enough, drop into scipy.\n\nThis notebook illustrates some of the use of Tables using municipal salary data made possible though the City of Berkeley's open data portal.",
"# This useful nonsense just goes at the top\nfrom datascience import *\nimport numpy as np\nimport matplotlib.pyplot as plots\nplots.style.use('fivethirtyeight')\n%matplotlib inline\n# datascience version number of last run of this notebook\nversion.__version__",
"Reading raw data into a Table\nLet's pull a big wad of City Employee salary data from the Berkeley Open Data portal. \nSince this was a trial till June 30, 2015 and you have to mouse around to get csv files, we happen to have pulled a local copy.",
"raw_berkeley_sal_2011 = Table.read_table(\"./data/BerkeleyData/City_of_Berkeley_Employee_Salaries_-_2011.csv\")",
"Let's take a peek at what we have got.",
"# Tables are rendered to work in a notebook, even if they are large. Only part shows.\n# You can adjust how much of it you see, but here we'd have ~1500 rows!\nraw_berkeley_sal_2011",
"Accessing data in a Table\nA column of Table data is accessed by indexing it by name. This returns the column as a numpy array.",
"raw_berkeley_sal_2011['Base Pay']",
"Some prefer the selectors - column and row",
"raw_berkeley_sal_2011.column('Overtime Pay')",
"Rows in the table can be indexed and sliced. A row is a little like a record or dict. It is an tupled, ordered according to the table it comes from and keeping the column names.",
"raw_berkeley_sal_2011.rows[0]\n\nraw_berkeley_sal_2011.row(0).item('Base Pay')\n\nraw_berkeley_sal_2011.row(0)[2]\n\nraw_berkeley_sal_2011.row(0).asdict()\n\nraw_berkeley_sal_2011.rows[0:10]",
"Converting data in a Table to make it workable\nWhen we read in data from a csv file we got a bunch of columns filled with a bunch of strings. As is often the case, we want the data in a column to represent values that we can analyze, whereas we want the printed format of a column to reflect its meaning. Currency is the most common such situation. Let's clean up our salary table.\nWe might start by getting ahold of the names of column that we want to clean up",
"paylabels = raw_berkeley_sal_2011.labels[2:]\npaylabels",
"Clean derivatives of raw tables\nIt is good hygene to keep the raw data raw and produce distinct, clean derivatives. Let's start by making a copy of the raw table. A new name and a new table.",
"berkeley_sal_2011 = raw_berkeley_sal_2011.copy()",
"Tables allow columns to have customized formatters\nIn Excel you do this by formatting the cells. We want to have the data as numbers, keep track of the type, and have it look nice.",
"berkeley_sal_2011.set_format(paylabels, CurrencyFormatter)\nberkeley_sal_2011",
"Now we get values we can compute on - and they still display as currency.",
"berkeley_sal_2011['Base Pay']\n\nmax(berkeley_sal_2011['Total Pay Benefits'])",
"Descriptive Statistics Summary\nNow we can try to get a summary of the data with some descriptive statistics. \nThe stats method on Tables computes a list of statistics over each column and creates a new Table containing these statistics. The default is tailored to the Berkeley Data8 course. Here we provide what you expect from the summary operation in R",
"def firstQtile(x) : return np.percentile(x,25)\ndef thirdQtile(x) : return np.percentile(x,25)\nsummary_ops = (min, firstQtile, np.median, np.mean, thirdQtile, max)\n\nberkeley_sal_2011.select(paylabels).stats(ops=summary_ops)",
"OK, so it looks like the average salary is about 86k, and it ranges up to 300k with some hefty overtime pay. Let's see if we can understand what is going on.\nVisualizing data\n\nTable.select creates a new table consisting of the specified list of columns.\nTable.hist plots a histogram of each of the columns in a table. It can either overlay the histograms or show them separately. Here we have specified the number of bins",
"berkeley_sal_2011.select([\"Base Pay\", \n \"Overtime Pay\", \n \"Total Pay Benefits\"]).hist(overlay=False,normed=False,\n bins=40)",
"Interesting. Base pay is bimodal. Most employees get no overtime, but there is a looong tale. Let's look at the individual level. Who's at the top?",
"berkeley_sal_2011.sort('Total Pay Benefits', descending=True)",
"So where does the $alary go? First, how many employees?\nTable.num_rows returns just what you'd think. The number of rows. Which in this case is the number of employees on the city payroll.",
"berkeley_sal_2011.num_rows",
"Grouping and Sorting Table data\n\nTable.drop creates a new Table without some columns. It is like select, but you don't have to name everythng you want.\nTable.group aggregates data by grouping all the rows that contain a common value in one (or more) columns. Here we group in \"Job Title\" summing the entries in all other columns for each group. We placed a column full of 1 to get a count, while summing salaries and such.\nTable.sort sorts the rows in a Table by a column - just like sort in Excel.",
"# lose the individual names\njob_titles = berkeley_sal_2011.drop(\"Employee Name\") \n# Build a handy column full of 1s\njob_titles[\"Title\"] = 1\n# Group by title summing the number of rows per\nby_title = job_titles.group(\"Job Title\", sum) \n# Sort by the number of employees per title\nordered_by_title = by_title.sort('Title sum', descending = True) \n# let's see what we get\nordered_by_title \n\nordered_by_title.num_rows",
"Wow, 305 Job Titles for 1437 employees!",
"\"{0:.3} employees per Job Code\".format(berkeley_sal_2011.num_rows/ordered_by_title.num_rows)",
"Plotting data\n\nTable.plot plots each of the columns in a table, either on separate charts or overlayed on a single chart. Optionally one of the columns can be specified as the horizontal axis and all others plotted against this.",
"ordered_by_title.select(['Title sum','Total Pay Benefits sum']).sort('Title sum').plot(overlay=False)",
"How about that, a few job categories have most of the employes and most of the spend, but it is far from uniform. Let's look a little deeper. Which categories consume most of the budget?",
"by_title.sort('Total Pay Benefits sum', descending = True)\n\nby_title.sort('Total Pay Benefits sum', descending = True).row(0)\n\nby_title.select(('Job Title', 'Total Pay Benefits sum')).sort('Total Pay Benefits sum', descending=True)",
"As is often the case in the real world, the categorization used for operations is not directly useful for analysis. We often need to build categories in order to get a handle on what's going on.\nWhat do all those job titles look like",
"ordered_by_title['Job Title']\n\ncategories = {\n 'Police': [\"POLICE\"], \n 'Fire': [\"FIRE\"], \n 'Animal Control':[\"ANIMAL\"], \n 'Health': [\"HEALTH\", \"PSYCH\", \"HLTH\"],\n 'Library': ['LIBRARY','LIBRARIAN'],\n 'Offical' : ['MAYOR','COUNCIL', 'COMMISSIONER', 'CITY MANAGER'],\n 'Trades' :[\"ELECTRICIAN\",\"MECHANIC\", \"ENGINEER\"],\n 'Parking' : [\"PARKING\"],\n 'Recreation' : [\"RECREATION\", \"AQUATICS\"],\n 'Gardener' : [\"GARDEN\"],\n \"Labor\" : [\"LABOR\", \"JANITOR\"],\n 'Community': [\"COMMUNITY\"],\n 'Admin' : [\"ADMIN\"],\n 'Traffic' : [\"TRAFFIC\"],\n 'Accounting' : [\"ACCOUNT\"],\n 'Dispatch' : [\"DISPATCH\"],\n 'Waste' : [\"WASTE\", \"SEWER\"],\n 'Analyst' : [\"ANALYS\"],\n 'Office' : [\"OFFICE \"],\n 'Legal' : ['LEGISLAT', 'ATTORN', 'ATTY'],\n 'IT' : [\"PROG\", \"INFORMATION SYSTEMS\"],\n 'School' : [\"SCHOOL\"],\n 'Architect' : [\"ARCHITECT\"],\n 'Planner' : [\"PLANNER\", \"PERMIT\"]\n }\n\ncategories",
"Applying a function to create a new column\n\ntable.apply: applies a function to every element in a column. \n\nOne of the best examples of high-order functions and tables is in categorizing data. As is often the case, we create a new column with the results",
"def categorize (title) : \n for category, keywords in categories.items():\n for word in keywords :\n if title.find(word) >= 0 : return category\n return 'Other'\n\nberkeley_sal_2011['Category'] = berkeley_sal_2011.apply(categorize, 'Job Title')\nberkeley_sal_2011\n\n# lose the individual names\njob_categories = berkeley_sal_2011.drop(\"Employee Name\") \njob_categories[\"Cat\"] = 1\nby_categories = job_categories.group(\"Category\", sum)\nby_categories.sort(\"Total Pay Benefits sum\", descending=True).show()",
"As is often the case working with real data, we often need to iterate a bit to get what we want out of it. With all those titles, a lot of stuff is likely to end up as other. \nHere we have a little iterative process to get enough of the job titles categorized",
"job_categories.where('Category', 'Other')\n\njob_categories.where('Category', 'Other').group('Job Title',sum).sort('Cat sum', descending=True)\n\njob_categories.where('Category', 'Other').group('Job Title',sum).sort('Total Pay Benefits sum', descending=True)",
"So no job title left has more than 10 employees in it, but some have quite a bit of cost. We could go back and add more entries to our category table and iterate a bit. The important thing is that we create new tables, we don't clobber old ones. \nWell this shows the challenge in managing budget pretty nicely. Most of the money is spent in a few job categories. But then there are still over 200 employees in a zillion other categories that are stile the #2 spend.",
"by_categories.sort('Total Pay Benefits sum', descending=True).barh('Category', select=['Total Pay Benefits sum', 'Cat sum'], overlay=False)",
"So let's try to understand the police category a bit more.",
"police = job_categories.where('Category', 'Police')\npolice",
"How do the pay labels spread across the force?\nWe can look at histograms by pay label. First all toegether and then broken apart.",
"police.select(paylabels).hist(bins=30,normed=False)\n\npolice.select(paylabels).hist(bins=30,normed=False, overlay=False)",
"Base pay seems to chunk into categories, perhaps by job title. \nMost members of the force do little overtime, but a few do a lot!\nHow many are in each Job Title?",
"police.group('Job Title')\n\n# We can actually get all the data by title\npolice.select(['Job Title','Base Pay', 'Overtime Pay']).group('Job Title', collect=lambda x:x)",
"We can't just pivot by Job Title because we don't have a uniform number of rows, but what we can do is for pivot and bin (or histogram) so we can see the distribution of a column by job title.\nSure enough. Officers cluster around 100-120k, sergeants at 130-140k, but there's a little overlap.",
"police.pivot_bin('Job Title', 'Base Pay', bins=np.arange(0,200000,10000)).show()\n\npolice.pivot_bin('Job Title', 'Base Pay', bins=np.arange(0,200000,10000)).bar('bin')\n\npolice.pivot_bin('Job Title', 'Overtime Pay', bins=np.arange(0,200000,10000)).bar('bin')\n\npolice.pivot_bin('Job Title', 'Total Pay Benefits', bins=np.arange(0,420000,10000)).bar('bin')\n\nfire = job_categories.where('Category','Fire')\nfire.select(paylabels).hist(bins=30)\n\nfire.group('Job Title')\n\nfire.pivot_bin('Job Title', 'Total Pay Benefits', bins=np.arange(0,420000,10000)).bar('bin')",
"Let's compare the 2011 data with more recent 2013 data.",
"raw_berkeley_sal_2013 = Table.read_table(\"./data/BerkeleyData/City_of_Berkeley_Employee_Salaries_-_2013.csv\")\nraw_berkeley_sal_2013",
"Well, the data base changed. It picked up a few columns over the years. And we need to convert the salary strings to numbers so we can do analysis on them. All in one go...",
"berkeley_sal_2013 = raw_berkeley_sal_2013.drop(['Year','Notes','Agency'])\nberkeley_sal_2013\n\nberkeley_sal_2013.set_format(berkeley_sal_2013.labels[2:], CurrencyFormatter)\n\nberkeley_sal_2013[\"Total Pay & Benefits\"]\n\nberkeley_sal_2013.sort('Total Pay & Benefits',descending=True)",
"Isn't that interesting. They seem to have gotten their overtime under control. Was that management, end of the occupy movement, something else? Let's do a bit of comparison.\nFirst we need to do some clean up and get labels we can deal with.",
"b2011 = berkeley_sal_2011.select([\"Employee Name\", \"Job Title\", \"Total Pay Benefits\"])\nb2011.relabel('Total Pay Benefits', \"Total 2011\")\nb2011.sort('Total 2011', descending=True)\n\nb2013 = berkeley_sal_2013.select([\"Employee Name\", \"Job Title\", \"Total Pay & Benefits\"])\nb2013.relabel('Job Title','Title 2013')\nb2013.relabel(\"Total Pay & Benefits\", \"Total 2013\")\nb2013.sort('Total 2013', descending=True)",
"Snap! They decided that case was a good idea for proper nouns. Let's go back to the old way.",
"b2013['Employee Name'] = b2013.apply(str.upper, 'Employee Name')\nb2013",
"Now we can put the two tables together to see what happened with employees who were around in both years. Here we get to use another powerful operations on tables.\n\nTable.join: joins two tables together using a column of each that contains common values.\n\nHere we have the employee names in each table. The join will give us the title and salary in both years for those employees in both tables, i.e., working for the city in both years",
"b11_13 = b2011.join('Employee Name', b2013)\nb11_13",
"Let's add a column with increase in total pay.",
"b11_13[\"Increase\"] =b11_13['Total 2013'] - b11_13['Total 2011']\n\nb11_13.sort('Increase', \"decr\").select('Increase').plot()",
"On the tails we have people who joined part way through 2011 or left part way through 2013.",
"b11_13.stats(summary_ops)",
"Well that's interesting. Total compensation seems to have dropped. Did the budget actually go down?",
"sum2011 = np.sum(berkeley_sal_2011['Total Pay Benefits'])\n\"${:,}\".format(int(sum2011))\n\nsum2013 = np.sum(berkeley_sal_2013['Total Pay & Benefits'])\n\"${:,}\".format(int(sum2013))\n\n\"${:,}\".format(int(sum2013-sum2011))\n\n\"{:.1%}\".format((sum2013-sum2011)/sum2011)",
"Look at that.",
"np.sum(berkeley_sal_2011['Overtime Pay'])\n\nnp.sum(berkeley_sal_2013['Overtime Pay'])",
"Let's see who got promoted or demoted",
"b11_13.where(b11_13['Job Title'] != b11_13['Title 2013']).sort('Total 2013', descending=True).show()\n\nb11_13.where(b11_13['Job Title'] == b11_13['Title 2013']).sort('Increase', descending=True).show()",
"Perhaps we might want to look at the relationship of these two variables. That leads to another useful operator\n\nTable.scatter: does a scatter plot of columns against one columns",
"b11_13.scatter('Total 2011', 'Total 2013')",
"Summary\nThis notebook has provided a introduction to many of the concepts and features in datascience tables in the context of a fairly complete example on open public data.\n\nCreating tables: Table.read_table - reads a file or url into a Table. It is primarily used for csv files. Tables can also be created from local data structures by constructing a tables with Table() and filling it using with_columns or with_rows.\nAccessing columns, rows, and elements of table.\nCleaning up raw tables and setting formatters for table displays.\nGetting descriptive statistics with stats to sumarize the columns in a table.\nWorking with portions of a table using select to select columns where to filter rows, drop to select all but the specified columns.\nVisualizing data with hist, plot, barh, bar, and scatter.\nSorting tables with sort using columns as keys.\nGrouping entries in tables using group, where groups are defined by rows with common values in a specified collection of columns; the values in the remaining columns are then aggregated using a collection function. The identity collector all all the values in a group to be collected into a list.\nApplying functions to all the elements of a column of a table, using apply\nDistributing columns of a table using pivot_bin where each unique set of values in a specified collection of columns serves as a \"key\" which is a column name in the result. Values in the remaining columns are binned to produce the rows in the result. This is used when the number of entries for each key varies. Where there is a single value for each key, pivot can be used."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
science-of-imagination/nengo-buffer
|
Project/trained_mental_rotation_ens_inhibition.ipynb
|
gpl-3.0
|
[
"Using the trained weights in an ensemble of neurons\n\nOn the function points branch of nengo\nOn the vision branch of nengo_extras",
"import nengo\nimport numpy as np\nimport cPickle\nfrom nengo_extras.data import load_mnist\nfrom nengo_extras.vision import Gabor, Mask\nfrom matplotlib import pylab\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nimport scipy.ndimage\nfrom scipy.ndimage.interpolation import rotate",
"Load the MNIST database",
"# --- load the data\nimg_rows, img_cols = 28, 28\n\n(X_train, y_train), (X_test, y_test) = load_mnist()\n\nX_train = 2 * X_train - 1 # normalize to -1 to 1\nX_test = 2 * X_test - 1 # normalize to -1 to 1\n",
"Each digit is represented by a one hot vector where the index of the 1 represents the number",
"temp = np.diag([1]*10)\n\nZERO = temp[0]\nONE = temp[1]\nTWO = temp[2]\nTHREE= temp[3]\nFOUR = temp[4]\nFIVE = temp[5]\nSIX = temp[6]\nSEVEN =temp[7]\nEIGHT= temp[8]\nNINE = temp[9]\n\nlabels =[ZERO,ONE,TWO,THREE,FOUR,FIVE,SIX,SEVEN,EIGHT,NINE]\n\ndim =28",
"Load the saved weight matrices that were created by training the model",
"label_weights = cPickle.load(open(\"label_weights_choose_enc1000.p\", \"rb\"))\nactivity_to_img_weights = cPickle.load(open(\"activity_to_img_weights_choose_enc1000.p\", \"rb\"))\n#rotated_clockwise_after_encoder_weights = cPickle.load(open(\"rotated_clockwise_after_encoder_weights_rot_enc1000.p\", \"r\"))\nrotated_counter_after_encoder_weights = cPickle.load(open(\"rotated_counter_after_encoder_weights_choose_enc1000.p\", \"r\"))\n\n#identity_after_encoder_weights = cPickle.load(open(\"identity_after_encoder_weights1000.p\",\"r\"))\n\n\n#rotation_clockwise_weights = cPickle.load(open(\"rotation_clockwise_weights1000.p\",\"rb\"))\n#rotation_counter_weights = cPickle.load(open(\"rotation_weights1000.p\",\"rb\"))\n\n\n#Training with filters used on train images\n#low_pass_weights = cPickle.load(open(\"low_pass_weights1000.p\", \"rb\"))\n#rotated_counter_after_encoder_weights_noise = cPickle.load(open(\"rotated_after_encoder_weights_counter_filter_noise5000.p\", \"r\"))\n#rotated_counter_after_encoder_weights_filter = cPickle.load(open(\"rotated_after_encoder_weights_counter_filter5000.p\", \"r\"))\n",
"Functions to perform the inhibition of each ensemble",
" #A value of zero gives no inhibition\n\ndef inhibit_rotate_clockwise(t):\n if t < 0.5:\n return dim**2\n else:\n return 0\n \ndef inhibit_rotate_counter(t):\n if t < 0.5:\n return 0\n else:\n return dim**2\n \ndef inhibit_identity(t):\n if t < 0.3:\n return dim**2\n else:\n return dim**2\n\ndef intense(img):\n newImg = img.copy()\n newImg[newImg < 0] = -1\n newImg[newImg > 0] = 1\n return newImg\n\ndef node_func(t,x):\n #clean = scipy.ndimage.gaussian_filter(x, sigma=1)\n #clean = scipy.ndimage.median_filter(x, 3)\n clean = intense(x)\n return clean\n\n#Create stimulus at horizontal\nweight = np.dot(label_weights,activity_to_img_weights)\n\nimg = np.dot(THREE,weight)\n\nimg = scipy.ndimage.rotate(img.reshape(28,28),90).ravel()\n\npylab.imshow(img.reshape(28,28),cmap=\"gray\")\nplt.show()\n\n",
"The network where the mental imagery and rotation occurs\n\nThe state, seed and ensemble parameters (including encoders) must all be the same for the saved weight matrices to work\nThe number of neurons (n_hid) must be the same as was used for training\nThe input must be shown for a short period of time to be able to view the rotation\nThe recurrent connection must be from the neurons because the weight matices were trained on the neuron activities",
"rng = np.random.RandomState(9)\nn_hid = 1000\nmodel = nengo.Network(seed=3)\nwith model:\n #Stimulus only shows for brief period of time\n stim = nengo.Node(lambda t: THREE if t < 0.1 else 0) #nengo.processes.PresentInput(labels,1))# For cycling through input\n \n #Starting the image at horizontal\n #stim = nengo.Node(lambda t:img if t< 0.1 else 0)\n \n ens_params = dict(\n eval_points=X_train,\n neuron_type=nengo.LIF(), #Why not use LIF?\n intercepts=nengo.dists.Choice([-0.5]),\n max_rates=nengo.dists.Choice([100]),\n )\n \n \n # linear filter used for edge detection as encoders, more plausible for human visual system\n #encoders = Gabor().generate(n_hid, (11, 11), rng=rng)\n #encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True)\n\n '''\n degrees = 6\n #must have same number of excoders as neurons (Want each random encoder to have same encoder at every angle)\n encoders = Gabor().generate(n_hid/(360/degrees), (11, 11), rng=rng)\n encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True)\n\n rotated_encoders = encoders.copy()\n\n\n #For each randomly generated encoder, create the same encoder at every angle (increments chosen by degree)\n for encoder in encoders:\n rotated_encoders = np.append(rotated_encoders, [encoder],axis =0)\n for i in range(1,59):\n #new_gabor = rotate(encoder.reshape(28,28),degrees*i,reshape = False).ravel()\n rotated_encoders = np.append(rotated_encoders, [rotate(encoder.reshape(28,28),degrees*i,reshape = False).ravel()],axis =0)\n #rotated_encoders = np.append(rotated_encoders, [encoder],axis =0)\n'''\n rotated_encoders = cPickle.load(open(\"encoders.p\", \"r\"))\n \n #Num of neurons does not divide evenly with 6 degree increments, so add random encoders\n extra_encoders = Gabor().generate(n_hid - len(rotated_encoders), (11, 11), rng=rng)\n extra_encoders = Mask((28, 28)).populate(extra_encoders, rng=rng, flatten=True)\n all_encoders = np.append(rotated_encoders, extra_encoders, axis =0)\n\n encoders = all_encoders\n \n\n #Ensemble that represents the image with different transformations applied to it\n ens = nengo.Ensemble(n_hid, dim**2, seed=3, encoders=encoders, **ens_params)\n \n\n #Connect stimulus to ensemble, transform using learned weight matrices\n nengo.Connection(stim, ens, transform = np.dot(label_weights,activity_to_img_weights).T)\n #nengo.Connection(stim, ens) #for rotated stim\n \n \n #Recurrent connection on the neurons of the ensemble to perform the rotation\n nengo.Connection(ens.neurons, ens.neurons, transform = rotated_counter_after_encoder_weights.T, synapse=0.1) \n #nengo.Connection(ens.neurons, ens.neurons, transform = low_pass_weights.T, synapse=0.1) \n\n #Identity ensemble\n #ens_iden = nengo.Ensemble(n_hid,dim**2, seed=3, encoders=encoders, **ens_params)\n #Rotation ensembles\n #ens_clock_rot = nengo.Ensemble(n_hid,dim**2,seed=3,encoders=encoders, **ens_params)\n ens_counter_rot = nengo.Ensemble(n_hid,dim**2,seed=3,encoders=encoders, **ens_params)\n \n \n #Inhibition nodes\n #inhib_iden = nengo.Node(inhibit_identity)\n #inhib_clock_rot = nengo.Node(inhibit_rotate_clockwise)\n #inhib_counter_rot = nengo.Node(inhibit_rotate_counter)\n\n #Connect the main ensemble to each manipulation ensemble and back with appropriate transformation\n #Identity\n #nengo.Connection(ens.neurons, ens_iden.neurons,transform=identity_after_encoder_weights.T,synapse=0.1)\n #nengo.Connection(ens_iden.neurons, ens.neurons,transform=identity_after_encoder_weights.T,synapse=0.1)\n #Clockwise\n #nengo.Connection(ens.neurons, 
ens_clock_rot.neurons, transform = rotated_clockwise_after_encoder_weights.T,synapse=0.1)\n #nengo.Connection(ens_clock_rot.neurons, ens.neurons, transform = rotated_clockwise_after_encoder_weights.T,synapse = 0.1)\n #Counter-clockwise\n #nengo.Connection(ens.neurons, ens_counter_rot.neurons, transform = rotated_counter_after_encoder_weights.T, synapse=0.1)\n #nengo.Connection(ens_counter_rot.neurons, ens.neurons, transform = rotated_counter_after_encoder_weights.T, synapse=0.1)\n #nengo.Connection(ens_counter_rot.neurons, ens.neurons, transform = rotated_counter_after_encoder_weights_filter.T, synapse=0.1)\n \n #nengo.Connection(ens.neurons, ens_counter_rot.neurons, transform = low_pass_weights.T, synapse=0.1)\n #nengo.Connection(ens_counter_rot.neurons, ens.neurons, transform = low_pass_weights.T, synapse=0.1)\n\n #Clean up by a node\n #n = nengo.Node(node_func, size_in=dim**2)\n #nengo.Connection(ens.neurons,n,transform=activity_to_img_weights.T, synapse=0.1)\n #nengo.Connection(n,ens_counter_rot,synapse=0.1)\n\n #Connect the inhibition nodes to each manipulation ensemble\n #nengo.Connection(inhib_iden, ens_iden.neurons, transform=[[-1]] * n_hid)\n #nengo.Connection(inhib_clock_rot, ens_clock_rot.neurons, transform=[[-1]] * n_hid)\n #nengo.Connection(inhib_counter_rot, ens_counter_rot.neurons, transform=[[-1]] * n_hid)\n\n \n #Collect output, use synapse for smoothing\n probe = nengo.Probe(ens.neurons,synapse=0.1)\n \n\nsim = nengo.Simulator(model)\n\nsim.run(5)",
"The following is not part of the brain model, it is used to view the output for the ensemble\nSince it's probing the neurons themselves, the output must be transformed from neuron activity to visual image",
"'''Animation for Probe output'''\nfig = plt.figure()\n\noutput_acts = []\nfor act in sim.data[probe]:\n output_acts.append(np.dot(act,activity_to_img_weights))\n\ndef updatefig(i):\n im = pylab.imshow(np.reshape(output_acts[i],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'),animated=True)\n \n return im,\n\nani = animation.FuncAnimation(fig, updatefig, interval=0.1, blit=True)\nplt.show()\n\n#ouput_acts = sim.data[probe]\n\nplt.subplot(261)\nplt.title(\"100\")\npylab.imshow(np.reshape(output_acts[100],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(262)\nplt.title(\"500\")\npylab.imshow(np.reshape(output_acts[500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(263)\nplt.title(\"1000\")\npylab.imshow(np.reshape(output_acts[1000],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(264)\nplt.title(\"1500\")\npylab.imshow(np.reshape(output_acts[1500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(265)\nplt.title(\"2000\")\npylab.imshow(np.reshape(output_acts[2000],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(266)\nplt.title(\"2500\")\npylab.imshow(np.reshape(output_acts[2500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(267)\nplt.title(\"3000\")\npylab.imshow(np.reshape(output_acts[3000],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(268)\nplt.title(\"3500\")\npylab.imshow(np.reshape(output_acts[3500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(269)\nplt.title(\"4000\")\npylab.imshow(np.reshape(output_acts[4000],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(2,6,10)\nplt.title(\"4500\")\npylab.imshow(np.reshape(output_acts[4500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(2,6,11)\nplt.title(\"5000\")\npylab.imshow(np.reshape(output_acts[4999],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\n\nplt.show()",
"Pickle the probe's output if it takes a long time to run",
"#The filename includes the number of neurons and which digit is being rotated\nfilename = \"mental_rotation_output_ONE_\" + str(n_hid) + \".p\"\ncPickle.dump(sim.data[probe], open( filename , \"wb\" ) )",
"Testing",
"testing = np.dot(ONE,np.dot(label_weights,activity_to_img_weights))\ntesting = output_acts[300]\nplt.subplot(131)\npylab.imshow(np.reshape(testing,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\n#Get image\n#testing = np.dot(ONE,np.dot(label_weights,activity_to_img_weights))\n#noise = np.random.random([28,28]).ravel()\ntesting = node_func(0,testing)\n\nplt.subplot(132)\npylab.imshow(np.reshape(testing,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\n\n#Get activity of image\n_, testing_act = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=testing)\n\n#Get encoder outputs\ntesting_filter = np.dot(testing_act,rotated_counter_after_encoder_weights_filter)\n\n#Get activities\ntesting_filter = ens.neuron_type.rates(testing_filter, sim.data[ens].gain, sim.data[ens].bias)\n\nfor i in range(5):\n testing_filter = np.dot(testing_filter,rotated_counter_after_encoder_weights_filter)\n testing_filter = ens.neuron_type.rates(testing_filter, sim.data[ens].gain, sim.data[ens].bias)\n testing_filter = np.dot(testing_filter,activity_to_img_weights)\n testing_filter = node_func(0,testing_filter)\n _, testing_filter = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=testing_filter)\n\n\n#testing_rotate = np.dot(testing_rotate,rotation_weights)\n\ntesting_filter = np.dot(testing_filter,activity_to_img_weights)\n\nplt.subplot(133)\npylab.imshow(np.reshape(testing_filter,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nplt.show()\n\n\nplt.subplot(121)\npylab.imshow(np.reshape(X_train[0],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\n#Get activity of image\n_, testing_act = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=X_train[0])\n\ntesting_rotate = np.dot(testing_act,activity_to_img_weights)\n\nplt.subplot(122)\npylab.imshow(np.reshape(testing_rotate,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nplt.show()",
"Just for fun",
"letterO = np.dot(ZERO,np.dot(label_weights,activity_to_img_weights))\nplt.subplot(161)\npylab.imshow(np.reshape(letterO,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nletterL = np.dot(SEVEN,label_weights)\nfor _ in range(30):\n letterL = np.dot(letterL,rotation_weights)\nletterL = np.dot(letterL,activity_to_img_weights)\nplt.subplot(162)\npylab.imshow(np.reshape(letterL,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nletterI = np.dot(ONE,np.dot(label_weights,activity_to_img_weights))\nplt.subplot(163)\npylab.imshow(np.reshape(letterI,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(165)\npylab.imshow(np.reshape(letterI,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nletterV = np.dot(SEVEN,label_weights)\nfor _ in range(40):\n letterV = np.dot(letterV,rotation_weights)\nletterV = np.dot(letterV,activity_to_img_weights)\nplt.subplot(164)\npylab.imshow(np.reshape(letterV,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nletterA = np.dot(SEVEN,label_weights)\nfor _ in range(10):\n letterA = np.dot(letterA,rotation_weights)\nletterA = np.dot(letterA,activity_to_img_weights)\nplt.subplot(166)\npylab.imshow(np.reshape(letterA,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
harsh6292/machine-learning-nd
|
projects/customer_segments/customer_segments.ipynb
|
mit
|
[
"Machine Learning Engineer Nanodegree\nUnsupervised Learning\nProject: Creating Customer Segments\nWelcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. \n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\nGetting Started\nIn this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.\nThe dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.\nRun the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.",
"# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom IPython.display import display # Allows the use of display() for DataFrames\n\n# Import supplementary visualizations code visuals.py\nimport visuals as vs\n\n# Pretty display for notebooks\n%matplotlib inline\n\n# Load the wholesale customers dataset\ntry:\n data = pd.read_csv(\"customers.csv\")\n data.drop(['Region', 'Channel'], axis = 1, inplace = True)\n print \"Wholesale customers dataset has {} samples with {} features each.\".format(*data.shape)\nexcept:\n print \"Dataset could not be loaded. Is the dataset missing?\"",
"Data Exploration\nIn this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.\nRun the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.",
"# Display a description of the dataset\ndisplay(data.describe())",
"Implementation: Selecting Samples\nTo get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.",
"# TODO: Select three indices of your choice you wish to sample from the dataset\nindices = [22, 165, 380]\n\n# Create a DataFrame of the chosen samples\nsamples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)\nprint \"Chosen samples of wholesale customers dataset:\"\ndisplay(samples)\n\nimport seaborn as sns\n\nsns.heatmap((samples-data.mean())/data.std(ddof=0), annot=True, cbar=False, square=True)",
"Question 1\nConsider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.\nWhat kind of establishment (customer) could each of the three samples you've chosen represent?\nHint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying \"McDonalds\" when describing a sample customer as a restaurant.\nAnswer:\nBy looking at the data set above of 3 samples, \n- Sample 0 customer buys a lot of Fresh and Frozen products as compared to others. This might represent a big retailer.\n\n\nSample 1 customer buys Milk, Grocery and Detergents and Paper. This kind will likely represent a restaurant.\n\n\nSample 2 customer buys Fresh products more than anyone. This one also can represent a market/retailer dealing in fresh products mostly.\n\n\nImplementation: Feature Relevance\nOne interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.\nIn the code block below, you will need to implement the following:\n - Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.\n - Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.\n - Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state.\n - Import a decision tree regressor, set a random_state, and fit the learner to the training data.\n - Report the prediction score of the testing set using the regressor's score function.",
"# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature\nnew_data = data.drop(['Delicatessen'], axis=1)\ntest_label = data['Delicatessen']\n\nfrom sklearn.cross_validation import train_test_split\n\n# TODO: Split the data into training and testing sets using the given feature as the target\nX_train, X_test, y_train, y_test = train_test_split(new_data, test_label, test_size=0.25, random_state=2)\n\nfrom sklearn.tree import DecisionTreeClassifier\n\n# TODO: Create a decision tree regressor and fit it to the training set\nregressor = DecisionTreeClassifier(random_state=2).fit(X_train, y_train)\n\n# TODO: Report the score of the prediction using the testing set\nscore = regressor.score(X_test, y_test)\nprint score",
"Question 2\nWhich feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?\nHint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data.\nAnswer:\nI attempted to predict 'Delicatessen'.\nThe prediction score given by Decision tree is 0.00909\nSince the score predicted is very less if we remove this feature, it means by dropping this feature necessary information will be lost in correctly predicting the customer behavior.\nDelicatessen feature thus is important part of the data and cannot be removed.\nVisualize Feature Distributions\nTo get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.",
"corr = data.corr()\nmask = np.zeros_like(corr)\nmask[np.triu_indices_from(mask, 1)] = True\nwith sns.axes_style(\"white\"):\n ax = sns.heatmap(corr, mask=mask, square=True, annot=True,\n cmap='RdBu', fmt='+.3f')\n plt.xticks(rotation=45, ha='center')\n\n# Produce a scatter matrix for each pair of features in the data\npd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');\n\n",
"Question 3\nAre there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?\nHint: Is the data normally distributed? Where do most of the data points lie? \nAnswer:\nThe pair of features exhibiting some type of correlation are [Milk, Grocery], [Grocery, Detergents_Paper].\nFrom the above data, Delicatessen feature doesn't seem to have any correlation between any other features. All the features when compared to Deli are bunched near the origin and thus don't have a normalized distribution.\nThis denies my suspicion about the relevance of the feature since the scatter matrix did not show any correlation between Deli and other features.\nData Preprocessing\nIn this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful.\nImplementation: Feature Scaling\nIf data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.\nIn the code block below, you will need to implement the following:\n - Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function for this.\n - Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again, use np.log.",
"fig, axes = plt.subplots(2, 3)\naxes = axes.flatten()\nfig.set_size_inches(18, 6)\nfig.suptitle('Distribution of Features')\n\nfor i, col in enumerate(data.columns):\n feature = data[col]\n sns.distplot(feature, label=col, ax=axes[i]).set(xlim=(-1000, 20000),)\n axes[i].axvline(feature.mean(),linewidth=2, color='y')\n axes[i].axvline(feature.median(),linewidth=1, color='r')\n \n# TODO: Scale the data using the natural logarithm\nlog_data = np.log(data)\n\n# TODO: Scale the sample data using the natural logarithm\nlog_samples = np.log(samples)\n\n\nfig, axes = plt.subplots(2, 3)\naxes = axes.flatten()\nfig.set_size_inches(18, 6)\nfig.suptitle('Distribution of Features for Log Data')\n\nfor i, col in enumerate(log_data.columns):\n feature = log_data[col]\n sns.distplot(feature, label=col, ax=axes[i])\n axes[i].axvline(feature.mean(),linewidth=2, color='y')\n axes[i].axvline(feature.median(),linewidth=1, color='r')\n \n\n# Produce a scatter matrix for each pair of newly-transformed features\npd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');\n\n# set plot style & color scheme\nsns.set_style('ticks')\nwith sns.color_palette(\"Reds_r\"):\n # plot densities of log data\n plt.figure(figsize=(8,4))\n for col in data.columns:\n sns.kdeplot(log_data[col], shade=True)\n plt.legend(loc='best')",
"Observation\nAfter applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).\nRun the code below to see how the sample data has changed after having the natural logarithm applied to it.",
"# Display the log-transformed sample data\ndisplay(log_samples)",
"Implementation: Outlier Detection\nDetecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many \"rules of thumb\" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identfying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.\nIn the code block below, you will need to implement the following:\n - Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.\n - Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.\n - Assign the calculation of an outlier step for the given feature to step.\n - Optionally remove data points from the dataset by adding indices to the outliers list.\nNOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!\nOnce you have performed this implementation, the dataset will be stored in the variable good_data.",
"# For each feature find the data points with extreme high or low values\nfor feature in log_data.keys():\n \n # TODO: Calculate Q1 (25th percentile of the data) for the given feature\n Q1 = np.percentile(log_data[feature], 25.0)\n \n # TODO: Calculate Q3 (75th percentile of the data) for the given feature\n Q3 = np.percentile(log_data[feature], 75.0)\n\n # TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)\n step = (Q3 - Q1) * 1.5\n \n # Display the outliers\n print \"Data points considered outliers for the feature '{}':\".format(feature)\n display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])\n \n# OPTIONAL: Select the indices for data points you wish to remove\noutliers = [65, 66, 75, 128, 154]\n\n# Remove the outliers, if any were specified\ngood_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)",
"Question 4\nAre there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why. \nAnswer:\nThere are some data points which are categorized as outliers in atleast 2 features.\nThe data points added to outliers list are: 65, 66, 75, 128, 154\nThese points should be removed from the dataset because these data points don't lie within normalized distribution range for two or more features. By removing these outliers, we can then correctly classify other data points.\nFeature Transformation\nIn this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.\nImplementation: PCA\nNow that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new \"feature\" of the space, however it is a composition of the original features present in the data.\nIn the code block below, you will need to implement the following:\n - Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.\n - Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.",
"from sklearn.decomposition import PCA\n\n# TODO: Apply PCA by fitting the good data with the same number of dimensions as features\npca = PCA(n_components=6).fit(good_data)\n\n# TODO: Transform log_samples using the PCA fit above\npca_samples = pca.transform(log_samples)\n\n# Generate PCA results plot\npca_results = vs.pca_results(good_data, pca)",
"Question 5\nHow much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.\nHint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the individual feature weights.\nAnswer:\nThe variance in data explained by first and second PCA is 0.7068 or 70.68%\nThe variance explained by first four PCA dimensions is 0.9311 or 93.11%\nThe dimensions representing customer spending is as follows:\n1. Dimension-1\n It indicates that customer spends more on Detergents_Paper and if customer spends more on Detergents_Paper it will also spend more on Milk and Groceries while spending less on Fresh products and Frozen items. This dimension is inclined to represent maybe a coffee/drinks restaurant.\n\n\nDimension-2\n In this dimension, all the weights are positive. Customer spends most on Fresh products, Frozen and Deli items with less focus on Milk, Grocery but also buys other items like Milk, Grocery and Detergents_Paper. This will likely represent a market.\n\n\nDimension-3\n Here customer spends most on Deli and frozen products and doesn't focus on buying Fresh products and Detergents and paper. This most likely represents a Deli restaurant.\n\n\nDimension-4\n This dimension is represented by Frozen. A customer buys more of Frozen products and Detergents and Paper while keeping away from Deli items and Fresh produce.\n\n\nObservation\nRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.",
"# Display sample log-data after having a PCA transformation applied\ndisplay(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))",
"Implementation: Dimensionality Reduction\nWhen using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a signifiant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.\nIn the code block below, you will need to implement the following:\n - Assign the results of fitting PCA in two dimensions with good_data to pca.\n - Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.\n - Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.",
"# TODO: Apply PCA by fitting the good data with only two dimensions\npca = PCA(n_components=2).fit(good_data)\n\n# TODO: Transform the good data using the PCA fit above\nreduced_data = pca.transform(good_data)\n\n# TODO: Transform log_samples using the PCA fit above\npca_samples = pca.transform(log_samples)\n\n# Create a DataFrame for the reduced data\nreduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])",
"Observation\nRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.",
"# Display sample log-data after applying PCA transformation in two dimensions\ndisplay(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))",
"Visualizing a Biplot\nA biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.\nRun the code cell below to produce a biplot of the reduced-dimension data.",
"# Create a biplot\nvs.biplot(good_data, reduced_data, pca)",
"Observation\nOnce we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories. \nFrom the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?\nClustering\nIn this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale. \nQuestion 6\nWhat are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?\nAnswer:\n Advantages of K-means:\n1. It is simple to implement and requires less computation cost than compared to other clustering algorithms.\n2. Since it uses Euclidean distance to identify clusters, it is faster than other clustering algorithms.\n3. It produces tighter clusters than other algorithms (hard classification)\nReference: \n- https://en.wikipedia.org/wiki/K-means_clustering\n- http://playwidtech.blogspot.hk/2013/02/k-means-clustering-advantages-and.html\n- http://www.improvedoutcomes.com/docs/WebSiteDocs/Clustering/K-Means_Clustering_Overview.htm\nAdvantages of Gaussian Mixture Model:\n1. With GMM, a data point is allowed to be loosely associated with one or more clusters based on the probability. The data point is not strictly associated with just one cluster.\n2. Since a single data point can be associated with multiple clusters, GMM will avoid to create a cluster of a particular shape as opposed to K-means (soft classification).\nIn hard classification (like in K-means), a data point is assigned to let's say cluster A with 100% probability or belief that it belongs to cluster A. As the algorithm progresses, it might reverse it belief that the same data point now belongs to another cluster, Cluster B, but with same 100% probability that it belongs to this new cluster. In effect, in every iteration, the cluster assignment is hard in sense that either a data point belong to Cluster or not.\nIn Soft classification (like in GMM), a data point is assigned to Cluster A with some probability like 90%. At the same time the data point also has 10% OF chance that it belongs to Cluster B. In the next iteration, algorithm will recalculate cluster centers and it might lower the chance that data point now has 80% chance of belonging to Cluster A and 20% chance of belonging to Cluster B. 
GMM incorporates this degree of uncertainity into the algorithm.\nReferences: \n- http://scikit-learn.org/stable/modules/mixture.html#mixture\n- https://www.quora.com/What-are-the-advantages-to-using-a-Gaussian-Mixture-Model-clustering-algorithm\n- https://www.r-bloggers.com/k-means-clustering-is-not-a-free-lunch/\n- https://www.quora.com/What-is-the-difference-between-K-means-and-the-mixture-model-of-Gaussian\n- https://shapeofdata.wordpress.com/2013/07/30/k-means/\nBased on the dataset and PCA analysis, it looks like some of the data points (customers) doesn't really belong to a particular group. The PCA dimension-2 group can be seen as this example. Customers buying Fresh, Frozen and Deli can also be loosely placed in group of customers buying more Frozen items or Deli items. For this reason, Gaussian Mixture model is more appropriate to use. \nImplementation: Creating Clusters\nDepending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the \"goodness\" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.\nIn the code block below, you will need to implement the following:\n - Fit a clustering algorithm to the reduced_data and assign it to clusterer.\n - Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.\n - Find the cluster centers using the algorithm's respective attribute and assign them to centers.\n - Predict the cluster for each sample data point in pca_samples and assign them sample_preds.\n - Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.\n - Assign the silhouette score to score and print the result.",
"from sklearn.mixture import GMM\n\n# TODO: Apply your clustering algorithm of choice to the reduced data \nclusterer = GMM(n_components=2).fit(reduced_data)\n\n# TODO: Predict the cluster for each data point\npreds = clusterer.predict(reduced_data)\n\n# TODO: Find the cluster centers\ncenters = clusterer.means_\n\n# TODO: Predict the cluster for each transformed sample data point\nsample_preds = clusterer.predict(pca_samples)\n\nfrom sklearn.metrics import silhouette_score\n\n# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen\nscore = silhouette_score(reduced_data, preds)\nprint score",
"Question 7\nReport the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score? \nAnswer:\n| Number of Clusters | Silhouette Score |\n|---|---|\n| 2 | 0.41181 |\n| 3 | 0.37245 |\n| 5 | 0.29544 |\n| 7 | 0.32197 |\n| 11 | 0.25546 |\n| 15 | 0.22718 |\n| 30 | 0.18454 |\n| 130 | 0.12487 |\n| 205 | 0.22035 |\n| 435 | 0.29435 |\nThe number of clusters with best silhouette score is 2.\nCluster Visualization\nOnce you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.",
"# Display the results of the clustering from implementation\nvs.cluster_results(reduced_data, preds, centers, pca_samples)",
"Implementation: Data Recovery\nEach cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.\nIn the code block below, you will need to implement the following:\n - Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.\n - Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.",
"# TODO: Inverse transform the centers\nlog_centers = pca.inverse_transform(centers)\n\n# TODO: Exponentiate the centers\ntrue_centers = np.exp(log_centers)\n\n# Display the true centers\nsegments = ['Segment {}'.format(i) for i in range(0,len(centers))]\ntrue_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())\ntrue_centers.index = segments\ndisplay(true_centers)\n\nimport seaborn as sns\n\nsns.heatmap((true_centers-data.mean())/data.std(ddof=1), annot=True, cbar=False, square=True)",
"Question 8\nConsider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?\nHint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'.\nAnswer:\nBased on the cluster centers in above figure and the statistical representation\n- Segment 0 depicts customer that buys more Fresh and Frozen items with little spending on Grocery and Detergents and Paper.\n- Segment 1 depicts a customer that buys more Milk, Grocery and Detergent_Paper with less focus on Fresh and Frozen products,\nSegment 1 will then represent coffee/drink restaurant and Segment 0 will represent a prepared foods industry like catering for airline etc.\nQuestion 9\nFor each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?\nRun the code block below to find which cluster each sample point is predicted to be.",
"# Display the predictions\nfor i, pred in enumerate(sample_preds):\n print \"Sample point\", i, \"predicted to be in Cluster\", pred",
"Answer:\n- For the sample point 0, Segment 1 best represents it.\n- For the Sample point 1, Segment 1 best represents it.\n- For the Sample point 2, Segment 0 best represents it.\nSample points 1 & 2 are predicted correctly. It looks like sample point 0 belongs more towards cluster 0, but is predicted to be in cluster 1.\nConclusion\nIn this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.\nQuestion 10\nCompanies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?\nHint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?\nAnswer:\nSegment 1 customers buys more of Milk, Grocery and Detergent and Paper while Segment 0 customers buys more of Fresh products and Frozen items.\nDistributor can first select segment 1 customers since Milk, Grocery items need not be delivered everyday and can work with reduced number of days. Distributor can then perform A/B test by dividing the segment 0 customers into two where one group will receive shipments 5 days a week and other group 3 days a week. Based on the response of two groups, if the group with shipment of 3 days respond positively, distributor can choose to switch customers of segment 1 to 3 day shipments.\nSimilarly, customers in Segment 0 order more Fresh produce which might make customers unhappy about 3 day shipment as fresh produce will not be much 'fresh' as compared to 5 day delivery.\nHowever, all customers in segment 1 cannot be considered as equally since some customers may specifically require 5 day delivery to maintain fresh stock.\nQuestion 11\nAdditional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.\nHow can the wholesale distributor label the new customers using only their estimated product spending and the customer segment data?\nHint: A supervised learner could be used to train on the original customers. 
What would be the target variable?\nAnswer:\nBy combining the customer segment data as a new feature with the 6 original features, distributor can use supervised learning algorithm like Decision Trees or Gradient Boosting to train and predict a customer's delivery schedule based on the results of A/B testing done earlier.\nThe target variable will then be delivery schedule for new customers.\nVisualizing Underlying Distributions\nAt the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.\nRun the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.",
"# Display the clustering results based on 'Channel' data\nvs.channel_results(reduced_data, outliers, pca_samples)",
"Question 12\nHow well does the clustering algorithm and number of clusters you've chosen compare to this underlying distribution of Hotel/Restaurant/Cafe customers to Retailer customers? Are there customer segments that would be classified as purely 'Retailers' or 'Hotels/Restaurants/Cafes' by this distribution? Would you consider these classifications as consistent with your previous definition of the customer segments?\nAnswer:\n1. The clustering algorithm does well in predicting 2 as the number of clusters. Seeing the distribution above for 'Channel' feature, the clustering algorithm also predicts nearly same distribution.\n\n\nYes, based on the above distribution visualizing 'Channel' feature, most of the customers belong to either one of the customer segments and can be classified as 'Retailers' or 'Hotel/Restaurants/Cafe'.\n\n\nYes, the clustering algorithm was able to find a distribution similar to above one based on the 6 features and not including 'Channel' feature and came to same conclusion by giving two segments/clusters as output.\n\n\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nreimers/deeplearning4nlp-tutorial
|
2015-10_Lecture/Lecture4/code/MNIST/Autoencoder.ipynb
|
apache-2.0
|
[
"Autoencoder for MNIST Dataset\nThis scripts trains an autoencoder on the MNIST dataset and plots some representation. It also tries to estimate how good the representation is using a a k-Means clustering an then computing the accurarcy of the clusters.\nReading the dataset\nThis reads the MNIST hand written digit dataset and creates a subset of the training data with only 10 training examples per class.",
"import gzip\nimport cPickle\nimport numpy as np\nimport theano\nimport theano.tensor as T\nimport random\n\nexamples_per_labels = 10\n\n\n# Load the pickle file for the MNIST dataset.\ndataset = 'mnist.pkl.gz'\n\nf = gzip.open(dataset, 'rb')\ntrain_set, dev_set, test_set = cPickle.load(f)\nf.close()\n\n#train_set contains 2 entries, first the X values, second the Y values\ntrain_x, train_y = train_set\ndev_x, dev_y = dev_set\ntest_x, test_y = test_set\n\nprint 'Train: ', train_x.shape\nprint 'Dev: ', dev_x.shape\nprint 'Test: ', test_x.shape\n\nexamples = []\nexamples_labels = []\nexamples_count = {}\n\nfor idx in xrange(train_x.shape[0]):\n label = train_y[idx]\n \n if label not in examples_count:\n examples_count[label] = 0\n \n if examples_count[label] < examples_per_labels:\n arr = train_x[idx]\n examples.append(arr)\n examples_labels.append(label)\n examples_count[label]+=1\n\ntrain_subset_x = np.asarray(examples)\ntrain_subset_y = np.asarray(examples_labels)\n\nprint \"Train Subset: \",train_subset_x.shape\n\n",
"Baseline\nWe use a feed forward network to train on the subset and to derive a accurarcy.",
"from keras.layers import containers\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Flatten, AutoEncoder, Dropout\nfrom keras.optimizers import SGD\nfrom keras.utils import np_utils\nfrom keras.callbacks import EarlyStopping\n\nrandom.seed(1)\nnp.random.seed(1)\n\nnb_epoch = 50\nbatch_size = 100\nnb_labels = 10\n\ntrain_subset_y_cat = np_utils.to_categorical(train_subset_y, nb_labels)\ndev_y_cat = np_utils.to_categorical(dev_y, nb_labels)\ntest_y_cat = np_utils.to_categorical(test_y, nb_labels)\n\nmodel = Sequential()\nmodel.add(Dense(1000, input_dim=train_x.shape[1], activation='tanh'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(nb_labels, activation='softmax'))\n\n\n\n\nmodel.compile(loss='categorical_crossentropy', optimizer='Adam')\nearlyStopping = keras.callbacks.EarlyStopping(monitor='val_loss', patience=1, verbose=0)\n\nprint('Start training')\nmodel.fit(train_subset_x, train_subset_y_cat, batch_size=batch_size, nb_epoch=nb_epoch,\n show_accuracy=True, verbose=True, validation_data=(dev_x, dev_y_cat), callbacks=[earlyStopping])\n\nscore = model.evaluate(test_x, test_y_cat, show_accuracy=True, verbose=False)\nprint('Test accuracy:', score[1])",
"Autoencoder - Pretraining\nThis is the code how the autoencoder should work in principle. However, the pretraining does not workly too good, as is has no real impact when then trained on the labeld data. But it gives some useful representations for the data.",
"# Train the autoencoder\n# Source: https://github.com/fchollet/keras/issues/358\nfrom keras.layers import containers\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Flatten, AutoEncoder, Dropout\nfrom keras.optimizers import SGD\nfrom keras.utils import np_utils\n\nrandom.seed(3)\nnp.random.seed(3)\n\n\n\nnb_epoch_pretraining = 10\nbatch_size_pretraining = 500\n\n\n# Layer-wise pretraining\nencoders = []\ndecoders = []\nnb_hidden_layers = [train_x.shape[1], 500, 2]\nX_train_tmp = np.copy(train_x)\n\ndense_layers = []\n\nfor i, (n_in, n_out) in enumerate(zip(nb_hidden_layers[:-1], nb_hidden_layers[1:]), start=1):\n print('Training the layer {}: Input {} -> Output {}'.format(i, n_in, n_out))\n # Create AE and training\n ae = Sequential()\n if n_out >= 100:\n encoder = containers.Sequential([Dense(output_dim=n_out, input_dim=n_in, activation='tanh'), Dropout(0.5)])\n else:\n encoder = containers.Sequential([Dense(output_dim=n_out, input_dim=n_in, activation='tanh')])\n decoder = containers.Sequential([Dense(output_dim=n_in, input_dim=n_out, activation='tanh')])\n ae.add(AutoEncoder(encoder=encoder, decoder=decoder, output_reconstruction=False))\n \n sgd = SGD(lr=2, decay=1e-6, momentum=0.0, nesterov=True)\n ae.compile(loss='mse', optimizer='adam')\n ae.fit(X_train_tmp, X_train_tmp, batch_size=batch_size_pretraining, nb_epoch=nb_epoch_pretraining, verbose = True, shuffle=True)\n # Store trainined weight and update training data\n encoders.append(ae.layers[0].encoder)\n decoders.append(ae.layers[0].decoder)\n \n X_train_tmp = ae.predict(X_train_tmp)\n \n\n \n\n\n##############\n \n \n \n \n\n#End to End Autoencoder training \nif len(nb_hidden_layers) > 2:\n full_encoder = containers.Sequential()\n for encoder in encoders:\n full_encoder.add(encoder)\n\n full_decoder = containers.Sequential()\n for decoder in reversed(decoders):\n full_decoder.add(decoder)\n\n full_ae = Sequential()\n full_ae.add(AutoEncoder(encoder=full_encoder, decoder=full_decoder, output_reconstruction=False)) \n full_ae.compile(loss='mse', optimizer='adam')\n\n print \"Pretraining of full AE\"\n full_ae.fit(train_x, train_x, batch_size=batch_size_pretraining, nb_epoch=nb_epoch_pretraining, verbose = True, shuffle=True)\n\n",
"Plot Autoencoder\nHere we are going to plot the output of the autoencoder (dimension of the last hidden layer should be 2).",
"############\n# Plot it\n############\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatches\n\nmodel = Sequential()\nfor encoder in encoders:\n model.add(encoder)\nmodel.compile(loss='categorical_crossentropy', optimizer='Adam')\n\nae_test = model.predict(test_x)\n\ncolors = {0: 'b', 1: 'g', 2: 'r', 3:'c', 4:'m',\n 5:'y', 6:'k', 7:'orange', 8:'darkgreen', 9:'maroon'}\n\nmarkers = {0: 'o', 1: '+', 2: 'v', 3:'<', 4:'>',\n 5:'^', 6:'s', 7:'p', 8:'*', 9:'x'}\n\nplt.figure(figsize=(10, 10)) \npatches = []\nfor idx in xrange(0,300): \n point = ae_test[idx]\n label = test_y[idx]\n \n if label in [2,5,8,9]: #We skip these labels to make the plot clearer\n continue\n \n color = colors[label]\n marker = markers[label]\n line = plt.plot(point[0], point[1], color=color, marker=marker, markersize=8)\n \n#plt.axis([-1.1, 1.1, -1.1, +1.1])",
"PCA\nIn comparison we are going to plot also an PCA image.",
"from sklearn.decomposition import PCA\npca = PCA(n_components=2)\npca.fit(train_x)\n\npca_test = pca.transform(test_x)\n\ncolors = {0: 'b', 1: 'g', 2: 'r', 3:'c', 4:'m',\n 5:'y', 6:'k', 7:'orange', 8:'darkgreen', 9:'maroon'}\n\nmarkers = {0: 'o', 1: '+', 2: 'v', 3:'<', 4:'>',\n 5:'^', 6:'s', 7:'p', 8:'*', 9:'x'}\n\nplt.figure(figsize=(10, 10)) \npatches = []\nfor idx in xrange(0,300): \n point = pca_test[idx]\n label = test_y[idx]\n if label in [2,5,8,9]:\n continue\n color = colors[label]\n marker = markers[label]\n line = plt.plot(point[0], point[1], color=color, marker=marker, markersize=8)\n \n#plt.axis([-1.1, 1.1, -1.1, +1.1])\nplt.show()",
"k-Means clustering\nWe run a k-means clustering on the AutoEncoder representations and the PCA representations and then do a majority voting to get the label per cluster. We then compute the accurarcy of the clustering. This gives us some impression how good the 2-dim representations are. This is not perfect, as AutoEncoder and PCA might create non-linear cluster boundaries.",
"from sklearn.cluster import KMeans\nimport operator\n\ndef clusterAccurarcy(predictions, n_clusters=10):\n km = KMeans(n_clusters=n_clusters)\n\n clusters = km.fit_predict(predictions)\n\n #Count labels per cluster\n labelCount = {}\n\n for idx in xrange(len(test_y)):\n cluster = clusters[idx]\n label = test_y[idx]\n\n if cluster not in labelCount:\n labelCount[cluster] = {}\n\n if label not in labelCount[cluster]:\n labelCount[cluster][label] = 0\n\n labelCount[cluster][label] += 1\n\n #Majority Voting\n clusterLabels = {}\n for num in xrange(n_clusters): \n maxLabel = max(labelCount[num].iteritems(), key=operator.itemgetter(1))[0]\n clusterLabels[num] = maxLabel\n #print clusterLabels\n #Number of errors\n errCount = 0\n for idx in xrange(len(test_y)):\n cluster = clusters[idx] \n clusterLabel = clusterLabels[cluster]\n label = test_y[idx]\n\n if label != clusterLabel:\n errCount += 1\n \n return errCount/float(len(test_y))\n \nprint \"PCA Accurarcy: %f%%\" % (clusterAccurarcy(pca_test)*100)\nprint \"AE Accurarcy: %f%%\" % (clusterAccurarcy(ae_test)*100)\n ",
"Using pretrained AutoEncoder for Classification\nIn principle the pretrained AutoEncoder could be used for classification as in the following code. But it does not yet result to better results than the Neural Network without pretraining.",
"nb_epoch = 50\nbatch_size = 100\n\nmodel = Sequential()\nfor encoder in encoders:\n model.add(encoder)\n \n\nmodel.add(Dense(output_dim=nb_labels, activation='softmax'))\n\ntrain_subset_y_cat = np_utils.to_categorical(train_subset_y, nb_labels)\ntest_y_cat = np_utils.to_categorical(test_y, nb_labels)\n\n\n\n\nmodel.compile(loss='categorical_crossentropy', optimizer='Adam')\nscore = model.evaluate(test_x, test_y_cat, show_accuracy=True, verbose=0)\nprint('Test score before fine turning:', score[0])\nprint('Test accuracy before fine turning:', score[1])\nmodel.fit(train_subset_x, train_subset_y_cat, batch_size=batch_size, nb_epoch=nb_epoch,\n show_accuracy=True, validation_data=(dev_x, dev_y_cat), shuffle=True)\nscore = model.evaluate(test_x, test_y_cat, show_accuracy=True, verbose=0)\nprint('Test score after fine turning:', score[0])\nprint('Test accuracy after fine turning:', score[1])\n "
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fgnt/nara_wpe
|
examples/WPE_Numpy_online.ipynb
|
mit
|
[
"%reload_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nimport IPython\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport soundfile as sf\nimport time\nfrom tqdm import tqdm\n\nfrom nara_wpe.wpe import online_wpe_step, get_power_online, OnlineWPE\nfrom nara_wpe.utils import stft, istft, get_stft_center_frequencies\nfrom nara_wpe import project_root\n\nstft_options = dict(size=512, shift=128)",
"Example with real audio recordings\nThe iterations are dropped in contrast to the offline version. To use past observations the correlation matrix and the correlation vector are calculated recursively with a decaying window. $\\alpha$ is the decay factor.\nSetup",
"channels = 8\nsampling_rate = 16000\ndelay = 3\nalpha=0.9999\ntaps = 10\nfrequency_bins = stft_options['size'] // 2 + 1",
"Audio data",
"file_template = 'AMI_WSJ20-Array1-{}_T10c0201.wav'\nsignal_list = [\n sf.read(str(project_root / 'data' / file_template.format(d + 1)))[0]\n for d in range(channels)\n]\ny = np.stack(signal_list, axis=0)\nIPython.display.Audio(y[0], rate=sampling_rate)",
"Online buffer\nFor simplicity the STFT is performed before providing the frames.\nShape: (frames, frequency bins, channels)\nframes: K+delay+1",
"Y = stft(y, **stft_options).transpose(1, 2, 0)\nT, _, _ = Y.shape\n\ndef aquire_framebuffer():\n buffer = list(Y[:taps+delay, :, :])\n for t in range(taps+delay+1, T):\n buffer.append(Y[t, :, :])\n yield np.array(buffer)\n buffer.pop(0)",
"Non-iterative frame online approach\nA frame online example requires, that certain state variables are kept from frame to frame. That is the inverse correlation matrix $\\text{R}_{t, f}^{-1}$ which is stored in Q and initialized with an identity matrix, as well as filter coefficient matrix that is stored in G and initialized with zeros. \nAgain for simplicity the ISTFT is applied afterwards.",
"Z_list = []\nQ = np.stack([np.identity(channels * taps) for a in range(frequency_bins)])\nG = np.zeros((frequency_bins, channels * taps, channels))\n\nfor Y_step in tqdm(aquire_framebuffer()):\n Z, Q, G = online_wpe_step(\n Y_step,\n get_power_online(Y_step.transpose(1, 2, 0)),\n Q,\n G,\n alpha=alpha,\n taps=taps,\n delay=delay\n )\n Z_list.append(Z)\n\nZ_stacked = np.stack(Z_list)\nz = istft(np.asarray(Z_stacked).transpose(2, 0, 1), size=stft_options['size'], shift=stft_options['shift'])\n\nIPython.display.Audio(z[0], rate=sampling_rate)",
"Frame online WPE in class fashion:\nOnline WPE class holds the correlation Matrix and the coefficient matrix.",
"Z_list = []\nonline_wpe = OnlineWPE(\n taps=taps,\n delay=delay,\n alpha=alpha\n)\nfor Y_step in tqdm(aquire_framebuffer()):\n Z_list.append(online_wpe.step_frame(Y_step))\n\nZ = np.stack(Z_list)\nz = istft(np.asarray(Z).transpose(2, 0, 1), size=stft_options['size'], shift=stft_options['shift'])\n\nIPython.display.Audio(z[0], rate=sampling_rate)",
"Power spectrum\nBefore and after applying WPE.",
"fig, [ax1, ax2] = plt.subplots(1, 2, figsize=(20, 8))\nim1 = ax1.imshow(20 * np.log10(np.abs(Y[200:400, :, 0])).T, origin='lower')\nax1.set_xlabel('')\n_ = ax1.set_title('reverberated')\nim2 = ax2.imshow(20 * np.log10(np.abs(Z_stacked[200:400, :, 0])).T, origin='lower')\n_ = ax2.set_title('dereverberated')\ncb = fig.colorbar(im1)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mbakker7/ttim
|
pumpingtest_benchmarks/8_test_of_hardinxveld_recovery.ipynb
|
mit
|
[
"Recovery Test\nThis test is taken from MLU examples.",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom ttim import *\nimport pandas as pd",
"Set basic parameters for the model:",
"H = 27 #aquifer thickness [m]\nzt = -10 #upper boundary of aquifer\nzb = zt - H\nrw = 0.155 #well screen radius [m]\nQ = 1848 #constant discharge rate [m^3/d]\nt0 = 0.013889 #time stop pumping [d]",
"Load data:",
"data = np.loadtxt('data/recovery.txt', skiprows=1)\nt = data[:, 0]\nh = data[:, 1]",
"Conceptual model:",
"ml1 = ModelMaq(kaq=[50, 40], z=[0, zt, zb, -68, -88], c=[1000, 1000], Saq=[1e-4, 5e-5],\\\n topboundary='semi', tmin=1e-4, tmax=0.04)\nw1 = Well(ml1, xw=0, yw=0, rw=rw, res=1, tsandQ=[(0, Q), (t0, 0)], layers=0)\nml1.solve()\n\nca1 = Calibrate(ml1)\nca1.set_parameter(name='kaq0', initial=50, pmin=0)\nca1.set_parameter(name='Saq0', initial=1e-4, pmin=0)\nca1.set_parameter_by_reference(name='res', parameter=w1.res[:], initial=1, pmin=0)\nca1.seriesinwell(name='obs', element=w1, t=t, h=h)\nca1.fit()\n\ndisplay(ca1.parameters)\nprint('RMSE:', ca1.rmse())\n\nhm1 = w1.headinside(t)\nplt.figure(figsize=(8, 5))\nplt.loglog(t, -h, '.', label='obs')\nplt.loglog(t, -hm1[0], label='ttim')\nplt.xlabel('time(d)')\nplt.ylabel('head(m)')\nplt.legend()\nplt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/recovery-double.eps');",
"Add wellbore storage:",
"ml2 = ModelMaq(kaq=[50, 40], z=[0, zt, zb, -68, -88], c=[1000, 1000], Saq=[1e-4, 5e-5],\\\n topboundary='semi', tmin=1e-4, tmax=0.04)\nw2 = Well(ml2, xw=0, yw=0, rw=rw, rc=0.155, res=1, tsandQ=[(0, Q), (t0, 0)], layers=0)\nml2.solve()\n\nca2 = Calibrate(ml2)\nca2.set_parameter(name='kaq0', initial=50, pmin=0)\nca2.set_parameter(name='Saq0', initial=1e-4, pmin=0)\nca2.set_parameter_by_reference(name='rc', parameter=w2.rc[:], initial=0.1, pmin=0)\nca2.set_parameter_by_reference(name='res', parameter=w2.res[:], initial=1, pmin=0)\nca2.seriesinwell(name='obs', element=w2, t=t, h=h)\nca2.fit()\n\ndisplay(ca2.parameters)\nprint('RMSE:', ca2.rmse())\n\nhm2 = w2.headinside(t)\nplt.figure(figsize=(8, 5))\nplt.loglog(t, -h, '.', label='obs')\nplt.loglog(t, -hm2[0], label='ttim')\nplt.xlabel('time(d)')\nplt.ylabel('head(m)')\nplt.legend()\nplt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/recovery-double rc.eps');",
"Simulation with rc has a worse performance. RMSE increases, and the Akaike criteria is much larger than the former model. Thus, rc should be removed.\nSingle aquifer model:",
"ml0 = ModelMaq(kaq=50, z=[0, zt, zb], c=1000, Saq=1e-4, topboundary='semi', \\\n tmin=1e-4, tmax=0.04)\nw0 = Well(ml0, xw=0, yw=0, rw=rw, res=1, tsandQ=[(0, Q), (t0, 0)], layers=0)\nml0.solve()\n\nca0 = Calibrate(ml0)\nca0.set_parameter(name='kaq0', initial=50, pmin=0)\nca0.set_parameter(name='Saq0', initial=1e-4, pmin=0)\nca0.set_parameter_by_reference(name='res', parameter=w0.res[:], initial=1)\nca0.seriesinwell(name='obs', element=w0, t=t, h=h)\nca0.fit()\n\ndisplay(ca0.parameters)\nprint('RMSE:', ca0.rmse())\n\nhm = w0.headinside(t)\nplt.figure(figsize=(8, 5))\nplt.loglog(t, -h, '.', label='obs')\nplt.loglog(t, -hm[0], label='ttim')\nplt.xlabel('time(d)')\nplt.ylabel('head(m)')\nplt.legend()\nplt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/recovery-single.eps');",
"Besides linear curve fitting solution, MLU uses log-drawdown-curve-fitting as comparision.\nfitm is a version with changed objective function of the Calibrate function. The original objective function is 'h_observed - h_predicted', while for log drawdown curve fitting solution, the objective function has been changed to 'log10(-h_observed) - log10(-h_predicted)'.",
"from fitm import Calibrate\n\nml3 = ModelMaq(kaq=[50, 40], z=[0, zt, zb, -68, -88], c=[1000, 1000], Saq=[1e-4, 5e-5],\\\n topboundary='semi', tmin=1e-4, tmax=0.04)\nw3 = Well(ml3, xw=0, yw=0, rw=rw, res=1, tsandQ=[(0, Q), (t0, 0)], layers=0)\nml3.solve()\n\nca3 = Calibrate(ml3)\nca3.set_parameter(name='kaq0', initial=50, pmin=0)\nca3.set_parameter(name='Saq0', initial=1e-4,pmin=0)\nca3.set_parameter_by_reference(name='res', parameter=w3.res[:], initial=1,pmin=0)\nca3.seriesinwell(name='obs', element=w3,t=t, h=h)\nca3.fit(report=True)\n\ndisplay(ca3.parameters)\nprint('RMSE:', ca3.rmse())\n\nhm3 = w3.headinside(t)\nplt.figure(figsize=(8, 5))\nplt.loglog(t, -h, '.', label='obs')\nplt.loglog(t, -hm3[0], label='ttim')\nplt.xlabel('time(d)')\nplt.ylabel('head(m)')\nplt.legend()\nplt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/recovery-double log.eps');",
"According to rmse and the Akaike criteria, log curve fitting solution performs worse than linear curve fitting. The results reported in following table are from models calibrated by linear curve fitting solution.\nSummary of values modeled by different methods:",
"ta = pd.DataFrame(columns=['k [m/d]', 'Ss [1/m]', 'res'], \\\n index=['MLU-log', 'TTim-single layer', 'TTim-two layers'])\nta.loc['TTim-single layer'] = ca0.parameters['optimal'].values\nta.loc['TTim-two layers'] = ca1.parameters['optimal'].values\nta.loc['MLU-log'] = [51.530, 8.16E-04, 0.022]\nta['RMSE [m]'] = [0.00756, ca0.rmse(), ca1.rmse()]\nta"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
awadalaa/DataSciencePractice
|
practice/03.TrainingModelWithSciKit.ipynb
|
mit
|
[
"Training a machine learning model with scikit-learn\nAgenda\n\nWhat is the K-nearest neighbors classification model?\nWhat is the four steps for model training and prediction in scikit learn?\nHow can I apply this pattern to other machine learning models?\n\nReviewing the iris dataset",
"from IPython.display import HTML\nHTML('<iframe src=http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data width=300 height=200></iframe>')",
"150 observations\n4 features (sepal length, sepal width, petal length, petal width)\nResponse variable is the iris species\nClassification problem since response is categorical\nMore information in the UCI Machine Learning Repository\n\nK-nearest neighbors (KNN) classification\n\nPick a value for K.\nSearch for the K observations in the training data that are \"nearest\" to the measurements of the unknown iris.\nUse the most popular response value from the K nearest neighbors as the predicted response value for the unknown iris.\n\nLoading the data",
"# import load_iris function from datasets module\nfrom sklearn.datasets import load_iris\n\n# save \"bunch\" object containing iris dataset and its attributes\niris = load_iris()\n\n# store feature matrix in \"X\"\nX = iris.data\n\n# store response vector in \"y\"\ny = iris.target\n\n# print the shapes of X and y\nprint X.shape\nprint y.shape",
"scikit-learn 4-step modeling pattern\n Step 1:Import the class you plan to use",
"from sklearn.neighbors import KNeighborsClassifier",
"Step 2: \"Instantiate\" the \"estimator\"\n* \"Estimator\" is scikit-learn's term for model\n* \"Instantiate\" means \"make an instance of\"",
"knn = KNeighborsClassifier(n_neighbors=1)",
"Name of the object does not matter\nCan specify tuning parameters (aka \"hyperparameters\") during this step\nAll parameters not specified are set to their defaults",
"print knn",
"Step 3: Fit the model with data (aka \"model training\")\n* Model is learning the relationship between X and y\n* Occurs in-place",
"knn.fit(X,y)",
"Step 4: Predict the response for a new observation\n* New observations are called \"out-of-sample\" data\n* Uses the information it learned during the model training process",
"knn.predict([3,5,4,2])",
"Returns a NumPy array\nCan predict for multiple observations at once",
"X_new = [[3, 5, 4, 2], [5, 4, 3, 2]]\nknn.predict(X_new)",
"Using a different value for K",
"# instantiate the model (using the value K=5)\nknn = KNeighborsClassifier(n_neighbors=5)\n\n# fit the model with data\nknn.fit(X, y)\n\n# predict the response for new observations\nknn.predict(X_new)",
"Using a different classification model",
"# import the class\nfrom sklearn.linear_model import LogisticRegression\n\n# instantiate the model (using the default parameters)\nlogreg = LogisticRegression()\n\n# fit the model with data\nlogreg.fit(X, y)\n\n# predict the response for new observations\nlogreg.predict(X_new)",
"Resources\n\nNearest Neighbors (user guide), KNeighborsClassifier (class documentation)\nLogistic Regression (user guide [LogisticRegression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html (class documentation)\nVideos from An Introduction to Statistical Learning\nClassification Problems and K-Nearest Neighbors (Chapter 2)\nIntroduction to Classification (Chapter 4)\nLogistic Regression and Maximum Likelihood (Chapter 4)",
"from IPython.core.display import HTML\ndef css_styling():\n styles = open(\"styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AshleySanders/Tutorials
|
w22_NLTK_TextPreprocessing.ipynb
|
mit
|
[
"import nltk",
"<h1>Reading in a Text Corpus</h1>",
"from nltk.corpus import PlaintextCorpusReader\n\n# identify the root folder where texts are stored\ncorpus_root = '/Users/asg/Dropbox/00-UCLA/Courses/DH150-TextAnalysis/plain_text'\n\n# \"Read\" the list of files\nfilelist = PlaintextCorpusReader(corpus_root, '.*')\n\n# list file names in root folder\nfilelist.fileids()",
"<h1>Single Document Analysis</h1>\n\nLet's begin by exploring just one document and its word frequencies.",
"# Use words function to create a words list from a single file\nwordslist = filelist.words('1737_OldBailey_piracy_trial.txt')\n\n# 'sents' is a method from PlaintextCorpusReader that reads in sentences from a document or corpus\n# 'paras' is a method from the same that reads in paragraphs\n\n# Begin to explore words in the text\n# If you're interested in longer words that may carry more meaning, say words longer than three letters\nfiltered_words = [w for w in set(wordslist) if len(w) > 3]\n#filtered_words\n\nfrom nltk.probability import FreqDist\n\n# Calculate the frequency distribution of all words\nfdist = FreqDist(wordslist)\nfdist\n\n# Plot the frequency distribution of the top 30 words. \n\nfdist.plot(30)",
"<h2>Zipf's Law</h2>\n\nGiven some corpus of natural language utterances, the frequency of any word is inversely proportional to its rank on the frequency table.\n\nThis is why we want to remove stopwords. We are much more interested in words with much lower frequencies than \"the\", \"and\", or \"to\". Removing stopwords allows us to filter out this very high frequency words to focus on words that are likely to be more meaningful. \nOne quick way to filter words without a stopwords list specified is to filter on length as we did above. Notice how the graph changes and which words appear when we filter on length.\n<h3>Optional Extension: Explore Zipf's Law</h3>\n<ol>\n <li>Read the [wikipedia article on Zipf’s Law](https://en.wikipedia.org/wiki/Zipf%27s_law)</li>\n <li>Generate a table of words and their frequencies in Voyant, export it as a tsv, and open it in Excel/Google Sheets <b>or</b> if you're familiar with pandas, create a data frame with this information using Python.\n <li>Add the following new columns to your spreadsheet/data frame:</li>\n <ul>\n <li>Word Rank</li>\n <li>log(rank)</li>\n <li>log(frequency)</li>\n </ul>\n <li>Graph log(rank) vs log(frequency) on a log-log graph</li>\n <li>Why might the log transformation be helpful in text analysis?</li>\n</ol>",
"fdist_2 = FreqDist(word.lower() for word in set(filtered_words))\n\nfdist_2.plot(30)\n\n\n# Uncomment the following line by removing the hash symbol at the start of the line and run the code to read the documentation on frequency distributions.\n# help(FreqDist)",
"<h1> Preprocessing text files for analysis</h1>\n\nWe're now starting to see why we need to use some sort of filtration method for our word list. In the last step, you will notice that we also use a method called \"lower\" to convert all of the text to lowercase so that all instances of a word will be grouped together whether that word is lower or upper case in the original raw text. This is generally one of the first steps in preprocessing a document before we begin analysis. Here is an outline of usual procedures to prepare our text:\n<ol>\n <li>Tokenize text by paragraphs, sentences, or words (usually by words)</li>\n <li>Convert all tokens to lower case</li>\n <li>Remove stopwords</li>\n <li><em>Optional:</em> Use a \"stemmer\" to chop off word endings to group words with the same meaning together (flowery, flowering, flowers all reduce to flower, for example). <b>Or</b> use a \"lemmatizer\" to reduce tokens to their root words, rather than simply lopping off the ends of words. This is more precise, but it take much longer to run.</li>\n</ol>\n\n<b>In step 0 below, please note that the path to your document will be different than mine (shown in red below). See Working with Files for more tips on how to open files when working in Jupyter Labs.</b>\n<h2>0. Read in your document</h2>",
"#Read in your document with the open() function; the 'r' opens the file in \"read\" mode.\n\ntext = open('/Users/asg/Dropbox/00-UCLA/Courses/DH150-TextAnalysis/plain_text/1737_OldBailey_piracy_trial.txt', 'r').read()\n\n# Uncomment the following line to see the document. Note that new line characters (\\n) appear in the text.\n# text.read()",
"<h2>1. Tokenize text</h2>",
"# Tokenize text \nfrom nltk import word_tokenize\ntokenlist = word_tokenize(text)\n\n# Find out how many tokens there are in the document with the length (len()) method\nprint(len(tokenlist))\n\n#Determine the type of object we've created with the tokenlist\ntype(tokenlist)",
"<h3>Strings and lists</h3>\n\n<h4>The Difference between Lists and Strings:</h4>\nStrings and lists are both kinds of sequence. We can pull them apart by indexing and slicing them, and we can join them together by concatenating them. However, we cannot join strings and lists.\n<h4>Why does this matter?</h4>\nWhen we open a file for reading into a Python program, we get a string corresponding to the contents of the whole file. If we use a for loop to process the elements of this string, all we can pick out are the individual characters — we don't get to choose the granularity. By contrast, the elements of a list can be as big or small as we like: for example, they could be paragraphs, sentences, phrases, words, characters. So lists have the advantage that we can be flexible about the elements they contain, and correspondingly flexible about any downstream processing. Consequently, one of the first things we are likely to do in a piece of NLP code is tokenize a string into a list of strings (3.7). Conversely, when we want to write our results to a file, or to a terminal, we will usually format them as a string (3.9). References point to NLTK Book",
"# We can sort the list of words with sorted()\n\nvocab = sorted(set(tokenlist))\n\n#Check the first four tokens in the sorted list. Note that punctuation appears first. We'll remove this in step 3.\nvocab[0:4]",
"<h2>2. Lowercase conversion</h2>",
"# Convert text to lowercase using .lower() and a for loop, which will loop through every word in the token list\n# and convert it lowercase.\n\nwords = [w.lower() for w in tokenlist]\n\n#Check the first four words in our list\nwords[0:4]",
"<h2>3. Remove stopwords</h2>",
"from nltk.corpus import stopwords\n\n#View stopwords in the English stopwords list\nprint(stopwords.words(\"english\"))",
"<b>Notice that numbers and punctuation are not included in the stopwords list.</b>",
"#Remove stopwords by selecting all words that are not in the stopwords list. \nstoplist = stopwords.words(\"english\")\n\nfiltered = [w for w in words if w not in stoplist]\n\n#View first 10 tokens in filtered list.\nfiltered[0:10]\n\n#Uncomment the following line to download punkt tokenizer to remove punctuation (only need to do this step once)\n#nltk.download(\"punkt\")\n\n#Keep only alpha-numeric tokens with word.isalnum() from punkt\nwords = [word for word in filtered if word.isalnum()]",
"<h2>4. Text Normalization</h2>\n\nIn this tutorial, we won't get into stemming and lemmatization, as this an optional step in text preparation. \nIf you decide this would be helpful as you work with your documents, see: NLTK: A Beginners Hands-on Guide to Natural Language Processing. \n<hr/>\n\n<h1>Word Frequency Analysis</h1>\n\nTo begin, we will need to install two new libraries if you don't already have them: \n<ul>\n <li>matplotlib is a data visualization library</li>\n <li>wordcloud is a wordcloud generator</li>\n</ul>",
"#Uncomment the following two lines and run this cell once to install the libraries.\n\n#!pip3 install matplotlib\n#!pip3 install wordcloud\n\n#import the necessary packages\n\nfrom matplotlib import pyplot as plt\n\n# Begin by a exploring a new line chart of the word frequency distribution after our pre-processing steps\nfd = FreqDist(words)\nfd.plot(30)\nplt.show()",
"Now we have reduced our word list to just those that are likely to have more meaning. \nIf the words that appear here are still not satisfactory, you can also use the extended stopwords list available in NLTK or add your own. \nTo add your own, try the following code and replace the words inside the triple quotes with terms you want to remove.\nmore_stopwords = \"\"\"with some your just have from it's /via &amp; that they your there this into providing would can't\"\"\"\nstoplist += more_stopwords.split()",
"import pprint\n\n#Convert word list to a single string\nclean_words_string = \" \".join(words)\n\n#generate a basic wordcloud\nfrom wordcloud import WordCloud\nwordcloud = WordCloud(background_color=\"black\").generate(clean_words_string)\n\n#plot the wordcloud\nplt.figure(figsize = (12, 12))\nplt.imshow(wordcloud)\n\n#to remove the axis value\nplt.axis(\"off\")\nplt.show()",
"<h1>Simple Concordance and Collocations Methods</h1>\n\n<h2>Converting from list to Text type in NLTK</h2>",
"#To be able to use methods like concordance and collocations, convert the token list to nltk's \"Text\" type\ntextcontent = nltk.Text(tokenlist)\ntextcontent \n\ntextcontent.concordance('piracy')\n\n#Get a list of top collocations in the document\ntextcontent.collocations()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dtamayo/reboundx
|
ipython_examples/Migration.ipynb
|
gpl-3.0
|
[
"Migration\nFor modifying orbital elements, REBOUNDx offers two implementations. modify_orbits_direct directly calculates orbital elements and modifies those, while modify_orbits_forces applies forces that when orbit-averaged yield the desired behavior. Let's set up a simple simulation of two planets on initially eccentric and inclined orbits:",
"import rebound\nimport reboundx\nimport numpy as np\nsim = rebound.Simulation()\nainner = 1.\naouter = 10.\ne0 = 0.1\ninc0 = 0.1\n\nsim.add(m=1.)\nsim.add(m=1e-6,a=ainner,e=e0, inc=inc0)\nsim.add(m=1e-6,a=aouter,e=e0, inc=inc0)\nsim.move_to_com() # Moves to the center of momentum frame\nps = sim.particles",
"Now let's set up reboundx and add the modify_orbits_forces effect, which implements the migration using forces:",
"rebx = reboundx.Extras(sim)\nmof = rebx.load_force(\"modify_orbits_forces\")\nrebx.add_force(mof)",
"Both modify_orbits_forces and modify_orbits_direct exponentially alter the semimajor axis, on an e-folding timescale tau_a. If tau_a < 0, you get exponential damping, and for tau_a > 0, exponential growth, i.e.,\n\\begin{equation}\na = a_0e^{t/\\tau_a}\n\\end{equation}\nIn general, each body will have different damping timescales. By default, all particles have timescales of infinity, i.e., no effect. The units of time are set by the units of time in your simulation.\nLet's set a maximum time for our simulation, and give our two planets different (inward) migration timescales. This can simply be done through:",
"tmax = 1.e3\nps[1].params[\"tau_a\"] = -tmax/2.\nps[2].params[\"tau_a\"] = -tmax ",
"Now we run the simulation like we would normally with REBOUND. Here we store the semimajor axes at 1000 equally spaced intervals:",
"Nout = 1000\na1,a2 = np.zeros(Nout), np.zeros(Nout)\ntimes = np.linspace(0.,tmax,Nout)\nfor i,time in enumerate(times):\n sim.integrate(time)\n a1[i] = ps[1].a\n a2[i] = ps[2].a",
"Now let's plot it on a linear-log scale to check whether we get the expected exponential behavior. We'll also overplot the expected exponential decays for comparison.",
"a1pred = [ainner*np.e**(t/ps[1].params[\"tau_a\"]) for t in times]\na2pred = [aouter*np.e**(t/ps[2].params[\"tau_a\"]) for t in times]\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(15,5))\nax = plt.subplot(111)\nax.set_yscale('log')\nax.plot(times,a1, 'r.', label='Integration')\nax.plot(times,a2, 'r.')\nax.plot(times,a1pred, 'k--',label='Prediction')\nax.plot(times,a2pred, 'k--')\nax.set_xlabel(\"Time\", fontsize=24)\nax.set_ylabel(\"Semimajor axis\", fontsize=24)\nax.legend(fontsize=24)",
"Coordinate Systems\nEverything in REBOUND by default uses Jacobi coordinates. To change the coordinate system, see the bottom of EccAndIncDamping.ipynb"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sdpython/ensae_teaching_cs
|
_doc/notebooks/td1a_soft/td1a_cython_edit.ipynb
|
mit
|
[
"1A.soft - Calcul numérique et Cython\nPython est très lent. Il est possible d'écrire certains parties en C mais le dialogue entre les deux langages est fastidieux. Cython propose un mélange de C et Python qui accélère la conception.",
"from jyquickhelper import add_notebook_menu\nadd_notebook_menu()",
"Calcul numérique\nOn peut mesurer le temps que met en programme comme ceci (qui ne marche qu'avec IPython...timeit) :",
"def racine_carree1(x) :\n return x**0.5\n\n%timeit -r 10 [ racine_carree1(x) for x in range(0,1000) ]\n\nimport math\ndef racine_carree2(x) :\n return math.sqrt(x)\n\n%timeit -r 10 [ racine_carree2(x) for x in range(0,1000) ]",
"La seconde fonction est plus rapide. Seconde vérification :",
"%timeit -r 10 [ x**0.5 for x in range(0,1000) ]\n%timeit -r 10 [ math.sqrt(x) for x in range(0,1000) ]",
"On remarque également que l'appel à une fonction pour ensuite effectuer le calcul a coûté environ 100 $\\mu s$ pour 1000 appels. L'instruction timeit effectue 10 boucles qui calcule 1000 fois une racine carrée.\nCython\nLe module Cython est une façon d'accélérer les calculs en insérant dans un programme python du code écrit dans une syntaxe proche de celle du C. Il existe différentes approches pour accélérer un programme python :\n\nCython : on insère du code [C](http://fr.wikipedia.org/wiki/C_(langage) dans le programme python, on peut gagné un facteur 10 sur des fonctions qui utilisent des boucles de façon intensives.\nautres alternatives :\ncffi, il faut connaître le C (ne fait pas le C++)\npythran\nnumba\n...\n\n\nPyPy : on compile le programme python de façon statique au lieu de l'interpréter au fur et à mesure de l'exécution, cette solution n'est praticable que si on a déjà programmé avec un langage compilé ou plus exactement un langage où le typage est fort. Le langage python, parce qu'il autorise une variable à changer de type peut créer des problèmes d'inférence de type.\nmodule implémenté en C : c'est le cas le plus fréquent et une des raisons pour lesquelles Python a été rapidement adopté. Beaucoup de librairies se sont ainsi retrouvées disponibles en Python. Néanmoins, l'API C du Python nécessite un investissement conséquent pour éviter les erreurs. Il est préférable de passer par des outils tels que \nboost python : facile d'accès, le module sera disponible sous forme compilée,\nSWIG : un peu plus difficile, le module sera soit compilé par la librairie soit packagé de telle sorte qu'il soit compilé lors de son l'installation.\n\n\n\nParmi les trois solutions, la première est la plus accessible, et en développement constant (Cython changes). \nL'exemple qui suit ne peut pas fonctionner directement sous notebook car Cython compile un module (fichier *.pyd) avant de l'utiliser. Si la compilation ne fonctionne pas et fait apparaître un message avec unable for find file vcvarsall.bat, il vous faut lire l'article Build a Python 64 bit extension on Windows 8 après avoir noté la version de Visual Studio que vous utilisez. Il est préférable d'avoir programmé en C/C++ même si ce n'est pas indispensable.\nCython dans un notebook\nLe module IPython propose une façon simplifiée de se servir de Cython illustrée ici : Some Linear Algebra with Cython. Vous trouverez plus bas la façon de faire sans IPython que nous n'utiliserons pas pour cette séance. On commence par les préliminaires à n'exécuter d'une fois :",
"%load_ext cython",
"Puis on décrit la fonction avec la syntaxe Cython :",
"%%cython --annotate\ncimport cython\n\ndef cprimes(int kmax):\n cdef int n, k, i\n cdef int p[1000]\n result = []\n if kmax > 1000:\n kmax = 1000\n k = 0\n n = 2\n while k < kmax:\n i = 0\n while i < k and n % p[i] != 0:\n i = i + 1\n if i == k:\n p[k] = n\n k = k + 1\n result.append(n)\n n = n + 1\n return result",
"On termine en estimant son temps d'exécution. Il faut noter aussi que ce code ne peut pas être déplacé dans la section précédente qui doit être entièrement écrite en cython.",
"%timeit [ cprimes (567) for i in range(10) ]",
"Exercice : python/C appliqué à une distance d'édition\nLa distance de Levenshtein aussi appelé distance d'édition calcule une distance entre deux séquences d'éléments. Elle s'applique en particulier à deux mots comme illustré par Distance d'édition et programmation dynamique. L'objectif est de modifier la fonction suivante de façon à utiliser Cython puis de comparer les temps d'exécution.",
"def distance_edition(mot1, mot2):\n dist = { (-1,-1): 0 }\n for i,c in enumerate(mot1) :\n dist[i,-1] = dist[i-1,-1] + 1\n dist[-1,i] = dist[-1,i-1] + 1\n for j,d in enumerate(mot2) :\n opt = [ ]\n if (i-1,j) in dist : \n x = dist[i-1,j] + 1\n opt.append(x)\n if (i,j-1) in dist : \n x = dist[i,j-1] + 1\n opt.append(x)\n if (i-1,j-1) in dist :\n x = dist[i-1,j-1] + (1 if c != d else 0)\n opt.append(x)\n dist[i,j] = min(opt)\n return dist[len(mot1)-1,len(mot2)-1]\n\n%timeit distance_edition(\"idstzance\",\"distances\")",
"Auparavant, il est probablement nécessaire de suivre ces indications :\n\nSi vous souhaitez remplacer le dictionnaire par un tableau à deux dimensions, comme le langage C n'autorise pas la création de tableau de longueur variables, il faut allouer un pointeur (c'est du C par du C++). Toutefois, je déconseille cette solution :\nCython n'accepte pas les doubles pointeurs : How to declare 2D list in Cython, les pointeurs simples si Python list to Cython.\nCython n'est pas forcément compilé avec la même version que votre version du compilateur Visual Studio C++. Ce faisant, vous pourriez obtenir l'erreur warning C4273: 'round' : inconsistent dll linkage. Après la lecture de cet article, BUILDING PYTHON 3.3.4 WITH VISUAL STUDIO 2013, vous comprendrez que ce n'est pas si simple à résoudre.\n\n\n\nJe suggère donc de remplacer dist par un tableau cdef int dist [500][500]. La signature de la fonction est la suivante : def cdistance_edition(str mot1, str mot2). Enfin, Cython a été optimisé pour une utilisation conjointe avec numpy, à chaque fois que vous avez le choix, il vaut mieux utiliser les container numpy plutôt que d'allouer de grands tableaux sur la pile des fonctions ou d'allouer soi-même ses propres pointeurs.\nCython sans les notebooks\nCette partie n'est utile que si vous avez l'intention d'utiliser Cython sans IPython. Les lignes suivantes implémentent toujours avec Cython la fonction primes qui retourne les entiers premiers entiers compris entre 1 et $N$. On suit maintenant la méthode préconisée dans le tutoriel de Cython. Il faut d'abord créer deux fichiers :\n\nexample_cython.pyx qui contient le code de la fonction\nsetup.py qui compile le module avec le compilateur Visual Studio C++",
"code = \"\"\"\ndef primes(int kmax):\n cdef int n, k, i\n cdef int p[1000]\n result = []\n if kmax > 1000:\n kmax = 1000\n k = 0\n n = 2\n while k < kmax:\n i = 0\n while i < k and n % p[i] != 0:\n i = i + 1\n if i == k:\n p[k] = n\n k = k + 1\n result.append(n)\n n = n + 1\n return result\n\"\"\"\n\nname = \"example_cython\"\nwith open(name + \".pyx\",\"w\") as f : f.write(code)\n\nsetup_code = \"\"\"\nfrom distutils.core import setup\nfrom Cython.Build import cythonize\n\nsetup(\n ext_modules = cythonize(\"__NAME__.pyx\",\n compiler_directives={'language_level' : \"3\"})\n)\n\"\"\".replace(\"__NAME__\",name)\n\nwith open(\"setup.py\",\"w\") as f:\n f.write(setup_code)",
"Puis on compile le fichier .pyx créé en exécutant le fichier setup.py avec des paramètres précis :",
"import os\nimport sys\ncmd = \"{0} setup.py build_ext --inplace\".format(sys.executable)\nfrom pyquickhelper.loghelper import run_cmd\nout,err = run_cmd(cmd)\nif err != '' and err is not None: \n raise Exception(err)\n \n[ _ for _ in os.listdir(\".\") if \"cython\" in _ or \"setup.py\" in _ ] ",
"Puis on importe le module :",
"import pyximport\npyximport.install()\nimport example_cython",
"Si votre dernière modification n'apparaît pas, il faut redémarrer le kernel. Lorsque Python importe le module example_cython la première fois, il charge le fichier example_cython.pyd. Lors d'une modification du module, ce fichier est bloqué en lecture et ne peut être modifié. Or cela est nécessaire car le module doit être recompilé. Pour cette raison, il est plus pratique d'implémenter sa fonction dans un éditeur de texte qui n'utilise pas IPython.\nOn teste le temps mis par la fonction primes :",
"%timeit [ example_cython.primes (567) for i in range(10) ]",
"Puis on compare avec la version écrites un Python :",
"def py_primes(kmax):\n p = [ 0 for _ in range(1000) ]\n result = []\n if kmax > 1000:\n kmax = 1000\n k = 0\n n = 2\n while k < kmax:\n i = 0\n while i < k and n % p[i] != 0:\n i = i + 1\n if i == k:\n p[k] = n\n k = k + 1\n result.append(n)\n n = n + 1\n return result\n\n%timeit [ py_primes (567) for i in range(10) ]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/cas/cmip6/models/sandbox-3/atmoschem.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: CAS\nSource ID: SANDBOX-3\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:45\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cas', 'sandbox-3', 'atmoschem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Timestep Framework --> Split Operator Order\n5. Key Properties --> Tuning Applied\n6. Grid\n7. Grid --> Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --> Surface Emissions\n11. Emissions Concentrations --> Atmospheric Emissions\n12. Emissions Concentrations --> Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --> Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmospheric chemistry model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmospheric chemistry model code.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Chemistry Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"1.8. Coupling With Chemical Reactivity\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is couple with chemical reactivity?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemical species advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Split Operator Chemistry Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemistry (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Split Operator Alternate Order\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\n?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.6. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.7. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Timestep Framework --> Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.2. Convection\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.3. Precipitation\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.4. Emissions\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.5. Deposition\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.6. Gas Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.9. Photo Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.10. Aerosols\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the atmopsheric chemistry grid",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview of transport implementation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Use Atmospheric Transport\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric cehmistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Transport Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric chemistry emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Emissions Concentrations --> Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Emissions Concentrations --> Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an "other method"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Emissions Concentrations --> Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview gas phase atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Number Of Bimolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.4. Number Of Termolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.7. Number Of Advected Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.8. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.9. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.10. Wet Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.11. Wet Oxidation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry startospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview stratospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Gas Phase Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n",
"14.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n",
"14.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.5. Sedimentation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview tropospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Gas Phase Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n",
"15.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.5. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the tropospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric photo chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Number Of Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17. Photo Chemistry --> Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nPhotolysis scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"17.2. Environmental Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cyang019/blight_fight
|
src/Building_List_and_Label.ipynb
|
mit
|
[
"import numpy as np\nimport pandas as pd",
"Step 1: Building List and Labels\nCollecting instances from 311 calls, crimes, blight violations, and demolition permits.\nData already cleaned by this notebook\nThe collection of data was saved at ../data/events.csv",
"data_events = pd.read_csv('../data/events.csv')\n\ndata_events.head(10)\n\ndata_events.shape\n\n# To get rid of duplicates with same coordinates and possibly different address names\nbuilding_pool = data_events.drop_duplicates(subset=['lon','lat'])\n\nbuilding_pool.shape\n\n# 1. sort data according to longitude\n# init new_data\n# 2. for each record:\n# if record[lon] - prev[lon] > length:\n# add new record into new_data\n# else:\n# find previous coords that are close\n# if no coords in bbox:\n# add new record into new_data\n# else:\n# for each of these coords:\n# if record in bbox:\n# append event_id\n# \n# At the same time, if building is assigned one permit or more for demolition, blighted will be assigned to one.\n#\n\ndef gen_buildings(data):\n '''generate buildings from coordinates'''\n from assign_bbox import nearest_pos, is_in_bbox, raw_dist # defined in assign_bbox.py in current dir\n new_data = {'addr': [], 'lon': [], 'lat': [], 'event_id_list': [], 'blighted': []}\n data_sorted = data.sort_values(by='lon', inplace=False)\n length = 4.11e-4 # longitude\n width = 2.04e-4 # latitude\n prev_lon = 0\n prev_lat = 0\n max_distX = abs(length/2)\n max_distY = abs(width/2)\n \n for i, entry in data_sorted.iterrows():\n lon = entry['lon']\n lat = entry['lat']\n b = entry['type']\n if abs(lon - prev_lon) > length:\n new_data['addr'].append(entry['addr'])\n new_data['lon'].append(lon)\n new_data['lat'].append(lat)\n # below line is different from the loop for events_part2\n new_data['event_id_list'].append([entry['event_id']])\n if b == 4: # if demolition permit\n new_data['blighted'].append(1)\n else:\n new_data['blighted'].append(0)\n \n prev_lon = lon\n prev_lat = lat\n else:\n listX = np.array(new_data['lon'])\n listY = np.array(new_data['lat'])\n poses = nearest_pos((lon,lat), listX, listY, length, width)\n \n # if already in new_data\n if poses.size > 0:\n has_pos = False\n for pos in poses:\n temp_lon = new_data['lon'][pos]\n temp_lat = new_data['lat'][pos]\n if (abs(temp_lon - lon) < max_distX) & (abs(temp_lat - lat) < max_distY):\n new_data['event_id_list'][pos] += [entry['event_id']]\n if b == 4:\n new_data['blighted'][pos] = 1\n has_pos = True\n if has_pos:\n continue\n \n new_data['addr'].append(entry['addr'])\n new_data['lon'].append(lon)\n new_data['lat'].append(lat)\n # below line is different from the loop for events_part2\n new_data['event_id_list'].append([entry['event_id']])\n if b == 4:\n new_data['blighted'].append(1)\n else:\n new_data['blighted'].append(0)\n prev_lon = lon\n prev_lat = lat\n \n\n return pd.DataFrame(new_data)\n\nbuildings_concise = gen_buildings(building_pool)\n\nbuildings_concise.shape# shorter than before\n\nbuildings_concise.tail()\n\nbuildings = buildings_concise",
"Get rid of void coordinates",
"buildings = buildings[(buildings['lat']>42.25) & (buildings['lat']<42.5) & (buildings['lon']>-83.3) & (buildings['lon']<-82.9)]\n\nbuildings.shape\n\nbuildings['blighted'].value_counts()",
"Recap of step 0\nAdopting building coordinates\nIt turns out that there is a slight mismatch between real world building coordinates w.r.t given data. So that only median building dimension info is reserved from the building info we got from online open data at data.detroitmi.gov.",
"data_dir = '../data/'\n\nbuildings_step_0 = pd.read_csv(data_dir+'buildings_step_0.csv')\npermits = pd.read_csv(data_dir+'permits.csv')\n\npermits = permits[['PARCEL_NO', 'BLD_PERMIT_TYPE', 'addr', 'lon', 'lat']]\n\npermits['BLD_PERMIT_TYPE'].unique()",
"For example: the very first entry of permit has coordinate:",
"demo01 = permits.loc[0,['PARCEL_NO','addr','lon','lat']]\nprint(demo01)",
"In real world data, this corresponds to:",
"c = buildings_step_0['addr'].apply(lambda x: x == permits.loc[0,'addr'])\n\nbuildings_step_0[c][['PARCELNO','lon','lat','addr']]",
"The coordinate of this building from data.detroitmi.gov is slightly different from data given in our course material.\nOnly building dimension info is adopted for our analysis.",
"length = 0.000411\nwidth = 0.000204 # These results come from step 0.\n\nbuildings.loc[:,'llcrnrlon'] = buildings.loc[:,'lon'] - length/2\nbuildings.loc[:,'llcrnrlat'] = buildings.loc[:,'lat'] - width/2\nbuildings.loc[:,'urcrnrlon'] = buildings.loc[:,'lon'] + length/2\nbuildings.loc[:,'urcrnrlat'] = buildings.loc[:,'lat'] + width/2\n\nbuildings.loc[:,'building_id'] = np.arange(0,buildings.shape[0])\nbuildings = buildings.reindex()\n\nbuildings.tail()\n\nbuildings.to_csv('../data/buildings.csv', index=False)",
"Visualization",
"from bbox import draw_screen_bbox\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\nbuildings = pd.read_csv('../data/buildings.csv')\nbboxes = buildings.loc[:,['llcrnrlon','llcrnrlat','urcrnrlon','urcrnrlat']]\nbboxes = bboxes.as_matrix()\n\nfig = plt.figure(figsize=(8,6), dpi=2000)\nfor box in bboxes: \n draw_screen_bbox(box, fig)\n \nplt.xlim(-83.3,-82.9)\nplt.ylim(42.25,42.45)\nplt.savefig('../data/buildings_distribution.png')\nplt.show()",
"Distribution of blighted buildings",
"blighted_buildings = buildings[buildings.loc[:,'blighted'] == 1]\n\nblighted_bboxes = blighted_buildings.loc[:,['llcrnrlon','llcrnrlat','urcrnrlon','urcrnrlat']]\nblighted_bboxes = blighted_bboxes.as_matrix()\n\nfig = plt.figure(figsize=(8,6), dpi=2000)\nfor box in blighted_bboxes: \n draw_screen_bbox(box, fig)\n \nplt.xlim(-83.3,-82.9)\nplt.ylim(42.25,42.46)\nplt.title(\"Distribution of Blighted Buildings in Detroit\")\nplt.savefig('../data/blighted_buildings_distribution.png')\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dm-wyncode/zipped-code
|
content/posts/makefile-tutorial/makefile_tutorial_1.ipynb
|
mit
|
[
"Learning how to make a Makefile\nAdapted from swcarpentry/make-novice repository.\nMake’s fundamental concepts are common across build tools.\n\nGNU Make is a free, fast, well-documented, and very popular Make implementation. From now on, we will focus on it, and when we say Make, we mean GNU Make.\n\nA tutorial named Makefiles—part 2 of the tutorial.\nCells that follow are the result of following this Makefiles tutorial.\nOther blog posts\n\nTutorial begins: Introduction\n\nNB: I have adapted the tutorial so that the steps take place in this Jupyter notebook so that the notebook can be transpiled into a Pelican blog post using a danielfrg/pelican-ipynb Pelican plugin. Some of the code is what is necessary to display output in the notebook and therefore the blog post.\nSome Jupyter notebook housekeeping to set up some variables with path references.",
"import os\n\nfrom IPython.core.display import Image, display\n\n(\n TAB_CHAR,\n) = (\n '\\t',\n)\n\nhome = os.path.expanduser('~')",
"repo_path is the path to a clone of swcarpentry/make-novice",
"repo_path = os.path.join(\n home, \n 'Dropbox/spikes/make-novice',\n)\n\nassert os.path.exists(repo_path)",
"paths are the paths to child directories in a clone of swcarpentry/make-novice",
"paths = (\n 'code',\n 'data',\n)\npaths = (\n code,\n data,\n) = [os.path.join(repo_path, path) for path in paths]\nassert all(os.path.exists(path) for path in paths)\n\nformat_context = dict(zip(\n ('repo_path', 'data', 'code', 'tab',), \n (repo_path, data, code, TAB_CHAR))\n)",
"Begin tutorial.\n\nCreate a file, called Makefile, with the following content:",
"makefile_contents_0 = \"\"\"# Count words.\n{repo_path}/isles.dat : {data}/books/isles.txt\n{tab}python {code}/wordcount.py {data}/books/isles.txt {repo_path}/isles.dat\n\"\"\".format(**format_context)",
"Using the shell to create Makefile with contents the value of Python variable makefile_contents.",
"!printf \"$makefile_contents_0\" > Makefile",
"This is a build file, which for Make is called a Makefile - a file executed by Make. Note how it resembles one of the lines from our shell script.\n\n```bash\nCount words.\n/home/dmmmd/Dropbox/spikes/make-novice/isles.dat : /home/dmmmd/Dropbox/spikes/make-novice/data/books/isles.txt\n python /home/dmmmd/Dropbox/spikes/make-novice/code/wordcount.py /home/dmmmd/Dropbox/spikes/make-novice/data/books/isles.txt /home/dmmmd/Dropbox/spikes/make-novice/isles.dat\n```\n\nLet us go through each line in turn:\n\n# denotes a comment. Any text from # to the end of the line is ignored by Make.\nisles.dat is a target, a file to be created, or built.\nbooks/isles.txt is a dependency, a file that is needed to build or update the target. Targets can have zero or more dependencies.\nA colon, :, separates targets from dependencies.\npython wordcount.py books/isles.txt isles.dat is an action, a command to run to build or update the target using the dependencies. Targets can have zero or more actions. These actions form a recipe to build the target from its dependencies and can be considered to be a shell script.\nActions are indented using a single TAB character, not 8 spaces. This is a legacy of Make’s 1970’s origins. If the difference between spaces and a TAB character isn’t obvious in your editor, try moving your cursor from one side of the TAB to the other. It should jump four or more spaces.\nTogether, the target, dependencies, and actions form a a rule.\n\nLet’s first sure we start from scratch and delete the .dat and .png files we created earlier",
"!rm $repo_path/*.dat $repo_path/*.png",
"By default, Make looks for a Makefile, called Makefile, and we can run Make as follows",
"!make\n# By default, Make prints out the actions it executes:",
"Let’s see if we got what we expected",
"!head -5 $repo_path/isles.dat",
"We don’t have to call our Makefile Makefile. However, if we call it something else we need to tell Make where to find it. This we can do using -f flag. For example, if our Makefile is named MyOtherMakefile:",
"!printf \"$makefile_contents_0\" > MyOtherMakeFile.mk\n\n!make -f MyOtherMakeFile.mk",
"This is because our target, isles.dat, has now been created, and Make will not create it again. To see how this works, let’s pretend to update one of the text files. Rather than opening the file in an editor, we can use the shell touch command to update its timestamp (which would happen if we did edit the file)",
"!touch $data/books/isles.txt",
"If we compare the timestamps of books/isles.txt and isles.dat,",
"!ls -l $data/books/isles.txt $repo_path/isles.dat\n# then we see that isles.dat, the target, is now older thanbooks/isles.txt, its dependency",
"If we run Make again,",
"!make\n#then it recreates isles.dat",
"When it is asked to build a target, Make checks the ‘last modification time’ of both the target and its dependencies. If any dependency has been updated since the target, then the actions are re-run to update the target. Using this approach, Make knows to only rebuild the files that, either directly or indirectly, depend on the file that changed. This is called an incremental build.\nup to date means that the Makefile has a rule for the file and the file is up to date whereas Nothing to be done means that the file exists but the Makefile has no rule for it.",
"!make $code/wordcount.py",
"By explicitly recording the inputs to and outputs from steps in our analysis and the dependencies between files, Makefiles act as a type of documentation, reducing the number of things we have to remember.\nLet’s add another rule to the end of Makefile:",
"makefile_contents_1 = \"\"\"\n{repo_path}/abyss.dat : {data}/books/abyss.txt\n{tab}python {code}/wordcount.py {data}/books/abyss.txt {repo_path}/abyss.dat\n\"\"\".format(**format_context)\nmakefile_contents = ''.join((makefile_contents_0, makefile_contents_1))\n\n# append makefile_contents to Makefile\n!printf \"$makefile_contents\" > Makefile\n\n!make",
"Nothing happens because Make attempts to build the first target it finds in the Makefile, the default target, which is isles.dat which is already up-to-date. We need to explicitly tell Make we want to build abyss.dat:",
"!make $repo_path/abyss.dat",
"We may want to remove all our data files so we can explicitly recreate them all. We can introduce a new target, and associated rule, to do this. We will call it clean, as this is a common name for rules that delete auto-generated files, like our .dat files:",
"makefile_contents_2 = \"\"\"\n{repo_path}/clean:\n{tab}rm -f {repo_path}/*.dat\n\"\"\".format(**format_context)\nmakefile_contents = ''.join((makefile_contents_0, makefile_contents_1, makefile_contents_2))\n\n# add makefile_contents to Makefile\n!printf \"$makefile_contents\" > Makefile",
"This is an example of a rule that has no dependencies. clean has no dependencies on any .dat file as it makes no sense to create these just to remove them. We just want to remove the data files whether or not they exist. If we run Make and specify this target,",
"!make clean",
"All .dat files are removed!\n\nThere is no actual thing built called clean. Rather, it is a short-hand that we can use to execute a useful sequence of actions. Such targets, though very useful, can lead to problems.\nFor example, let us recreate our data files, create a directory called clean, then run Make:",
"!make \"$repo_path\"/isles.dat \"$repo_path\"/abyss.dat\n!mkdir \"$repo_path\"/clean\n\n!make \"$repo_path\"/clean",
"Make finds a file (or directory) called clean and, as its clean rule has no dependencies, assumes that clean has been built and is up-to-date and so does not execute the rule’s actions. As we are using clean as a short-hand, we need to tell Make to always execute this rule if we run make clean, by telling Make that this is a phony target, that it does not build anything. This we do by marking the target as .PHONY:",
"makefile_contents_phony_clean = \"\"\"\n.PHONY : clean\nclean:\n{tab}rm -f {repo_path}/*.dat\n\"\"\".format(**format_context)\nmakefile_contents = ''.join((makefile_contents_0, makefile_contents_1, makefile_contents_phony_clean))\n\n# add makefile_contents to Makefile\n!printf \"$makefile_contents\" > Makefile",
"Now get expected result.",
"!make clean",
"We can add a similar command to create all the data files. We can put this at the top of our Makefile so that it is the default target, which is executed by default if no target is given to the make command:",
"makefile_contents_create_all_data = \"\"\"\n.PHONY : dats\ndats : {repo_path}/isles.dat {repo_path}/abyss.dat\n\"\"\".format(**format_context)\nmakefile_contents = ''.join((\n makefile_contents_0, \n makefile_contents_1, \n makefile_contents_phony_clean, \n makefile_contents_create_all_data,\n))\n\n# add makefile_contents to Makefile\n!printf \"$makefile_contents\" > Makefile\n\n!make dats\n\n!make dats",
"The following figure shows a graph of the dependencies embodied within our Makefile, involved in building the dats target",
"display(Image('http://swcarpentry.github.io/make-novice/fig/02-makefile.png', unconfined=True))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ProfessorKazarinoff/staticsite
|
content/code/circuit_diagrams/circuit_diagram_and_problem.ipynb
|
gpl-3.0
|
[
"In this post, we are going to solve a circuit diagram problem using Python and a package called SchemDraw. SchemDraw is a specialized Python package for drawing circuit diagrams. For SchemDraw documentation see:\nhttps://cdelker.bitbucket.io/SchemDraw/SchemDraw.html\nGiven:\nThe circuit diagram below with a driving voltage $V_t = 5.20 V$ and resistor values in the table below.\n\nA table of resistance values is below:\n| V<sub>t</sub> =| 5.20 V |\n| --- | --- |\n| R<sub>1</sub> =| 13.2 mΩ |\n| R<sub>2</sub> =| 21.0 mΩ |\n| R<sub>3</sub> =| 3.60 mΩ |\n| R<sub>4</sub> =| 15.2 mΩ |\n| R<sub>5</sub> =| 11.9 mΩ |\n| R<sub>6</sub> =| 2.20 mΩ |\n| R<sub>7</sub> =| 7.40 mΩ |\nFind:\nV<sub>6</sub> and V<sub>7</sub>, the voltage drop across resistors R<sub>6</sub> and R<sub>7</sub>\nI<sub>3</sub> and I<sub>6</sub>, the current running through resistors R<sub>3</sub> and R<sub>6</sub>\nP<sub>4</sub> and P<sub>7</sub>, the power dissipated by resistors R<sub>4</sub> and R<sub>7</sub>\nSolution:\nFirst we'll import the necessary packages. I'm using a jupyter notebook, so the %matplotlib inline command is included. If you want high-resolution circuit diagrams, include the line:\n%config InlineBackend.figure_format = 'svg'\nat the top of the notebook will ensure high-resolution images.",
"import matplotlib.pyplot as plt\n# if using a jupyter notebook: include %matplotlib inline. If constructing a .py-file: comment out\n%matplotlib inline\n# if high-resolution images are desired: include %config InlineBackend.figure_format = 'svg'\n%config InlineBackend.figure_format = 'svg'\nimport SchemDraw as schem\nimport SchemDraw.elements as e",
"Now we'll build the circuit diagram by creating a SchemDraw Drawing object and adding elements to it.",
"d = schem.Drawing(unit=2.5)\nR7 = d.add(e.RES, d='right', botlabel='$R_7$')\nR6 = d.add(e.RES, d='right', botlabel='$R_6$')\nd.add(e.LINE, d='right', l=2)\nd.add(e.LINE, d='right', l=2)\nR5 = d.add(e.RES, d='up' , botlabel='$R_5$')\nR4 = d.add(e.RES, d='up', botlabel='$R_4$')\nd.add(e.LINE, d='left', l=2)\nd.push()\nR3 = d.add(e.RES, d='down', toy=R6.end, botlabel='$R_3$')\nd.pop()\nd.add(e.LINE, d='left', l=2)\nd.push()\nR2 = d.add(e.RES, d='down', toy=R6.end, botlabel='$R_2$')\nd.pop()\nR1 = d.add(e.RES, d='left', tox=R7.start, label='$R_1$')\nVt = d.add(e.BATTERY, d='up', xy=R7.start, toy=R1.end, label='$V_t$', lblofst=0.3)\nd.labelI(Vt, arrowlen=1.5, arrowofst=0.5)\nd.draw()\nd.save('7_resistors_3_loops.png')\n#d.save('7_resistors_3_loops.pdf')\n",
"Find R<sub>t</sub>\nNow we'll find the total resistance of the circuit R<sub>t</sub> using the individual resistances. First, define the resistances and driving voltage as variables.",
"Vt = 5.2\n\nR1 = 0.0132\nR2 = 0.021\nR3 = 0.00360\nR4 = 0.0152\nR5 = 0.0119\nR6 = 0.0022\nR7 = 0.00740",
"Find R<sub>45</sub> and R<sub>67</sub>\nTo simplify the circuit diagram, we'll combine the resistors in series.\nFor resistors in a simple series circuit:\n$$ R_t = R_1 + R_2 + R_3 ... + R_n $$\nSince resistors $R_4$ and $R_5$ are in simple series:\n$$ R_{45} = R_4 + R_5 $$\nSince resistors $R_6$ and $R_7$ are in simple series:\n$$ R_{67} = R_6 + R_7 $$\nWe can easily calculate this with Python. After the calculation, we can use an fstring to print the results. Note the round() function is used on the inside of the fstring curly braces { }, in case there are some floating point math errors that lead to the values printing out as long floats.",
"R45 = R4 + R5\nR67 = R6 + R7\n\nprint(f'R45 = {round(R45,7)} Ohm, R67 = {round(R67,5)} Ohm')",
"Let's redraw our circuit diagram to show the combined resistors.",
"d = schem.Drawing(unit=2.5)\nR67 = d.add(e.RES, d='right', botlabel='$R_{67}$')\nd.add(e.LINE, d='right', l=2)\nd.add(e.LINE, d='right', l=2)\nR45 = d.add(e.RES, d='up', botlabel='$R_{45}$')\nd.add(e.LINE, d='left', l=2)\nd.push()\nR3 = d.add(e.RES, d='down', toy=R67.end, botlabel='$R_3$')\nd.pop()\nd.add(e.LINE, d='left', l=2)\nd.push()\nR2 = d.add(e.RES, d='down', toy=R67.end, botlabel='$R_2$')\nd.pop()\nR1 = d.add(e.RES, d='left', tox=R67.start, label='$R_1$')\nVt = d.add(e.BATTERY, d='up', xy=R67.start, toy=R1.end, label='$V_t$', lblofst=0.3)\nd.labelI(Vt, arrowlen=1.5, arrowofst=0.5)\nd.draw()\nd.save('5_resistors_3_loops.png')\n#d.save('5_resistors_3_loops.pdf')",
"Find R<sub>2345</sub>\nNext we can combine the resistors in parallel. The resistors in parallel are $R_2$, $R_3$ and $R_{45}$. For a resistors in a simple parallel circuit:\n$$ \\frac{1}{R_t} = \\frac{1}{R_1} + \\frac{1}{R_2} + \\frac{1}{R_3} ... + \\frac{1}{R_n} $$\nSince $R_2$, $R_3$ and $R_{45}$ are in parallel:\n$$ \\frac{1}{R_{2345}} = \\frac{1}{R_2} + \\frac{1}{R_3} + \\frac{1}{R_{45}} $$\n$$ R_{2345} = \\frac{1}{\\frac{1}{R_2} + \\frac{1}{R_3} + \\frac{1}{R_{45}}} $$\nWe can code this calculation in Python. To find the reciprocal, raise the combined sum to the negative one power. Remember, exponentiation is performed with a double asterisk ** in Python.",
"Vt = 5.2\n\nR1 = 0.0132\nR2 = 0.021\nR3 = 0.00360\nR4 = 0.0152\nR5 = 0.0119\nR6 = 0.0022\nR7 = 0.00740\n\nR45 = R4 + R5\nR67 = R6 + R7\n\nR2345 = ((1/R2)+(1/R3)+(1/R45))**(-1)\nprint(f'R2345 = {round(R2345,7)} Ohm')",
"OK, now let's construct a new SchemDraw diagram of the simplified the circuit. In this diagram, we'll combine $R_2$, $R_3$ and $R_{45}$ into one big resistor, $R_{2345}$.",
"d = schem.Drawing(unit=2.5)\nR67 = d.add(e.RES, d='right', botlabel='$R_{67}$')\nR345 = d.add(e.RES, d='up' , botlabel='$R_{2345}$')\nR1 = d.add(e.RES, d='left', tox=R67.start, label='$R_1$')\nVt = d.add(e.BATTERY, d='up', xy=R67.start, toy=R1.end, label='$V_t$', lblofst=0.3)\nd.labelI(Vt, arrowlen=1.5, arrowofst=0.5)\nd.draw()\nd.save('3_resistors_1_loop.png')\n#d.save('3_resistors_1_loop.pdf')",
"Find R<sub>t</sub>\nTo find $R_t$, we again combine the resistors in series. The remaining resistors $R_1$, $R_{2345}$ and $R_{67}$ are in series:\n$$ R_{1234567} = R_1 + R_{2345} + R_{67} $$\nWe'll call the total resistance of the circuit $R_t$ which is equal to $R_{1234567}$\n$$ R_t = R_{1234567} $$\nAnother calculation in Python.",
"Vt = 5.2\n\nR1 = 0.0132\nR2 = 0.021\nR3 = 0.00360\nR4 = 0.0152\nR5 = 0.0119\nR6 = 0.0022\nR7 = 0.00740\n\nR45 = R4 + R5\nR67 = R6 + R7\n\nR2345 = ((1/R2)+(1/R3)+(1/R45))**(-1)\n\nRt = R1 + R2345 + R67\nprint(f'Rt = {round(Rt,7)} Ohm')",
"Last circuit diagram. The simplest one. This SchemDraw diagram just includes $V_t$ and $R_t$.",
"d = schem.Drawing(unit=2.5)\nL2 = d.add(e.LINE, d='right')\nRt = d.add(e.RES, d='up' , botlabel='$R_{t}$')\nL1 = d.add(e.LINE, d='left', tox=L2.start)\nVt = d.add(e.BATTERY, d='up', xy=L2.start, toy=L1.end, label='$V_t$', lblofst=0.3)\nd.labelI(Vt, arrowlen=1.5, arrowofst=0.5)\nd.draw()\nd.save('1_resistor_no_loops.png')\n#d.save('1_resistor_no_loops.pdf')",
"Find V<sub>6</sub> and V<sub>7</sub>\nNow that we've solved for the total resistance of the circuit $R_t$, we can find the total current running through the circuit using Ohm's Law $V = IR $.\n$$ V = IR $$\n$$ I = \\frac{V}{R} $$\n$$ I_t = \\frac{V_t}{R_t} $$",
"Vt = 5.2\n\nR1 = 0.0132\nR2 = 0.021\nR3 = 0.00360\nR4 = 0.0152\nR5 = 0.0119\nR6 = 0.0022\nR7 = 0.00740\n\nR45 = R4 + R5\nR67 = R6 + R7\n\nR2345 = ((1/R2)+(1/R3)+(1/R45))**(-1)\nRt = R1 + R2345 + R67\n\nIt = Vt/Rt\nprint(f'It = {round(It,2)} A')",
"The total current of the circuit, $I_t$ is the same as the current running through resistor $R_6$ and resistor $R_7$.\n$$ I_t = I_6 = I_7 $$\nWe can apply Ohm's law to find $V_6$ now that we have $I_6$ and $I_7$. \n$$ V_6 = I_6 R_6 $$\n$$ V_7 = I_7 R_7 $$",
"I6 = It\nI7 = It\nV6 = I6 * R6\nV7 = I7 * R7\nprint(f'V6 = {round(V6,5)} V, V7 = {round(V7,5)} V')",
"Find I<sub>3</sub> and I<sub>6</sub>\nThe total current of the circuit, $I_t$ is the same as the current running through resistor $R_{2345}$.\n$$ I_t = I_{2345} $$\nWe can apply Ohm's law to find $V_{2345}$ now that we have $I_{2345}$. \n$$ V_{2345} = I_{2345} R_{2345} $$",
"I2345 = It\nV2345 = I2345 * R2345\nprint(f'V2345 = {round(V2345,5)} V')",
"The voltage drop across resistor $R_3$ is the same as the voltage drop across resistor $R_{2345}$.\n$$ V_3 = V_{2345} $$\nSince $V_3$ and $R_3$ are known, we can solve for $I_3$ using Ohm's law:\n$$ V = IR $$\n$$ I = \\frac{V}{R} $$\n$$ I_3 = \\frac{V_3}{R_3} $$\nThe current $I_6$ running through resistor $R_6$ is the same as the total current $I_t$.\n$$ I_6 = I_t $$",
"V3 = V2345\nI3 = V3 / R3\n\nI6 = It\n\nprint(f'I3 = {round(I3,2)} A, I6 = {round(I6,2)} A')",
"Find P<sub>7</sub> and P<sub>4</sub>\nPower is equal to voltage times current:\n$$ P = VI $$\nAccording to Ohm's law: \n$$V = IR$$\nIf we substitute $V$ as $IR$ in the power equation we get:\n$$ P = (IR)(I) $$\n$$ P = I^2 R $$\nWith a known $R_7$ and $I_7 = I_t$:\n$$ P_7 = {I_7}^2 R_7 $$",
"I7 = It\nP7 = R7 * I7**2\nprint(f'P7 = {round(P7,2)} W')",
"Current $I_{45}$ is equal to current $I_4$. Voltage $V_{45} = V_{2345}$. Using Ohm's Law again:\n$$ V = IR $$\n$$ I = \\frac{V}{R} $$\n$$ I_{45} = \\frac{V_{45}}{R_{45}} $$",
"V45 = V2345\nI45 = V45/R45\nprint(f'I45 = {round(I45,3)} A')",
"One more time using the power law:\n$$ P = I^2 R $$\nWith a known $R_4$ and $I_4 = I_{45}$:\n$$ P_4 = {I_4}^2 R_4 $$",
"I4 = I45\nP4 = R4 * I4**2\nprint(f'P4 = {round(P4,4)} W')",
"Final Answer\nLet's print out all of the final values to three significant figures including units:",
"print(f'V6 = {round(V6,3)} V')\nprint(f'V7 = {round(V7,2)} V')\nprint(f'I3 = {round(I3,0)} A')\nprint(f'I6 = {round(I6,0)} A')\nprint(f'P4 = {round(P4,2)} W')\nprint(f'P7 = {round(P7,0)} W')",
"Conclusion\nSchemDraw is a great package for making circuit diagrams in Python. Python is also useful for doing calculations that involve lots of different values. Although none of the calculations in this problem were particularly difficult, keeping track of all the values as variables in Python can cut down on errors when there multiple calculations and many parameters to keep track of."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mark-r-anderson/GaussianProcesses
|
regression.ipynb
|
mit
|
[
"import numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\nfrom GPs.Kernel import Kernel, SqExp, RQ, ExpSine, WhiteNoise\nfrom GPs.GP import GPR\n\n%matplotlib inline",
"Initialize\nDefine the Training Data Set\nDefine the training dataset for the independent and dependent variables",
"x = np.random.RandomState(0).uniform(-5, 5, 20)\n#x = np.random.uniform(-5, 5, 20)\ny = x*np.sin(x)\n#y += np.random.normal(0,0.5,y.size)\ny += np.random.RandomState(34).normal(0,0.5,y.size)",
"Define the Test Set\nDefine the training dataset for the independent variables. In this case it is a \"continuous\" curve",
"x_star = np.linspace(-5,5,500)",
"Train the Model\nInstantiate the kernels, instantiate the GPR with the kernel, and train the model.",
"#Define the basic kernels\nk1 = SqExp(0.45,2)\nk2 = RQ(0.5,2,3)\nk3 = ExpSine(0.1,2,30)\nk4 = WhiteNoise(0.01)\n\n#Define the combined kernel\nk1 = k1+k4\n\n#Instantiate the GP predictor object with the desired kernel\ngp = GPR(k1)\n\n#Train the model\ngp.train(x,y)",
"Regression\nPerform the regression based on the set of training data. The best estimate of the prediction is given by the mean of the distribution from which the posterior samples are drawn.\nPredict (Initial Hyperparameters)\nPerform regression using the initial user-specified hyperparameters.",
"#Predict a new set of test data given the independent variable observations\ny_mean1,y_var1 = gp.predict(x_star,False)\n\n#Convert the variance to the standard deviation\ny_err1 = np.sqrt(y_var1)\n\nplt.scatter(x,y,s=30)\nplt.plot(x_star,x_star*np.sin(x_star),'r:')\nplt.plot(x_star,y_mean1,'k-')\nplt.fill_between(x_star,y_mean1+y_err1,y_mean1-y_err1,alpha=0.5)",
"Optimize Hyperparameters\nOptimize over the hyperparameters.",
"gp.optimize('SLSQP')",
"array([ 1.47895967, 3.99711988, 0.16295754])\narray([ 1.80397587, 4.86011667, 0.18058626])\nPredict (Optimized Hyperparameters)\nPerform the regression from the hyperparameters that optimize the log marginal likelihood. Note the improvement in the fit in comparison to the actual function (red dotted line).",
"#Predict a new set of test data given the independent variable observations\ny_mean2,y_var2 = gp.predict(x_star,False)\n\n#Convert the variance to the standard deviation\ny_err2 = np.sqrt(y_var2)\n\nplt.scatter(x,y,s=30)\nplt.plot(x_star,x_star*np.sin(x_star),'r:')\nplt.plot(x_star,y_mean2,'k-')\nplt.fill_between(x_star,y_mean2+y_err2,y_mean2-y_err2,alpha=0.5)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mjuenema/ipython-notebooks
|
ciscoconfparse.ipynb
|
bsd-2-clause
|
[
"ciscoconfparse\nciscoconfparse is a Python library for querying Cisco-style configurations. \nThe purpose of this workbook is to examine some features that are, in my view, not very well presented in the official documentation.",
"# The line below can be ignored but I didn't set up my environment properly \nimport sys ; sys.path.append('/home/mjuenemann/.virtualenvs/ciscoconfparse/lib/python3.6/site-packages')\n\nimport ciscoconfparse",
"I am going to use a very stripped down version of the Secure IOS Template by Team Cymru. This is not a fully functional IOS configuration!",
"CONFIG = \"\"\"\n!\nhostname router01\n!\ntacacs-server host 192.0.2.34\ntacacs-server key cheezit\n!\ninterface Ethernet2/0\n description Unprotected interface, facing towards Internet\n ip address 192.0.2.14 255.255.255.240\n no ip unreachables\n ntp disable\n no mop enable\n mtu 900\n!\ninterface Ethernet2/1\n description Protected interface, facing towards DMZ\n ip address 192.0.2.17 255.255.255.240\n no mop enable\n\"\"\"",
"First, he configuration must be parsed by creating an instance of ciscoconfparse.CiscoConfParse(). The class expects either a file object or a list of configuration lines.",
"config = ciscoconfparse.CiscoConfParse(CONFIG.split('\\n'))\nconfig",
"When CiscoConfParse() reads a configuration, it stores parent-child relationships as a special IOSCfgLine object. IOSCfgLine instances are returned when one queries the parsed configuration.\nFinding lines matching a regular expression",
"tacacs_lines = config.find_objects(r'^tacacs')\ntacacs_lines\n\nfirst_tacacs_line = tacacs_lines[0]\ntype(first_tacacs_line)",
"IOSCfgLine instances have several useful attributes.",
"first_tacacs_line.linenum\n\nfirst_tacacs_line.indent\n\n first_tacacs_line.text",
"Finding sections with children\nThe next example finds interfaces that have NTP disabled.",
"interfaces_with_ntp_disabled = config.find_objects_w_child(r'^interface', r'ntp disable')\ninterfaces_with_ntp_disabled",
"The .text attribute only returns the matching line whereas the .ioscfg attributes includes all children lines as a list.",
"interfaces_with_ntp_disabled[0].text\n\ninterfaces_with_ntp_disabled[0].ioscfg",
"Finding sections without children\nThe next example finds interfaces that have NTP not disabled.",
"interfaces_with_ntp_not_disabled = config.find_objects_wo_child(r'^interface', r'ntp disable')\ninterfaces_with_ntp_not_disabled\n\ninterfaces_with_ntp_not_disabled[0].ioscfg",
"Finding sections with all children\nThe next example finds interfaces that have IP-Unreachables and MOP disabled.",
"results = config.find_objects_w_all_children(r'interface', [r'no ip unreachables', r'no mop enable'])\nresults\n\nresults[0].ioscfg",
"Finding lines with parents\nThe .find_objects_w_parents() methods returns children and not their parents.",
"results = config.find_objects_w_parents(r'^interface', 'no mop enable')\nresults",
"Deleting lines\nThis can be handy in combination with the IOSCfgLine.delete() method which I haven't covered yet. \nIOSCfgLine objects provide several methods for changing an existing configuration. Let's delete all no mop enable lines.",
"for result in results:\n result.delete()\n# Call .commit() after changing the configuration\nconfig.commit()",
"The no mop enable lines are now missing.",
"config.ioscfg",
"Adding lines\nLet's ensure that the configuration uses an NTP server.",
"config.append_line('ntp server 192.168.1.1')\n\n# Call .commit() before searching again!!!\nconfig.commit()\nconfig.ioscfg",
"Adding lines to sections\nLet's ensure that all Ethernet interfaces have an explicit MTU of 1500 configured. This is a two-step process were first any existing MTU lines are deleted and then the correct ones are added.",
"# Delete all existing MTU lines.\nfor interface in config.find_objects(r'^interface.+Ethernet'):\n interface.delete_children_matching('mtu \\d+')\nconfig.commit()\n\n# Add the correct MTU. Note the use of the correct indentation value for children.\nfor interface in config.find_objects(r'^interface.+Ethernet'):\n interface.append_to_family('mtu 1500', indent=interface.child_indent)\nconfig.commit()\n\nconfig.ioscfg"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
|
Convolutional Neural Networks/Face+Recognition+for+the+Happy+House+-+v3.ipynb
|
mit
|
[
"Face Recognition for the Happy House\nWelcome to the first assignment of week 4! Here you will build a face recognition system. Many of the ideas presented here are from FaceNet. In lecture, we also talked about DeepFace. \nFace recognition problems commonly fall into two categories: \n\nFace Verification - \"is this the claimed person?\". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem. \nFace Recognition - \"who is this person?\". For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem. \n\nFaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.\nIn this assignment, you will:\n- Implement the triplet loss function\n- Use a pretrained model to map face images into 128-dimensional encodings\n- Use these encodings to perform face verification and face recognition\nIn this exercise, we will be using a pre-trained model which represents ConvNet activations using a \"channels first\" convention, as opposed to the \"channels last\" convention used in lecture and previous programming assignments. In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community. \nLet's load the required packages.",
"from keras.models import Sequential\nfrom keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate\nfrom keras.models import Model\nfrom keras.layers.normalization import BatchNormalization\nfrom keras.layers.pooling import MaxPooling2D, AveragePooling2D\nfrom keras.layers.merge import Concatenate\nfrom keras.layers.core import Lambda, Flatten, Dense\nfrom keras.initializers import glorot_uniform\nfrom keras.engine.topology import Layer\nfrom keras import backend as K\nK.set_image_data_format('channels_first')\nimport cv2\nimport os\nimport numpy as np\nfrom numpy import genfromtxt\nimport pandas as pd\nimport tensorflow as tf\nfrom fr_utils import *\nfrom inception_blocks_v2 import *\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nnp.set_printoptions(threshold=np.nan)",
"0 - Naive Face Verification\nIn Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images are less than a chosen threshold, it may be the same person! \n<img src=\"images/pixel_comparison.png\" style=\"width:380px;height:150px;\">\n<caption><center> <u> <font color='purple'> Figure 1 </u></center></caption>\nOf course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on. \nYou'll see that rather than using the raw image, you can learn an encoding $f(img)$ so that element-wise comparisons of this encoding gives more accurate judgements as to whether two pictures are of the same person.\n1 - Encoding face images into a 128-dimensional vector\n1.1 - Using an ConvNet to compute encodings\nThe FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning settings, let's just load weights that someone else has already trained. The network architecture follows the Inception model from Szegedy et al.. We have provided an inception network implementation. You can look in the file inception_blocks.py to see how it is implemented (do so by going to \"File->Open...\" at the top of the Jupyter notebook). \nThe key things you need to know are:\n\nThis network uses 96x96 dimensional RGB images as its input. Specifically, inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$ \nIt outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector\n\nRun the cell below to create the model for face images.",
"FRmodel = faceRecoModel(input_shape=(3, 96, 96))\n\nprint(\"Total Params:\", FRmodel.count_params())",
"Expected Output \n<table>\n<center>\nTotal Params: 3743280\n</center>\n</table>\n\nBy using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings the compare two face images as follows:\n<img src=\"images/distance_kiank.png\" style=\"width:680px;height:250px;\">\n<caption><center> <u> <font color='purple'> Figure 2: <br> </u> <font color='purple'> By computing a distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption>\nSo, an encoding is a good one if: \n- The encodings of two images of the same person are quite similar to each other \n- The encodings of two images of different persons are very different\nThe triplet loss function formalizes this, and tries to \"push\" the encodings of two images of the same person (Anchor and Positive) closer together, while \"pulling\" the encodings of two images of different persons (Anchor, Negative) further apart. \n<img src=\"images/triplet_comparison.png\" style=\"width:280px;height:150px;\">\n<br>\n<caption><center> <u> <font color='purple'> Figure 3: <br> </u> <font color='purple'> In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) </center></caption>\n1.2 - The Triplet Loss\nFor an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.\n<img src=\"images/f_x.png\" style=\"width:380px;height:150px;\">\n<!--\nWe will also add a normalization step at the end of our model so that $\\mid \\mid f(x) \\mid \\mid_2 = 1$ (means the vector of encoding should be of norm 1).\n!-->\n\nTraining will use triplets of images $(A, P, N)$: \n\nA is an \"Anchor\" image--a picture of a person. \nP is a \"Positive\" image--a picture of the same person as the Anchor image.\nN is a \"Negative\" image--a picture of a different person than the Anchor image.\n\nThese triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example. \nYou'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$) by at least a margin $\\alpha$:\n$$\\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2 + \\alpha < \\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2$$\nYou would thus like to minimize the following \"triplet cost\":\n$$\\mathcal{J} = \\sum^{m}{i=1} \\large[ \\small \\underbrace{\\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2}\\text{(1)} - \\underbrace{\\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2}\\text{(2)} + \\alpha \\large ] \\small+ \\tag{3}$$\nHere, we are using the notation \"$[z]_+$\" to denote $max(z,0)$. \nNotes:\n- The term (1) is the squared distance between the anchor \"A\" and the positive \"P\" for a given triplet; you want this to be small. \n- The term (2) is the squared distance between the anchor \"A\" and the negative \"N\" for a given triplet, you want this to be relatively large, so it thus makes sense to have a minus sign preceding it. \n- $\\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\\alpha = 0.2$. \nMost implementations also normalize the encoding vectors to have norm equal one (i.e., $\\mid \\mid f(img)\\mid \\mid_2$=1); you won't have to worry about that here.\nExercise: Implement the triplet loss as defined by formula (3). Here are the 4 steps:\n1. 
Compute the distance between the encodings of \"anchor\" and \"positive\": $\\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2$\n2. Compute the distance between the encodings of \"anchor\" and \"negative\": $\\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2$\n3. Compute the formula per training example: $ \\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid - \\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2 + \\alpha$\n3. Compute the full formula by taking the max with zero and summing over the training examples:\n$$\\mathcal{J} = \\sum^{m}{i=1} \\large[ \\small \\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2 - \\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2+ \\alpha \\large ] \\small+ \\tag{3}$$\nUseful functions: tf.reduce_sum(), tf.square(), tf.subtract(), tf.add(), tf.maximum().\nFor steps 1 and 2, you will need to sum over the entries of $\\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2$ and $\\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2$ while for step 4 you will need to sum over the training examples.",
"# GRADED FUNCTION: triplet_loss\n\ndef triplet_loss(y_true, y_pred, alpha = 0.2):\n \"\"\"\n Implementation of the triplet loss as defined by formula (3)\n \n Arguments:\n y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.\n y_pred -- python list containing three objects:\n anchor -- the encodings for the anchor images, of shape (None, 128)\n positive -- the encodings for the positive images, of shape (None, 128)\n negative -- the encodings for the negative images, of shape (None, 128)\n \n Returns:\n loss -- real number, value of the loss\n \"\"\"\n \n anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]\n \n ### START CODE HERE ### (≈ 4 lines)\n # Step 1: Compute the (encoding) distance between the anchor and the positive\n pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)))\n # Step 2: Compute the (encoding) distance between the anchor and the negative\n neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)))\n # Step 3: subtract the two previous distances and add alpha.\n basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)\n # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.\n loss = tf.maximum(basic_loss, 0)\n ### END CODE HERE ###\n \n \n return loss\n\nwith tf.Session() as test:\n tf.set_random_seed(1)\n y_true = (None, None, None)\n y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),\n tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),\n tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))\n loss = triplet_loss(y_true, y_pred)\n \n print(\"loss = \" + str(loss.eval()))",
"Expected Output:\n<table>\n <tr>\n <td>\n **loss**\n </td>\n <td>\n 528.143\n </td>\n </tr>\n\n</table>\n\n2 - Loading the trained model\nFaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.",
"FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])\nload_weights_from_FaceNet(FRmodel)",
"Here're some examples of distances between the encodings between three individuals:\n<img src=\"images/distance_matrix.png\" style=\"width:380px;height:200px;\">\n<br>\n<caption><center> <u> <font color='purple'> Figure 4:</u> <br> <font color='purple'> Example of distance outputs between three individuals' encodings</center></caption>\nLet's now use this model to perform face verification and face recognition! \n3 - Applying the model\nBack to the Happy House! Residents are living blissfully since you implemented happiness recognition for the house in an earlier assignment. \nHowever, several issues keep coming up: The Happy House became so happy that every happy person in the neighborhood is coming to hang out in your living room. It is getting really crowded, which is having a negative impact on the residents of the house. All these random happy people are also eating all your food. \nSo, you decide to change the door entry policy, and not just let random happy people enter anymore, even if they are happy! Instead, you'd like to build a Face verification system so as to only let people from a specified list come in. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the door. The face recognition system then checks that they are who they claim to be. \n3.1 - Face Verification\nLet's build a database containing one encoding vector for each person allowed to enter the happy house. To generate the encoding we use img_to_encoding(image_path, model) which basically runs the forward propagation of the model on the specified image. \nRun the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.",
"database = {}\ndatabase[\"danielle\"] = img_to_encoding(\"images/danielle.png\", FRmodel)\ndatabase[\"younes\"] = img_to_encoding(\"images/younes.jpg\", FRmodel)\ndatabase[\"tian\"] = img_to_encoding(\"images/tian.jpg\", FRmodel)\ndatabase[\"andrew\"] = img_to_encoding(\"images/andrew.jpg\", FRmodel)\ndatabase[\"kian\"] = img_to_encoding(\"images/kian.jpg\", FRmodel)\ndatabase[\"dan\"] = img_to_encoding(\"images/dan.jpg\", FRmodel)\ndatabase[\"sebastiano\"] = img_to_encoding(\"images/sebastiano.jpg\", FRmodel)\ndatabase[\"bertrand\"] = img_to_encoding(\"images/bertrand.jpg\", FRmodel)\ndatabase[\"kevin\"] = img_to_encoding(\"images/kevin.jpg\", FRmodel)\ndatabase[\"felix\"] = img_to_encoding(\"images/felix.jpg\", FRmodel)\ndatabase[\"benoit\"] = img_to_encoding(\"images/benoit.jpg\", FRmodel)\ndatabase[\"arnaud\"] = img_to_encoding(\"images/arnaud.jpg\", FRmodel)",
"Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.\nExercise: Implement the verify() function which checks if the front-door camera picture (image_path) is actually the person called \"identity\". You will have to go through the following steps:\n1. Compute the encoding of the image from image_path\n2. Compute the distance about this encoding and the encoding of the identity image stored in the database\n3. Open the door if the distance is less than 0.7, else do not open.\nAs presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)",
"# GRADED FUNCTION: verify\n\ndef verify(image_path, identity, database, model):\n \"\"\"\n Function that verifies if the person on the \"image_path\" image is \"identity\".\n \n Arguments:\n image_path -- path to an image\n identity -- string, name of the person you'd like to verify the identity. Has to be a resident of the Happy house.\n database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).\n model -- your Inception model instance in Keras\n \n Returns:\n dist -- distance between the image_path and the image of \"identity\" in the database.\n door_open -- True, if the door should open. False otherwise.\n \"\"\"\n \n ### START CODE HERE ###\n \n # Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)\n encoding = img_to_encoding(image_path, model)\n \n # Step 2: Compute distance with identity's image (≈ 1 line)\n dist = np.linalg.norm(encoding - database[identity])\n \n # Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)\n if dist<0.7:\n print(\"It's \" + str(identity) + \", welcome home!\")\n door_open = True\n else:\n print(\"It's not \" + str(identity) + \", please go away\")\n door_open = False\n \n ### END CODE HERE ###\n \n return dist, door_open",
"Younes is trying to enter the Happy House and the camera takes a picture of him (\"images/camera_0.jpg\"). Let's run your verification algorithm on this picture:\n<img src=\"images/camera_0.jpg\" style=\"width:100px;height:100px;\">",
"verify(\"images/camera_0.jpg\", \"younes\", database, FRmodel)",
"Expected Output:\n<table>\n <tr>\n <td>\n **It's younes, welcome home!**\n </td>\n <td>\n (0.65939283, True)\n </td>\n </tr>\n\n</table>\n\nBenoit, who broke the aquarium last weekend, has been banned from the house and removed from the database. He stole Kian's ID card and came back to the house to try to present himself as Kian. The front-door camera took a picture of Benoit (\"images/camera_2.jpg). Let's run the verification algorithm to check if benoit can enter.\n<img src=\"images/camera_2.jpg\" style=\"width:100px;height:100px;\">",
"verify(\"images/camera_2.jpg\", \"kian\", database, FRmodel)",
"Expected Output:\n<table>\n <tr>\n <td>\n **It's not kian, please go away**\n </td>\n <td>\n (0.86224014, False)\n </td>\n </tr>\n\n</table>\n\n3.2 - Face Recognition\nYour face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in! \nTo reduce such shenanigans, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them! \nYou'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input. \nExercise: Implement who_is_it(). You will have to go through the following steps:\n1. Compute the target encoding of the image from image_path\n2. Find the encoding from the database that has smallest distance with the target encoding. \n - Initialize the min_dist variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding.\n - Loop over the database dictionary's names and encodings. To loop use for (name, db_enc) in database.items().\n - Compute L2 distance between the target \"encoding\" and the current \"encoding\" from the database.\n - If this distance is less than the min_dist, then set min_dist to dist, and identity to name.",
"# GRADED FUNCTION: who_is_it\n\ndef who_is_it(image_path, database, model):\n \"\"\"\n Implements face recognition for the happy house by finding who is the person on the image_path image.\n \n Arguments:\n image_path -- path to an image\n database -- database containing image encodings along with the name of the person on the image\n model -- your Inception model instance in Keras\n \n Returns:\n min_dist -- the minimum distance between image_path encoding and the encodings from the database\n identity -- string, the name prediction for the person on image_path\n \"\"\"\n \n ### START CODE HERE ### \n \n ## Step 1: Compute the target \"encoding\" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)\n encoding = img_to_encoding(image_path=image_path, model=model)\n \n ## Step 2: Find the closest encoding ##\n \n # Initialize \"min_dist\" to a large value, say 100 (≈1 line)\n min_dist = 100\n \n # Loop over the database dictionary's names and encodings.\n for (name, db_enc) in database.items():\n \n # Compute L2 distance between the target \"encoding\" and the current \"emb\" from the database. (≈ 1 line)\n dist = np.linalg.norm(encoding - db_enc)\n\n # If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)\n if dist<min_dist:\n min_dist = dist\n identity = name\n\n ### END CODE HERE ###\n \n if min_dist > 0.7:\n print(\"Not in the database.\")\n else:\n print (\"it's \" + str(identity) + \", the distance is \" + str(min_dist))\n \n return min_dist, identity",
"Younes is at the front-door and the camera takes a picture of him (\"images/camera_0.jpg\"). Let's see if your who_it_is() algorithm identifies Younes.",
"who_is_it(\"images/camera_0.jpg\", database, FRmodel)",
"Expected Output:\n<table>\n <tr>\n <td>\n **it's younes, the distance is 0.659393**\n </td>\n <td>\n (0.65939283, 'younes')\n </td>\n </tr>\n\n</table>\n\nYou can change \"camera_0.jpg\" (picture of younes) to \"camera_1.jpg\" (picture of bertrand) and see the result.\nYour Happy House is running well. It only lets in authorized persons, and people don't need to carry an ID card around anymore! \nYou've now seen how a state-of-the-art face recognition system works.\nAlthough we won't implement it here, here're some ways to further improve the algorithm:\n- Put more images of each person (under different lighting conditions, taken on different days, etc.) into the database. Then given a new image, compare the new face to multiple pictures of the person. This would increae accuracy.\n- Crop the images to just contain the face, and less of the \"border\" region around the face. This preprocessing removes some of the irrelevant pixels around the face, and also makes the algorithm more robust.\n<font color='blue'>\nWhat you should remember:\n- Face verification solves an easier 1:1 matching problem; face recognition addresses a harder 1:K matching problem. \n- The triplet loss is an effective loss function for training a neural network to learn an encoding of a face image.\n- The same encoding can be used for verification and recognition. Measuring distances between two images' encodings allows you to determine whether they are pictures of the same person. \nCongrats on finishing this assignment! \nReferences:\n\nFlorian Schroff, Dmitry Kalenichenko, James Philbin (2015). FaceNet: A Unified Embedding for Face Recognition and Clustering\nYaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf (2014). DeepFace: Closing the gap to human-level performance in face verification \nThe pretrained model we use is inspired by Victor Sy Wang's implementation and was loaded using his code: https://github.com/iwantooxxoox/Keras-OpenFace.\nOur implementation also took a lot of inspiration from the official FaceNet github repository: https://github.com/davidsandberg/facenet"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kemerelab/NeuroHMM
|
StateClustering.ipynb
|
mit
|
[
"StateClustering.ipynb\nState clustering—categorical and positional clustering of inferred model states\nHere I will investigate the degree to which learned (inferred) states from hidden Markov models (HMMs) cluster in categorical space (e.g., RUN vs NORUN, SWR vs NOSWR, Lin1 vs Lin2, Lin1a-long vs Lin1b-short, etc.) as well as in positional space (state probability as a function of space. à la place fields).\nIn particular, I expect (hope) that the spatial clustering will look nice, and resemble actual place fields, but we can't reasonably expect this sort of clustering to form part of our regular analysis, since we most likely won't have external positional data available (for example, when the animal is alseep).\nI also hope to be able to discriminate between different categories by looking at the state clustering, because looking at the log probabilities of individual bins did not always reveal much information. For example, using bin-by-bin evaluation of observations, I could not see any difference between RUN vs NORUN epochs, although I did not leave a gap (say RUN > 8 vs NORUN < 4) in the data, and a comprehensive analysis is still lacking. In short, I hope to make faster progress by looking at state distributions and clustering, since my initial attempts to learn something directly from the log probability of a [single bin] observation proved rather fruitless.\nComplementary to this notebook is StateOrdering.ipynb, in which I look at both linear and 2-dimensional state ordering and associations.\nFFB: I have stumbled across http://www.lx.it.pt/~mtf/MLDM_2003.pdf (Similarity-Based Clustering of Sequences using Hidden Markov Models, 2003) which I should definitely look at in more detail to see if it's relevant.\nImport packages and initialization",
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport sys\n\nsys.path.insert(0, 'helpers')\n\nfrom efunctions import * # load my helper function(s) to save pdf figures, etc.\nfrom hc3 import load_data, get_sessions\nfrom hmmlearn import hmm # see https://github.com/ckemere/hmmlearn\nimport klabtools as klab\nimport seqtools as sq\n\nimport importlib\n\nimportlib.reload(sq) # reload module here only while prototyping...\nimportlib.reload(klab) # reload module here only while prototyping...\n\n%matplotlib inline\n\nsns.set(rc={'figure.figsize': (12, 4),'lines.linewidth': 1.5})\nsns.set_style(\"white\")",
"Load data\nHere we consider lin2 data for gor01 on the first recording day (6-7-2006), since this session had the most units (91) of all the gor01 sessions, and lin2 has position data, whereas lin1 only has partial position data.",
"datadirs = ['/home/etienne/Dropbox/neoReader/Data',\n 'C:/etienne/Dropbox/neoReader/Data',\n '/Users/etienne/Dropbox/neoReader/Data']\n\nfileroot = next( (dir for dir in datadirs if os.path.isdir(dir)), None)\n\nanimal = 'gor01'; month,day = (6,7); session = '16-40-19' # 91 units\n\nspikes = load_data(fileroot=fileroot, datatype='spikes',animal=animal, session=session, month=month, day=day, fs=32552, verbose=False)\neeg = load_data(fileroot=fileroot, datatype='eeg', animal=animal, session=session, month=month, day=day,channels=[0,1,2], fs=1252, starttime=0, verbose=False)\nposdf = load_data(fileroot=fileroot, datatype='pos',animal=animal, session=session, month=month, day=day, verbose=False)\nspeed = klab.get_smooth_speed(posdf,fs=60,th=8,cutoff=0.5,showfig=False,verbose=False)",
"Part 1: Positional state clustering (à la place fields)\nHere I have a few design choices that might impact the results significantly. First, binning spikes: I have positional data at 60 Hz, and I have to trust that starting synchronization is pretty precise (otherwise I'm somewhat screwed). There is however no synchronization information in the .whl file itself. So I could bin the observations at 60 Hz, and then train an HMM at that timescale, or I could train a HMM at one of my usual timescales (125 ms or 62.5 ms) and then interpolate the position information.\nI should also be careful to specify what model I am training exactly, since usually I train on RUN data only. Should I train an HMM on all data (a training set, of course) or on RUN data only? Should I look at positional state clustering on all the data, or only on the RUN data? \nAnother consideration is how I intend to compute the posterior state estimates—I could do MLSE within RUN epochs, so that I know I am getting valid state sequences, or I could do MAP decoding so that per bin, I have the posterior state estimates.\nThe spatial bin sizes could also make a difference, of course.\nPositional state clustering 1: RUN data only ($> 8$ u$\\cdot$s$^{-1}$) @ 62.5 ms bins; $N_s$ = 50 spatial bins; MAP",
"from mymap import Map\n\ndef extract_subsequences_from_binned_spikes(binned_spikes, bins):\n data = spikes.data.copy()\n boundaries = klab.get_continuous_segments(bins)\n \n binned = Map()\n binned['bin_width'] = binned_spikes.bin_width\n binned['data'] = binned_spikes.data[bins,:]\n binned['boundaries'] = boundaries\n binned['boundaries_fs'] = 1/binned_spikes.bin_width \n binned['sequence_lengths'] = (boundaries[:,1] - boundaries[:,0] + 1).flatten()\n \n return binned\n\n## bin ALL spikes\nds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)\nbinned_spikes = klab.bin_spikes(spikes.data, ds=ds, fs=spikes.samprate, verbose=True)\n\ncenterx = (np.array(posdf['x1']) + np.array(posdf['x2']))/2\ncentery = (np.array(posdf['y1']) + np.array(posdf['y2']))/2\n\ntend = len(speed.data)/speed.samprate # end in seconds\ntime_axis = np.arange(0,len(speed.data))/speed.samprate\nspeed_0625, tvel_0625 = klab.resample_velocity(velocity=speed.data,t_bin=ds,tvel=time_axis,t0=0,tend=tend)\ntruepos_0625 = np.interp(np.arange(0,len(binned_spikes.data))*ds,time_axis,centerx)\n\n# get bins where rat was running faster than thresh units per second\nrunidx_0625 = np.where(speed_0625>8)[0]\nseq_stk_run_0625 = extract_subsequences_from_binned_spikes(binned_spikes,runidx_0625)\n\n## split data into train, test, and validation sets:\ntr_b,vl_b,ts_b = sq.data_split(seq_stk_run_0625, tr=50, vl=2, ts=50, randomseed = 0, verbose=True)\n\n## train HMM on active behavioral data; training set (with a fixed, arbitrary number of states for now):\nmyhmm = sq.hmm_train(tr_b, num_states=35, n_iter=50, verbose=False)\n\n###########################################################3\nstacked_data = seq_stk_run_0625\n###########################################################3\n\nx0=0; xl=100; num_pos_bins=50\nxx_left = np.linspace(x0,xl,num_pos_bins+1)\nnum_sequences = len(stacked_data.sequence_lengths)\nnum_states = myhmm.n_components\nstate_pos = np.zeros((num_states, num_pos_bins))\n\nfor seq_id in np.arange(0,num_sequences):\n tmpseqbdries = [0]; tmpseqbdries.extend(np.cumsum(stacked_data.sequence_lengths).tolist());\n obs = stacked_data.data[tmpseqbdries[seq_id]:tmpseqbdries[seq_id+1],:]\n ll, pp = myhmm.score_samples(obs)\n xx = truepos_0625[stacked_data.boundaries[seq_id,0]:stacked_data.boundaries[seq_id,1]+1]\n digitized = np.digitize(xx, xx_left) - 1 # spatial bin numbers\n for ii, ppii in enumerate(pp):\n state_pos[:,digitized[ii]] += np.transpose(ppii)\n\nfig, ax1 = plt.subplots(1, 1, figsize=(6, 4))\nax1.matshow(state_pos, interpolation='none', cmap='OrRd')\nax1.set_xlabel('position bin')\nax1.set_ylabel('state')\nax1.set_title('positional state clustering before sorting; RUN > 8')\n\ngrid_kws = {\"width_ratios\": (.9, .03), \"wspace\": .07}\nf, (ax, cbar_ax) = plt.subplots(1, 2, gridspec_kw=grid_kws, figsize=(6, 4))\nax = sns.heatmap(state_pos, ax=ax, \n cmap='OrRd',\n cbar_ax=cbar_ax,\n cbar_kws={\"orientation\": \"vertical\", 'label':'colorbar label'})\n\nfig, ax1 = plt.subplots(1, 1, figsize=(6, 4))\nsns.heatmap(state_pos, cmap='OrRd', linewidths=.5, ax=ax1, cbar=True, xticklabels=5, yticklabels=5)\nax1.set_xlabel('position bin')\nax1.set_ylabel('state')\nax1.set_title('positional state clustering before sorting; RUN > 8')\n\n# sort model states:\ndef get_sorted_order_from_transmat(A, start_state = 0):\n \n new_order = [start_state]\n num_states = A.shape[0]\n rem_states = np.arange(0,start_state).tolist()\n rem_states.extend(np.arange(start_state+1,num_states).tolist())\n cs = start_state\n\n for ii in 
np.arange(0,num_states-1):\n nstilde = np.argmax(A[cs,rem_states])\n ns = rem_states[nstilde]\n rem_states.remove(ns)\n cs = ns\n new_order.append(cs)\n \n return new_order, A[:, new_order][new_order]\n\nnew_order, Anew = get_sorted_order_from_transmat(myhmm.transmat_, start_state = 17)\n\n## now order states by peak location on track\npeaklocations = state_pos.argmax(axis=1)\npeakorder = peaklocations.argsort()\n\nfig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))\nax1.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\nax1.set_xlabel('position bin')\nax1.set_ylabel('state')\nax1.set_title('positional state clustering after sorting (peak-order); RUN > 8', y=1.08)\n\nax2.matshow(state_pos[new_order,:], interpolation='none', cmap='OrRd')\nax2.set_xlabel('position bin')\nax2.set_ylabel('state')\nax2.set_title('positional state clustering after sorting (trans-mat); RUN > 8', y=1.08)",
"Remarks: I am very pleased with the above results; the model states do indeed look like place fields. Note that the animal never explored the track for $x < 10$ or $x>90$. We can perform a similar analysis in 2D if the experiment calls for it, and we should similarly see nice place fields forming in the environment.\nThe above results were obtained using RUN > 8 data, evaluated (decoded) in a RUN > 8 model. Q. What would happen if we use the same model, but evaluate the NORUN < 4 data? Also, what would happen if we train a model on NORUN < 4 data and evaluate only the corresponding data? It is worth looking at some, or many of these and other combinations, to get a feel for how well the model really picked up on the place-preference of cells, or to what extent other choices in our design influenced the results shown above.\nAnother interesting feature is that we can actually see that some of the cells (states) shifted place fields when the track was shortened. See e.g. states 30 to 35, where the bifurcation is clearly visible. This claim should be verified more carefully of course, but it seems very plausible.\nOn ordering with the transition probability matrix (right): Note that the transition probability matrix sorted results are more difficult to interpret, but could still make sense. First of all, for a truly linear progression of states, we expect to see a zig-zag pattern across position, since it might correspond to L$\\to$R$\\to$L. However, since we really have two tracks (the longer and shorter ones) encoded in the model, we can expect four paths through the environment, but since some states share locations and are used in both lengths of the track, a strictly linear progression of the states will not lead to a clean, smooth, spatial representation.\nPositional state clustering 2: NORUN data only ($< 4$ u$\\cdot$s$^{-1}$) @ 62.5 ms bins; $N_s$ = 50 spatial bins; MAP\nHere I train and evaluate using NORUN < 4 data. Perhaps 4 units per second is still too fast? I could easily go down to NORUN < 1 or even less. Ideally one might hope to see much less well defined spatial clustering than the above results, but it is certainly possible that place cells are still firing robustly when the animal is stationary in the environment.",
"# get bins where rat was running slower than thresh units per second\nnorunidx_0625 = np.where(speed_0625<4)[0]\nseq_stk_norun_0625 = extract_subsequences_from_binned_spikes(binned_spikes,norunidx_0625)\n\n## split data into train, test, and validation sets:\ntr_q,vl_q,ts_q = sq.data_split(seq_stk_norun_0625, tr=50, vl=2, ts=50, randomseed = 0, verbose=True)\n\n## train HMM on active behavioral data; training set (with a fixed, arbitrary number of states for now):\nmyhmmq = sq.hmm_train(tr_q, num_states=35, n_iter=50, verbose=False)\n\n###########################################################3\nstacked_data = seq_stk_norun_0625\n###########################################################3\n\nx0=0; xl=100; num_pos_bins=50\nxx_left = np.linspace(x0,xl,num_pos_bins+1)\nnum_sequences = len(stacked_data.sequence_lengths)\nnum_states = myhmmq.n_components\nstate_pos = np.zeros((num_states, num_pos_bins))\n\nfor seq_id in np.arange(0,num_sequences):\n tmpseqbdries = [0]; tmpseqbdries.extend(np.cumsum(stacked_data.sequence_lengths).tolist());\n obs = stacked_data.data[tmpseqbdries[seq_id]:tmpseqbdries[seq_id+1],:]\n ll, pp = myhmmq.score_samples(obs)\n xx = truepos_0625[stacked_data.boundaries[seq_id,0]:stacked_data.boundaries[seq_id,1]+1]\n digitized = np.digitize(xx, xx_left) - 1 # spatial bin numbers\n for ii, ppii in enumerate(pp):\n state_pos[:,digitized[ii]] += np.transpose(ppii)\n\n## now order states by peak location on track\npeaklocations = state_pos.argmax(axis=1)\npeakorder = peaklocations.argsort()\n\nfig, ax1 = plt.subplots(1, 1, figsize=(6, 4))\nax1.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\nax1.set_xlabel('position bin')\nax1.set_ylabel('state')\nax1.set_title('positional state clustering after sorting; NORUN < 4')",
"Remarks: We can see that very few place field like patterns were found, but the reward locations are robustly encoded, both before and after shortening the track.\nPositional state clustering 3: NORUN data only ($< 1$ u$\\cdot$s$^{-1}$) @ 62.5 ms bins; $N_s$ = 50 spatial bins; MAP\nHere I train and evaluate using NORUN < 1 data. We should see something very similar to when we were using NORUN < 4 data.",
"# get bins where rat was running slower than thresh units per second\nnorunidx_0625 = np.where(speed_0625<1)[0]\nseq_stk_norun_0625 = extract_subsequences_from_binned_spikes(binned_spikes,norunidx_0625)\n\n## split data into train, test, and validation sets:\ntr_q,vl_q,ts_q = sq.data_split(seq_stk_norun_0625, tr=50, vl=2, ts=50, randomseed = 0, verbose=True)\n\n## train HMM on active behavioral data; training set (with a fixed, arbitrary number of states for now):\nmyhmmq = sq.hmm_train(tr_q, num_states=35, n_iter=50, verbose=False)\n\n###########################################################3\nstacked_data = seq_stk_norun_0625\n###########################################################3\n\nx0=0; xl=100; num_pos_bins=50\nxx_left = np.linspace(x0,xl,num_pos_bins+1)\nnum_sequences = len(stacked_data.sequence_lengths)\nnum_states = myhmmq.n_components\nstate_pos = np.zeros((num_states, num_pos_bins))\n\nfor seq_id in np.arange(0,num_sequences):\n tmpseqbdries = [0]; tmpseqbdries.extend(np.cumsum(stacked_data.sequence_lengths).tolist());\n obs = stacked_data.data[tmpseqbdries[seq_id]:tmpseqbdries[seq_id+1],:]\n ll, pp = myhmmq.score_samples(obs)\n xx = truepos_0625[stacked_data.boundaries[seq_id,0]:stacked_data.boundaries[seq_id,1]+1]\n digitized = np.digitize(xx, xx_left) - 1 # spatial bin numbers\n for ii, ppii in enumerate(pp):\n state_pos[:,digitized[ii]] += np.transpose(ppii)\n\n## now order states by peak location on track\npeaklocations = state_pos.argmax(axis=1)\npeakorder = peaklocations.argsort()\n\nfig, ax1 = plt.subplots(1, 1, figsize=(6, 4))\nax1.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\nax1.set_xlabel('position bin')\nax1.set_ylabel('state')\nax1.set_title('positional state clustering after sorting; NORUN < 1')",
"Remarks: Indeed, we again see a robust representation/encoding of the reward sites, and little else. What about stationary data decoded in the RUN model? And what about RUN data in the stationary model? Hopefully, both will reveal no significant place field structure outside of the reward sites, but let's confirm this next.\nPositional state clustering 4: NORUN ($< 1$ u$\\cdot$s$^{-1}$) in RUN model @ 62.5 ms bins; $N_s$ = 50 spatial bins; MAP\nHere I evaluate NORUN < 1 data in the RUN > 8 model.",
"###########################################################3\nstacked_data = seq_stk_norun_0625\n###########################################################3\n\nx0=0; xl=100; num_pos_bins=50\nxx_left = np.linspace(x0,xl,num_pos_bins+1)\nnum_sequences = len(stacked_data.sequence_lengths)\nnum_states = myhmm.n_components\nstate_pos = np.zeros((num_states, num_pos_bins))\n\nfor seq_id in np.arange(0,num_sequences):\n tmpseqbdries = [0]; tmpseqbdries.extend(np.cumsum(stacked_data.sequence_lengths).tolist());\n obs = stacked_data.data[tmpseqbdries[seq_id]:tmpseqbdries[seq_id+1],:]\n ll, pp = myhmm.score_samples(obs)\n xx = truepos_0625[stacked_data.boundaries[seq_id,0]:stacked_data.boundaries[seq_id,1]+1]\n digitized = np.digitize(xx, xx_left) - 1 # spatial bin numbers\n for ii, ppii in enumerate(pp):\n state_pos[:,digitized[ii]] += np.transpose(ppii)\n\n## now order states by peak location on track\npeaklocations = state_pos.argmax(axis=1)\npeakorder = peaklocations.argsort()\n\nfig, ax1 = plt.subplots(1, 1, figsize=(6, 4))\nax1.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\nax1.set_xlabel('position bin')\nax1.set_ylabel('state')\nax1.set_title('positional state clustering after sorting; NORUN < 1 in RUN')",
"Positional state clustering 5: RUN ($> 8$ u$\\cdot$s$^{-1}$) in NORUN model @ 62.5 ms bins; $N_s$ = 50 spatial bins; MAP\nHere I evaluate RUN > 8 data in the NORUN < 1 model.",
"###########################################################3\nstacked_data = seq_stk_run_0625\n###########################################################3\n\nx0=0; xl=100; num_pos_bins=50\nxx_left = np.linspace(x0,xl,num_pos_bins+1)\nnum_sequences = len(stacked_data.sequence_lengths)\nnum_states = myhmmq.n_components\nstate_pos = np.zeros((num_states, num_pos_bins))\n\nfor seq_id in np.arange(0,num_sequences):\n tmpseqbdries = [0]; tmpseqbdries.extend(np.cumsum(stacked_data.sequence_lengths).tolist());\n obs = stacked_data.data[tmpseqbdries[seq_id]:tmpseqbdries[seq_id+1],:]\n ll, pp = myhmmq.score_samples(obs)\n xx = truepos_0625[stacked_data.boundaries[seq_id,0]:stacked_data.boundaries[seq_id,1]+1]\n digitized = np.digitize(xx, xx_left) - 1 # spatial bin numbers\n for ii, ppii in enumerate(pp):\n state_pos[:,digitized[ii]] += np.transpose(ppii)\n\n## now order states by peak location on track\npeaklocations = state_pos.argmax(axis=1)\npeakorder = peaklocations.argsort()\n\nfig, ax1 = plt.subplots(1, 1, figsize=(6, 4))\nax1.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\nax1.set_xlabel('position bin')\nax1.set_ylabel('state')\nax1.set_title('positional state clustering after sorting; RUN > 8 in NORUN < 1')",
"Positional state clustering 6: ALL data @ 62.5 ms bins; $N_s$ = 50 spatial bins; MAP\nWhat if we don't know when the animal is running, and when it is resting? What if we just inject ALL of the data into the model to learn some underlying states? In this case, we should of course expect to learn states with place field like structures, but also very robust encoding of the reward sites. I expect that the model will do fine, but one potential problem is the fact that the RUN data constitutes a much smaller part of the data than the NORUN data, so that the quiescent color levels might mask the place field activity. I could be more careful in normalizing, or I could use a logarithmic color scale to reveal multi-level structures. Let's see what we get first:",
"# get bins where rat was running slower than thresh units per second\nallidx_0625 = np.where(speed_0625>=0)[0]\nseq_stk_all_0625 = extract_subsequences_from_binned_spikes(binned_spikes,allidx_0625)\n\n## train HMM on active behavioral data; training set (with a fixed, arbitrary number of states for now):\nmyhmmall = sq.hmm_train(seq_stk_all_0625, num_states=35, n_iter=50, verbose=False)\n\n###########################################################3\nstacked_data = seq_stk_all_0625\n###########################################################3\n\nx0=0; xl=100; num_pos_bins=50\nxx_left = np.linspace(x0,xl,num_pos_bins+1)\nnum_sequences = len(stacked_data.sequence_lengths)\nnum_states = myhmmall.n_components\nstate_pos = np.zeros((num_states, num_pos_bins))\n\nfor seq_id in np.arange(0,num_sequences):\n tmpseqbdries = [0]; tmpseqbdries.extend(np.cumsum(stacked_data.sequence_lengths).tolist());\n obs = stacked_data.data[tmpseqbdries[seq_id]:tmpseqbdries[seq_id+1],:]\n ll, pp = myhmmall.score_samples(obs)\n xx = truepos_0625[stacked_data.boundaries[seq_id,0]:stacked_data.boundaries[seq_id,1]+1]\n digitized = np.digitize(xx, xx_left) - 1 # spatial bin numbers\n for ii, ppii in enumerate(pp):\n state_pos[:,digitized[ii]] += np.transpose(ppii)\n\ndef normalize_state_pos(state_pos):\n num_states = state_pos.shape[0]\n num_pos_bins = state_pos.shape[1]\n state_pos = state_pos / np.tile(np.reshape(state_pos.sum(axis=1),(num_states,1)),num_pos_bins)\n return state_pos\n\n## now order states by peak location on track\nstate_pos = normalize_state_pos(state_pos)\npeaklocations = state_pos.argmax(axis=1)\npeakorder = peaklocations.argsort()\n\nfig, ax1 = plt.subplots(1, 1, figsize=(6, 4))\nim = ax1.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\nax1.set_xlabel('position bin')\nax1.set_ylabel('state')\nax1.set_title('positional state clustering after sorting; ALL data')\ncax = fig.add_axes([0.9, 0.1, 0.03, 0.8])\nfig.colorbar(im, cax=cax)",
"Remarks: Notice that the majority of the states have been recruited to encode the reward locations, with only a small number of states (6 to 11) representing other positions along the track. This is somewhat disappointing, but ultimately expected, since the majority of the data were spent at the reward locations, and comparatively very little data were collected when running along the track. \nCan we possibly make the track representation better by considering more states?\nPositional state clustering 7: RUN data in an ALL model @ 62.5 ms bins; $N_s$ = 50 spatial bins; MAP\nHere we see what we can learn by using a model trained on ALL the data (mostly quiescent data, and a little bit of RUN data), so that most of the states are tuned to encode the reward locations... but this time, we look at the place fields by only evaluating RUN sequences in the ALL-trained model.",
"###########################################################3\nstacked_data = seq_stk_run_0625\n###########################################################3\n\nx0=0; xl=100; num_pos_bins=50\nxx_left = np.linspace(x0,xl,num_pos_bins+1)\nnum_sequences = len(stacked_data.sequence_lengths)\nnum_states = myhmmall.n_components\nstate_pos = np.zeros((num_states, num_pos_bins))\n\nfor seq_id in np.arange(0,num_sequences):\n tmpseqbdries = [0]; tmpseqbdries.extend(np.cumsum(stacked_data.sequence_lengths).tolist());\n obs = stacked_data.data[tmpseqbdries[seq_id]:tmpseqbdries[seq_id+1],:]\n ll, pp = myhmmall.score_samples(obs)\n xx = truepos_0625[stacked_data.boundaries[seq_id,0]:stacked_data.boundaries[seq_id,1]+1]\n digitized = np.digitize(xx, xx_left) - 1 # spatial bin numbers\n for ii, ppii in enumerate(pp):\n state_pos[:,digitized[ii]] += np.transpose(ppii)\n\nstate_pos = normalize_state_pos(state_pos)\n#peaklocations = state_pos.argmax(axis=1)\n#peakorder = peaklocations.argsort()\n\nfig, ax1 = plt.subplots(1, 1, figsize=(6, 4))\nax1.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\nax1.set_xlabel('position bin')\nax1.set_ylabel('state')\nax1.set_title('positional state clustering after sorting; RUN > 8 in ALL; same sorting as ALL in ALL')",
"Remarks: Somewhat surprisingly, we find that a wide range of states now encode the position along the track. Recall that we did not update the ALL model in any way. This is the SAME model as what was used for the ALL in ALL result above, where most of the states encoded reward locations, and only states 6 through 11 seemed to cover the remaining positions on the track. How do we reconcile these two different views of the underlying model? I have to think about it a little more carefully... \nHere we have intentionally not resorted the states, so that we can compare it directly to the ALL in ALL result above. However, if we re-sort the states according to their peak locations along the track, we can better see the almost uniform coverage along the track that was nonetheless learned from the data:",
"## now order states by peak location on track\nstate_pos = normalize_state_pos(state_pos)\npeaklocations = state_pos.argmax(axis=1)\npeakorder = peaklocations.argsort()\n\nfig, ax1 = plt.subplots(1, 1, figsize=(6, 4))\nax1.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\nax1.set_xlabel('position bin')\nax1.set_ylabel('state')\nax1.set_title('positional state clustering after sorting; RUN > 8 in ALL; re-sorted')",
"Positional state clustering 8: NORUN data in an ALL model @ 62.5 ms bins; $N_s$ = 50 spatial bins; MAP\nAs a final check, we look at the NORUN < 1 data in the ALL model.",
"###########################################################3\nstacked_data = seq_stk_norun_0625\n###########################################################3\n\nx0=0; xl=100; num_pos_bins=50\nxx_left = np.linspace(x0,xl,num_pos_bins+1)\nnum_sequences = len(stacked_data.sequence_lengths)\nnum_states = myhmmall.n_components\nstate_pos = np.zeros((num_states, num_pos_bins))\n\nfor seq_id in np.arange(0,num_sequences):\n tmpseqbdries = [0]; tmpseqbdries.extend(np.cumsum(stacked_data.sequence_lengths).tolist());\n obs = stacked_data.data[tmpseqbdries[seq_id]:tmpseqbdries[seq_id+1],:]\n ll, pp = myhmmall.score_samples(obs)\n xx = truepos_0625[stacked_data.boundaries[seq_id,0]:stacked_data.boundaries[seq_id,1]+1]\n digitized = np.digitize(xx, xx_left) - 1 # spatial bin numbers\n for ii, ppii in enumerate(pp):\n state_pos[:,digitized[ii]] += np.transpose(ppii)\n\n## now order states by peak location on track\npeaklocations = state_pos.argmax(axis=1)\npeakorder = peaklocations.argsort()\n\nfig, ax1 = plt.subplots(1, 1, figsize=(6, 4))\nax1.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\nax1.set_xlabel('position bin')\nax1.set_ylabel('state')\nax1.set_title('positional state clustering after sorting; NORUN <1 in ALL')",
"Remarks: Perhaps unsurprisingly this time, we see results very similar to the NORUN results presented earlier. Indeed, the ALL model consists of mostly NORUN data, so evaluating the NORUN data in the ALL model should be very similar to evaluating NORUN data in a NORUN model. No surprises here, and things are as expected.\nDiscussion: In conclusion I think it is safe to say that the HMM effectively learned behaviorally relevant states, especially when presented with RUN data. However, even when given ALL the data, the model somehow still captured state representations for both (i) the reward locations, as well as (positions along the track).\nNext, we see if we can represent the track better by using more model states.\nPositional state clustering 9: ALL data @ 62.5 ms bins; $N_s$ = 50 spatial bins; MAP; 55 states\nPreviously we saw that around 5–6 states were recruited to represent positions along the track when we used $m=35$ model states and all the data. If we increase the model states to $m=55$, what should we expect? Perhaps we should expect a linear increase in the number of states representing the positions along the track, so that we should expect 10–11 states to represent positions between the reward locations.",
"# get bins where rat was running slower than thresh units per second\nallidx_0625 = np.where(speed_0625>=0)[0]\nseq_stk_all_0625 = extract_subsequences_from_binned_spikes(binned_spikes,allidx_0625)\n\n## train HMM on active behavioral data; training set (with a fixed, arbitrary number of states for now):\nmyhmmall = sq.hmm_train(seq_stk_all_0625, num_states=55, n_iter=50, verbose=False)\n\n###########################################################3\nstacked_data = seq_stk_all_0625\n###########################################################3\n\nx0=0; xl=100; num_pos_bins=50\nxx_left = np.linspace(x0,xl,num_pos_bins+1)\nnum_sequences = len(stacked_data.sequence_lengths)\nnum_states = myhmmall.n_components\nstate_pos = np.zeros((num_states, num_pos_bins))\n\nfor seq_id in np.arange(0,num_sequences):\n tmpseqbdries = [0]; tmpseqbdries.extend(np.cumsum(stacked_data.sequence_lengths).tolist());\n obs = stacked_data.data[tmpseqbdries[seq_id]:tmpseqbdries[seq_id+1],:]\n ll, pp = myhmmall.score_samples(obs)\n xx = truepos_0625[stacked_data.boundaries[seq_id,0]:stacked_data.boundaries[seq_id,1]+1]\n digitized = np.digitize(xx, xx_left) - 1 # spatial bin numbers\n for ii, ppii in enumerate(pp):\n state_pos[:,digitized[ii]] += np.transpose(ppii)\n\n## now order states by peak location on track\nstate_pos = normalize_state_pos(state_pos)\npeaklocations = state_pos.argmax(axis=1)\npeakorder = peaklocations.argsort()\n\nfig, ax1 = plt.subplots(1, 1, figsize=(6, 4))\nax1.matshow(state_pos[peakorder,:], interpolation='none', cmap='OrRd')\nax1.set_xlabel('position bin')\nax1.set_ylabel('state')\nax1.set_title('positional state clustering after sorting; ALL data')",
"Remarks: Indeed, we found a linear increase in the number of states representing the positions along the track. The proportion of states encoding this part of the behavior (running along the track) is also roughly consistent with the proportion of data: The animal ran faster than th = 8.0 units/s for a total of 237.7 seconds (out of a total of 2587.8 seconds) That is around 10 % of the time, but quiescent data are also expected to be more homogeneous, so that fewer states have to be recruited to capture the underlying dynamics.",
"## now order states by peak location on track\nstate_pos = normalize_state_pos(state_pos)\npeaklocations = state_pos.argmax(axis=1)\npeakorder = peaklocations.argsort()\n\nfig, ax1 = plt.subplots(1, 1, figsize=(6, 4))\nax1.matshow(state_pos[peakorder[6:17],:], interpolation='none', cmap='OrRd')\nax1.set_xlabel('position bin')\nax1.set_ylabel('state')\nax1.set_title('positional state clustering after sorting; ALL data', y=1.18)\n",
"Part 2: Categorical state clustering\nWhat questions am I trying to answer?\n\nCan we distinguish (using log probability alone) RUN from NORUN point observations?\nCan we distinguish (using log probability alone) RUN from NORUN sequence observations?\nCan we distinguish (using log probability alone) FIRST from SECOND using point observations?\nCan we distinguish (using log probability alone) FIRST from SECOND using sequence observations?\nCan we see differences in state distributions between FIRST and SECOND, or RUN and NORUN? Of course, we expect that we must... is this really a useful question to ask? What conclusion will we be able to draw from this?\n\nBigger questions (above questions are not particularly useful):\n* What is the underlying state of the animal? RUN, NORUN, SWR (MUA), REPLAY, REWARD, ...?\n* What is the underlying context of the neural activity? Which environment/scenario?\n* \nSome other things\n* train model at certain speed (timescale), evaluate data at different timescales in model—hopefully we can infer the proper timescale. Extend this to replay<--->behavioral situation\nNote to self: If we have many states, then state clustering might work better for clustering states by first and second half. With fewer states, many of those (esp the center track states) will be shared.\ntrain on ALL data with many states, and look at decoded state distribution for first and second half; as well as quiescent vs run",
"## bin ALL spikes\nds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)\nnum_states = 65\n\nbinned_spikes = klab.bin_spikes(spikes.data, ds=ds, fs=spikes.samprate, verbose=True)\n\ncenterx = (np.array(posdf['x1']) + np.array(posdf['x2']))/2\ncentery = (np.array(posdf['y1']) + np.array(posdf['y2']))/2\n\ntend = len(speed.data)/speed.samprate # end in seconds\ntime_axis = np.arange(0,len(speed.data))/speed.samprate\nspeed_0625, tvel_0625 = klab.resample_velocity(velocity=speed.data,t_bin=ds,tvel=time_axis,t0=0,tend=tend)\ntruepos_0625 = np.interp(np.arange(0,len(binned_spikes.data))*ds,time_axis,centerx)\n\n# get bins where rat was running faster than thresh units per second\nrunidx_0625 = np.where(speed_0625>8)[0]\nseq_stk_run_0625 = extract_subsequences_from_binned_spikes(binned_spikes,runidx_0625)\n\n## split data into train, test, and validation sets:\ntr_b,vl_b,ts_b = sq.data_split(seq_stk_run_0625, tr=50, vl=2, ts=50, randomseed = 0, verbose=True)\n\n## train HMM on active behavioral data; training set (with a fixed, arbitrary number of states for now):\nmyhmm = sq.hmm_train(tr_b, num_states=num_states, n_iter=50, verbose=False)\n\n###########################################################3\nstacked_data = seq_stk_run_0625\n###########################################################3\n\nx0=0; xl=100; num_pos_bins=50\nxx_left = np.linspace(x0,xl,num_pos_bins+1)\nnum_sequences = len(stacked_data.sequence_lengths)\nnum_states = myhmm.n_components\nstate_pos = np.zeros((num_states, num_pos_bins))\nstate_distr_1 = np.zeros((num_states,1))\nstate_distr_2 = np.zeros((num_states,1))\n\nfor seq_id in np.arange(0,num_sequences):\n tmpseqbdries = [0]; tmpseqbdries.extend(np.cumsum(stacked_data.sequence_lengths).tolist());\n obs = stacked_data.data[tmpseqbdries[seq_id]:tmpseqbdries[seq_id+1],:]\n ll, pp = myhmm.score_samples(obs)\n if stacked_data.boundaries[seq_id][0] < len(speed_0625)/2:\n #print('1st half')\n state_distr_1[:,0] = state_distr_1[:,0] + pp.sum(axis=0)\n else:\n #print('2nd half')\n state_distr_2[:,0] = state_distr_2[:,0] + pp.sum(axis=0)\n \n# xx = truepos_0625[stacked_data.boundaries[seq_id,0]:stacked_data.boundaries[seq_id,1]+1]\n# digitized = np.digitize(xx, xx_left) - 1 # spatial bin numbers\n# for ii, ppii in enumerate(pp):\n# state_pos[:,digitized[ii]] += np.transpose(ppii)\n\naaa = np.argsort(state_distr_1, axis=None)\nplt.plot(state_distr_1[aaa])\nplt.plot(state_distr_2[aaa])\n\n## bin ALL spikes\nds = 0.0625 # bin spikes into 62.5 ms bins (theta-cycle inspired)\nnum_states = 65\n\nbinned_spikes = klab.bin_spikes(spikes.data, ds=ds, fs=spikes.samprate, verbose=True)\n\ncenterx = (np.array(posdf['x1']) + np.array(posdf['x2']))/2\ncentery = (np.array(posdf['y1']) + np.array(posdf['y2']))/2\n\ntend = len(speed.data)/speed.samprate # end in seconds\ntime_axis = np.arange(0,len(speed.data))/speed.samprate\nspeed_0625, tvel_0625 = klab.resample_velocity(velocity=speed.data,t_bin=ds,tvel=time_axis,t0=0,tend=tend)\ntruepos_0625 = np.interp(np.arange(0,len(binned_spikes.data))*ds,time_axis,centerx)\n\n# get bins where rat was running faster than thresh units per second\nrunidx_0625 = np.where(speed_0625>8)[0]\nseq_stk_run_0625 = extract_subsequences_from_binned_spikes(binned_spikes,runidx_0625)\n\n## split data into train, test, and validation sets:\ntr_b,vl_b,ts_b = sq.data_split(seq_stk_run_0625, tr=50, vl=2, ts=50, randomseed = 0, verbose=True)\n\n## train HMM on active behavioral data; training set (with a fixed, arbitrary number of states for now):\nmyhmm = 
sq.hmm_train(tr_b, num_states=num_states, n_iter=50, verbose=False)\n\n###########################################################3\nstacked_data = seq_stk_run_0625\n###########################################################3\n\nx0=0; xl=100; num_pos_bins=50\nxx_left = np.linspace(x0,xl,num_pos_bins+1)\nnum_sequences = len(stacked_data.sequence_lengths)\nnum_states = myhmm.n_components\nstate_pos = np.zeros((num_states, num_pos_bins))\nstate_distr_1 = np.zeros((num_states,1))\nstate_distr_2 = np.zeros((num_states,1))\n\nfor seq_id in np.arange(0,num_sequences):\n tmpseqbdries = [0]; tmpseqbdries.extend(np.cumsum(stacked_data.sequence_lengths).tolist());\n obs = stacked_data.data[tmpseqbdries[seq_id]:tmpseqbdries[seq_id+1],:]\n ll, pp = myhmm.score_samples(obs)\n if stacked_data.boundaries[seq_id][0] < len(speed_0625)/2:\n #print('1st half')\n state_distr_1[:,0] = state_distr_1[:,0] + pp.sum(axis=0)\n else:\n #print('2nd half')\n state_distr_2[:,0] = state_distr_2[:,0] + pp.sum(axis=0)\n \n xx = truepos_0625[stacked_data.boundaries[seq_id,0]:stacked_data.boundaries[seq_id,1]+1]\n digitized = np.digitize(xx, xx_left) - 1 # spatial bin numbers\n for ii, ppii in enumerate(pp):\n state_pos[:,digitized[ii]] += np.transpose(ppii)\n \nstate_pos = normalize_state_pos(state_pos)\npeaklocations = state_pos.argmax(axis=1)\npeakorder = peaklocations.argsort()\n\naaa = np.argsort(state_distr_1, axis=None)\nplt.plot(state_distr_1[peakorder])\nplt.plot(state_distr_2[peakorder])",
"To be continued..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Olsthoorn/TransientGroundwaterFlow
|
Syllabus_in_notebooks/Sec6_5_Theis-well.ipynb
|
gpl-3.0
|
[
"Section 6.5 pumptest\nTheis wells introduction exp1 and simplification\nTheis considered the transient flow due to a well with a constant extraction since $t=0$ placed in a uniform confined aquifer of infinite extent.\nThe solution may be opbtained by straighforward Lapace transformation and looking up de result from the Laplace inversions table. It reads\n$$ s(r, t) = \\frac Q {4 \\pi kD} W(u),\\,\\,\\,\\, u = \\frac {r^2 S} {4 kD t}$$\nwhere W(..) is the so-called Theis well function, which is actually equal to the mathematical exponential integral\n$$ W(z) = \\mathtt{exp1}(z) = \\intop _z ^\\infty \\frac {e^{-y}} {y} dy $$\nThe exponential integral lives in scipy special as the function $\\mathtt{exp1}(z)$\nAfter importing this function from the module scipy.special we can use exp1(u)",
"from scipy.special import exp1\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport pdb\n\ndef newfig(title=\"title\", xlabel=\"xlabel\", ylabel=\"ylabel\", xlim=None, ylim=None, xscale=None, yscale=None, size_inches=(12, 8),\n fontsize=15):\n fig, ax = plt.subplots()\n fig.set_size_inches(size_inches)\n ax.set_title(title, fontsize=fontsize)\n ax.set_xlabel(xlabel, fontsize=fontsize)\n ax.set_ylabel(ylabel, fontsize=fontsize)\n if xlim: ax.set_xlim(xlim)\n if ylim: ax.set_ylim(ylim)\n if xscale: ax.set_xscale(xscale)\n if yscale: ax.set_yscale(yscale)\n ax.grid()\n return ax",
"How does the drawdown behave for diffent distances from the well?\nFor this assume a real situation.",
"kD = 900 # m2/d\nS = 0.1 # [-]\nQ = 1200 # m3/d\n\nplt.title('Drawdown for different distances to the well')\nplt.xlabel('t [d]')\nplt.ylabel('s [m]')\nplt.xscale('log')\nplt.grid()\n\n\nt = np.logspace(-3, 3, 61) # d\ndistances = [5, 10, 25, 100, 150] # m\nfor r in distances:\n u = r**2 * S / (4 * kD * t)\n s = Q/(4 * np.pi * kD) * exp1(u)\n plt.plot(t, s, label='r = {:.0f} m'.format(r))\nplt.legend()\nplt.show()",
"Approxmation of the Theis well function\n$$ W(u) \\approx = -\\gamma - \\ln u + u - \\frac {u^2} {3\\times 2!}\n+ \\frac {u^3} {3\\times 3!} - \\frac {u^4} {4 \\times 4!} + ... $$\nwith $\\gamma$ Euler's constant\n$$ \\gamma = 0.577216... $$\nIt's straigt-forward to implement this power series. But one also see that for very small values of $u$, the series may be approximated by\n$$ \\mathtt{W}(u) \\approx -\\gamma - \\ln u $$\nThis may be worked out as follows:\n$$ W(u) \\approx - \\ln \\left( e^\\gamma \\right) - \\ln \\left( \\frac {r^2 S} {4 kD t} \\right) $$\n$$ W(u) \\approx + \\ln \\left( \\frac 1 {e^\\gamma} \\right) + \\ln \\left( \\frac {4 kD t} {r^2 S }\\right)$$\n$$ W(u) \\approx \\ln \\left( \\frac {2.25 kD t} {r^2 S} \\right) $$\nThis is a simple logarithm, valid for small enough values or $u$, hence, for large enough values of time $t$. (Or for small enough values of $r$ in case a fixed time is chosen.)\nLet's compare the Theis well function with this approxmation for different values or distance $r$",
"\n\n\nplt.title('Theis drawdown, full well function and logarithmic approximation')\nplt.xlabel('t')\nplt.ylabel('W(u), exp1(u)')\nplt.xscale('log')\nplt.yscale('linear')\nplt.grid()\n\nfor r in distances:\n plt.plot(t, exp1(r**2 * S /(4 * kD * t)), label='theis, r={:.0f} m'.format(r))\n plt.plot(t, np.log(2.25 * kD * t/(r**2 * S)), '-.', label='approx r={:.0f} m'.format(r))\nplt.legend()\nplt.show()",
"The graph shows that the proximation is indeed only valid after some time, but after that it falls on the true drawdown graph.\nRadius of influence\nThe distance at which the drawdown starts to deviate from zero increases with time as can be seen on the previous graph. That distance is called the radius of influence. It can be set equal to the distance at which the straight drawdown approxmation intersects with the horizontal axis, the line of zero drawdown. If we do that, we can derive it immediately from our approximation of the Theis drawdown curve by setting it to zero, i.e. by setting the argument under the logarithm equal to 1.\n$$ \\frac {2.25 kD t} {r^2 S} = 1 $$\nAnd so\n$$ r = \\sqrt {\\frac {2.25 kD t} S}$$\nFrom which it follows that this radius is proportional to the root of $kD$ and $t$ and inversily proportional to the root of $S$. In fact, one may say that the area of influence is directly proportional to $kD t / S$\nThis radius of influence is a very practial tool to estimated the influence that a transient well has on its enviroment.\nDradown versus log $r$ is linear for small enough $r$\nIt should also be clear that the drawdown increases with $1/r$ on logarithmic scale for small enough $r$.\nWe wil show this for different times in this section.",
"r = np.logspace(-1, 3, 41)\ntimes = [0.1, 0.3, 0.5, 1, 2, 5]\n\nplt.title('Theis drawdown, full well function and logarithmic approximation')\nplt.xlabel('r [m]')\nplt.ylabel('W(u), exp1(u)')\nplt.xscale('log')\nplt.yscale('linear')\nplt.ylim((17, -7))\nplt.grid()\n\nfor t in times:\n plt.plot(r, exp1(r**2 * S /(4 * kD * t)), label='theis, t={:.1f} d'.format(t))\n plt.plot(r, np.log(2.25 * kD * t/(r**2 * S)), '-.', label='approx t={:.1f} d'.format(t))\nplt.legend()\nplt.show()",
"In the graph above, the y-axis was turned upside down by using plt.ylim((min, max)) so that the drawdown is downward on the graph, which is more inuitive. Instead of a raidus of influence, one may now define and observe a time of influence\n$$ t = \\sqrt{\\frac {r^2 S} {2.25 kD} } $$\nOne observes that with time the drawdown (straight lines) moves parallel to the right on this logarithmic distance axis.\nPumping test example",
"Q = 1200 # m3/d\n\n# Measurements in the piezometers\ncolumns = ['t[min]','piez[10]','piez[50]','piez[125]','piez[250]']\ndata = np.array([[1.00 ,0.37 ,0.00 ,0.01 ,0.00],\n [1.33 ,0.41 ,0.02 ,0.00 ,0.01],\n [1.78 ,0.44 ,0.05 ,0.00 ,0.01],\n [2.37 ,0.50 ,0.09 ,0.02 ,0.01],\n [3.16 ,0.54 ,0.10 ,0.01 ,0.00],\n [4.22 ,0.57 ,0.12 ,0.01 ,0.02],\n [5.62 ,0.63 ,0.17 ,0.03 ,0.00],\n [7.50 ,0.68 ,0.18 ,0.02 ,0.00],\n [10.00 ,0.73 ,0.24 ,0.05 ,0.00],\n [13.34 ,0.76 ,0.26 ,0.06 ,0.00],\n [17.78 ,0.82 ,0.31 ,0.07 ,0.00],\n [23.71 ,0.86 ,0.36 ,0.12 ,0.03],\n [31.62 ,0.91 ,0.41 ,0.15 ,0.03],\n [42.17 ,0.95 ,0.44 ,0.18 ,0.03],\n [56.23 ,0.99 ,0.50 ,0.21 ,0.06],\n [74.99 ,1.05 ,0.53 ,0.25 ,0.07],\n [100.00 ,1.08 ,0.59 ,0.28 ,0.13],\n [133.35 ,1.15 ,0.63 ,0.34 ,0.17],\n [177.83 ,1.19 ,0.65 ,0.37 ,0.18],\n [237.14 ,1.23 ,0.72 ,0.45 ,0.23],\n [316.23 ,1.26 ,0.76 ,0.47 ,0.26],\n [421.70 ,1.32 ,0.80 ,0.50 ,0.31],\n [562.34 ,1.37 ,0.85 ,0.57 ,0.37],\n [749.89 ,1.40 ,0.88 ,0.61 ,0.39],\n [1000.00 ,1.48 ,0.95 ,0.65 ,0.42],\n [1333.52 ,1.51 ,0.99 ,0.70 ,0.49],\n [1778.28 ,1.56 ,1.04 ,0.75 ,0.53],\n [2371.37 ,1.58 ,1.11 ,0.79 ,0.58],\n [3162.28 ,1.65 ,1.13 ,0.84 ,0.63],\n [4216.97 ,1.67 ,1.17 ,0.89 ,0.65],\n [5623.41 ,1.72 ,1.21 ,0.92 ,0.69],\n [7498.94 ,1.78 ,1.27 ,0.97 ,0.76],\n [10000.00 ,1.82 ,1.30 ,1.01 ,0.79]])\npdata = pd.DataFrame(data, columns=columns)\npdata['t[d]'] = pdata['t[min]'] / (24 * 60)\n\nax = newfig(\"Drawdown piezometer, Q = {:.0f} m3/d\".format(Q), \"t [d]\", \"drawdown [m]\", ylim=(2, -0.1))\nfor pz in ['piez[10]', 'piez[50]', 'piez[125]', 'piez[250]']:\n t = pdata['t[d]']\n ax.plot(t, pdata[pz], '.-', label=pz)\nax.legend()\n\nax = newfig(\"Drawdown piezometer, Q = {:.0f} m3/d\".format(Q), \"t [d]\", \"drawdown [m]\", xscale='log',\n ylim=(2, -0.1))\nfor pz in ['piez[10]', 'piez[50]', 'piez[125]', 'piez[250]']:\n t = pdata['t[d]']\n ax.plot(t, pdata[pz], '.-', label=pz)\nax.legend()",
"$$kD=\\frac{2.3Q}{4\\pi\\left(s_{10t}-s_{t}\\right)}$$\n$$S_S=2.25kD\\frac{t_{0}}{r^{2}}$$",
"# Drawdown increase per log-cycle = 0.375 m\n# intersection with y = 0, 2e-3 (orange line), 1.2 e-2 green line, 2 e-2 read line\n\n#kD = 2.3 * Q / (4 np.pi * Ddelta_s)\nkD = 2.3 * Q / (4 * np.pi * 0.375)\n\nSy = []\nfor t0, r in zip([2e-3, 1.2e-2, 3.5e-2],[50, 125, 250]):\n Sy.append(2.25 * kD * t0 / r ** 2)\nprint('kD= {:.0f} m2/d'.format(kD))\nprint('Sy = {:.4g}, {:.4g}, {:.4g} [-]'.format(*Sy))\n\nax = newfig(\"Drawdown piezometer, Q = {:.0f} m3/d\".format(Q), \"t/r^2 [d/m^2]\", \"drawdown [m]\", xscale='log',\n ylim=(2, -0.1))\nfor pz, r in zip(['piez[10]', 'piez[50]', 'piez[125]', 'piez[250]'], [10, 50, 125, 250]):\n t = pdata['t[d]']\n ax.plot(t/r ** 2, pdata[pz], '.-', label=pz)\nax.legend()\n\n# delta pp"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sonium0/pymatgen
|
examples/Ordering Disordered Structures.ipynb
|
mit
|
[
"Introduction\nThis notebook demonstrates how to carry out an ordering of a disordered structure using pymatgen.",
"# Let us start by creating a disordered CuAu fcc structure.\n\nfrom pymatgen import Structure, Lattice\n\nspecie = {\"Cu0+\": 0.5, \"Au0+\": 0.5}\ncuau = Structure.from_spacegroup(\"Fm-3m\", Lattice.cubic(3.677), [specie], [[0, 0, 0]])\nprint cuau",
"Note that each site is now 50% occupied by Cu and Au. Because the ordering algorithms uses an Ewald summation to rank the structures, you need to explicitly specify the oxidation state for each species, even if it is 0. Let us now perform ordering of these sites using two methods.\nMethod 1 - Using the OrderDisorderedStructureTransformation\nThe first method is to use the OrderDisorderedStructureTransformation.",
"from pymatgen.transformations.standard_transformations import OrderDisorderedStructureTransformation\n\ntrans = OrderDisorderedStructureTransformation()\n\nss = trans.apply_transformation(cuau, return_ranked_list=100)\n\nprint(len(ss))\n\nprint ss[0]",
"Note that the OrderDisorderedTransformation (with a sufficiently large return_ranked_list parameter) returns all orderings, including duplicates without accounting for symmetry. A computed ewald energy is returned together with each structure. To eliminate duplicates, the best way is to use StructureMatcher's group_structures method, as demonstrated below.",
"from pymatgen.analysis.structure_matcher import StructureMatcher\n\nmatcher = StructureMatcher()\ngroups = matcher.group_structures([d[\"structure\"] for d in ss])\nprint len(groups)\nprint groups[0][0]",
"Method 2 - Using the EnumerateStructureTransformation\nIf you have enumlib installed, you can use the EnumerateStructureTransformation. This automatically takes care of symmetrically equivalent orderings and can enumerate supercells, but is much more prone to parameter sensitivity and cannot handle very large structures. The example below shows an enumerate of CuAu up to cell sizes of 4.",
"from pymatgen.transformations.advanced_transformations import EnumerateStructureTransformation\nspecie = {\"Cu\": 0.5, \"Au\": 0.5}\ncuau = Structure.from_spacegroup(\"Fm-3m\", Lattice.cubic(3.677), [specie], [[0, 0, 0]])\n\ntrans = EnumerateStructureTransformation(max_cell_size=3)\nss = trans.apply_transformation(cuau, return_ranked_list=1000)\n\nprint len(ss)\nprint \"cell sizes are %s\" % ([len(d[\"structure\"]) for d in ss])",
"Note that structures with cell sizes ranging from 1-3x the unit cell size is generated.\nConclusion\nThis notebook illustrates two basic ordering/enumeration approaches. In general, OrderDisorderedTransformation works better for large cells and is useful if you need just any quick plausible ordering. EnumerateStructureTransformation is more rigorous, but is prone to sensitivity errors and may require fiddling with various parameters."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hardmaru/pytorch_notebooks
|
mixtures_density_network_relu_version.ipynb
|
mit
|
[
"Mixture Density Networks with PyTorch\nRelated posts:\nJavaScript implementation.\nTensorFlow implementation.",
"import matplotlib.pyplot as plt\nimport numpy as np\nimport torch\nimport math\nfrom torch.autograd import Variable\nimport torch.nn as nn",
"Simple Data Fitting\nBefore we talk about MDN's, we try to perform some simple data fitting using PyTorch to make sure everything works. To get started, let's try to quickly build a neural network to fit some fake data. As neural nets of even one hidden layer can be universal function approximators, we can see if we can train a simple neural network to fit a noisy sinusoidal data, like this ( $\\epsilon$ is just standard gaussian random noise):\n$y=7.0 \\sin( 0.75 x) + 0.5 x + \\epsilon$\nAfter importing the libraries, we generate the sinusoidal data we will train a neural net to fit later:",
"NSAMPLE = 1000\nx_data = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T\nr_data = np.float32(np.random.normal(size=(NSAMPLE,1)))\ny_data = np.float32(np.sin(0.75*x_data)*7.0+x_data*0.5+r_data*1.0)\n\nplt.figure(figsize=(8, 8))\nplot_out = plt.plot(x_data,y_data,'ro',alpha=0.3)\nplt.show()",
"We will define this simple neural network one-hidden layer and 100 nodes:\n$Y = W_{out} \\max( W_{in} X + b_{in}, 0) + b_{out}$",
"# N is batch size; D_in is input dimension;\n# H is hidden dimension; D_out is output dimension.\n# from (https://github.com/jcjohnson/pytorch-examples)\nN, D_in, H, D_out = NSAMPLE, 1, 100, 1\n\n# Create random Tensors to hold inputs and outputs, and wrap them in Variables.\n# since NSAMPLE is not large, we train entire dataset in one minibatch.\nx = Variable(torch.from_numpy(x_data.reshape(NSAMPLE, D_in)))\ny = Variable(torch.from_numpy(y_data.reshape(NSAMPLE, D_out)), requires_grad=False)\n\nmodel = torch.nn.Sequential(\n torch.nn.Linear(D_in, H),\n torch.nn.ReLU(),\n torch.nn.Linear(H, D_out),\n )",
"We can define a loss function as the sum of square error of the output vs the data (we can add regularisation if we want).",
"loss_fn = torch.nn.MSELoss()",
"We will also define a training loop to minimise the loss function later. We can use the RMSProp gradient descent optimisation method.",
"learning_rate = 0.01\noptimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate, alpha=0.8)\nfor t in range(100000):\n y_pred = model(x)\n loss = loss_fn(y_pred, y)\n if (t % 10000 == 0):\n print(t, loss.data[0])\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\nx_test = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T\nx_test = Variable(torch.from_numpy(x_test.reshape(NSAMPLE, D_in)))\ny_test = model(x_test)\nplt.figure(figsize=(8, 8))\nplt.plot(x_data,y_data,'ro', x_test.data.numpy(),y_test.data.numpy(),'bo',alpha=0.3)\nplt.show()",
"We see that the neural network can fit this sinusoidal data quite well, as expected. However, this type of fitting method only works well when the function we want to approximate with the neural net is a one-to-one, or many-to-one function. Take for example, if we invert the training data:\n$x=7.0 \\sin( 0.75 y) + 0.5 y+ \\epsilon$",
"temp_data = x_data\nx_data = y_data\ny_data = temp_data\n\nplt.figure(figsize=(8, 8))\nplot_out = plt.plot(x_data,y_data,'ro',alpha=0.3)\nplt.show()",
"If we were to use the same method to fit this inverted data, obviously it wouldn't work well, and we would expect to see a neural network trained to fit only to the square mean of the data.",
"x = Variable(torch.from_numpy(x_data.reshape(NSAMPLE, D_in)))\ny = Variable(torch.from_numpy(y_data.reshape(NSAMPLE, D_out)), requires_grad=False)\nlearning_rate = 0.01\noptimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate, alpha=0.8)\nfor t in range(3000):\n y_pred = model(x)\n loss = loss_fn(y_pred, y)\n if (t % 300 == 0):\n print(t, loss.data[0])\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\nx_test = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T\nx_test = Variable(torch.from_numpy(x_test.reshape(NSAMPLE, D_in)))\ny_test = model(x_test)\nplt.figure(figsize=(8, 8))\nplt.plot(x_data,y_data,'ro', x_test.data.numpy(),y_test.data.numpy(),'bo',alpha=0.3)\nplt.show()",
"Our current model only predicts one output value for each input, so this approach will fail miserably. What we want is a model that has the capacity to predict a range of different output values for each input. In the next section we implement a Mixture Density Network (MDN) to achieve this task.\nMixture Density Networks\nOur current model only predicts one output value for each input, so this approach will fail. What we want is a model that has the capacity to predict a range of different output values for each input. In the next section we implement a Mixture Density Network (MDN) to do achieve this task.\nMixture Density Networks, developed by Christopher Bishop in the 1990s, is an attempt to address this problem. Rather to have the network predict a single output value, the MDN predicts an entire probability distribution of the output, so we can sample several possible different output values for a given input.\nThis concept is quite powerful, and can be employed many current areas of machine learning research. It also allows us to calculate some sort of confidence factor in the predictions that the network is making.\nThe inverse sinusoidal data we chose is not just for a toy problem, as there are applications in the field of robotics, for example, where we want to determine which angle we need to move the robot arm to achieve a target location. MDNs are also used to model handwriting, where the next stroke is drawn from a probability distribution of multiple possibilities, rather than sticking to one prediction.\nBishop's implementation of MDNs will predict a class of probability distributions called Mixture Gaussian distributions, where the output value is modelled as a sum of many gaussian random values, each with different means and standard deviations. So for each input $x$, we will predict a probability distribution function $P(Y = y | X = x)$ that is approximated by a weighted sum of different gaussian distributions.\n$P(Y = y | X = x) = \\sum_{k=0}^{K-1} \\Pi_{k}(x) \\phi(y, \\mu_{k}(x), \\sigma_{k}(x)), \\sum_{k=0}^{K-1} \\Pi_{k}(x) = 1$\nOur network will therefore predict the parameters of the pdf, in our case the set of $\\mu$, $\\sigma$, and $\\Pi$ values for each input $x$. Rather than predict $y$ directly, we will need to sample from our distribution to sample $y$. This will allow us to have multiple possible values of $y$ for a given $x$.\nEach of the parameters $\\Pi_{k}(x), \\mu_{k}(x), \\sigma_{k}(x)$ of the distribution will be determined by the neural network, as a function of the input $x$. There is a restriction that the sum of $\\Pi_{k}(x)$ add up to one, to ensure that the pdf integrates to 1. In addition, $\\sigma_{k}(x)$ must be strictly positive.\nIn our implementation, we will use a neural network of one hidden later with 100 nodes, and also generate 20 mixtures, hence there will be 60 actual outputs of our neural network of a single input. 
Our definition will be split into 2 parts:\n$Z = W_{out} \\max( W_{in} X + b_{in}, 0) + b_{out}$\nIn the first part, $Z$ is a vector of 60 values that will be then splitup into three equal parts, $[Z_{\\Pi}, Z_{\\sigma}, Z_{\\mu}] = Z$, where each of $Z_{\\Pi}$, $Z_{\\sigma}$, $Z_{\\mu}$ are vectors of length 20.\nIn this PyTorch implementation, unlike the TF version, we will implement this operation with 3 seperate Linear layers, rather than splitting a large $Z$, for clarity:\n$Z_{\\Pi} = W_{\\Pi} \\max( W_{in} X + b_{in}, 0) + b_{\\Pi}$\n$Z_{\\sigma} = W_{\\sigma} \\max( W_{in} X + b_{in}, 0) + b_{\\sigma}$\n$Z_{\\mu} = W_{\\mu} \\max( W_{in} X + b_{in}, 0) + b_{\\mu}$\nIn the second part, the parameters of the pdf will be defined as below to satisfy the earlier conditions:\n$\\Pi = \\frac{\\exp(Z_{\\Pi})}{\\sum_{i=0}^{20} exp(Z_{\\Pi, i})}, \\ \\sigma = \\exp(Z_{\\sigma}), \\ \\mu = Z_{\\mu}$\n$\\Pi_{k}$ are put into a softmax operator to ensure that the sum adds to one, and that each mixture probability is positive. Each $\\sigma_{k}$ will also be positive due to the exponential operator.\nBelow is the PyTorch implementation of the MDN network:",
"NHIDDEN = 100 # hidden units\nKMIX = 20 # number of mixtures\n\nclass MDN(nn.Module):\n def __init__(self, hidden_size, num_mixtures):\n super(MDN, self).__init__()\n self.fc_in = nn.Linear(1, hidden_size) \n self.relu = nn.ReLU()\n self.pi_out = torch.nn.Sequential(\n nn.Linear(hidden_size, num_mixtures),\n nn.Softmax()\n )\n self.sigma_out = nn.Linear(hidden_size, num_mixtures)\n self.mu_out = nn.Linear(hidden_size, num_mixtures) \n \n def forward(self, x):\n out = self.fc_in(x)\n out = self.relu(out)\n out_pi = self.pi_out(out)\n out_sigma = torch.exp(self.sigma_out(out))\n out_mu = self.mu_out(out)\n return (out_pi, out_sigma, out_mu)",
"Let's define the inverted data we want to train our MDN to predict later. As this is a more involved prediction task, I used a higher number of samples compared to the simple data fitting task earlier.",
"NSAMPLE = 2500\n\ny_data = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T\nr_data = np.float32(np.random.normal(size=(NSAMPLE,1))) # random noise\nx_data = np.float32(np.sin(0.75*y_data)*7.0+y_data*0.5+r_data*1.0)\n\nx_train = Variable(torch.from_numpy(x_data.reshape(NSAMPLE, 1)))\ny_train = Variable(torch.from_numpy(y_data.reshape(NSAMPLE, 1)), requires_grad=False)\n\nplt.figure(figsize=(8, 8))\nplt.plot(x_train.data.numpy(),y_train.data.numpy(),'ro', alpha=0.3)\nplt.show()",
"We cannot simply use the min square error L2 lost function in this task the output is an entire description of the probability distribution. A more suitable loss function is to minimise the logarithm of the likelihood of the distribution vs the training data:\n$CostFunction(y | x) = -\\log[ \\sum_{k}^K \\Pi_{k}(x) \\phi(y, \\mu(x), \\sigma(x)) ]$\nSo for every $(x,y)$ point in the training data set, we can compute a cost function based on the predicted distribution versus the actual points, and then attempt the minimise the sum of all the costs combined. To those who are familiar with logistic regression and cross entropy minimisation of softmax, this is a similar approach, but with non-discretised states.\nWe have to implement this cost function ourselves:",
"oneDivSqrtTwoPI = 1.0 / math.sqrt(2.0*math.pi) # normalisation factor for gaussian.\ndef gaussian_distribution(y, mu, sigma):\n # braodcast subtraction with mean and normalization to sigma\n result = (y.expand_as(mu) - mu) * torch.reciprocal(sigma)\n result = - 0.5 * (result * result)\n return (torch.exp(result) * torch.reciprocal(sigma)) * oneDivSqrtTwoPI\n\ndef mdn_loss_function(out_pi, out_sigma, out_mu, y):\n epsilon = 1e-3\n result = gaussian_distribution(y, out_mu, out_sigma) * out_pi\n result = torch.sum(result, dim=1)\n result = - torch.log(epsilon + result)\n return torch.mean(result)",
"Let's define our model, and use the Adam optimizer to train our model below:",
"model = MDN(hidden_size=NHIDDEN, num_mixtures=KMIX)\n\nlearning_rate = 0.00001\noptimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)\n\nfor t in range(20000):\n (out_pi, out_sigma, out_mu) = model(x_train)\n loss = mdn_loss_function(out_pi, out_sigma, out_mu, y_train)\n if (t % 1000 == 0):\n print(t, loss.data[0])\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()",
"We want to use our network to generate the parameters of the pdf for us to sample from. In the code below, we will sample $M=10$ values of $y$ for every $x$ input, and compare the sampled results with the training data.",
"x_test_data = np.float32(np.random.uniform(-15, 15, (1, NSAMPLE))).T\nx_test = Variable(torch.from_numpy(x_test_data.reshape(NSAMPLE, 1)))\n\n(out_pi_test, out_sigma_test, out_mu_test) = model(x_test)\n\nout_pi_test_data = out_pi_test.data.numpy()\nout_sigma_test_data = out_sigma_test.data.numpy()\nout_mu_test_data = out_mu_test.data.numpy()\n\ndef get_pi_idx(x, pdf):\n N = pdf.size\n accumulate = 0\n for i in range(0, N):\n accumulate += pdf[i]\n if (accumulate >= x):\n return i\n print('error with sampling ensemble')\n return -1\n\ndef generate_ensemble(M = 10):\n # for each point in X, generate M=10 ensembles\n NTEST = x_test_data.size\n result = np.random.rand(NTEST, M) # initially random [0, 1]\n rn = np.random.randn(NTEST, M) # normal random matrix (0.0, 1.0)\n mu = 0\n std = 0\n idx = 0\n\n # transforms result into random ensembles\n for j in range(0, M):\n for i in range(0, NTEST):\n idx = get_pi_idx(result[i, j], out_pi_test_data[i])\n mu = out_mu_test_data[i, idx]\n std = out_sigma_test_data[i, idx]\n result[i, j] = mu + rn[i, j]*std\n return result\n\ny_test_data = generate_ensemble()\n\nplt.figure(figsize=(8, 8))\nplt.plot(x_test_data,y_test_data,'b.', x_data,y_data,'r.',alpha=0.3)\nplt.show()",
"In the above graph, we plot out the generated data we sampled from the MDN distribution, in blue. We also plot the original training data in red over the predictions. Apart from a few outliers, the distributions seem to match the data. We can also plot a graph of $\\mu(x)$ as well to interpret what the neural net is actually doing:",
"plt.figure(figsize=(8, 8))\nplt.plot(x_test_data,out_mu_test_data,'g.', x_data,y_data,'r.',alpha=0.3)\nplt.show()",
"In the plot above, we see that for every point on the $x$-axis, there are multiple lines or states where $y$ may be, and we select these states with probabilities modelled by $\\Pi$ ."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sdpython/ensae_teaching_cs
|
_doc/notebooks/td2a_ml/td2a_clustering_correction.ipynb
|
mit
|
[
"2A.ml - Clustering - correction\nCe notebook utilise les données des vélos de Chicago Divvy Data. Il s'inspire du challenge créée pour découvrir les habitudes des habitantes de la ville City Bike. L'idée est d'explorer plusieurs algorithmes de clustering et comment trafiquer les données pour les faire marcher.",
"from jyquickhelper import add_notebook_menu\nadd_notebook_menu()\n\n%matplotlib inline",
"Les données\nElles ont été prétraitées selon le notebook Bike Pattern 2. Elles représentent la distribution du nombre de vélos partant (startdist) et arrivant (stopdist). On utilise le clustering pour découvrir les différents usages des habitants de Chicago avec pour intuition le fait que les habitants de Chicago utilise majoritairement les vélos pour aller et venir entre leur appartement et leur lieu de travail. Cette même idée mais à Paris est illustrée par ce billet de blog : Busy areas in Paris.",
"from pyensae.datasource import download_data\nfile = download_data(\"features_bike_chicago.zip\")\nfile\n\nimport pandas\nfeatures = pandas.read_csv(\"features_bike_chicago.txt\", sep=\"\\t\", encoding=\"utf-8\", low_memory=False, header=[0,1])\nfeatures.columns = [\"station_id\", \"station_name\", \"weekday\"] + list(features.columns[3:])\nfeatures.head()\n\nfeatures.shape",
"Les données sont agrégrées par tranche de 10 minutes soit 144 période durant la journée et 288 nombre pour les départs et arrivées de vélos. Cela explique les dimensions de la matrice.\nk-means\nOn cherche à trouver différentes zones de la villes pour différents usages. Zones de résidences, zones de travail, zone d'amusements et on suppose que les heures de départs et d'arrivées reflètent ces usages. Dans une zone de travail, typiquement le quartier d'affaires, les vélos arriveront principalement le matin et repartiront le soir. Ce sera l'inverse pour une zone de résidences. C'est pour cela que les arrivées et les départs des vélos ont été agrégés par jour de la semaine. La distribution des arrivées risquent d'être bien différentes le week-end.",
"names = features.columns[3:]\n\nfrom sklearn.cluster import KMeans\nclus = KMeans(10)\nclus.fit(features[names])\n\npred = clus.predict(features[names])\nset(pred)\n\nfeatures[\"cluster\"] = pred\n\nfeatures[[\"cluster\", \"weekday\", \"station_id\"]].groupby([\"cluster\", \"weekday\"]).count()\n\nnb = features[[\"cluster\", \"weekday\", \"station_id\"]].groupby([\"cluster\", \"weekday\"]).count()\nnb = nb.reset_index()\nnb[nb.cluster.isin([0, 3, 5, 6])].pivot(\"weekday\",\"cluster\", \"station_id\").plot(kind=\"bar\");",
"Let's draw the clusters.",
"centers = clus.cluster_centers_.T\nimport matplotlib.pyplot as plt\nfig, ax = plt.subplots(centers.shape[1], 2, figsize=(10,10))\nnbf = centers.shape[0] // 2\nx = list(range(0,nbf))\ncol = 0\ndec = 0\ncolors = [\"red\", \"yellow\", \"gray\", \"green\", \"brown\", \"orange\", \"blue\"]\nfor i in range(centers.shape[1]):\n if 2*i == centers.shape[1]:\n col += 1\n dec += centers.shape[1] \n color = colors[i%len(colors)]\n ax[2*i-dec, col].bar (x, centers[:nbf,i], width=1.0, color=color)\n ax[2*i-dec, col].set_ylabel(\"cluster %d - start\" % i, color=color)\n ax[2*i+1-dec, col].bar (x, centers[nbf:,i], width=1.0, color=color)\n ax[2*i+1-dec, col].set_ylabel(\"cluster %d - stop\" % i, color=color)",
"On a réussi à isoler plusieurs usages différents. On voit les départs les matin et les arrivées le soir, le modèle inversé, un autre... Mais les-a-t-on tous trouvés ?\nExercice 1 : petits clusters\nLes petits clusters viennent du fait qu'on impose un nombre de clusters à l'algorithme qui s'efforce de les trouver. Quelques points aberrents sont trop éloignés et isolés du reste des autres. Pour les enlever, on peut simplement réduire le nombre de clusters ou éliminer les points aberrants et les garder comme anomalies.\nLes petits clusters sont des stations peu utilisées. Le problème quand on calcule une distribution et qu'on la normalise, typiquement celles des arrivées de vélos à une stations, c'est qu'on perd l'information du nombre de vélos ayant servi à l'estimer. \nL'algorithme k-means++ commence à choisir les points les plus éloignés des uns des autres et cela tombe sur ces stations peu utilisées. Vu le nombre de dimension, 288, les distances sont rapidement très grandes.\nUne idée consiste à retirer ces stations peu utilisées ou boucler sur un algorithme du type :\n\nFaire tourner les k-means.\nRetirer les stations dans des clusters isolées.\nRetour à l'étape 1 jusqu'à ce que cela soit interprétable.\n\nExercice 2 : autres types de clustering",
"from sklearn.cluster import DBSCAN\ndbs = DBSCAN(eps=0.1)\npred_dbs = dbs.fit_predict(features[names])\nset(pred_dbs)\n\nfeatures[\"cluster_dbs\"] = pred_dbs\nnbs = features[[\"cluster_dbs\", \"weekday\", \"station_id\"]].groupby([\"cluster_dbs\", \"weekday\"]).count()\nnbs = nbs.reset_index()\nnbs[nbs.cluster_dbs.isin([0, 3, 5, 6])].pivot(\"weekday\",\"cluster_dbs\", \"station_id\").plot(kind=\"bar\");",
"L'algorithme dbscan utilise la proximité des points pour classer les observations. On se retrouve dans le cas où il n'y a pas vraiment de frontière entre les clusters et tous les points se retrouvent associés en un unique cluster excepté quelques points aberrants.\nL'algorithme DBScan ne fonctionne pas sur ces données. Une des raisons que la distance dans un espace vectoriels avec tant de dimensions n'est pas loin d'une information type binaire : différent partout ou identique presque partout. Qu'un vélo arrive 9h10 ou à 9h20, cela n'a pas beaucoup d'importance pour notre problème, pourtant, la distance choisie lui donnera autant d'importance que si le vélo était arrivé à 10h du soir. Pour éviter cela, il faudrait lisser les distributions avec une moyenne mobile.\nGraphes\nUn peu de code pour voir la réparition des clusters sur une carte.",
"piv = features.pivot_table(index=[\"station_id\", \"station_name\"], \n columns=\"weekday\", values=\"cluster\")\npiv.head()\n\npiv[\"distincts\"] = piv.apply(lambda row: len(set(row[i] for i in range(0,7))), axis=1)\n\npivn = piv.reset_index()\npivn.head()\n\npivn.columns = [str(_).replace(\".0\", \"\") for _ in pivn.columns.values]\npivn.head()",
"Une carte des stations un jour de semaine.",
"from pyensae.datasource import download_data\nif False:\n # Provient du site de Chicago\n file = download_data(\"Divvy_Trips_2016_Q3Q4.zip\",\n url=\"https://s3.amazonaws.com/divvy-data/tripdata/\")\nelse:\n # Copie au cas où celui-ci tomberait en panne\n file = download_data(\"Divvy_Trips_2016_Q3.zip\") \n\nstations = pandas.read_csv(\"Divvy_Stations_2016_Q3.csv\")\nstations.head()\n\ndata = stations.merge(pivn, left_on=[\"id\", \"name\"],\n right_on=[\"station_id\", \"station_name\"], suffixes=('_s', '_c'))\ndata.sort_values(\"id\").head()\n\ndef folium_html_stations_map(stations, html_width=None, html_height=None, **kwargs):\n import folium\n from pyensae.notebookhelper import folium_html_map\n map_osm = None\n for key, value in stations:\n x, y = key\n if map_osm is None:\n if \"zoom_start\" not in kwargs:\n kwargs[\"zoom_start\"] = 11\n if \"location\" not in kwargs:\n map_osm = folium.Map(location=[x, y], **kwargs)\n else:\n map_osm = folium.Map(kwargs[\"location\"], **kwargs)\n if isinstance(value, tuple):\n name, value = value\n map_osm.add_child(folium.CircleMarker(\n [x, y], popup=name, radius=15, fill_color=value, color=value))\n else:\n map_osm.add_child(folium.CircleMarker(\n [x, y], radius=15, fill_color=value, color=value))\n return folium_html_map(map_osm, width=html_width, height=html_height)\n\ncolors = [\"red\", \"yellow\", \"gray\", \"green\", \"brown\", \"orange\", \"blue\", \"black\", \"pink\", \"violet\"]\nfor i, c in enumerate(colors):\n print(\"Cluster {0} is {1}\".format(i, c))\nxy = []\nfor els in data.apply(lambda row: (row[\"latitude\"], row[\"longitude\"], row[\"1\"], row[\"name\"]), axis=1):\n try:\n cl = int(els[2])\n except:\n # NaN\n continue\n name = \"%s c%d\" % (els[3], cl)\n color = colors[cl]\n xy.append( ( (els[0], els[1]), (name, color)))\nfolium_html_stations_map(xy, width=\"80%\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ngcm/training-public
|
FEEG6016 Simulation and Modelling/10-Stochastic-DEs-Lab-2.ipynb
|
mit
|
[
"Stochastic Differential Equations: Lab 2",
"from IPython.core.display import HTML\ncss_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'\nHTML(url=css_file)",
"This background for these exercises is article of D Higham, An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations, SIAM Review 43:525-546 (2001).\nHigham provides Matlab codes illustrating the basic ideas at http://personal.strath.ac.uk/d.j.higham/algfiles.html, which are also given in the paper.",
"%matplotlib inline\nimport numpy\nfrom matplotlib import pyplot\nfrom matplotlib import rcParams\nrcParams['font.family'] = 'serif'\nrcParams['font.size'] = 16\nrcParams['figure.figsize'] = (12,6)\nfrom scipy.integrate import quad",
"Further Stochastic integrals\nQuick recap: the key feature is the Ito stochastic integral\n\\begin{equation}\n \\int_{t_0}^t G(t') \\, \\text{d}W(t') = \\text{mean-square-}\\lim_{n\\to +\\infty} \\left{ \\sum_{i=1}^n G(t_{i-1}) (W_{t_i} - W_{t_{i-1}} ) \\right}\n\\end{equation}\nwhere the key point for the Ito integral is that the first term in the sum is evaluated at the left end of the interval ($t_{i-1}$).\nNow we use this to write down the SDE\n\\begin{equation}\n \\text{d}X_t = f(X_t) \\, \\text{d}t + g(X_t) \\, \\text{d}W_t\n\\end{equation}\nwith formal solution\n\\begin{equation}\n X_t = X_0 + \\int_0^t f(X_s) \\, \\text{d}s + \\int_0^t g(X_s) \\, \\text{d}W_s.\n\\end{equation}\nUsing the Ito stochastic integral formula we get the Euler-Maruyama method\n\\begin{equation}\n X_{n+1} = X_n + \\delta t \\, f(X_n) + \\sqrt{\\delta t} \\xi_n \\, g(X_n)\n\\end{equation}\nby applying the integral over the region $[t_n, t_{n+1} = t_n + \\delta t]$. Here $\\delta t$ is the width of the interval and $\\xi_n$ is the normal random variable $\\xi_n \\sim N(0, 1)$.\nNormal chain rule\nIf\n\\begin{equation}\n \\frac{\\text{d}X}{\\text{d}t} = f(X_t)\n\\end{equation}\nand we want to find the differential equation satisfied by $h(X(t))$ (or $h(X_t)$), then we write\n\\begin{align}\n &&\\frac{\\text{d}}{\\text{d}t} h(X_t) &= h \\left( X(t) + \\text{d}X(t) \\right) - h(X(t)) \\\n &&&\\simeq h(X(t)) + \\text{d}X \\, h'(X(t)) + \\frac{1}{2} (\\text{d}X)^2 \\, h''(X(t)) + \\dots - h(X(t)) \\\n &&&\\simeq f(X) h'(X) \\text{d}t + \\frac{1}{2} (f(X))^2 h''(X) (\\text{d}t)^2 + \\dots \\\n \\implies && \\frac{\\text{d} h(X)}{dt} &= f(X) h'(X).\n\\end{align}\nStochastic chain rule\nNow run through the same steps using the equation\n\\begin{equation}\n \\text{d}X = f(X)\\, \\text{d}t + g(X) \\, \\text{d}W.\n\\end{equation}\nWe find\n\\begin{align}\n && \\text{d}h &\\simeq h'(X(t))\\, \\text{d}X + \\frac{1}{2} h''(X(t)) (\\text{d}X)^2 + \\dots, \\\n &&&\\simeq h'(X) f(X)\\, \\text{d}t + h'(X) g(X) ', \\text{d}W + \\frac{1}{2} \\left( f(X) \\text{d}t^2 + 2 f(x)g(x)\\, \\text{d}t dW + g^2(x) \\text{d}W^2 \\right) \\\n \\implies && \\text{d}h &= \\left( f(X) h'(X) + \\frac{1}{2} h''(X)g^2(X) \\right) \\, \\text{d}t + h'(X) g(X) \\, \\text{d}W.\n\\end{align}\nThis additional $g^2$ term makes all the difference when deriving numerical methods, where the chain rule is repeatedly used.\nUsing this result\nRemember that\n\\begin{equation}\n \\int_{t_0}^t W_s \\, \\text{d}W_s = \\frac{1}{2} W^2_t - \\frac{1}{2} W^2_{t_0} - \\frac{1}{2} (t - t_0).\n\\end{equation}\nFrom this we need to identify the stochastic differential equation, and also the function $h$, that will give us this result just from the chain rule.\nThe SDE is\n\\begin{equation}\n \\text{d}X_t = \\text{d}W_t, \\quad f(X) = 0, \\quad g(X) = 1.\n\\end{equation}\nWriting the chain rule down in the form\n\\begin{equation}\n h(X_t) = h(X_0) + \\int_0^t \\left( f(X_s) h'(X_s) + \\frac{1}{2} h''(X_s) g^2(X_s) \\right) \\, \\text{d}t + \\int_0^t h'(X_s) g(X_s) \\, \\text{d}W_s.\n\\end{equation}\nMatching the final term (the integral over $\\text{d}W_s$) we see that we need $h'$ to go like $X$, or \n\\begin{equation}\n h = X^2, \\quad \\text{d}X_t = \\text{d}W_t, \\quad f(X) = 0, \\quad g(X) = 1.\n\\end{equation}\nWith $X_t = W_t$ we therefore have\n\\begin{align}\n W_t^2 &= W_0^2 + \\int_{t_0}^t \\frac{1}{2} 2 \\, \\text{d}s + \\int_{t_0}^t 2 W_s \\, \\text{d}W_s\n &= W_0^2 + (t - t_0) + \\int_{t_0}^t 2 W_s \\, \\text{d}W_s\n\\end{align}\nas required.\nMilstein's 
method\nUsing our chain rule we can construct higher order methods for stochastic differential equations. Milstein's method, applied to the SDE\n$$\n \\text{d}X = f(X) \\, \\text{d}t + g(X) \\,\\text{d}W,\n$$\nis\n$$\n X_{n+1} = X_n + h f_n + g_n \\, \\text{d}W_{n} + \\tfrac{1}{2} g_n g'n \\left( \\text{d}W{n}^2 - h \\right).\n$$\nTasks\nImplement Milstein's method, applied to the problem in the previous lab:\n$$\n\\begin{equation}\n \\text{d}X(t) = \\lambda X(t) \\, \\text{d}t + \\mu X(t) \\text{d}W(t), \\qquad X(0) = X_0.\n\\end{equation}\n$$\nChoose any reasonable values of the free parameters $\\lambda, \\mu, X_0$.\nThe exact solution to this equation is $X(t) = X(0) \\exp \\left[ \\left( \\lambda - \\tfrac{1}{2} \\mu^2 \\right) t + \\mu W(t) \\right]$. Fix the timetstep and compare your solution to the exact solution.\nCheck the convergence again.\nCompare the performance of the Euler-Maruyama and Milstein method using eg timeit. At what point is one method better than the other?\nPopulation problem\nApply the algorithms, convergence and performance tests to the SDE\n$$\n\\begin{equation}\n \\text{d}X(t) = r X(t) (K - X(t)) \\, \\text{d}t + \\beta X(t) \\,\\text{d}W(t), \\qquad X(0) = X_0.\n\\end{equation}\n$$\nUse the parameters $r = 2, K = 1, \\beta = 0.25, X_0 = 0.5$.",
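"Here is a minimal sketch of Milstein's method for this geometric Brownian motion problem, compared with Euler-Maruyama and the exact solution on a single Brownian path. The parameter values $\\lambda = 2, \\mu = 1, X_0 = 1$ and the step count are illustrative choices, not prescribed by the lab.",
"# Milstein vs Euler-Maruyama for dX = lambda*X dt + mu*X dW (one Brownian path)\nnumpy.random.seed(100)\nlam, mu, X0 = 2.0, 1.0, 1.0\nT, N = 1.0, 2**9\nh = T / N\ndW = numpy.sqrt(h) * numpy.random.randn(N)\nW = numpy.cumsum(dW)\nt = numpy.linspace(h, T, N)\nXexact = X0 * numpy.exp((lam - 0.5 * mu**2) * t + mu * W)\n\nXem = numpy.zeros(N)\nXmil = numpy.zeros(N)\nxem = xmil = X0\nfor n in range(N):\n    xem = xem + h * lam * xem + mu * xem * dW[n]\n    # Milstein correction: + 0.5*g*g'*(dW^2 - h), with g(X) = mu*X, g'(X) = mu\n    xmil = (xmil + h * lam * xmil + mu * xmil * dW[n]\n            + 0.5 * mu**2 * xmil * (dW[n]**2 - h))\n    Xem[n] = xem\n    Xmil[n] = xmil\n\npyplot.plot(t, Xexact, 'k-', label='exact')\npyplot.plot(t, Xem, 'b--', label='Euler-Maruyama')\npyplot.plot(t, Xmil, 'r:', label='Milstein')\npyplot.xlabel('t')\npyplot.ylabel('X(t)')\npyplot.legend()\npyplot.show()",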
"r = 2.0\nK = 1.0\nbeta = 0.25\nX0 = 0.5\nT = 1.0\n",
"Investigate how the behaviour varies as you change the parameters $r, K, \\beta$."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/cccr-iitm/cmip6/models/sandbox-1/atmos.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: CCCR-IITM\nSource ID: SANDBOX-1\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:48\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-1', 'atmos')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Overview\n2. Key Properties --> Resolution\n3. Key Properties --> Timestepping\n4. Key Properties --> Orography\n5. Grid --> Discretisation\n6. Grid --> Discretisation --> Horizontal\n7. Grid --> Discretisation --> Vertical\n8. Dynamical Core\n9. Dynamical Core --> Top Boundary\n10. Dynamical Core --> Lateral Boundary\n11. Dynamical Core --> Diffusion Horizontal\n12. Dynamical Core --> Advection Tracers\n13. Dynamical Core --> Advection Momentum\n14. Radiation\n15. Radiation --> Shortwave Radiation\n16. Radiation --> Shortwave GHG\n17. Radiation --> Shortwave Cloud Ice\n18. Radiation --> Shortwave Cloud Liquid\n19. Radiation --> Shortwave Cloud Inhomogeneity\n20. Radiation --> Shortwave Aerosols\n21. Radiation --> Shortwave Gases\n22. Radiation --> Longwave Radiation\n23. Radiation --> Longwave GHG\n24. Radiation --> Longwave Cloud Ice\n25. Radiation --> Longwave Cloud Liquid\n26. Radiation --> Longwave Cloud Inhomogeneity\n27. Radiation --> Longwave Aerosols\n28. Radiation --> Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --> Boundary Layer Turbulence\n31. Turbulence Convection --> Deep Convection\n32. Turbulence Convection --> Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --> Large Scale Precipitation\n35. Microphysics Precipitation --> Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --> Optical Cloud Properties\n38. Cloud Scheme --> Sub Grid Scale Water Distribution\n39. Cloud Scheme --> Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --> Isscp Attributes\n42. Observation Simulation --> Cosp Attributes\n43. Observation Simulation --> Radar Inputs\n44. Observation Simulation --> Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --> Orographic Gravity Waves\n47. Gravity Waves --> Non Orographic Gravity Waves\n48. Solar\n49. Solar --> Solar Pathways\n50. Solar --> Solar Constant\n51. Solar --> Orbital Parameters\n52. Solar --> Insolation Ozone\n53. Volcanos\n54. Volcanos --> Volcanoes Treatment \n1. Key Properties --> Overview\nTop level key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of atmospheric model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.4. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.5. High Top\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the orography.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n",
"4.2. Changes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n",
"5. Grid --> Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Discretisation --> Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n",
"6.3. Scheme Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.4. Horizontal Pole\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal discretisation pole singularity treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7. Grid --> Discretisation --> Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType of vertical coordinate system",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere dynamical core",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the dynamical core of the model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Timestepping Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTimestepping framework type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of the model prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Dynamical Core --> Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTop boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Top Heat\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary heat treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Top Wind\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary wind treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Dynamical Core --> Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nType of lateral boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Dynamical Core --> Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nHorizontal diffusion scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal diffusion scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Dynamical Core --> Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nTracer advection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.3. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.4. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracer advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Dynamical Core --> Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMomentum advection schemes name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Scheme Staggering Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Radiation --> Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nShortwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nShortwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Radiation --> Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Radiation --> Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18. Radiation --> Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Radiation --> Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Radiation --> Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21. Radiation --> Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Radiation --> Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLongwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLongwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23. Radiation --> Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Radiation --> Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Physical Reprenstation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25. Radiation --> Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Radiation --> Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27. Radiation --> Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28. Radiation --> Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere convection and turbulence",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. Turbulence Convection --> Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nBoundary layer turbulence scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBoundary layer turbulence scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.3. Closure Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nBoundary layer turbulence scheme closure order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Counter Gradient\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"31. Turbulence Convection --> Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDeep convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Turbulence Convection --> Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nShallow convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nshallow convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nshallow convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n",
"32.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Microphysics Precipitation --> Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.2. Hydrometeors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35. Microphysics Precipitation --> Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLarge scale cloud microphysics processes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the atmosphere cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.3. Atmos Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n",
"36.4. Uses Separate Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.6. Prognostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.7. Diagnostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.8. Prognostic Variables\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37. Cloud Scheme --> Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37.2. Cloud Inhomogeneity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Cloud Scheme --> Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale water distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"38.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale water distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale water distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"38.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale water distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"39. Cloud Scheme --> Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale ice distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"39.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale ice distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"39.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale ice distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"39.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of observation simulator characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Observation Simulation --> Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator ISSCP top height estimation methodUo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. Top Height Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator ISSCP top height direction",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42. Observation Simulation --> Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator COSP run configuration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42.2. Number Of Grid Points\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of grid points",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.3. Number Of Sub Columns\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of sub-cloumns used to simulate sub-grid variability",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.4. Number Of Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of levels",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43. Observation Simulation --> Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nCloud simulator radar frequency (Hz)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43.2. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator radar type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"43.3. Gas Absorption\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses gas absorption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"43.4. Effective Radius\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses effective radius",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"44. Observation Simulation --> Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator lidar ice type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"44.2. Overlap\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator lidar overlap",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"45.2. Sponge Layer\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.3. Background\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBackground wave distribution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.4. Subgrid Scale Orography\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSubgrid scale orography effects taken into account.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46. Gravity Waves --> Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"46.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47. Gravity Waves --> Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"47.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n",
"47.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of solar insolation of the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"49. Solar --> Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"50. Solar --> Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the solar constant.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"50.2. Fixed Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"50.3. Transient Characteristics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nsolar constant transient characteristics (W m-2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51. Solar --> Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"51.2. Fixed Reference Date\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"51.3. Transient Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescription of transient orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51.4. Computation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used for computing orbital parameters.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"52. Solar --> Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"54. Volcanos --> Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
navaro1/deep-learning
|
intro-to-tflearn/TFLearn_Sentiment_Analysis.ipynb
|
mit
|
[
"Sentiment analysis with TFLearn\nIn this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.\nWe'll start off by importing all the modules we'll need, then load and prepare the data.",
"import pandas as pd\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nfrom tflearn.data_utils import to_categorical",
"Preparing the data\nFollowing along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.\nRead the data\nUse the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.",
"reviews = pd.read_csv('reviews.txt', header=None)\nlabels = pd.read_csv('labels.txt', header=None)",
"Counting word frequency\nTo start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.\n\nExercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.",
"from collections import Counter\n\ntotal_counts = Counter()\nfor review in reviews.values:\n for word in review[0].split(\" \"):\n total_counts[word] += 1\n\nprint(\"Total words in data set: \", len(total_counts))",
"Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.",
"vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]\nprint(vocab[:60])",
"What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.",
"print(vocab[-1], ': ', total_counts[vocab[-1]])",
"The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.\nNote: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.\nNow for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.\n\nExercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.",
"word2idx = {word: idx for idx, word in enumerate(vocab)}",
"Text to vector function\nNow we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:\n\nInitialize the word vector with np.zeros, it should be the length of the vocabulary.\nSplit the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.\nFor each word in that list, increment the element in the index associated with that word, which you get from word2idx.\n\nNote: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.",
"def text_to_vector(text):\n result = np.zeros([1, len(vocab)])\n for word in text.split(\" \"):\n idx = word2idx.get(word, None)\n if idx is not None:\n result[0][idx] += 1\n return result",
"If you do this right, the following code should return\n```\ntext_to_vector('The tea is for a party to celebrate '\n 'the movie so she has no time for a cake')[:65]\narray([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,\n 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])\n```",
"text_to_vector('The tea is for a party to celebrate '\n 'the movie so she has no time for a cake')[:65]",
"Now, run through our entire review data set and convert each review to a word vector.",
"word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)\nfor ii, (_, text) in enumerate(reviews.iterrows()):\n word_vectors[ii] = text_to_vector(text[0])\n\n# Printing out the first 5 word vectors\nword_vectors[:5, :23]",
"Train, Validation, Test sets\nNow that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.",
"Y = (labels=='positive').astype(np.int_)\nrecords = len(labels)\n\nshuffle = np.arange(records)\nnp.random.shuffle(shuffle)\ntest_fraction = 0.9\n\ntrain_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]\ntrainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)\ntestX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)\n\ntrainY",
"Building the network\nTFLearn lets you build the network by defining the layers. \nInput layer\nFor the input layer, you just need to tell it how many units you have. For example, \nnet = tflearn.input_data([None, 100])\nwould create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.\nThe number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.\nAdding layers\nTo add new hidden layers, you use \nnet = tflearn.fully_connected(net, n_units, activation='ReLU')\nThis adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).\nOutput layer\nThe last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.\nnet = tflearn.fully_connected(net, 2, activation='softmax')\nTraining\nTo set how you train the network, use \nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nAgain, this is passing in the network you've been building. The keywords: \n\noptimizer sets the training method, here stochastic gradient descent\nlearning_rate is the learning rate\nloss determines how the network error is calculated. In this example, with the categorical cross-entropy.\n\nFinally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like \nnet = tflearn.input_data([None, 10]) # Input\nnet = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden\nnet = tflearn.fully_connected(net, 2, activation='softmax') # Output\nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nmodel = tflearn.DNN(net)\n\nExercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.",
"# Network building\ndef build_model():\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n \n #### Your code ####\n net = tflearn.input_data([None, len(vocab)]) \n net = tflearn.fully_connected(net, 1024, activation='ReLU') \n net = tflearn.fully_connected(net, 2, activation='softmax') \n net = tflearn.regression(net, optimizer='sgd', learning_rate=0.025, loss='categorical_crossentropy')\n return model",
"Intializing the model\nNext we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.\n\nNote: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.",
"model = build_model()",
"Training the network\nNow that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.\nYou can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.",
"# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=512, n_epoch=150)",
"Testing\nAfter you're satisified with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.",
"predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)\ntest_accuracy = np.mean(predictions == testY[:,0], axis=0)\nprint(\"Test accuracy: \", test_accuracy)",
"Try out your own text!",
"# Helper function that uses your model to predict sentiment\ndef test_sentence(sentence):\n positive_prob = model.predict(text_to_vector(sentence.lower()))[0][1]\n print(model.predict(text_to_vector(sentence.lower())))\n print('Sentence: {}'.format(sentence))\n print('P(positive) = {:.3f} :'.format(positive_prob), \n 'Positive' if positive_prob > 0.5 else 'Negative')\n\nsentence = \"Moonlight is by far the best movie of 2016.\"\ntest_sentence(sentence)\n\nsentence = \"It's amazing anyone could be talented enough to make something this spectacularly awful\"\ntest_sentence(sentence)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
andreyf/machine-learning-examples
|
decision_trees_knn/practice_trees_titanic.ipynb
|
gpl-3.0
|
[
"<center>\n<img src=\"../img/ods_stickers.jpg\">\nОткрытый курс по машинному обучению. Сессия № 2\n</center>\nАвтор материала: программист-исследователь Mail.ru Group, старший преподаватель Факультета Компьютерных Наук ВШЭ Юрий Кашницкий. Материал распространяется на условиях лицензии Creative Commons CC BY-NC-SA 4.0. Можно использовать в любых целях (редактировать, поправлять и брать за основу), кроме коммерческих, но с обязательным упоминанием автора материала.\n<center>Тема 3. Обучение с учителем. Методы классификации\n<center>Практика. Дерево решений в задаче предсказания выживания пассажиров \"Титаника\". Решение\nЗаполните код в клетках и выберите ответы в веб-форме.\n<a href=\"https://www.kaggle.com/c/titanic\">Соревнование</a> Kaggle \"Titanic: Machine Learning from Disaster\".",
"import numpy as np\nimport pandas as pd\nfrom sklearn.tree import DecisionTreeClassifier, export_graphviz\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix\n%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport seaborn as sns",
"Функция для формирования csv-файла посылки на Kaggle:",
"def write_to_submission_file(predicted_labels, out_file, train_num=891,\n target='Survived', index_label=\"PassengerId\"):\n # turn predictions into data frame and save as csv file\n predicted_df = pd.DataFrame(predicted_labels,\n index = np.arange(train_num + 1,\n train_num + 1 +\n predicted_labels.shape[0]),\n columns=[target])\n predicted_df.to_csv(out_file, index_label=index_label)",
"Считываем обучающую и тестовую выборки.",
"train_df = pd.read_csv(\"../data/titanic_train.csv\") \ntest_df = pd.read_csv(\"../data/titanic_test.csv\") \n\ny = train_df['Survived']\n\ntrain_df.head()\n\ntrain_df.describe(include='all')\n\ntest_df.describe(include='all')",
"Заполним пропуски медианными значениями.",
"train_df['Age'].fillna(train_df['Age'].median(), inplace=True)\ntest_df['Age'].fillna(train_df['Age'].median(), inplace=True)\ntrain_df['Embarked'].fillna('S', inplace=True)\ntest_df['Fare'].fillna(train_df['Fare'].median(), inplace=True)",
"Кодируем категориальные признаки Pclass, Sex, SibSp, Parch и Embarked с помощью техники One-Hot-Encoding.",
"train_df = pd.concat([train_df, pd.get_dummies(train_df['Pclass'], \n prefix=\"PClass\"),\n pd.get_dummies(train_df['Sex'], prefix=\"Sex\"),\n pd.get_dummies(train_df['SibSp'], prefix=\"SibSp\"),\n pd.get_dummies(train_df['Parch'], prefix=\"Parch\"),\n pd.get_dummies(train_df['Embarked'], prefix=\"Embarked\")],\n axis=1)\ntest_df = pd.concat([test_df, pd.get_dummies(test_df['Pclass'], \n prefix=\"PClass\"),\n pd.get_dummies(test_df['Sex'], prefix=\"Sex\"),\n pd.get_dummies(test_df['SibSp'], prefix=\"SibSp\"),\n pd.get_dummies(test_df['Parch'], prefix=\"Parch\"),\n pd.get_dummies(test_df['Embarked'], prefix=\"Embarked\")],\n axis=1)\n\ntrain_df.drop(['Survived', 'Pclass', 'Name', 'Sex', 'SibSp', \n 'Parch', 'Ticket', 'Cabin', 'Embarked', 'PassengerId'], \n axis=1, inplace=True)\ntest_df.drop(['Pclass', 'Name', 'Sex', 'SibSp', 'Parch', 'Ticket', 'Cabin', 'Embarked', 'PassengerId'], \n axis=1, inplace=True)",
"В тестовой выборке появляется новое значение Parch = 9, которого нет в обучающей выборке. Проигнорируем его.",
"train_df.shape, test_df.shape\n\nset(test_df.columns) - set(train_df.columns)\n\ntest_df.drop(['Parch_9'], axis=1, inplace=True)\n\ntrain_df.head()\n\ntest_df.head()",
"1. Дерево решений без настройки параметров\nОбучите на имеющейся выборке дерево решений (DecisionTreeClassifier) максимальной глубины 2. Используйте параметр random_state=17 для воспроизводимости результатов.",
"tree = DecisionTreeClassifier(max_depth=2, random_state=17)\n\ntree.fit(train_df, y)",
"Сделайте с помощью полученной модели прогноз для тестовой выборки",
"predictions = tree.predict(test_df)",
"Сформируйте файл посылки и отправьте на Kaggle",
"write_to_submission_file(predictions, \n 'titanic_tree_depth2.csv')",
"<font color='red'>Вопрос 1. </font> Каков результат первой посылки (дерево решений без настройки параметров) в публичном рейтинге соревнования Titanic?\n- <font color='green'>0.746</font>\n- 0.756\n- 0.766\n- 0.776\nУ такой посылки результат на публичной тестовой выборке - 0.74641.",
"export_graphviz(tree, out_file=\"../img/titanic_tree_depth2.dot\", \n feature_names=train_df.columns)\n!dot -Tpng ../img/titanic_tree_depth2.dot -o ../img/titanic_tree_depth2.png",
"<img src='../img/titanic_tree_depth2.png'>\n<font color='red'>Вопрос 2. </font> Сколько признаков задействуются при прогнозе деревом решений глубины 2?\n- 2\n- <font color='green'>3</font>\n- 4\n- 5\n2. Дерево решений с настройкой параметров\nОбучите на имеющейся выборке дерево решений (DecisionTreeClassifier). Также укажите random_state=17. Максимальную глубину и минимальное число элементов в листе настройте на 5-кратной кросс-валидации с помощью GridSearchCV.",
"# tree params for grid search\ntree_params = {'max_depth': list(range(1, 5)), \n 'min_samples_leaf': list(range(1, 5))}\n\nlocally_best_tree = GridSearchCV(DecisionTreeClassifier(random_state=17), \n tree_params, \n verbose=True, n_jobs=-1, cv=5)\nlocally_best_tree.fit(train_df, y)\n\nexport_graphviz(locally_best_tree.best_estimator_, \n out_file=\"../img/titanic_tree_tuned.dot\", \n feature_names=train_df.columns)\n!dot -Tpng ../img/titanic_tree_tuned.dot -o ../img/titanic_tree_tuned.png",
"<img src='../img/titanic_tree_tuned.png'>",
"print(\"Best params:\", locally_best_tree.best_params_)\nprint(\"Best cross validaton score\", locally_best_tree.best_score_)",
"<font color='red'>Вопрос 3. </font> Каковы лучшие параметры дерева, настроенные на кросс-валидации с помощью GridSearchCV?\n- max_depth=2, min_samples_leaf=1\n- max_depth=2, min_samples_leaf=4\n- max_depth=3, min_samples_leaf=2\n- <font color='green'>max_depth=3, min_samples_leaf=3</font>\n<font color='red'>Вопрос 4. </font> Какой получилась средняя доля верных ответов на кросс-валидации для дерева решений с лучшим сочетанием гиперпараметров max_depth и min_samples_leaf?\n- 0.77\n- 0.79\n- <font color='green'>0.81</font>\n- 0.83\nСделайте с помощью полученной модели прогноз для тестовой выборки.",
"predictions = locally_best_tree.predict(test_df)",
"Сформируйте файл посылки и отправьте на Kaggle.",
"write_to_submission_file(predictions, 'titanic_tree_tuned.csv')",
"<font color='red'>Вопрос 5. </font> Каков результат второй посылки (дерево решений с настройкой гиперпараметров) в публичном рейтинге соревнования Titanic?\n- 0.7499\n- 0.7599\n- 0.7699\n- <font color='green'>0.7799</font>\nСсылки:\n\n<a href=\"https://www.kaggle.com/c/titanic\">Соревнование</a> Kaggle \"Titanic: Machine Learning from Disaster\"\n<a href=\"https://www.dataquest.io/mission/74/getting-started-with-kaggle/\">Тьюториал</a> Dataquest по задаче Kaggle \"Titanic: Machine Learning from Disaster\""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mahieke/maschinelles_lernen
|
a3/Aufgabe_3.1.ipynb
|
mit
|
[
"Praktikum Maschinelles Lernen WS 15/16\n<table>\n <tr>\n <td>Name</td>\n <td>Vorname</td>\n <td>Matrikelnummer</td>\n <td>Datum</td>\n </tr>\n <tr>\n <td>Alt</td>\n <td>Tobias</td>\n <td>282385</td>\n <td>18.12.2015</td>\n </tr>\n <tr>\n <td>Hieke</td>\n <td>Manuel</td>\n <td>283912</td>\n <td>08.01.2016</td>\n </tr>\n</table>\n\n<b>Aufgabe 3.1 - Perzeptron",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math\nfrom numpy import linalg as LA\nimport scipy as sp\nimport urllib2\nfrom urllib2 import urlopen, URLError, HTTPError\nimport zipfile\nimport tarfile\nimport sys\nimport os\nfrom skimage import data, io, filter\nfrom PIL import Image",
"<b>Teil A - Toy Dataset",
"# Funktion zum Erstellen des Datensatzes\n#-----------------------------------------------------------------------------\n# loc : float Mean (“centre”) of the distribution.\n# scale : float Standard deviation (spread or “width”) of the distribution.\n# size : int or tuple of ints, optional\n# numpy.random.normal(loc=0.0, scale=1.0, size=None)\n\ndef createToyDataSet(ypos,numberOfData,clusterDistance,varianz):\n #sigma = sqrt(clusterBright) # mean and standard deviation\n #data1 = np.random.normal(mu, sigma, numberOfData)\n #data2 = np.random.normal(mu, sigma, numberOfData)\n\n mu = clusterDistance #loc Paramter -> Abstand\n sigma = sqrt(varianz) #scale Parameter -> Clusterbreite\n sizeOfData = numberOfData #Anzahl Daten\n\n #np.vstack -> Stack arrays in sequence vertically\n X = np.vstack([np.random.normal(ypos+mu, sigma, (sizeOfData, 2)), np.random.normal(ypos-mu, sigma, (sizeOfData, 2))])\n return X \n\n#Graphische Darstellung\n#-------------------------------------------------------------------------\ndef plotToyData(data,mu,varianz):\n fig = plt.figure()\n \n fig, ax = subplots(figsize=(14,6))\n data1 = data[0]\n data2 = data[1]\n #plot data histogramm\n ax = plt.subplot(1,2,1)\n title('x/y Histogramm')\n count, bins, ignored = ax.hist(data, 30, normed=True)\n ax.plot(bins, 1/(sqrt(varianz) * np.sqrt(2 * np.pi)) * np.exp( - (bins)**2 / (2 * varianz) ),\n linewidth=2, color='g')\n # 1. Gaussverteilungen - Cluster 1\n x_plot = np.linspace(mu - 4*sqrt(varianz), mu + 4*sqrt(varianz), 100) # the x-values to use in the plot\n # compute the values of this density at the locations given by x_plot\n py = 1/np.sqrt(4*np.pi*varianz)*np.exp(-0.5*(x_plot-mu)**2/varianz)\n # sample some random values from this density\n x_samps = data\n # Plot the density\n ax.plot(x_plot, py)\n \n # 2. Gaussverteilungen - Cluster 2\n x_plot = np.linspace(-mu - 4*sqrt(varianz), -mu + 4*sqrt(varianz), 100) # the x-values to use in the plot\n # compute the values of this density at the locations given by x_plot\n py = 1/np.sqrt(4*np.pi*varianz)*np.exp(-0.5*(x_plot+mu)**2/varianz)\n # sample some random values from this density\n x_samps = data\n # Plot the density\n ax.plot(x_plot, py)\n\n # Scatter plot\n ax = plt.subplot(1,2,2)\n colors = np.hstack([np.zeros(len(data)/2), np.ones(len(data)/2)])\n plt.scatter(data[:, 0], data[:, 1], c=colors, edgecolors='none',cmap=plt.cm.Accent)\n\n#Erzeugen der Daten (wie gewünscht einstellbar)\n#---------------------------------------------------------------------------\nvarianz = 0.5 #Clusterbreite\nnumberOfData = 200 #Anzahl neuer Datenpunkte pro Cluster\nmean= 1.5 #Abstand\nypos = 0 #y-Achsen-Verschiebung\n\ntoyData = createToyDataSet(ypos,numberOfData,mean,varianz)\n\n\nplotToyData(toyData, mean, varianz)\n\n#Erzeugen des zugehörigen Labelvektor mit den Werten ±1\n#-------------------------------------------------------------------\nlabelvector = np.ones(len(toyData)) \nlabelvector[len(toyData)/2:] *= -1\nprint 'ToyData Größe :',shape(toyData),' 1.Klasse: ',toyData[0][0],' 2.Klasse: ', toyData[1][0]\nprint 'Labelvektor Größe:',shape(labelvector),' 1.Klasse: ',labelvector[0],'\\t\\t2.Klasse: ', labelvector[200]",
"<b>Teil B - Perzeptron"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
paris-saclay-cds/ramp-workflow
|
rampwf/tests/kits/titanic_no_test/titanic_no_test_starting_kit.ipynb
|
bsd-3-clause
|
[
"Paris Saclay Center for Data Science\nTitanic RAMP: survival prediction of Titanic passengers\nBenoit Playe (Institut Curie/Mines ParisTech), Chloé-Agathe Azencott (Institut Curie/Mines ParisTech), Alex Gramfort (LTCI/Télécom ParisTech), Balázs Kégl (LAL/CNRS)\nIntroduction\nThis is an initiation project to introduce RAMP and get you to know how it works.\nThe goal is to develop prediction models able to identify people who survived from the sinking of the Titanic, based on gender, age, and ticketing information. \nThe data we will manipulate is from the Titanic kaggle challenge.\nRequirements\n\nnumpy>=1.10.0 \nmatplotlib>=1.5.0 \npandas>=0.19.0 \nscikit-learn>=0.17 (different syntaxes for v0.17 and v0.18) \nseaborn>=0.7.1",
"%matplotlib inline\nimport os\nimport glob\nimport numpy as np\nfrom scipy import io\nimport matplotlib.pyplot as plt\nimport pandas as pd",
"Exploratory data analysis\nLoading the data",
"train_filename = 'data/train.csv'\ndata = pd.read_csv(train_filename)\ny_train = data['Survived'].values\nX_train = data.drop(['Survived', 'PassengerId'], axis=1)\nX_train.head(5)\n\ndata.describe()\n\ndata.count()",
"The original training data frame has 891 rows. In the starting kit, we give you a subset of 445 rows. Some passengers have missing information: in particular Age and Cabin info can be missing. The meaning of the columns is explained on the challenge website:\nPredicting survival\nThe goal is to predict whether a passenger has survived from other known attributes. Let us group the data according to the Survived columns:",
"data.groupby('Survived').count()",
"About two thirds of the passengers perished in the event. A dummy classifier that systematically returns \"0\" would have an accuracy of 62%, higher than that of a random model.\nSome plots\nFeatures densities and co-evolution\nA scatterplot matrix allows us to visualize:\n* on the diagonal, the density estimation for each feature\n* on each of the off-diagonal plots, a scatterplot between two features. Each dot represents an instance.",
"from pandas.plotting import scatter_matrix\nscatter_matrix(data.get(['Fare', 'Pclass', 'Age']), alpha=0.2,\n figsize=(8, 8), diagonal='kde');",
"Non-linearly transformed data\nThe Fare variable has a very heavy tail. We can log-transform it.",
"data_plot = data.get(['Age', 'Survived'])\ndata_plot = data.assign(LogFare=lambda x : np.log(x.Fare + 10.))\nscatter_matrix(data_plot.get(['Age', 'LogFare']), alpha=0.2, figsize=(8, 8), diagonal='kde');\n\ndata_plot.plot(kind='scatter', x='Age', y='LogFare', c='Survived', s=50, cmap=plt.cm.Paired);",
"Plot the bivariate distributions and marginals of two variables\nAnother way of visualizing relationships between variables is to plot their bivariate distributions.",
"import seaborn as sns\n\nsns.set()\nsns.set_style(\"whitegrid\")\nsns.jointplot(data_plot.Age[data_plot.Survived == 1],\n data_plot.LogFare[data_plot.Survived == 1],\n kind=\"kde\", size=7, space=0, color=\"b\");\n\nsns.jointplot(data_plot.Age[data_plot.Survived == 0],\n data_plot.LogFare[data_plot.Survived == 0],\n kind=\"kde\", size=7, space=0, color=\"y\");",
"Making predictions\nA basic prediction workflow, using scikit-learn, will be presented below.\nFirst, we will perform some simple preprocessing of our data:\n\none-hot encode the categorical features: Sex, Pclass, Embarked\nfor the numerical columns Age, SibSp, Parch, Fare, fill in missing values with a default value (-1)\nall remaining columns will be dropped\n\nThis can be done succintly with make_column_transformer which performs specific transformations on specific features.",
"from sklearn.compose import make_column_transformer\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.impute import SimpleImputer\n\ncategorical_cols = ['Sex', 'Pclass', 'Embarked']\nnumerical_cols = ['Age', 'SibSp', 'Parch', 'Fare']\n\npreprocessor = make_column_transformer(\n (OneHotEncoder(handle_unknown='ignore'), categorical_cols),\n (SimpleImputer(strategy='constant', fill_value=-1), numerical_cols),\n)",
"The preprocessor object created with make_column_transformer can be used in a scikit-learn pipeline. A pipeline assembles several steps together and can be used to cross validate an entire workflow. Generally, transformation steps are combined with a final estimator.\nWe will create a pipeline consisting of the preprocessor created above and a final estimator, LogisticRegression.",
"from sklearn.pipeline import Pipeline\nfrom sklearn.linear_model import LogisticRegression\n\npipeline = Pipeline([\n ('transformer', preprocessor),\n ('classifier', LogisticRegression()),\n])",
"We can cross-validate our pipeline using cross_val_score. Below we will have specified cv=8 meaning KFold cross-valdiation splitting will be used, with 8 folds. The Area Under the Receiver Operating Characteristic Curve (ROC AUC) score is calculated for each split. The output score will be an array of 8 scores from each KFold. The score mean and standard of the 8 scores is printed at the end.",
"from sklearn.model_selection import cross_val_score\n\nscores = cross_val_score(pipeline, X_train, y_train, cv=8, scoring='roc_auc')\n\nprint(\"mean: %e (+/- %e)\" % (scores.mean(), scores.std()))",
"Testing\nOnce you have created a model with cross-valdiation scores you are happy with, you can test how well your model performs on the independent test data.\nFirst we will read in our test data:",
"# test_filename = 'data/test.csv'\n# data = pd.read_csv(test_filename)\n# y_test = data['Survived'].values\n# X_test = data.drop(['Survived', 'PassengerId'], axis=1)\n# X_test.head(5)",
"Next we need to fit our pipeline on our training data:",
"# clf = pipeline.fit(X_train, y_train)",
"Now we can predict on our test data:",
"# y_pred = pipeline.predict(X_test)",
"Finally, we can calculate how well our model performed on the test data:",
"# from sklearn.metrics import roc_auc_score\n\n# score = roc_auc_score(y_test, y_pred)\n# score",
"RAMP submissions\nFor submitting to the RAMP site, you will need to write a submission.py file that defines a get_estimator function that returns a scikit-learn pipeline.\nFor example, to submit our basic example above, we would define our pipeline within the function and return the pipeline at the end. Remember to include all the necessary imports at the beginning of the file.",
"from sklearn.compose import make_column_transformer\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.linear_model import LogisticRegression\n\ndef get_estimator():\n\n categorical_cols = ['Sex', 'Pclass', 'Embarked']\n numerical_cols = ['Age', 'SibSp', 'Parch', 'Fare']\n\n preprocessor = make_column_transformer(\n (OneHotEncoder(handle_unknown='ignore'), categorical_cols),\n (SimpleImputer(strategy='constant', fill_value=-1), numerical_cols),\n )\n\n pipeline = Pipeline([\n ('transformer', preprocessor),\n ('classifier', LogisticRegression()),\n ])\n\n return pipeline",
"If you take a look at the sample submission in the directory submissions/starting_kit, you will find a file named submission.py, which has the above code in it.\nYou can test that the sample submission works by running ramp_test_submission in your terminal (ensure that ramp-workflow has been installed and you are in the titanic ramp kit directory). Alternatively, within this notebook you can run:",
"# !ramp_test_submission",
"To test that your own submission works, create a new folder within submissions and name it how you wish. Within your new folder save your submission.py file that defines a get_estimator function. Test your submission locally by running:\nramp_test_submission --submission <folder>\nwhere <folder> is the name of the new folder you created above.\nSubmitting to ramp.studio\nOnce you found a good solution, you can submit it to ramp.studio. First, if it is your first time using RAMP, sign up, otherwise log in. Then, find the appropriate open event for the titanic challenge. Sign up for the event. Note that both RAMP and event signups are controlled by RAMP administrators, so there can be a delay between asking for signup and being able to submit.\nOnce your signup request(s) have been accepted, you can go to your sandbox and copy-paste (or upload) your submissions.py file. Save your submission, name it, then click 'submit'. The submission is trained and tested on our backend in the same way as ramp_test_submission does it locally. While your submission is waiting in the queue and being trained, you can find it in the \"New submissions (pending training)\" table in my submissions. Once it is trained, you get a mail, and your submission shows up on the public leaderboard.\nIf there is an error (despite having tested your submission locally with ramp_test_submission), it will show up in the \"Failed submissions\" table in my submissions. You can click on the error to see part of the trace.\nAfter submission, do not forget to give credits to the previous submissions you reused or integrated into your submission.\nThe data set we use at the backend is usually different from what you find in the starting kit, so the score may be different.\nThe usual workflow with RAMP is to explore solutions by refining feature transformations, selecting different models and perhaps do some AutoML/hyperopt, etc., in a notebook setting, then test them with ramp_test_submission. The script prints mean cross-validation scores:\n```\ntrain auc = 0.85 ± 0.005\ntrain acc = 0.81 ± 0.006\ntrain nll = 0.45 ± 0.007\nvalid auc = 0.87 ± 0.023\nvalid acc = 0.81 ± 0.02\nvalid nll = 0.44 ± 0.024\ntest auc = 0.83 ± 0.006\ntest acc = 0.76 ± 0.003\ntest nll = 0.5 ± 0.005\n```\nThe official score in this RAMP (the first score column after \"historical contributivity\" on the leaderboard) is area under the roc curve (\"auc\"), so the line that is relevant in the output of ramp_test_submission is valid auc = 0.87 ± 0.023.\nMore information\nYou can find more information in the README of the ramp-workflow library.\nContact\nDon't hesitate to contact us."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ledeprogram/algorithms
|
class6/donow/argueso_olaya_donow6.ipynb
|
gpl-3.0
|
[
"1. Import the necessary packages to read in the data, plot, and create a linear regression model",
"import pandas as pd\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport statsmodels.formula.api as smf",
"2. Read in the hanford.csv file",
"df = pd.read_csv(\"hanford.csv\")",
"<img src=\"images/hanford_variables.png\">",
"df.head()",
"3. Calculate the basic descriptive statistics on the data",
"df.describe()",
"4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?",
"df.corr()",
"There seems to be a highly positive correlation between both variables, as shown by the coefficient of correlation, which equals 0.92.",
"df.plot(kind='scatter', x='Exposure', y='Mortality')",
"5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure",
"lm = smf.ols(formula=\"Mortality~Exposure\",data=df).fit()\n\nlm.params\n\nintercept, slope = lm.params\n\ndef mortality_rate(exposure):\n for item in df['Exposure']:\n mortality = exposure * slope + intercept\n return mortality\n\nmortality_rate(3)",
"6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)",
"ax = df.plot(kind='scatter', x= 'Exposure', y='Mortality')\nplt.plot(df[\"Exposure\"],slope*df[\"Exposure\"]+intercept,\"-\",color=\"green\")\n\ndet_corr = (df.corr())* (df.corr())\n\ndet_corr",
"7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10",
"mortality_rate(10)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Rotvig/cs231n
|
Deep Learning/Exercise 2/Q1.ipynb
|
mit
|
[
"Modular neural nets\nIn the previous exercise, we started to build modules/general layers for implementing large neural networks. In this exercise, we will expand on this by implementing a convolutional layer, max pooling layer and a dropout layer.\nFor each layer we will implement forward and backward functions. The forward function will receive data, weights, and other parameters, and will return both an output and a cache object that stores data needed for the backward pass. The backward function will recieve upstream derivatives and the cache object, and will return gradients with respect to the data and all of the weights. This will allow us to write code that looks like this:\n```python\ndef two_layer_net(X, W1, b1, W2, b2, reg):\n # Forward pass; compute scores\n s1, fc1_cache = affine_forward(X, W1, b1)\n a1, relu_cache = relu_forward(s1)\n scores, fc2_cache = affine_forward(a1, W2, b2)\n# Loss functions return data loss and gradients on scores\ndata_loss, dscores = svm_loss(scores, y)\n\n# Compute backward pass\nda1, dW2, db2 = affine_backward(dscores, fc2_cache)\nds1 = relu_backward(da1, relu_cache)\ndX, dW1, db1 = affine_backward(ds1, fc1_cache)\n\n# A real network would add regularization here\n\n# Return loss and gradients\nreturn loss, dW1, db1, dW2, db2\n\n```",
"# As usual, a bit of setup\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient\nfrom cs231n.layers import *\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))",
"Dropout layer: forward\nOpen the file cs231n/layers.py and implement the dropout_forward function. You should implement inverted dropout rather than regular dropout. We can check the forward pass by looking at the statistics of the outputs in train and test modes.",
"# Check the dropout forward pass\n\nx = np.random.randn(100, 100)\ndropout_param_train = {'p': 0.25, 'mode': 'train'}\ndropout_param_test = {'p': 0.25, 'mode': 'test'}\n\nout_train, _ = dropout_forward(x, dropout_param_train)\nout_test, _ = dropout_forward(x, dropout_param_test)\n\n# Test dropout training mode; about 25% of the elements should be nonzero\nprint np.mean(out_train != 0) # expected to be ~0.25\n\n# Test dropout test mode; all of the elements should be nonzero\nprint np.mean(out_test != 0) # expected to be = 1",
"Dropout layer: backward\nOpen the file cs231n/layers.py and implement the dropout_backward function. We can check the backward pass using numerical gradient checking.",
"from cs231n.gradient_check import eval_numerical_gradient_array\n\n# Check the dropout backward pass\n\nx = np.random.randn(5, 4)\ndout = np.random.randn(*x.shape)\ndropout_param = {'p': 0.8, 'mode': 'train', 'seed': 123}\n\ndx_num = eval_numerical_gradient_array(lambda x: dropout_forward(x, dropout_param)[0], x, dout)\n\n_, cache = dropout_forward(x, dropout_param)\ndx = dropout_backward(dout, cache)\n\n# The error should be around 1e-12\nprint 'Testing dropout_backward function:'\nprint 'dx error: ', rel_error(dx_num, dx)",
"Convolution layer: forward naive\nWe are now ready to implement the forward pass for a convolutional layer. Implement the function conv_forward_naive in the file cs231n/layers.py.\nYou don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.\nYou can test your implementation by running the following:",
"x_shape = (2, 3, 4, 4)\nw_shape = (3, 3, 4, 4)\nx = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)\nw = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)\nb = np.linspace(-0.1, 0.2, num=3)\n\nconv_param = {'stride': 2, 'pad': 1}\nout, _ = conv_forward_naive(x, w, b, conv_param)\ncorrect_out = np.array([[[[[-0.08759809, -0.10987781],\n [-0.18387192, -0.2109216 ]],\n [[ 0.21027089, 0.21661097],\n [ 0.22847626, 0.23004637]],\n [[ 0.50813986, 0.54309974],\n [ 0.64082444, 0.67101435]]],\n [[[-0.98053589, -1.03143541],\n [-1.19128892, -1.24695841]],\n [[ 0.69108355, 0.66880383],\n [ 0.59480972, 0.56776003]],\n [[ 2.36270298, 2.36904306],\n [ 2.38090835, 2.38247847]]]]])\n\n# Compare your output to ours; difference should be around 1e-8\nprint 'Testing conv_forward_naive'\nprint 'difference: ', rel_error(out, correct_out)",
"Aside: Image processing via convolutions\nAs fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.",
"from scipy.misc import imread, imresize\n\nkitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')\n# kitten is wide, and puppy is already square\nd = kitten.shape[1] - kitten.shape[0]\nkitten_cropped = kitten[:, d/2:-d/2, :]\n\nimg_size = 200 # Make this smaller if it runs too slow\nx = np.zeros((2, 3, img_size, img_size))\nx[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))\nx[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))\n\n# Set up a convolutional weights holding 2 filters, each 3x3\nw = np.zeros((2, 3, 3, 3))\n\n# The first filter converts the image to grayscale.\n# Set up the red, green, and blue channels of the filter.\nw[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]\nw[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]\nw[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]\n\n# Second filter detects horizontal edges in the blue channel.\nw[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]\n\n# Vector of biases. We don't need any bias for the grayscale\n# filter, but for the edge detection filter we want to add 128\n# to each output so that nothing is negative.\nb = np.array([0, 128])\n\n# Compute the result of convolving each input in x with each filter in w,\n# offsetting by b, and storing the results in out.\nout, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})\n\ndef imshow_noax(img, normalize=True):\n \"\"\" Tiny helper to show images as uint8 and remove axis labels \"\"\"\n if normalize:\n img_max, img_min = np.max(img), np.min(img)\n img = 255.0 * (img - img_min) / (img_max - img_min)\n plt.imshow(img.astype('uint8'))\n plt.gca().axis('off')\n\n# Show the original images and the results of the conv operation\nplt.subplot(2, 3, 1)\nimshow_noax(puppy, normalize=False)\nplt.title('Original image')\nplt.subplot(2, 3, 2)\nimshow_noax(out[0, 0])\nplt.title('Grayscale')\nplt.subplot(2, 3, 3)\nimshow_noax(out[0, 1])\nplt.title('Edges')\nplt.subplot(2, 3, 4)\nimshow_noax(kitten_cropped, normalize=False)\nplt.subplot(2, 3, 5)\nimshow_noax(out[1, 0])\nplt.subplot(2, 3, 6)\nimshow_noax(out[1, 1])\nplt.show()",
"Convolution layer: backward naive\nNext you need to implement the function conv_backward_naive in the file cs231n/layers.py. As usual, we will check your implementation with numeric gradient checking.",
"x = np.random.randn(4, 3, 5, 5)\nw = np.random.randn(2, 3, 3, 3)\nb = np.random.randn(2,)\ndout = np.random.randn(4, 2, 5, 5)\nconv_param = {'stride': 1, 'pad': 1}\n\ndx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)\n\nout, cache = conv_forward_naive(x, w, b, conv_param)\ndx, dw, db = conv_backward_naive(dout, cache)\n\n# Your errors should be around 1e-9'\nprint 'Testing conv_backward_naive function'\nprint 'dx error: ', rel_error(dx, dx_num)\nprint 'dw error: ', rel_error(dw, dw_num)\nprint 'db error: ', rel_error(db, db_num)",
"Max pooling layer: forward naive\nThe last layer we need for a basic convolutional neural network is the max pooling layer. First implement the forward pass in the function max_pool_forward_naive in the file cs231n/layers.py.",
"x_shape = (2, 3, 4, 4)\nx = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)\npool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}\n\nout, _ = max_pool_forward_naive(x, pool_param)\n\ncorrect_out = np.array([[[[-0.26315789, -0.24842105],\n [-0.20421053, -0.18947368]],\n [[-0.14526316, -0.13052632],\n [-0.08631579, -0.07157895]],\n [[-0.02736842, -0.01263158],\n [ 0.03157895, 0.04631579]]],\n [[[ 0.09052632, 0.10526316],\n [ 0.14947368, 0.16421053]],\n [[ 0.20842105, 0.22315789],\n [ 0.26736842, 0.28210526]],\n [[ 0.32631579, 0.34105263],\n [ 0.38526316, 0.4 ]]]])\n\n# Compare your output with ours. Difference should be around 1e-8.\nprint 'Testing max_pool_forward_naive function:'\nprint 'difference: ', rel_error(out, correct_out)",
"Max pooling layer: backward naive\nImplement the backward pass for a max pooling layer in the function max_pool_backward_naive in the file cs231n/layers.py. As always we check the correctness of the backward pass using numerical gradient checking.",
"x = np.random.randn(3, 2, 8, 8)\ndout = np.random.randn(3, 2, 4, 4)\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\ndx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)\n\nout, cache = max_pool_forward_naive(x, pool_param)\ndx = max_pool_backward_naive(dout, cache)\n\n# Your error should be around 1e-12\nprint 'Testing max_pool_backward_naive function:'\nprint 'dx error: ', rel_error(dx, dx_num)",
"Fast layers\nMaking convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.\nThe fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:\nbash\npython setup.py build_ext --inplace\nThe API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass recieves upstream derivatives and the cache object and produces gradients with respect to the data and weights.\nNOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.\nYou can compare the performance of the naive and fast versions of these layers by running the following:",
"from cs231n.fast_layers import conv_forward_fast, conv_backward_fast\nfrom time import time\n\nx = np.random.randn(100, 3, 31, 31)\nw = np.random.randn(25, 3, 3, 3)\nb = np.random.randn(25,)\ndout = np.random.randn(100, 25, 16, 16)\nconv_param = {'stride': 2, 'pad': 1}\n\nt0 = time()\nout_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)\nt1 = time()\nout_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)\nt2 = time()\n\nprint 'Testing conv_forward_fast:'\nprint 'Naive: %fs' % (t1 - t0)\nprint 'Fast: %fs' % (t2 - t1)\nprint 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))\nprint 'Difference: ', rel_error(out_naive, out_fast)\n\nt0 = time()\ndx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)\nt1 = time()\ndx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)\nt2 = time()\n\nprint '\\nTesting conv_backward_fast:'\nprint 'Naive: %fs' % (t1 - t0)\nprint 'Fast: %fs' % (t2 - t1)\nprint 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))\nprint 'dx difference: ', rel_error(dx_naive, dx_fast)\nprint 'dw difference: ', rel_error(dw_naive, dw_fast)\nprint 'db difference: ', rel_error(db_naive, db_fast)\n\nfrom cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast\n\nx = np.random.randn(100, 3, 32, 32)\ndout = np.random.randn(100, 3, 16, 16)\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\nt0 = time()\nout_naive, cache_naive = max_pool_forward_naive(x, pool_param)\nt1 = time()\nout_fast, cache_fast = max_pool_forward_fast(x, pool_param)\nt2 = time()\n\nprint 'Testing pool_forward_fast:'\nprint 'Naive: %fs' % (t1 - t0)\nprint 'fast: %fs' % (t2 - t1)\nprint 'speedup: %fx' % ((t1 - t0) / (t2 - t1))\nprint 'difference: ', rel_error(out_naive, out_fast)\n\nt0 = time()\ndx_naive = max_pool_backward_naive(dout, cache_naive)\nt1 = time()\ndx_fast = max_pool_backward_fast(dout, cache_fast)\nt2 = time()\n\nprint '\\nTesting pool_backward_fast:'\nprint 'Naive: %fs' % (t1 - t0)\nprint 'speedup: %fx' % ((t1 - t0) / (t2 - t1))\nprint 'dx difference: ', rel_error(dx_naive, dx_fast)",
"Sandwich layers\nThere are a couple common layer \"sandwiches\" that frequently appear in ConvNets. For example convolutional layers are frequently followed by ReLU and pooling, and affine layers are frequently followed by ReLU. To make it more convenient to use these common patterns, we have defined several convenience layers in the file cs231n/layer_utils.py. Lets grad-check them to make sure that they work correctly:",
"from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward\n\nx = np.random.randn(2, 3, 16, 16)\nw = np.random.randn(3, 3, 3, 3)\nb = np.random.randn(3,)\ndout = np.random.randn(2, 3, 8, 8)\nconv_param = {'stride': 1, 'pad': 1}\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\nout, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)\ndx, dw, db = conv_relu_pool_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)\n\nprint 'Testing conv_relu_pool_forward:'\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dw error: ', rel_error(dw_num, dw)\nprint 'db error: ', rel_error(db_num, db)\n\nfrom cs231n.layer_utils import conv_relu_forward, conv_relu_backward\n\nx = np.random.randn(2, 3, 8, 8)\nw = np.random.randn(3, 3, 3, 3)\nb = np.random.randn(3,)\ndout = np.random.randn(2, 3, 8, 8)\nconv_param = {'stride': 1, 'pad': 1}\n\nout, cache = conv_relu_forward(x, w, b, conv_param)\ndx, dw, db = conv_relu_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)\n\nprint 'Testing conv_relu_forward:'\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dw error: ', rel_error(dw_num, dw)\nprint 'db error: ', rel_error(db_num, db)\n\nfrom cs231n.layer_utils import affine_relu_forward, affine_relu_backward\n\nx = np.random.randn(2, 3, 4)\nw = np.random.randn(12, 10)\nb = np.random.randn(10)\ndout = np.random.randn(2, 10)\n\nout, cache = affine_relu_forward(x, w, b)\ndx, dw, db = affine_relu_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)\n\nprint 'Testing affine_relu_forward:'\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dw error: ', rel_error(dw_num, dw)\nprint 'db error: ', rel_error(db_num, db)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
macks22/gensim
|
docs/notebooks/Corpora_and_Vector_Spaces.ipynb
|
lgpl-2.1
|
[
"Tutorial 1: Corpora and Vector Spaces\nSee this gensim tutorial on the web here.\nDon’t forget to set:",
"import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\n\nimport os\nimport tempfile\nTEMP_FOLDER = tempfile.gettempdir()\nprint('Folder \"{}\" will be used to save temporary dictionary and corpus.'.format(TEMP_FOLDER))",
"if you want to see logging events.\nFrom Strings to Vectors\nThis time, let’s start from documents represented as strings:",
"from gensim import corpora\n\ndocuments = [\"Human machine interface for lab abc computer applications\",\n \"A survey of user opinion of computer system response time\",\n \"The EPS user interface management system\",\n \"System and human system engineering testing of EPS\", \n \"Relation of user perceived response time to error measurement\",\n \"The generation of random binary unordered trees\",\n \"The intersection graph of paths in trees\",\n \"Graph minors IV Widths of trees and well quasi ordering\",\n \"Graph minors A survey\"]",
"This is a tiny corpus of nine documents, each consisting of only a single sentence.\nFirst, let’s tokenize the documents, remove common words (using a toy stoplist) as well as words that only appear once in the corpus:",
"# remove common words and tokenize\nstoplist = set('for a of the and to in'.split())\ntexts = [[word for word in document.lower().split() if word not in stoplist]\n for document in documents]\n\n# remove words that appear only once\nfrom collections import defaultdict\nfrequency = defaultdict(int)\nfor text in texts:\n for token in text:\n frequency[token] += 1\n\ntexts = [[token for token in text if frequency[token] > 1] for text in texts]\n\nfrom pprint import pprint # pretty-printer\npprint(texts)",
"Your way of processing the documents will likely vary; here, I only split on whitespace to tokenize, followed by lowercasing each word. In fact, I use this particular (simplistic and inefficient) setup to mimic the experiment done in Deerwester et al.’s original LSA article (Table 2).\nThe ways to process documents are so varied and application- and language-dependent that I decided to not constrain them by any interface. Instead, a document is represented by the features extracted from it, not by its “surface” string form: how you get to the features is up to you. Below I describe one common, general-purpose approach (called bag-of-words), but keep in mind that different application domains call for different features, and, as always, it’s garbage in, garbage out...\nTo convert documents to vectors, we’ll use a document representation called bag-of-words. In this representation, each document is represented by one vector where a vector element i represents the number of times the ith word appears in the document.\nIt is advantageous to represent the questions only by their (integer) ids. The mapping between the questions and ids is called a dictionary:",
"dictionary = corpora.Dictionary(texts)\ndictionary.save(os.path.join(TEMP_FOLDER, 'deerwester.dict')) # store the dictionary, for future reference\nprint(dictionary)",
"Here we assigned a unique integer ID to all words appearing in the processed corpus with the gensim.corpora.dictionary.Dictionary class. This sweeps across the texts, collecting word counts and relevant statistics. In the end, we see there are twelve distinct words in the processed corpus, which means each document will be represented by twelve numbers (ie., by a 12-D vector). To see the mapping between words and their ids:",
"print(dictionary.token2id)",
"To actually convert tokenized documents to vectors:",
"new_doc = \"Human computer interaction\"\nnew_vec = dictionary.doc2bow(new_doc.lower().split())\nprint(new_vec) # the word \"interaction\" does not appear in the dictionary and is ignored",
"The function doc2bow() simply counts the number of occurrences of each distinct word, converts the word to its integer word id and returns the result as a bag-of-words--a sparse vector, in the form of [(word_id, word_count), ...]. \nAs the token_id is 0 for \"human\" and 2 for \"computer\", the new document “Human computer interaction” will be transformed to [(0, 1), (2, 1)]. The words \"computer\" and \"human\" exist in the dictionary and appear once. Thus, they become (0, 1), (2, 1) respectively in the sparse vector. The word \"interaction\" doesn't exist in the dictionary and, thus, will not show up in the sparse vector. The other ten dictionary words, that appear (implicitly) zero times, will not show up in the sparse vector and , ,there will never be a element in the sparse vector like (3, 0).\nFor people familiar with scikit learn, doc2bow() has similar behaviors as calling transform() on CountVectorizer. doc2bow() can behave like fit_transform() as well. For more details, please look at gensim API Doc.",
"corpus = [dictionary.doc2bow(text) for text in texts]\ncorpora.MmCorpus.serialize(os.path.join(TEMP_FOLDER, 'deerwester.mm'), corpus) # store to disk, for later use\nfor c in corpus:\n print(c)",
"By now it should be clear that the vector feature with id=10 represents the number of times the word \"graph\" occurs in the document. The answer is “zero” for the first six documents and “one” for the remaining three. As a matter of fact, we have arrived at exactly the same corpus of vectors as in the Quick Example. If you're running this notebook yourself the word IDs may differ, but you should be able to check the consistency between documents comparing their vectors. \nCorpus Streaming – One Document at a Time\nNote that corpus above resides fully in memory, as a plain Python list. In this simple example, it doesn’t matter much, but just to make things clear, let’s assume there are millions of documents in the corpus. Storing all of them in RAM won’t do. Instead, let’s assume the documents are stored in a file on disk, one document per line. Gensim only requires that a corpus be able to return one document vector at a time:",
"class MyCorpus(object):\n def __iter__(self):\n for line in open('datasets/mycorpus.txt'):\n # assume there's one document per line, tokens separated by whitespace\n yield dictionary.doc2bow(line.lower().split())",
"The assumption that each document occupies one line in a single file is not important; you can design the __iter__ function to fit your input format, whatever that may be - walking directories, parsing XML, accessing network nodes... Just parse your input to retrieve a clean list of tokens in each document, then convert the tokens via a dictionary to their IDs and yield the resulting sparse vector inside __iter__.",
"corpus_memory_friendly = MyCorpus() # doesn't load the corpus into memory!\nprint(corpus_memory_friendly)",
"corpus_memory_friendly is now an object. We didn’t define any way to print it, so print just outputs address of the object in memory. Not very useful. To see the constituent vectors, let’s iterate over the corpus and print each document vector (one at a time):",
"for vector in corpus_memory_friendly: # load one vector into memory at a time\n print(vector)",
"Although the output is the same as for the plain Python list, the corpus is now much more memory friendly, because at most one vector resides in RAM at a time. Your corpus can now be as large as you want.\nWe are going to create the dictionary from the mycorpus.txt file without loading the entire file into memory. Then, we will generate the list of token ids to remove from this dictionary by querying the dictionary for the token ids of the stop words, and by querying the document frequencies dictionary (dictionary.dfs) for token ids that only appear once. Finally, we will filter these token ids out of our dictionary. Keep in mind that dictionary.filter_tokens (and some other functions such as dictionary.add_document) will call dictionary.compactify() to remove the gaps in the token id series thus enumeration of remaining tokens can be changed.",
"from six import iteritems\n\n# collect statistics about all tokens\ndictionary = corpora.Dictionary(line.lower().split() for line in open('datasets/mycorpus.txt'))\n\n# remove stop words and words that appear only once\nstop_ids = [dictionary.token2id[stopword] for stopword in stoplist \n if stopword in dictionary.token2id]\nonce_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq == 1]\n\n# remove stop words and words that appear only once\ndictionary.filter_tokens(stop_ids + once_ids)\nprint(dictionary)",
"And that is all there is to it! At least as far as bag-of-words representation is concerned. Of course, what we do with such a corpus is another question; it is not at all clear how counting the frequency of distinct words could be useful. As it turns out, it isn’t, and we will need to apply a transformation on this simple representation first, before we can use it to compute any meaningful document vs. document similarities. Transformations are covered in the next tutorial, but before that, let’s briefly turn our attention to corpus persistency.\nCorpus Formats\nThere exist several file formats for serializing a Vector Space corpus (~sequence of vectors) to disk. Gensim implements them via the streaming corpus interface mentioned earlier: documents are read from (or stored to) disk in a lazy fashion, one document at a time, without the whole corpus being read into main memory at once.\nOne of the more notable file formats is the Matrix Market format. To save a corpus in the Matrix Market format:",
"# create a toy corpus of 2 documents, as a plain Python list\ncorpus = [[(1, 0.5)], []] # make one document empty, for the heck of it\n\ncorpora.MmCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.mm'), corpus)",
"Other formats include Joachim’s SVMlight format, Blei’s LDA-C format and GibbsLDA++ format.",
"corpora.SvmLightCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.svmlight'), corpus)\ncorpora.BleiCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.lda-c'), corpus)\ncorpora.LowCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.low'), corpus)",
"Conversely, to load a corpus iterator from a Matrix Market file:",
"corpus = corpora.MmCorpus(os.path.join(TEMP_FOLDER, 'corpus.mm'))",
"Corpus objects are streams, so typically you won’t be able to print them directly:",
"print(corpus)",
"Instead, to view the contents of a corpus:",
"# one way of printing a corpus: load it entirely into memory\nprint(list(corpus)) # calling list() will convert any sequence to a plain Python list",
"or",
"# another way of doing it: print one document at a time, making use of the streaming interface\nfor doc in corpus:\n print(doc)",
"The second way is obviously more memory-friendly, but for testing and development purposes, nothing beats the simplicity of calling list(corpus).\nTo save the same Matrix Market document stream in Blei’s LDA-C format,",
"corpora.BleiCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus.lda-c'), corpus)",
"In this way, gensim can also be used as a memory-efficient I/O format conversion tool: just load a document stream using one format and immediately save it in another format. Adding new formats is dead easy, check out the code for the SVMlight corpus for an example.\nCompatibility with NumPy and SciPy\nGensim also contains efficient utility functions to help converting from/to numpy matrices:",
"import gensim\nimport numpy as np\nnumpy_matrix = np.random.randint(10, size=[5,2])\ncorpus = gensim.matutils.Dense2Corpus(numpy_matrix)\nnumpy_matrix_dense = gensim.matutils.corpus2dense(corpus, num_terms=10)",
"and from/to scipy.sparse matrices:",
"import scipy.sparse\nscipy_sparse_matrix = scipy.sparse.random(5,2)\ncorpus = gensim.matutils.Sparse2Corpus(scipy_sparse_matrix)\nscipy_csc_matrix = gensim.matutils.corpus2csc(corpus)",
"For a complete reference (want to prune the dictionary to a smaller size? Optimize converting between corpora and NumPy/SciPy arrays?), see the API documentation. Or continue to the next tutorial on Topics and Transformations (notebook \nor website)."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kthyng/tracpy
|
docs/manual.ipynb
|
mit
|
[
"# Turning on inline plots -- just for use in ipython notebooks.\n%pylab inline",
"Initialization of a numerical experiment\nBefore running a drifter simulation, a number of parameters need to be specified. Previous examples of this are set in init.py. Because these examples change over time, we'll go through a specific example here.",
"# Normal Python libraries\nimport numpy as np\nimport netCDF4 as netCDF\nimport tracpy\nimport tracpy.plotting\nfrom tracpy.tracpy_class import Tracpy\nmatplotlib.rcParams.update({'font.size': 20})",
"Model output\nModel output from a high resolution model of the Texas-Louisiana shelf for the years 2004-2012 is stored on a thredds served at the address in loc. This is freely accessible.",
"# Location of TXLA model output file and grid, on a thredds server.\nloc = 'http://barataria.tamu.edu:8080/thredds/dodsC/NcML/txla_nesting6.nc'",
"Time parameters\nModel output is known to occur every four hours. The default test here will start at 00:00 on November 25, 2009 and run for 5 days.",
"# Number of days to run the drifters.\nndays = 3\n\n# Start date in date time formatting\ndate = datetime.datetime(2009, 11, 25, 0)\n\n# Time between outputs\ntseas = 4*3600 # 4 hours between outputs, in seconds \n\n# Time units\ntime_units = 'seconds since 1970-01-01'",
"The TRACMASS algorithm updates the u and v flux fields using a linear combination of the previous and subsequent time step every time a drifter passes a grid cell wall, or when the time for the drifter has reached the time of the second model output time step. This maximum time allowed can be decreased using the nsteps parameter, which divides the time between model outputs into smaller pieces that then act as the maximum time the drifter can travel without the time fields being updated. The importance of this will depend on the grid size, velocity fields, and time between model outputs.\nSeparately, the user can choose how often to sample the drifter tracks. Since each drifter can experience a different number of steps, the N parameter is used to divide up the time between model output according to how often to have plot points for the drifters. N does not affect the drifter paths; it only controls the interval of sampling the drifter tracks. Linear interpolation is used to output drifter positions at the same times.",
"# Sets a smaller limit than between model outputs for when to force interpolation if hasn't already occurred.\nnsteps = 5\n# Controls the sampling frequency of the drifter tracks.\nN = 4",
"After initialization, drifters can be stepped forward or backward in time. Running backward in time essentially means that we change the sign of the velocity fields and step backward in the model output files (in which case we set ff=-1). We'll move forward in time (ff=1).",
"# Use ff = 1 for forward in time and ff = -1 for backward in time.\nff = 1",
"Subgrid parameterization parameters\nAn integer flag is used to control whether or not to use subgrid parameterization in the particle tracking, and if so, which kind.\nOptions are:\n\ndoturb=0 uses no sub grid parameterization and thus the drifters are passively advected according strictly to the output velocity fields\ndoturb=1 adds to the current velocity fluxes parameterized turbulent velocity fluxes of the order of the current velocity fluxes\ndoturb=2 adds to the calculated new drifter location a slightly displaced drifter location that is randomly placed based on a circle around the drifter location\ndoturb=3 adds to the calculated new drifter location a slightly displaced drifter location that is randomly placed based on an ellipse of the bathymetry around the drifter location\n\nThe horizontal and vertical diffusivities are set by the user. These values may or may not be used in the experiment depending on whether a subgrid parameterization is used, and, if so, which is used. The horizontal diffusivity value is used by all of the horizontal subgrid parameterizations. The vertical diffusivity is not used in the two-dimensional case. Since this experiment is not using either diffusivity values, they will be set to zero to avoid confusion.\nAppropriate values to use for this are currently being investigated using sensitivity studies on the Texas-Louisiana shelf. Some values have been used and compared in studies, and values can be calculated from physical drifters for a specific domain. This is on-going work! In a sensitivity study, a smaller value, like ah=5, leads to somewhat diffused results that are still very close to the non-diffusive case. A larger value of ah=20 led to more diffused results that were still quite similar to the non-diffusive case.",
"ah = 0. # m^2/s\nav = 0. # m^2/s\n\n# turbulence/diffusion flag\ndoturb = 0",
"File saving\nThe input name will be used for saving the particle tracks into a netCDF file and for the figures.",
"# simulation name, used for saving results into netcdf file\nname = 'temp'",
"Vertical\nThere are a number of options for the initial vertical placement of the drifters. The behavior is controlled by the combination of z0 and zpar, and do3d must be set accordingly as well.\nThe do3d flag controls whether or not drifters are allowed to move vertically or not:\n\ndo3d=0 for two-dimensional particle tracking\ndo3d=1 for three-dimensional particle tracking\n\nFor 3D tracking, set do3d=1 and z0 should be an array of initial drifter depths. The array should be the same size as lon0 and negative for under water. Currently, drifter depths need to be above the seabed for every (x, y) particle location for the script to run.\nTo do 3D but start at surface, use z0 = zeros(lon0.shape) and have either zpar='fromMSL' so that z0 starting depths represent that depth below the base, time-independent sea level (or mean sea level) or choose zpar='fromZeta' to have z0 starting depths represent that depth below the time-dependent sea surface. Currently only the zpar='fromZeta' case is coded up.\nFor 2D drifter movement, set do3d=0. Then there are the following options:\n\nset z0 to 's' for 2D along a terrain-following slice and zpar to be the index of s level you want to use (0 to km-1)\nset z0 to 'rho' for 2D along a density surface and zpar to be the density value you want to use. Can do the same thing with salinity ('salt') or temperature ('temp'). The model output doesn't currently have density.\nset z0 to 'z' for 2D along a depth slice and zpar to be the constant (negative) depth value you want to use\nTo simulate drifters at the surface, set z0 to 's' and zpar = grid['km']-1 (whatever that value is) to put them in the upper s level. This is probably the most common option.",
"# for 3d flag, do3d=0 makes the run 2d and do3d=1 makes the run 3d\ndo3d = 0\n\n## Choose method for vertical placement of drifters\nz0 = 's' # I know the size from checking #'s after eliminating those outside domain ' #'z' #'salt' #'s' \nnum_layers = 30\nzpar = num_layers-1 # 29 #-10 #grid['km']-1 # 30 #grid['km']-1\n\n# #### 3D Sample Options ####\n# # for 3d flag, do3d=0 makes the run 2d and do3d=1 makes the run 3d\n# do3d = 1\n\n# ## Choose method for vertical placement of drifters\n# z0 = np.zeros(676) # I know the size from checking #'s after eliminating those outside domain ' #'z' #'salt' #'s' \n# num_layers = 30\n# zpar = 'fromZeta' #num_layers-1 # 29 #-10 #grid['km']-1 # 30 #grid['km']-1\n# ####",
"Initialize a projection\nIn newer versions of TracPy (after 0.01), projection information has been separated from the grid information. So, you first set up a project, then use this to set up your grid. The function call is:\n\ntracpy.tools.make_proj(setup='nwgom', usebasemap=True, **kwargs)\n\nwhere the keyword arguments include lat/lon inputs for the projection bounds, and there are several built in projections for convenience which can be altered:\n\n'nwgom' - for NW Gulf of Mexico, for use with basemap\n'galveston' - for Galveston Bay, for use with pyproj\n'nwgom-pyproj' - for NW Gulf of Mexico, for use without basemap",
"proj = tracpy.tools.make_proj('nwgom-pyproj')",
"Initialize TracPy class",
"# Read in grid\ngrid = tracpy.inout.readgrid(loc, proj, usespherical=True)\n\n# Initialize Tracpy class\ntp = Tracpy(loc, grid, name=name, tseas=tseas, ndays=ndays, nsteps=nsteps,\n N=N, ff=ff, ah=ah, av=av, doturb=doturb, do3d=do3d, z0=z0, zpar=zpar, time_units=time_units)",
"Drifter initialization\nHorizontal\nDrifters are seeded by the latitude and longitude. A simple way to do this is to set up a mesh of points within a lat/lon box. In this case, we are looking at drifters starting throughout the TX-LA shelf domain. For the linspace function, we can play around with the number of points to control approximately how far apart the drifters begin. For this example, the number of points are about 20 km apart. \nAfter initializing these points, we can run them through a check script to eliminate points outside the domain (without this step, points outside the numerical domain will cause an error).",
"# Input starting locations as real space lon,lat locations\nlon0, lat0 = np.meshgrid(np.linspace(-98.5,-87.5,55), \\\n np.linspace(22.5,31,49)) # whole domain, 20 km\n\n# Eliminate points that are outside domain or in masked areas\nlon0, lat0 = tracpy.tools.check_points(lon0, lat0, tp.grid)",
"Run the numerical experiment",
"# Note in timing that the grid was already read in\nlonp, latp, zp, t, T0, U, V = tracpy.run.run(tp, date, lon0, lat0)",
"Plotting the results\nPlots generated below by the user can be compared with those available in tracpy/docs/figures.\nPlot tracks",
"fig = plt.figure(figsize=(9.4, 7.7), dpi=100)\nfig, ax = tracpy.plotting.background(grid, fig=fig, extent=[-98, -87.5, 22.8, 30.5],\n col='lightgrey', halpha=1, outline=[1, 1, 0, 1], res='50m')\ntracpy.plotting.tracks(lonp, latp, tp.name, grid, fig=fig, ax=ax)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
synthicity/activitysim
|
activitysim/examples/example_estimation/notebooks/17_tour_mode_choice.ipynb
|
agpl-3.0
|
[
"Estimating Tour Mode Choice\nThis notebook illustrates how to re-estimate tour and subtour mode choice for ActivitySim. This process \nincludes running ActivitySim in estimation mode to read household travel survey files and write out\nthe estimation data bundles used in this notebook. To review how to do so, please visit the other\nnotebooks in this directory.\nLoad libraries",
"import os\nimport larch # !conda install larch -c conda-forge # for estimation\nimport pandas as pd",
"We'll work in our test directory, where ActivitySim has saved the estimation data bundles.",
"os.chdir('test')",
"Load data and prep model for estimation",
"modelname = \"tour_mode_choice\"\n\nfrom activitysim.estimation.larch import component_model\nmodel, data = component_model(modelname, return_data=True)",
"The tour mode choice model is already a ModelGroup segmented on different purposes,\nso we can add the subtour mode choice as just another member model of the group",
"model2, data2 = component_model(\"atwork_subtour_mode_choice\", return_data=True)\n\nmodel.extend(model2)",
"Review data loaded from the EDB\nThe next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.\nCoefficients",
"data.coefficients",
"Utility specification",
"data.spec",
"Chooser data",
"data.chooser_data",
"Estimate\nWith the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.",
"model.load_data()\nmodel.doctor(repair_ch_av=\"-\")\n\nresult = model.maximize_loglike(method=\"SLSQP\", options={\"maxiter\": 1000})\n\nmodel.calculate_parameter_covariance()",
"Estimated coefficients",
"model.parameter_summary()",
"Output Estimation Results",
"from activitysim.estimation.larch import update_coefficients\nresult_dir = data.edb_directory/\"estimated\"\nupdate_coefficients(\n model, data, result_dir,\n output_file=f\"{modelname}_coefficients_revised.csv\",\n);",
"Write the model estimation report, including coefficient t-statistic and log likelihood",
"model.to_xlsx(\n result_dir/f\"{modelname}_model_estimation.xlsx\", \n data_statistics=False,\n)",
"Next Steps\nThe final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.",
"pd.read_csv(result_dir/f\"{modelname}_coefficients_revised.csv\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/graphics
|
tensorflow_graphics/notebooks/matting.ipynb
|
apache-2.0
|
[
"Copyright 2019 Google LLC.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Closed Form Matting Energy\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/matting.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/matting.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nMatting is an important task in image editing where a novel background is combined with a given foreground to produce a new composite image. To achieve a plausible result, the foreground needs to be carefully extracted from a given image, i.e. preserving all the thin structures, before being inpainted over the new background. In image matting, the input image $I$ is assumed to be a linear combination of a foreground image $F$ and a background image $B$. For a pixel $j$ of $I$, the color of the pixel can therefore be expressed as $I_j = \\alpha_j F_j +(1-\\alpha_j)B_j$,\nwhere $\\alpha_j$ is the foreground opacity for the pixel $j$. The opacity image made of all the $\\alpha_j$ pixels is called a matte.\n<div align=\"center\">\n<img src=\"https://github.com/frcs/alternative-matting-laplacian/raw/master/GT04.png\" width=\"283\" height=\"200\" />\n<img src=\"https://github.com/frcs/alternative-matting-laplacian/raw/master/alpha0-GT04.png\" width=\"283\" height=\"200\" />\n</div>\n\nUsing a trimap (white for foreground, black for background, and gray for unknown pixels)\n<div align=\"center\">\n<img src=\"https://github.com/frcs/alternative-matting-laplacian/raw/master/trimap-GT04.png\" width=\"283\" height=\"200\" />\n</div>\n\nor a set of scribbles (user strokes), an optimization problem can be formulated to retrieve the unknown pixel opacities. This colab demonstrates how to use the image matting loss implemented in TensorFlow Graphics to precisely segment out objects from images and have the ability to paste them on top of new backgrounds. This matting loss is derived from the paper titled \"A Closed Form Solution to Natural Image Matting\" from Levin et al. The loss was \"tensorized\" inspired by \"Deep-Energy: Unsupervised Training of Deep Neural Networks\" from Golts et al.\nSetup & Imports\nIf TensorFlow Graphics is not installed on your system, the following cell can install the TensorFlow Graphics package for you.",
"!pip install tensorflow_graphics",
"Now that TensorFlow Graphics is installed, let's import everything needed to run the demos contained in this notebook.",
"from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nfrom tensorflow_graphics.image import matting\nfrom tqdm import tqdm\n",
"Import the image and trimap\nDownload the image and trimap from alphamatting.com.",
"# Download dataset from alphamatting.com\n!rm -rf input_training_lowres\n!rm -rf trimap_training_lowres\n!rm -rf gt_training_lowres\n\n!wget -q http://www.alphamatting.com/datasets/zip/input_training_lowres.zip\n!wget -q http://www.alphamatting.com/datasets/zip/trimap_training_lowres.zip\n!wget -q http://www.alphamatting.com/datasets/zip/gt_training_lowres.zip\n\n!unzip -q input_training_lowres.zip -d input_training_lowres\n!unzip -q trimap_training_lowres.zip -d trimap_training_lowres\n!unzip -q gt_training_lowres.zip -d gt_training_lowres\n\n# Read and decode images\nsource = tf.io.read_file('input_training_lowres/GT07.png')\nsource = tf.cast(tf.io.decode_png(source), tf.float64) / 255.0\nsource = tf.expand_dims(source, axis=0)\ntrimap = tf.io.read_file('trimap_training_lowres/Trimap1/GT07.png')\ntrimap = tf.cast(tf.io.decode_png(trimap), tf.float64) / 255.0\ntrimap = tf.reduce_mean(trimap, axis=-1, keepdims=True)\ntrimap = tf.expand_dims(trimap, axis=0)\ngt_matte = tf.io.read_file('gt_training_lowres/GT07.png')\ngt_matte = tf.cast(tf.io.decode_png(gt_matte), tf.float64) / 255.0\ngt_matte = tf.reduce_mean(gt_matte, axis=-1, keepdims=True)\ngt_matte = tf.expand_dims(gt_matte, axis=0)\n\n# Resize images to improve performance\nsource = tf.image.resize(\n source,\n tf.shape(source)[1:3] // 2,\n method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)\ntrimap = tf.image.resize(\n trimap,\n tf.shape(trimap)[1:3] // 2,\n method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)\ngt_matte = tf.image.resize(\n gt_matte,\n tf.shape(gt_matte)[1:3] // 2,\n method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)\n\n# Show images\nfigure = plt.figure(figsize=(22, 18))\naxes = figure.add_subplot(1, 3, 1)\naxes.grid(False)\naxes.set_title('Input image', fontsize=14)\n_= plt.imshow(source[0, ...].numpy())\naxes = figure.add_subplot(1, 3, 2)\naxes.grid(False)\naxes.set_title('Input trimap', fontsize=14)\n_= plt.imshow(trimap[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1)\naxes = figure.add_subplot(1, 3, 3)\naxes.grid(False)\naxes.set_title('GT matte', fontsize=14)\n_= plt.imshow(gt_matte[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1)",
"Extract the foreground and background constraints from the trimap image",
"# Extract the foreground and background constraints from the trimap image\nforeground = tf.cast(tf.equal(trimap, 1.0), tf.float64)\nbackground = tf.cast(tf.equal(trimap, 0.0), tf.float64)\n\n# Show foreground and background constraints\nfigure = plt.figure(figsize=(22, 18))\naxes = figure.add_subplot(1, 2, 1)\naxes.grid(False)\naxes.set_title('Foreground constraints', fontsize=14)\n_= plt.imshow(foreground[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1)\naxes = figure.add_subplot(1, 2, 2)\naxes.grid(False)\naxes.set_title('Background constraints', fontsize=14)\n_= plt.imshow(background[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1)",
"Setup & run the optimization\nSetup the matting loss function using TensorFlow Graphics and run the Adam optimizer for 400 iterations.",
"# Initialize the matte with random values\nmatte_shape = tf.concat((tf.shape(source)[:-1], (1,)), axis=-1)\nmatte = tf.Variable(\n tf.random.uniform(\n shape=matte_shape, minval=0.0, maxval=1.0, dtype=tf.float64))\n# Create the closed form matting Laplacian\nlaplacian, _ = matting.build_matrices(source)\n\n# Function computing the loss and applying the gradient\n@tf.function\ndef optimize(optimizer):\n with tf.GradientTape() as tape:\n tape.watch(matte)\n # Compute a loss enforcing the trimap constraints\n constraints = tf.reduce_mean((foreground + background) *\n tf.math.squared_difference(matte, foreground))\n # Compute the matting loss\n smoothness = matting.loss(matte, laplacian)\n # Sum up the constraint and matting losses\n total_loss = 100 * constraints + smoothness\n # Compute and apply the gradient to the matte\n gradient = tape.gradient(total_loss, [matte])\n optimizer.apply_gradients(zip(gradient, (matte,)))\n\n# Run the Adam optimizer for 400 iterations\noptimizer = tf.optimizers.Adam(learning_rate=1.0)\nnb_iterations = 400\nfor it in tqdm(range(nb_iterations)):\n optimize(optimizer)\n\n# Clip the matte value between 0 and 1\nmatte = tf.clip_by_value(matte, 0.0, 1.0)\n\n# Display the results\nfigure = plt.figure(figsize=(22, 18))\naxes = figure.add_subplot(1, 3, 1)\naxes.grid(False)\naxes.set_title('Input image', fontsize=14)\nplt.imshow(source[0, ...].numpy())\naxes = figure.add_subplot(1, 3, 2)\naxes.grid(False)\naxes.set_title('Input trimap', fontsize=14)\n_= plt.imshow(trimap[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1)\naxes = figure.add_subplot(1, 3, 3)\naxes.grid(False)\naxes.set_title('Matte', fontsize=14)\n_= plt.imshow(matte[0, ..., 0].numpy(), cmap='gray', vmin=0, vmax=1)",
"Compositing\nLet's now composite our extracted object on top of a new background!",
"!wget -q https://p2.piqsels.com/preview/861/934/460/concrete-texture-background-backdrop.jpg\nbackground = tf.io.read_file('concrete-texture-background-backdrop.jpg')\nbackground = tf.cast(tf.io.decode_jpeg(background), tf.float64) / 255.0\nbackground = tf.expand_dims(background, axis=0)\n\n# Resize images to improve performance\nbackground = tf.image.resize(\n background,\n tf.shape(source)[1:3],\n method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)\n\n# Inpaint the foreground over a new background\ninpainted_black = matte * source\ninpainted_concrete = matte * source + (1.0 - matte) * background\n\n# Display the results\nfigure = plt.figure(figsize=(22, 18))\naxes = figure.add_subplot(1, 2, 1)\naxes.grid(False)\naxes.set_title('Inpainted black', fontsize=14)\n_= plt.imshow(inpainted_black[0, ...].numpy())\naxes = figure.add_subplot(1, 2, 2)\naxes.grid(False)\naxes.set_title('Inpainted concrete', fontsize=14)\n_= plt.imshow(inpainted_concrete[0, ...].numpy())",
"Note that the inpainting is approximate as we did not recover the real foreground $F_j = \\frac{I_j - (1-\\alpha_j)B_j}{\\alpha_j } $, which also necessitates an estimation of the background color."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
saashimi/code_guild
|
wk9/notebooks/ch.1-getting-started-with-django.ipynb
|
mit
|
[
"Wk9.0\nCh. 1 Getting Django Set Up Using a Functional Test\nObey the testing goat! Do nothing until you have a test\nBefore even installing anything, we'll write a test.\nWriting our first test",
"# Make a directory called examples\n#!mkdir ../examples\n%cd ../examples\n!ls\n",
"Write functional test",
"#%%writefile functional_tests.py\n\nfrom selenium import webdriver\n\nbrowser = webdriver.Firefox()\nbrowser.get('http://localhost:8000')\n\nassert 'Django' in browser.title",
"Installing django and selenium",
"# Create a virtual env to load with selenium and django\n#!conda create -yn django_class django python=3 # y flag automatically selects yes to install\n!source activate django_class # activate virtual environment\n!pip install --upgrade selenium # install selenium.",
"Checking that our test correctly fails",
"# Try running our tests. We're expecting an assertion error here.\n%run functional_tests.py",
"Fixing our failure",
"!ls\n\n# Use django to create a project called 'superlists'\ndjango-admin.py startproject superlists\n\n!tree ../examples/",
"Let's fire up our new project on a django server",
"!cd superlists/ && python3 manage.py runserver",
"Do our tests pass now?",
"%run functional_tests.py ",
"Now that our test passed, let's turn this into a git repo.",
"# First, move our tests into the main project dir.\n#!mv functional_tests.py superlists/\n# \n# Change directories and initialize our superlist into a new git repo\n#%cd superlists/\n#!git init .\n\n%ls\n\n# Don't add the database to git.\n#! echo \"db.sqlite3\" >> .gitignore # >> means concatenate to end of file.\n\n# Don't add .pyc files\n#!echo \"*.pyc\" >> .gitignore\n\n# Add everything else.\n#!git add .\n#!git status\n#!git commit -m \"Initial commit\""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |