paulvangentcom/heartrate_analysis_python
examples/3_smartwatch_data/Analysing_Smartwatch_Data.ipynb
mit
[ "Analysing Smartwatch Data\nThis notebook gives an overview of how to use HeartPy in the analysis of raw PPG data taken from a commercial (Samsung) smartwatch device.\nA signal measured this way contains a lot more noise when compared to a typical PPG sensor on the fingertip or earlobe, where perfusion is much easier to measure than on the wrist.\nAnalysing such a signal requires some additional steps as described in this notebook.\nFirst let's load up the dependencies and the data file", "import numpy as np\n\nimport heartpy as hp\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ndf = pd.read_csv('raw_ppg.csv')\n\ndf.keys()", "Exploring data file\nLet's explore the data file to get an idea of what we're working with.", "plt.figure(figsize=(12,6))\n\nplt.plot(df['ppg'].values)\nplt.show()", "Ok..\nThere seems to be intermittent sections of PPG dotted between non-signals (periods where the sensor was not recording). \nFor now let's slice the first signal section and see what's up. Later on I'll show you how to exclude non-signal sections automatically.", "signal = df['ppg'].values[14500:20500]\ntimer = df['timer'].values[14500:20500]\nplt.plot(signal)\nplt.show()", "Now we need to know the sampling rate\nThe sampling rate is the one measure to rule them all. It is used to compute all others.\nHeartPy has several ways of getting the sample rate from timer columns. Let's look at the format of the timer column to see what we're working with.", "timer[0:20]", "So, the format seems to be 'hours:minutes:seconds.miliseconds'\nHeartPy comes with a datetime function that can work with date- and time-strings called get_samplerate_datetime. Check the help to see how it works:", "help(hp.get_samplerate_datetime)\n\n#Seems easy enough, right? 
Now let's determine the sample rate\n\nsample_rate = hp.get_samplerate_datetime(timer, timeformat = '%H:%M:%S.%f')\n\nprint('sampling rate is: %.3f Hz' %sample_rate)", "That's pretty low.\nThe sample rate is quite low, but to conserve power this is what many smartwatches work with. For determining the BPM this is just fine, but any heart rate variability (HRV) measures are likely not going to be super accurate. Depending on your needs it may still be fine, though.\nA second consideration with sampling rate is whether it's stable or not. Many devices, including smartwatches, do many things at once. They run an OS that has other tasks besides measuring heart rate, so when measuring at 10Hz, the OS might not be ready exactly every 100ms to take a measurement. As such, the sampling rate might vary. Let's visualise this.", "from datetime import datetime\n\n#let's create a list 'newtimer' to house our datetime objects\nnewtimer = [datetime.strptime(x, '%H:%M:%S.%f') for x in timer]\n\n#let's compute the real distances from entry to entry\n#note: use total_seconds() here; the .microseconds attribute ignores whole seconds\nelapsed = []\nfor i in range(len(newtimer) - 1):\n elapsed.append(1 / (newtimer[i+1] - newtimer[i]).total_seconds())\n\n#and plot the results\nplt.figure(figsize=(12,4))\nplt.plot(elapsed)\nplt.xlabel('Sample number')\nplt.ylabel('Actual sampling rate in Hz')\nplt.show()\n\nprint('mean sampling rate: %.3f' %np.mean(elapsed))\nprint('median sampling rate: %.3f'%np.median(elapsed))\nprint('standard deviation: %.3f'%np.std(elapsed))", "That's actually not bad!\nThe mean sampling rate is close to 10Hz and shows low variance. Sporadic peaks to 12Hz or dips to 9Hz indicate timer inaccuracies, but they are infrequent.\nFor our current purposes this is just fine.\nYou could of course interpolate and resample the signal so that it has an exact sampling rate, but the effects on computed measures are likely minimal. 
For now let's just continue on.", "#Let's plot 4 minutes of the segment we selected to get a view \n#of what we're working with\nplt.figure(figsize=(12,6))\nplt.plot(signal[0:int(240 * sample_rate)])\nplt.title('original signal')\nplt.show()", "The first thing to note is that amplitude varies dramatically. Let's run it through a bandpass filter and take out all frequencies that definitely are not heart rate.\nWe'll take out frequencies below 0.7Hz (42 BPM) and above 3.5 Hz (210 BPM).", "#Let's run it through a standard Butterworth bandpass implementation to remove everything < 0.7 and > 3.5 Hz.\nfiltered = hp.filter_signal(signal, [0.7, 3.5], sample_rate=sample_rate, \n order=3, filtertype='bandpass')\n\n#let's plot first 240 seconds and work with that!\nplt.figure(figsize=(12,12))\nplt.subplot(211)\nplt.plot(signal[0:int(240 * sample_rate)])\nplt.title('original signal')\nplt.subplot(212)\nplt.plot(filtered[0:int(240 * sample_rate)])\nplt.title('filtered signal')\nplt.show()\n\nplt.figure(figsize=(12,6))\nplt.plot(filtered[0:int(sample_rate * 60)])\nplt.title('60 second segment of filtered signal')\nplt.show()", "Still low quality, but at least the heart rate is quite visible now!", "#let's resample to ~100Hz as well\n#10Hz is low for the adaptive threshold analysis HeartPy uses\nfrom scipy.signal import resample\n\nresampled = resample(filtered, len(filtered) * 10)\n\n#don't forget to compute the new sampling rate\nnew_sample_rate = sample_rate * 10\n\n#run HeartPy over a few segments, fingers crossed, and plot results of each\nfor s in [[0, 10000], [10000, 20000], [20000, 30000], [30000, 40000], [40000, 50000]]:\n wd, m = hp.process(resampled[s[0]:s[1]], sample_rate = new_sample_rate, \n high_precision=True, clean_rr=True)\n hp.plotter(wd, m, title = 'zoomed in section', figsize=(12,6))\n hp.plot_poincare(wd, m)\n plt.show()\n for measure in m.keys():\n print('%s: %f' %(measure, m[measure]))", "That seems a reasonable result. 
By far the most peaks are marked correctly, and most peaks in noisy sections (low confidence) are simply rejected.\nclean_rr uses quotient filtering by default, which is a bit aggressive.\nYou can set 'iqr' or 'z-score' with the clean_rr_method flag.\nFinally, let's look at a way to extract signal sections and exclude non-signal sections automatically.", "raw = df['ppg'].values\n\nplt.plot(raw)\nplt.show()\n\nimport sys\nfrom scipy.signal import resample\n\nwindowsize = 100\nstd = []\n\nfor i in range(len(raw) // windowsize):\n start = i * windowsize\n end = (i + 1) * windowsize\n sliced = raw[start:end]\n try:\n std.append(np.std(sliced))\n except:\n print(i)\n \nplt.plot(std)\nplt.show()\n\nplt.plot(raw)\nplt.show()\n\nplt.plot(raw[0:(len(raw) // windowsize) * windowsize] - resample(std, len(std)*windowsize))\nplt.show()", "Hmmm, not much luck yet, but an idea:", "(len(raw) // windowsize) * windowsize\n\nmx = np.max(raw)\nmn = np.min(raw)\nglobal_range = mx - mn\n\nwindowsize = 100\nfiltered = []\n\nfor i in range(len(raw) // windowsize):\n start = i * windowsize\n end = (i + 1) * windowsize\n sliced = raw[start:end]\n rng = np.max(sliced) - np.min(sliced)\n \n if ((rng >= (0.5 * global_range)) \n or \n (np.max(sliced) >= 0.9 * mx) \n or \n (np.min(sliced) <= mn + (0.1 * mn))):\n \n for x in sliced:\n filtered.append(0)\n else:\n for x in sliced:\n filtered.append(x)\n \nplt.figure(figsize=(12,6))\nplt.plot(raw)\nplt.show()\n\nplt.figure(figsize=(12,6))\nplt.plot(filtered)\nplt.show()", "That works! A quick and dirty automatic extraction of signal sections\nFor this we use a window function and for each window test whether it:\n\nHas a range that is at least 50% of the range of the raw signal\nOR\nHas a maximum that is at least 90% of the raw signal's maximum\nOR\nHas a minimum within 10% of the raw signal's minimum\n\nThis works well enough" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
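The quick-and-dirty window rejection at the end of the notebook above can be packaged as a standalone function. This is a minimal sketch using only NumPy and a synthetic test signal; the function name `extract_signal_windows` and the synthetic data are illustrative (not part of HeartPy), and it keeps only the 50%-range test from the notebook, dropping the two global max/min checks:

```python
import numpy as np

def extract_signal_windows(raw, windowsize=100, range_frac=0.5):
    """Zero out windows whose amplitude range suggests non-signal artefacts.

    A window is rejected when its min-to-max range is at least `range_frac`
    of the global range of `raw`; kept windows are copied through unchanged.
    """
    raw = np.asarray(raw, dtype=float)
    global_range = np.max(raw) - np.min(raw)
    filtered = []
    for i in range(len(raw) // windowsize):
        sliced = raw[i * windowsize:(i + 1) * windowsize]
        rng = np.max(sliced) - np.min(sliced)
        if rng >= range_frac * global_range:
            filtered.extend([0.0] * windowsize)  # reject: likely artefact
        else:
            filtered.extend(sliced)              # keep: plausible signal
    return np.array(filtered)

# synthetic example: a small oscillation with one large artefact spike
t = np.linspace(0, 10, 1000)
sig = np.sin(2 * np.pi * 1.2 * t)  # "heart rate"-like component
sig[440:460] += 50                 # artefact inside the fifth window
clean = extract_signal_windows(sig, windowsize=100)
```

After this, `clean` has the artefact window (samples 400–499) zeroed out while the clean windows pass through untouched; on real data you would tune `windowsize` and `range_frac` to the recording.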
cdawei/flickr-photo
src/poi_wikipedia.ipynb
gpl-2.0
[ "Extract Landmarks Data in Melbourne from Wikipedia Interactively\n<a id=toc>\nExtract landmarks data:\n- category, \n- name, \n- (latitude, longitude)\nfrom Wikipedia page landmarks in Melbourne in an interactive way.", "%matplotlib inline\n\nimport requests, re, os\nfrom bs4 import BeautifulSoup\nfrom bs4.element import Tag\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport lxml\nfrom fastkml import kml, styles\nfrom shapely.geometry import Point", "URL for the landmarks in the Melbourne city centre.", "#url = 'https://en.wikipedia.org/wiki/Template:Melbourne_landmarks'\nurl = 'https://en.wikipedia.org/wiki/Template:Melbourne_landmarks?action=render' # cleaner HTML\ndata_dir = '../data'\nfpoi = os.path.join(data_dir, 'poi-Melb-0.csv')\n\nresponse = requests.get(url, timeout=10)\n\nhtml = response.text\nsoup = BeautifulSoup(html, 'html.parser')\n\n#print(soup.prettify())", "Extract POI coordinates from its Wikipedia page\nNOTE that there could be more than one coordinate pairs exists in a page, e.g. Yarra River.", "def extract_coord(url):\n \"\"\"\n Assume a URL of a location with a Wikipedia page\n \"\"\"\n url1 = url + '?action=render' # cleaner HTML\n response = requests.get(url1, timeout=10)\n html = response.text\n soup = BeautifulSoup(html, 'html.parser')\n coords = list(soup.find_all('span', {'class':'geo-dec'}))\n if coords is None or len(coords) == 0:\n print('No Geo-coordinates found')\n return\n \n idx = 0\n if len(coords) > 1:\n if len(coords) == 2 and coords[0].string == coords[1].string:\n idx = 0\n else:\n print('WARN: more than one geo-coordinates detected!')\n print('please check the actual page', url)\n for i, c in enumerate(coords): \n print('%d: %s' % (i, c.string))\n ii = input('Input the index of the correct coordinates... 
')\n idx = int(ii)\n assert(0 <= idx < len(coords))\n \n coord = coords[idx]\n children = list(coord.children)\n assert(len(children) > 0)\n coordstr = children[0]\n #print(coordstr)\n \n ss = re.sub(r'\\s+', ',', coordstr).split(',') # replace blank spaces with ','\n assert(len(ss) == 2)\n latstr = ss[0].split('°') # e.g. 37.82167°S\n lonstr = ss[1].split('°') # e.g. 144.96778°E\n \n assert(len(latstr) == 2 and len(lonstr) == 2)\n lat = float(latstr[0]) if latstr[1] == 'N' else -1 * float(latstr[0])\n lon = float(lonstr[0]) if lonstr[1] == 'E' else -1 * float(lonstr[0])\n \n print(lat, lon)\n return (lat, lon, url)\n\nextract_coord('https://en.wikipedia.org/wiki/Yarra_River')", "Extract POI data, e.g. category, name, coordinates, from a HTML string retrieved from Wikipedia.", "def extract_poi(html):\n \"\"\"\n Assume POI category is a string in <th>\n POI name and hyperlink is in <li> contained in an unordered list <ul> \n \"\"\"\n soup = BeautifulSoup(html, 'html.parser')\n th = soup.find('th')\n if th is None:\n print('NO POI category found')\n return\n assert(len(th.contents) > 0)\n cat = th.contents[0]\n print('CAT:', cat)\n \n ul = soup.find('ul')\n if ul is None:\n print('NO POI found')\n return\n \n poi_data = [] # (name, cat, lat, lon, url)\n \n for li in ul.children:\n #print(type(li), li)\n if isinstance(li, Tag):\n addr = ''.join(['https:', li.a['href']])\n children = list(li.a.children)\n assert(len(children) > 0) \n name = children[0]\n print(addr, name)\n ret = extract_coord(addr)\n if ret is not None:\n poi_data.append((name, cat, ret[0], ret[1], ret[2]))\n return poi_data", "Extract POI data from landmarks in Melbourne recorded in this Wikipedia page.", "#columns = ['Name', 'Category', 'Latitude', 'Longitude']\ncolumns = ['poiName', 'poiTheme', 'poiLat', 'poiLon', 'poiURL']\npoi_df = pd.DataFrame(columns=columns)\n\ntable = soup.find('table', {'class':'navbox-inner'}) # this class info was found by looking at the raw HTML text", "Interactively 
check if each portion of HTML contains a category and a list of POIs of that category.", "cnt = 0\nhline = '-'*90\nfor c in table.children:\n print(hline)\n print('NODE %d BEGIN' % cnt)\n print(c)\n print('NODE %d END' % cnt)\n print(hline)\n k = input('Press [Y] or [y] to extract POI, press any other key to ignore ')\n if k == 'Y' or k == 'y':\n print('Extracting POI...')\n poi_data = extract_poi(str(c))\n for t in poi_data: poi_df.loc[poi_df.shape[0]] = [t[i] for i in range(len(t))]\n else:\n print('IGNORED.')\n print('\\n\\n')\n \n cnt += 1", "Latitude/Longitude statistics.", "poi_df.head()\n\nprint('#POIs:', poi_df.shape[0])\n\nprint('Latitude Range:', poi_df['poiLat'].max() - poi_df['poiLat'].min())\npoi_df['poiLat'].describe()\n\nprint('Longitude Range:', poi_df['poiLon'].max() - poi_df['poiLon'].min())\npoi_df['poiLon'].describe()", "Scatter plot.", "plt.figure(figsize=[10, 10])\nplt.scatter(poi_df['poiLat'], poi_df['poiLon'])", "The outlier is the Harbour Town Docklands in category Shopping, with coordinates actually in Queensland; the Harbour Town shopping centre in Docklands, Victoria was sold in 2014, which likely resulted in changes to its wiki page.\nFiltering out the outliers", "lat_range = [-39, -36]\nlon_range = [143, 147]\n\npoi_df = poi_df[poi_df['poiLat'] > min(lat_range)]\npoi_df = poi_df[poi_df['poiLat'] < max(lat_range)]\npoi_df = poi_df[poi_df['poiLon'] > min(lon_range)]\npoi_df = poi_df[poi_df['poiLon'] < max(lon_range)]", "Latitude/Longitude statistics.", "print('#POIs:', 
poi_df.shape[0])\nprint('#URLs:', poi_df['poiURL'].unique().shape[0])\n\nduplicated = poi_df['poiURL'].duplicated()\nduplicated[duplicated == True]\n\nprint(poi_df.loc[15, 'poiURL'])\npoi_df[poi_df['poiURL'] == poi_df.loc[15, 'poiURL']]", "This is a place located at Melbourne CBD, let's choose the second item with category 'Shopping'.", "poi_df.drop(4, axis=0, inplace=True)\n\npoi_df.head()\n\nprint(poi_df.loc[37, 'poiURL'])\npoi_df[poi_df['poiURL'] == poi_df.loc[37, 'poiURL']]", "For a Post Office, Let's choose the second item with category 'Institutions'.", "poi_df.drop(19, axis=0, inplace=True)\n\npoi_df.head(20)", "Check distance between POIs", "def calc_dist_vec(longitudes1, latitudes1, longitudes2, latitudes2):\n \"\"\"Calculate the distance (unit: km) between two places on earth, vectorised\"\"\"\n # convert degrees to radians\n lng1 = np.radians(longitudes1)\n lat1 = np.radians(latitudes1)\n lng2 = np.radians(longitudes2)\n lat2 = np.radians(latitudes2)\n radius = 6371.0088 # mean earth radius, en.wikipedia.org/wiki/Earth_radius#Mean_radius\n\n # The haversine formula, en.wikipedia.org/wiki/Great-circle_distance\n dlng = np.fabs(lng1 - lng2)\n dlat = np.fabs(lat1 - lat2)\n dist = 2 * radius * np.arcsin( np.sqrt( \n (np.sin(0.5*dlat))**2 + np.cos(lat1) * np.cos(lat2) * (np.sin(0.5*dlng))**2 ))\n return dist\n\npoi_dist_df = pd.DataFrame(data=np.zeros((poi_df.shape[0], poi_df.shape[0]), dtype=np.float), \\\n index=poi_df.index, columns=poi_df.index)\nfor ix in poi_df.index:\n dists = calc_dist_vec(poi_df.loc[ix, 'poiLon'], poi_df.loc[ix, 'poiLat'], poi_df['poiLon'], poi_df['poiLat'])\n poi_dist_df.loc[ix] = dists", "POI pairs that are less than 50 metres.", "check_ix = []\nfor i in range(poi_df.index.shape[0]):\n for j in range(i+1, poi_df.index.shape[0]):\n if poi_dist_df.iloc[i, j] < 0.05: # less 50m\n check_ix = check_ix + [poi_df.index[i], poi_df.index[j]]\n print(poi_df.index[i], poi_df.index[j])\n\npoi_df.loc[check_ix]\n\nprint(poi_df.loc[33, 
'poiURL'])\nprint(poi_df.loc[35, 'poiURL'])\nprint(poi_df.loc[76, 'poiURL'])", "According to the above wikipage,\n- \"The Australian Centre for the Moving Image (ACMI) is a ... It is located in Federation Square, in Melbourne\".\n- \"The Ian Potter Centre: NGV Australia houses the Australian part of the art collection of the National Gallery of Victoria (NGV). It is located at Federation Square in Melbourne ...\"\nSo let's just keep the Federation Square.", "poi_df.drop(33, axis=0, inplace=True)\npoi_df.drop(35, axis=0, inplace=True)\n\npoi_df.head(35)", "Save POI data to file", "#poi_ = poi_df[['poiTheme', 'poiLon', 'poiLat']].copy()\npoi_ = poi_df.copy()\npoi_.reset_index(inplace=True)\npoi_.drop('index', axis=1, inplace=True)\npoi_.index.name = 'poiID'\npoi_\n\npoi_.to_csv(fpoi, index=True)\n\n#poi_df.to_csv(fpoi, index=False)", "Visualise POIs on map\nThis is a shared Google map.", "def generate_kml(fname, poi_df):\n k = kml.KML()\n ns = '{http://www.opengis.net/kml/2.2}'\n styid = 'style1'\n # colors in KML: aabbggrr, aa=00 is fully transparent\n sty = styles.Style(id=styid, styles=[styles.LineStyle(color='9f0000ff', width=2)]) # transparent red\n doc = kml.Document(ns, '1', 'POIs', 'POIs visualization', styles=[sty])\n k.append(doc)\n \n # Placemark for POIs\n for ix in poi_df.index:\n name = poi_df.loc[ix, 'poiName']\n cat = poi_df.loc[ix, 'poiTheme']\n lat = poi_df.loc[ix, 'poiLat']\n lon = poi_df.loc[ix, 'poiLon']\n desc = ''.join(['POI Name: ', name, '<br/>Category: ', cat, '<br/>Coordinates: (%f, %f)' % (lat, lon)])\n pm = kml.Placemark(ns, str(ix), name, desc, styleUrl='#' + styid)\n pm.geometry = Point(lon, lat)\n doc.append(pm)\n \n # save to file\n kmlstr = k.to_string(prettyprint=True)\n with open(fname, 'w') as f:\n f.write('<?xml version=\"1.0\" encoding=\"UTF-8\"?>\\n')\n f.write(kmlstr)\n\ngenerate_kml('./poi.kml', poi_df)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
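The notebook above defines `calc_dist_vec` using the haversine formula; a standalone version with a quick sanity check might look like this (the equator check value is a known geodesy fact, not taken from the notebook):

```python
import numpy as np

def calc_dist_vec(longitudes1, latitudes1, longitudes2, latitudes2):
    """Haversine great-circle distance (km), vectorised over NumPy arrays."""
    lng1, lat1 = np.radians(longitudes1), np.radians(latitudes1)
    lng2, lat2 = np.radians(longitudes2), np.radians(latitudes2)
    radius = 6371.0088  # mean earth radius in km
    dlng = np.fabs(lng1 - lng2)
    dlat = np.fabs(lat1 - lat2)
    # haversine formula, en.wikipedia.org/wiki/Great-circle_distance
    return 2 * radius * np.arcsin(np.sqrt(
        (np.sin(0.5 * dlat)) ** 2 +
        np.cos(lat1) * np.cos(lat2) * (np.sin(0.5 * dlng)) ** 2))

# sanity check: one degree of longitude along the equator is ~111.195 km
d = calc_dist_vec(0.0, 0.0, 1.0, 0.0)
```

Because the formula is fully vectorised, passing one scalar and one pandas Series (as the notebook does when building `poi_dist_df`) returns a whole row of pairwise distances in a single call.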
drcgw/bass
Single Wave- Interactive.ipynb
gpl-3.0
[ "Welcome to BASS!\nVersion: Single Wave- Interactive Notebook.\nBASS: Biomedical Analysis Software Suite for event detection and signal processing.\nCopyright (C) 2015 Abigail Dobyns\n\nThis program is free software: you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program. If not, see &lt;http://www.gnu.org/licenses/&gt;\n\nInitialize\nRun the following code block to initialize the program. This notebook and the bass.py file must be in the same folder.", "from bass import *", "Begin User Input\nFor help, check out the wiki: Protocol\nOr the video tutorial: Coming Soon!\nLoad Data File\nUse the following block to change your settings. You must use this block.\nHere is some helpful information about the loading settings:\nFull File Path to Folder containing file:\nDesignate the path to your file to load. It can also be the relative path to the folder where this notebook is stored. This does not include the file itself.\nMac OSX Example: '/Users/MYNAME/Documents/bass'\nMicrosoft Example: 'C:\\\\Users\\MYNAME\\Documents\\bass'\n\nFile name:\nThis is the name of your data file. It should include the file type. This file should NOT have a header and the first column must be time in seconds. Note: This file name will also appear as part of the output file names.\n'rat34_ECG.txt'\n\nFull File Path for data output: Designate the location of the folder where you would like the folder containing your results to go. If the folder does not exist, then it will be created. 
A plots folder, called 'plots' will be created inside this folder for you if it does not already exist. \nMac OSX Example: '/Users/MYNAME/Documents/output'\nMicrosoft Example: 'C:\\\\Users\\MYNAME\\Documents\\output'\n\nLoading a file", "Data, Settings, Results = load_interact()", "Graph Data (Optional)\nUse this block to check any slicing you need to do to cut out problematic data from the head or tail. You can click on any point in the wave to get the (x,y) location of that point. Clipping inside this notebook is not supported at this time.\nGraph Raw Data", "plot_rawdata(Data)", "Power Spectral Density (Optional)\nUse the settings code block to set your frequency bands to calculate area under the curve. This block is not required. band output is always in raw power, even if the graph scale is dB/Hz.\nPower Spectral Density: Signal", "#optional\nSettings['PSD-Signal'] = Series(index = ['ULF', 'VLF', 'LF','HF','dx'])\n\n#Set PSD ranges for power in band\nSettings['PSD-Signal']['ULF'] = 25 #max of the range of the ultra low freq band. range is 0:ulf\nSettings['PSD-Signal']['VLF'] = 75 #max of the range of the very low freq band. range is ulf:vlf\nSettings['PSD-Signal']['LF'] = 150 #max of the range of the low freq band. range is vlf:lf\nSettings['PSD-Signal']['HF'] = 300 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2) where hz is the sampling frequency\nSettings['PSD-Signal']['dx'] = 2 #segmentation for integration of the area under the curve. ", "Use the block below to generate the PSD graph and power in bands results (if selected). 
scale toggles which units to use for the graph:\nraw = s^2/Hz\ndb = dB/Hz = 10*log10(s^2/Hz)\n\nGraph and table are automatically saved in the PSD-Signal subfolder.", "scale = 'raw' #raw or db\nResults = psd_signal(version = 'original', key = 'Mean1', scale = scale, \n Data = Data, Settings = Settings, Results = Results)\nResults['PSD-Signal']", "Spectrogram (Optional)\nUse the block below to get the spectrogram of the signal. The frequency (y-axis) scales automatically to only show 'active' frequencies. This can take some time to run. \nversion = 'original'\nkey = 'Mean1'\n\nAfter transformation is run, you can call version = 'trans'. This graph is not automatically saved.\nSpectrogram", "version = 'original'\nkey = 'Mean1'\nspectogram(version, key, Data, Settings, Results)", "Transforming Data\nMust be done for each new uploaded data file.\nWARNING: If you do not load a settings file OR enter your own settings, the analysis will not run. There are no defaults. This section is not optional.\nTransforming Data\nLoad settings from a file\nMust be a previously outputed BASS settings file, although the name can be changed. Expected format is '.csv'. Enter the full file path and name.\nMac OSX Example: '/Users/MYNAME/Documents/rat34_Settings.csv'\nMicrosoft Example: 'C:\\\\Users\\MYNAME\\Documents\\rat34_Settings.csv'\n\nSee above instructions for how to load your data file.\nWarning!! You must load a settings file or specify your settings below. There are no defaults\nLoading Settings", "Settings = load_settings_interact(Settings)\nSettings_display = display_settings(Settings)\nSettings_display", "Enter your settings for data transformation\nWARNING: If you do not load a settings file OR enter your own settings, the analysis will not run. There are no defaults. This section is not optional.\nEnter the parameters of the functions you would like to use to transform your data. 
\nIf you do not want to use a function, enter 'none'\nFor more help on settings:\nTransformation Settings", "Settings = user_input_trans(Settings)", "Run data transformation\nThis Block Is Not Optional\nTransform", "Data, Settings = transform_wrapper(Data, Settings)\ngraph_ts(Data, Settings, Results)", "Set Baseline for Thresholding\nWARNING If you do not load a settings file OR enter your own settings, the analysis will not run. There are no defaults. This section is not optional.\nBaseline\nChoose either linear or rolling baseline.\nLinear - takes a user-specified time segment as a good representation of baseline. If the superstructure is linear but has a slope, use linear fit in the transformation to still use linear. Linear automatically shifts your data by the amount of your baseline to normalize the baseline to zero.\nRolling - a rolling mean of the data is generated based on a moving window. The user provides the window size in milliseconds. There is no shift in the data with this method.\nStatic - skips baseline generation and allows you to choose an arbitrary y value for threshold. No shift in the data.", "Settings = user_input_base(Settings)", "Run baseline\nGenerate Baseline", "Data, Settings, Results = baseline_wrapper(Data, Settings, Results)\ngraph_ts(Data, Settings, Results)", "Display Settings (Optional)\nOptional block. Run this at any time to check what your settings are. If it does not appear, it has not been set yet.\nDisplay Settings", "Settings_display = display_settings(Settings)\nSettings_display", "Event Detection\nPeaks\nPeaks are local maxima, defined by local minima on either side of them. Click here for more information about this algorithm\nPeak Detection Settings\nRun the following block of code to enter or change peak detection settings. 
If you have loaded settings from a previous file, you do not need to run this block.\nPeak Detection Settings", "Settings = event_peakdet_settings(Data, Settings)", "Run Event Peak Detection\nRun the block of code below to run peak detection. This block will print a summary table of all available peak measurements.\nPeak Detection", "Results = event_peakdet_wrapper(Data, Settings, Results)\nResults['Peaks-Master'].groupby(level=0).describe()", "Plot Events (Optional)\nUse the block below to visualize event detection. Peaks are blue triangles. Valleys are pink triangles.\nVisualize Events", "graph_ts(Data, Settings, Results)", "Bursts\nBursts are the boundaries of events defined by their amplitudes, which are greater than the set threshold.\nEnter Burst Settings\nRun the following block of code to enter or change burst detection settings. If you have loaded settings from a previous file, you do not need to run this block.\nBurst Settings", "Settings = event_burstdet_settings(Data, Settings, Results)", "Run Event Burst Detection\nRun the block of code below to run burst detection. \nThis block will print a summary table of all available burst measurements.\nBurst Detection", "Results = event_burstdet_wrapper(Data, Settings, Results)\nResults['Bursts-Master'].groupby(level=0).describe()", "Plot Events (Optional)\nCall a column of data by its key (column name). The default name for one column of data is 'Mean1'.\nVisualize Bursts", "key = 'Mean1'\ngraph_ts(Data, Settings, Results, key)", "Save all files and settings\nSave Event Tables and Settings", "Save_Results(Data, Settings, Results)", "Event Analysis\nNow that events are detected, you can analyze them using any of the optional blocks below. 
\nMore information about how to use this\nDisplay Tables\nDisplay Summary Results for Peaks", "#grouped summary for peaks\nResults['Peaks-Master'].groupby(level=0).describe()", "Display Summary Results for Bursts", "#grouped summary for bursts\nResults['Bursts-Master'].groupby(level=0).describe()", "Results Plots\nPoincare Plots\nCreate a Poincare Plot of your favorite varible. Choose an event type (Peaks or Bursts), measurement type. Calling meas = 'All' is supported.\nPlots and tables are saved automatically\nExample:\nevent_type = 'Bursts'\nmeas = 'Burst Duration'\n\nMore on Poincare Plots\nBatch Poincare\nBatch Poincare", "#Batch\nevent_type = 'Peaks'\nmeas = 'all'\nResults = poincare_batch(event_type, meas, Data, Settings, Results)\npd.concat({'SD1':Results['Poincare SD1'],'SD2':Results['Poincare SD2']})", "Quick Poincare Plot\nQuickly call one poincare plot for display. Plot and Table are not saved automatically. Choose an event type (Peaks or Bursts), measurement type, and key. Calling meas = 'All' is not supported.\nQuick Poincare", "#quick\nevent_type = 'Bursts'\nmeas = 'Burst Duration'\nkey = 'Mean1'\npoincare_plot(Results[event_type][key][meas])", "Line Plots\nCreate line plots of the raw data as well as the data analysis. \nPlots are saved by clicking the save button in the pop-up window with your graph.\nkey = 'Mean1'\nstart =100 \nend= 101\n\nResults Line Plot", "key = 'Mean1'\nstart =100 #start time in seconds\nend= 101 #end time in seconds\nresults_timeseries_plot(key, start, end, Data, Settings, Results)", "Autocorrelation Plot\nDisplay the Autocorrelation plot of your transformed data.\nChoose the start and end time in seconds. 
May be slow\nkey = 'Mean1'\nstart = 0 \nend = 10\n\nAutocorrelation Plot", "#autocorrelation\nkey = 'Mean1'\nstart = 0 #seconds, where you want the slice to begin\nend = 10 #seconds, where you want the slice to end.\nautocorrelation_plot(Data['trans'][key][start:end])\nplt.show()", "Frequency Plot\nUse this block to plot changes of any measurement over time. Does not support 'all'. Example:\nevent_type = 'Peaks'\nmeas = 'Intervals'\nkey = 'Mean1'\n\nFrequency Plot", "event_type = 'Peaks'\nmeas = 'Intervals'\nkey = 'Mean1' #'Mean1' default for single wave\nfrequency_plot(event_type, meas, key, Data, Settings, Results)", "Power Spectral Density\nThe following blocks allow you to assess the power of event measurements in the frequency domain. While you can call this block on any event measurement, it is intended to be used on interval data (or at least data with units in seconds). Recommended:\nevent_type = 'Bursts'\nmeas = 'Total Cycle Time'\nkey = 'Mean1'\nscale = 'raw'\n\nevent_type = 'Peaks'\nmeas = 'Intervals'\nkey = 'Mean1'\nscale = 'raw'\n\nBecause this data is in the frequency domain, we must interpolate it in order to perform a FFT on it. Does not support 'all'.\nPower Spectral Density: Events\nSettings\nUse the code block below to specify your settings for event measurement PSD.", "Settings['PSD-Event'] = Series(index = ['hz','ULF', 'VLF', 'LF','HF','dx'])\n#Set PSD ranges for power in band\n\nSettings['PSD-Event']['hz'] = 4.0 #frequency that the interpolation and PSD are performed with.\nSettings['PSD-Event']['ULF'] = 0.03 #max of the range of the ultra low freq band. range is 0:ulf\nSettings['PSD-Event']['VLF'] = 0.05 #max of the range of the very low freq band. range is ulf:vlf\nSettings['PSD-Event']['LF'] = 0.15 #max of the range of the low freq band. range is vlf:lf\nSettings['PSD-Event']['HF'] = 0.4 #max of the range of the high freq band. range is lf:hf. 
hf can be no more than (hz/2)\nSettings['PSD-Event']['dx'] = 10 #segmentation for the area under the curve. ", "Event PSD\nUse the block below to return the PSD plot, as well as the power in the bands defined by the settings above.", "event_type = 'Bursts'\nmeas = 'Total Cycle Time'\nkey = 'Mean1'\nscale = 'raw'\nResults = psd_event(event_type, meas, key, scale, Data, Settings, Results)\nResults['PSD-Event'][key]", "Analyze Events by Measurement\nGenerates a line plot with error bars for a given event measurement. The x axis is the names of each time series. Display only. Intended for more than one column of data. This is not a box and whiskers plot.\nevent_type = 'peaks'\nmeas = 'Peaks Amplitude'\n\nAnalyze Events by Measurement", "#Get average plots, display only\nevent_type = 'peaks'\nmeas = 'Peaks Amplitude'\naverage_measurement_plot(event_type, meas,Results)", "Moving/Sliding Averages, Standard Deviation, and Count\nGenerates the moving mean, standard deviation, and count for a given measurement across all columns of the Data in the form of a DataFrame (displayed as a table).\nSaves out the dataframes of these three results automatically with the window size in the name as a .csv.\nIf meas == 'All', then the function will loop and produce these tables for all measurements.\nevent_type = 'Peaks'\nmeas = 'all'\nwindow = 30\n\nMoving Stats", "#Moving Stats\nevent_type = 'Peaks'\nmeas = 'all'\nwindow = 30 #seconds\nResults = moving_statistics(event_type, meas, window, Data, Settings, Results)", "Histogram Entropy\nCalculates the histogram entropy of a measurement for each column of data. Also saves the histogram of each. If meas is set to 'all', then all available measurements from the event_type chosen will be calculated iteratively. \nIf all of the samples fall in one bin regardless of the bin size, it means we have the most predictable situation and the entropy is 0. 
If we have a uniformly distributed function, the maximum entropy will be 1\nevent_type = 'Bursts'\nmeas = 'all'\n\nHistogram Entropy", "#Histogram Entropy\nevent_type = 'Bursts'\nmeas = 'all'\nResults = histent_wrapper(event_type, meas, Data, Settings, Results)\nResults['Histogram Entropy']", "STOP HERE\nYou can run another file by going back to the Begin User Input section and choosing another file path.\nWhat Should I do now?\nAdvanced user options\nApproximate entropy\nthis only runs if you have pyeeg.py in the same folder as this notebook and bass.py. WARNING: THIS FUNCTION RUNS SLOWLY\nrun the below code to get the approximate entropy of any measurement or raw signal. Returns the entropy of the entire results array (no windowing). I am using the following M and R values:\nM = 2 \nR = 0.2*std(measurement)\n\nthese values can be modified in the source code. alternatively, you can call ap_entropy directly. supports 'all'\nInterpretation: A time series containing many repetitive patterns has a relatively small ApEn; a less predictable process has a higher ApEn.\nApproximate Entropy in BASS\nApproximate Entropy Source\nEvents", "#Approximate Entropy\nevent_type = 'Peaks'\nmeas = 'all'\nResults = ap_entropy_wrapper(event_type, meas, Data, Settings, Results)\nResults['Approximate Entropy']", "Time Series", "#Approximate Entropy on raw signal\n#takes a VERY long time\nfrom pyeeg import ap_entropy\n\nversion = 'original' #original, trans, shift, or rolling\nkey = 'Mean1' #Mean1 default key for one time series\nstart = 0 #seconds, where you want the slice to begin\nend = 1 #seconds, where you want the slice to end. The absolute end is -1\n\nap_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end])))", "Sample Entropy\nthis only runs if you have pyeeg.py in the same folder as this notebook and bass.py. WARNING: THIS FUNCTION RUNS SLOWLY\nrun the below code to get the sample entropy of any measurement. 
Returns the entropy of the entire results array (no windowing). I am using the following M and R values:\nM = 2 \nR = 0.2*std(measurement)\n\nthese values can be modified in the source code. alternatively, you can call samp_entropy directly. \nSupports 'all'\nSample Entropy in BASS\nSample Entropy Source\nEvents", "#Sample Entropy\nevent_type = 'Bursts'\nmeas = 'all'\nResults = samp_entropy_wrapper(event_type, meas, Data, Settings, Results)\nResults['Sample Entropy']", "Time Series", "#on raw signal\n#takes a VERY long time\nfrom pyeeg import samp_entropy\n\nversion = 'original' #original, trans, shift, or rolling\nkey = 'Mean1' #Mean1 default key for one time series\nstart = 0 #seconds, where you want the slice to begin\nend = 1 #seconds, where you want the slice to end. The absolute end is -1\n\nsamp_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end])))", "Blank Code block\nyou're still here, reading? you must be a dedicated super user!\nIf that is the case, then you must know how to code in Python. Use this space to get crazy with your own advanced analysis and stuff.\nBlank Code Block" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Benedicto/ML-Learning
Classifier_1_linear_regression.ipynb
gpl-3.0
[ "Predicting sentiment from product reviews\nThe goal of this first notebook is to explore logistic regression and feature engineering with existing GraphLab functions.\nIn this notebook you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.\n\nUse SFrames to do some feature engineering\nTrain a logistic regression model to predict the sentiment of product reviews.\nInspect the weights (coefficients) of a trained logistic regression model.\nMake a prediction (both class and probability) of sentiment for a new product review.\nGiven the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model.\nInspect the coefficients of the logistic regression model and interpret their meanings.\nCompare multiple logistic regression models.\n\nLet's get started!\nFire up GraphLab Create\nMake sure you have the latest version of GraphLab Create.", "from __future__ import division\nimport graphlab\nimport math\nimport string", "Data preparation\nWe will use a dataset consisting of baby product reviews on Amazon.com.", "products = graphlab.SFrame('amazon_baby.gl/')", "Now, let us see a preview of what the dataset looks like.", "products", "Build the word count vector for each review\nLet us explore a specific example of a baby product.", "products[269]", "Now, we will perform 2 simple data transformations:\n\nRemove punctuation using Python's built-in string functionality.\nTransform the reviews into word-counts.\n\nAside. In this notebook, we remove all punctuation for the sake of simplicity. A smarter approach to punctuation would preserve phrases such as \"I'd\", \"would've\", \"hadn't\" and so forth. 
See this page for an example of smart handling of punctuation.", "def remove_punctuation(text):\n import string\n return text.translate(None, string.punctuation) \n\nreview_without_punctuation = products['review'].apply(remove_punctuation)\nproducts['word_count'] = graphlab.text_analytics.count_words(review_without_punctuation)", "Now, let us explore what the sample example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs.", "products[269]['word_count']", "Extract sentiments\nWe will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.", "products = products[products['rating'] != 3]\nlen(products)\n\nproducts = products.filter_by([3.], 'rating', exclude=True)\nlen(products)", "Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.", "products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)\nproducts", "Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).\nSplit data into training and test sets\nLet's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.", "train_data, test_data = products.random_split(.8, seed=1)\nprint len(train_data)\nprint len(test_data)", "Train a sentiment classifier with logistic regression\nWe will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target. 
We will use validation_set=None to obtain the same results as everyone else.\nNote: This line may take 1-2 minutes.", "sentiment_model = graphlab.logistic_classifier.create(train_data,\n target = 'sentiment',\n features=['word_count'],\n validation_set=None)\n\nsentiment_model", "Aside. You may get a warning to the effect of \"Terminated due to numerical difficulties --- this model may not be ideal\". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.\nNow that we have fitted the model, we can extract the weights (coefficients) as an SFrame as follows:", "weights = sentiment_model.coefficients\nweights.column_names()", "There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment. \nFill in the following block of code to calculate how many weights are positive ( >= 0). (Hint: The 'value' column in SFrame weights must be positive ( >= 0)).", "num_positive_weights = len(weights[weights['value']>=0])\nnum_negative_weights = len(weights[weights['value']<0])\n\nprint \"Number of positive weights: %s \" % num_positive_weights\nprint \"Number of negative weights: %s \" % num_negative_weights", "Quiz question: How many weights are >= 0?\nMaking predictions with logistic regression\nNow that a model is trained, we can make predictions on the test data. In this section, we will explore this in the context of 3 examples in the test dataset. 
We refer to this set of 3 examples as the sample_test_data.", "sample_test_data = test_data[10:13]\nprint sample_test_data['rating']\nsample_test_data", "Let's dig deeper into the first row of the sample_test_data. Here's the full review:", "sample_test_data[0]['review']", "That review seems pretty positive.\nNow, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.", "sample_test_data[1]['review']", "We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as:\n$$\n\\mbox{score}_i = \\mathbf{w}^T h(\\mathbf{x}_i)\n$$ \nwhere $h(\\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores using GraphLab Create. For each row, the score (or margin) is a number in the range [-inf, inf].", "scores = sentiment_model.predict(sample_test_data, output_type='margin')\nprint scores\n\nsentiment_model.predict?", "Predicting sentiment\nThese scores can be used to make class predictions as follows:\n$$\n\\hat{y} = \n\\left\\{\n\\begin{array}{ll}\n      +1 & \\mathbf{w}^T h(\\mathbf{x}_i) > 0 \\\\\n      -1 & \\mathbf{w}^T h(\\mathbf{x}_i) \\leq 0 \\\\\n\\end{array} \n\\right.\n$$\nUsing scores, write code to calculate $\\hat{y}$, the class predictions:\nRun the following code to verify that the class predictions obtained by your calculations are the same as those obtained from GraphLab Create.", "print \"Class predictions according to GraphLab Create:\" \nprint sentiment_model.predict(sample_test_data)", "Checkpoint: Make sure your class predictions match the ones obtained from GraphLab Create.\nProbability predictions\nRecall from the lectures that we can also calculate the probability predictions from the scores using:\n$$\nP(y_i = +1 | 
\\mathbf{x}_i,\\mathbf{w}) = \\frac{1}{1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))}.\n$$\nUsing the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probability should be a number in the range [0, 1].", "def probability(score):\n import math\n return 1./(1+math.exp(-score))\n\nfor score in scores:\n print probability(score)", "Checkpoint: Make sure your probability predictions match the ones obtained from GraphLab Create.", "print \"Probability predictions according to GraphLab Create:\" \nprint sentiment_model.predict(sample_test_data, output_type='probability')", "Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review?\nFind the most positive (and negative) review\nWe now turn to examining the full test dataset, test_data, and use GraphLab Create to form predictions on all of the test data points for faster performance.\nUsing the sentiment_model, find the 20 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the \"most positive reviews.\"\nTo calculate these top-20 reviews, use the following steps:\n1. Make probability predictions on test_data using the sentiment_model. (Hint: When you call .predict to make predictions on the test data, use option output_type='probability' to output the probability rather than just the most likely class.)\n2. Sort the data according to those predictions and pick the top 20. 
(Hint: You can use the .topk method on an SFrame to find the top k rows sorted according to the value of a specified column.)", "probabilities = sentiment_model.predict(test_data, output_type='probability')\n\ntest_data['prediction'] = probabilities\n\ntop20 = test_data.topk('prediction', 20)['name']\n\nfor product in ['Snuza Portable Baby Movement Monitor', 'MamaDoo Kids Foldable Play Yard Mattress Topper, Blue',\n 'Britax Decathlon Convertible Car Seat, Tiffany', 'Safety 1st Exchangeable Tip 3 in 1 Thermometer']:\n print product, product in top20", "Quiz Question: Which of the following products are represented in the 20 most positive reviews? [multiple choice]\nNow, let us repeat this exercise to find the \"most negative reviews.\" Use the prediction probabilities to find the 20 reviews in the test_data with the lowest probability of being classified as a positive review. Repeat the same steps above but make sure you sort in the opposite order.", "worst20 = test_data.sort('prediction')[0:20]['name']\n\nfor product in ['The First Years True Choice P400 Premium Digital Monitor, 2 Parent Unit',\n 'JP Lizzy Chocolate Ice Classic Tote Set',\n 'Peg-Perego Tatamia High Chair, White Latte',\n 'Safety 1st High-Def Digital Monitor']:\n print product, product in worst20", "Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice]\nCompute accuracy of the classifier\nWe will now evaluate the accuracy of the trained classifier. 
Recall that the accuracy is given by\n$$\n\\mbox{accuracy} = \\frac{\\mbox{\\# correctly classified examples}}{\\mbox{\\# total examples}}\n$$\nThis can be computed as follows:\n\nStep 1: Use the trained model to compute class predictions (Hint: Use the predict method)\nStep 2: Count the number of data points where the predicted class labels match the ground truth labels (called true_labels below).\nStep 3: Divide the total number of correct predictions by the total number of data points in the dataset.\n\nComplete the function below to compute the classification accuracy:", "def get_classification_accuracy(model, data, true_labels):\n # First get the predictions\n ## YOUR CODE HERE\n predictions = model.predict(data)\n \n # Compute the number of correctly classified examples\n ## YOUR CODE HERE\n correct = (predictions == true_labels).sum()\n\n # Then compute accuracy by dividing num_correct by total number of examples\n ## YOUR CODE HERE\n accuracy = correct * 1. / len(data)\n \n return accuracy", "Now, let's compute the classification accuracy of the sentiment_model on the test_data.", "get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])", "Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76).\nQuiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better?\nLearn another classifier with fewer words\nThere were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subset of words that occur in the reviews. For this assignment, we selected 20 words to work with. 
These are:", "significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves', \n 'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed', \n 'work', 'product', 'money', 'would', 'return']\n\nlen(significant_words)", "For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set.", "train_data['word_count_subset'] = train_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)\ntest_data['word_count_subset'] = test_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)", "Let's see what the first example of the dataset looks like:", "train_data[0]['review']", "The word_count column we had been working with before looks like the following:", "print train_data[0]['word_count']", "Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. 
In this example, only 2 significant words are present in this review.", "print train_data[0]['word_count_subset']", "Train a logistic regression model on a subset of data\nWe will now build a classifier with word_count_subset as the feature and sentiment as the target.", "simple_model = graphlab.logistic_classifier.create(train_data,\n target = 'sentiment',\n features=['word_count_subset'],\n validation_set=None)\nsimple_model", "We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.", "get_classification_accuracy(simple_model, test_data, test_data['sentiment'])", "Now, we will inspect the weights (coefficients) of the simple_model:", "simple_model.coefficients", "Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.", "simple_model.coefficients.sort('value', ascending=False).print_rows(num_rows=21)", "Quiz Question: Consider the coefficients of simple_model. There should be 21 of them, an intercept term + one for each word in significant_words. 
How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?", "coefficients = simple_model.coefficients\n\ncoefficients_words = coefficients[coefficients['name'] != '(intercept)']\n\n(coefficients_words['value'] > 0).sum()", "Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?", "joined = coefficients_words.join(sentiment_model.coefficients, 'index')\n\nprint (joined['value']>0)\nprint (joined['value.1']>0)", "Comparing models\nWe will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above.\nFirst, compute the classification accuracy of the sentiment_model on the train_data:", "get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])", "Now, compute the classification accuracy of the simple_model on the train_data:", "get_classification_accuracy(simple_model, train_data, train_data['sentiment'])", "Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set?\nNow, we will repeat this exercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data:", "get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])", "Next, we will compute the classification accuracy of the simple_model on the test_data:", "get_classification_accuracy(simple_model, test_data, test_data['sentiment'])", "Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set?\nBaseline: Majority class prediction\nIt is quite common to use the majority class classifier as a baseline (or reference) model for comparison with your classifier model. The majority classifier model predicts the majority class for all data points. 
At the very least, you should healthily beat the majority class classifier; otherwise, the model is (usually) pointless.\nWhat is the majority class in the train_data?", "num_positive = (train_data['sentiment'] == +1).sum()\nnum_negative = (train_data['sentiment'] == -1).sum()\nprint num_positive\nprint num_negative", "Now compute the accuracy of the majority class classifier on test_data.\nQuiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76).", "num_positive = (test_data['sentiment'] == +1).sum()\nnum_negative = (test_data['sentiment'] == -1).sum()\nprint num_positive\nprint num_negative\n\nnum_positive * 1. / len(test_data)", "Quiz Question: Is the sentiment_model definitely better than the majority class classifier (the baseline)?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
statsmodels/statsmodels.github.io
v0.13.1/examples/notebooks/generated/glm_weights.ipynb
bsd-3-clause
[ "Weighted Generalized Linear Models", "import numpy as np\nimport pandas as pd\nimport statsmodels.formula.api as smf\nimport statsmodels.api as sm", "Weighted GLM: Poisson response data\nLoad data\nIn this example, we'll use the affair dataset using a handful of exogenous variables to predict the extra-marital affair rate. \nWeights will be generated to show that freq_weights are equivalent to repeating records of data. On the other hand, var_weights is equivalent to aggregating data.", "print(sm.datasets.fair.NOTE)", "Load the data into a pandas dataframe.", "data = sm.datasets.fair.load_pandas().data", "The dependent (endogenous) variable is affairs", "data.describe()\n\ndata[:3]", "In the following we will work mostly with Poisson. While using decimal affairs works, we convert them to integers to have a count distribution.", "data[\"affairs\"] = np.ceil(data[\"affairs\"])\ndata[:3]\n\n(data[\"affairs\"] == 0).mean()\n\nnp.bincount(data[\"affairs\"].astype(int))", "Condensing and Aggregating observations\nWe have 6366 observations in our original dataset. When we consider only some selected variables, then we have fewer unique observations. 
In the following we combine observations in two ways: first, we combine observations that have identical values for all variables; second, we combine observations that have the same explanatory variables.\nDataset with unique observations\nWe use pandas's groupby to combine identical observations and create a new variable freq that counts how many observations have the values in the corresponding row.", "data2 = data.copy()\ndata2[\"const\"] = 1\ndc = (\n data2[\"affairs rate_marriage age yrs_married const\".split()]\n .groupby(\"affairs rate_marriage age yrs_married\".split())\n .count()\n)\ndc.reset_index(inplace=True)\ndc.rename(columns={\"const\": \"freq\"}, inplace=True)\nprint(dc.shape)\ndc.head()", "Dataset with unique explanatory variables (exog)\nFor the next dataset we combine observations that have the same values of the explanatory variables. However, because the response variable can differ among combined observations, we compute the mean and the sum of the response variable for all combined observations.\nWe again use pandas groupby to combine observations and to create the new variables. 
We also flatten the MultiIndex into a simple index.", "gr = data[\"affairs rate_marriage age yrs_married\".split()].groupby(\n \"rate_marriage age yrs_married\".split()\n)\ndf_a = gr.agg([\"mean\", \"sum\", \"count\"])\n\n\ndef merge_tuple(tpl):\n if isinstance(tpl, tuple) and len(tpl) > 1:\n return \"_\".join(map(str, tpl))\n else:\n return tpl\n\n\ndf_a.columns = df_a.columns.map(merge_tuple)\ndf_a.reset_index(inplace=True)\nprint(df_a.shape)\ndf_a.head()", "After combining observations we have a dataframe dc with 467 unique observations, and a dataframe df_a with 130 observations with unique values of the explanatory variables.", "print(\"number of rows: \\noriginal, with unique observations, with unique exog\")\ndata.shape[0], dc.shape[0], df_a.shape[0]", "Analysis\nIn the following, we compare the GLM-Poisson results of the original data with models of the combined observations where the multiplicity or aggregation is given by weights or exposure.\noriginal data", "glm = smf.glm(\n \"affairs ~ rate_marriage + age + yrs_married\",\n data=data,\n family=sm.families.Poisson(),\n)\nres_o = glm.fit()\nprint(res_o.summary())\n\nres_o.pearson_chi2 / res_o.df_resid", "condensed data (unique observations with frequencies)\nCombining identical observations and using frequency weights to take into account the multiplicity of observations produces exactly the same results. Some results attributes will differ when we want to have information about the observation and not about the aggregate of all identical observations. For example, residuals do not take freq_weights into account.", "glm = smf.glm(\n \"affairs ~ rate_marriage + age + yrs_married\",\n data=dc,\n family=sm.families.Poisson(),\n freq_weights=np.asarray(dc[\"freq\"]),\n)\nres_f = glm.fit()\nprint(res_f.summary())\n\nres_f.pearson_chi2 / res_f.df_resid", "condensed using var_weights instead of freq_weights\nNext, we compare var_weights to freq_weights. 
It is a common practice to incorporate var_weights when the endogenous variable reflects averages and not identical observations.\nI do not see a theoretical reason why it produces the same results (in general).\nThis produces the same results, but df_resid differs from the freq_weights example because var_weights do not change the number of effective observations.", "glm = smf.glm(\n \"affairs ~ rate_marriage + age + yrs_married\",\n data=dc,\n family=sm.families.Poisson(),\n var_weights=np.asarray(dc[\"freq\"]),\n)\nres_fv = glm.fit()\nprint(res_fv.summary())", "Dispersion computed from the results is incorrect because of the wrong df_resid. \nIt is correct if we use the original df_resid.", "res_fv.pearson_chi2 / res_fv.df_resid, res_f.pearson_chi2 / res_f.df_resid", "aggregated or averaged data (unique values of explanatory variables)\nFor these cases we combine observations that have the same values of the explanatory variables. The corresponding response variable is either a sum or an average.\nusing exposure\nIf our dependent variable is the sum of the responses of all combined observations, then under the Poisson assumption the distribution remains the same but we have varying exposure given by the number of individuals that are represented by one aggregated observation.\nThe parameter estimates and covariance of parameters are the same as with the original data, but log-likelihood, deviance and Pearson chi-squared differ.", "glm = smf.glm(\n \"affairs_sum ~ rate_marriage + age + yrs_married\",\n data=df_a,\n family=sm.families.Poisson(),\n exposure=np.asarray(df_a[\"affairs_count\"]),\n)\nres_e = glm.fit()\nprint(res_e.summary())\n\nres_e.pearson_chi2 / res_e.df_resid", "using var_weights\nWe can also use the mean of all combined values of the dependent variable. 
In this case the variance will be related to the inverse of the total exposure reflected by one combined observation.", "glm = smf.glm(\n \"affairs_mean ~ rate_marriage + age + yrs_married\",\n data=df_a,\n family=sm.families.Poisson(),\n var_weights=np.asarray(df_a[\"affairs_count\"]),\n)\nres_a = glm.fit()\nprint(res_a.summary())", "Comparison\nWe saw in the summary prints above that params and cov_params with associated Wald inference agree across versions. We summarize this in the following, comparing individual results attributes across versions.\nParameter estimates params, standard errors of the parameters bse and pvalues of the parameters for the tests that the parameters are zero all agree. However, the likelihood and goodness-of-fit statistics, llf, deviance and pearson_chi2 only partially agree. Specifically, the aggregated versions do not agree with the results using the original data.\nWarning: The behavior of llf, deviance and pearson_chi2 might still change in future versions.\nBoth the sum and average of the response variable for unique values of the explanatory variables have a proper likelihood interpretation. However, this interpretation is not reflected in these three statistics. Computationally this might be due to missing adjustments when aggregated data is used. However, theoretically we can think of these cases, especially for var_weights, as the misspecified case where likelihood analysis is inappropriate and the results should be interpreted as quasi-likelihood estimates. There is an ambiguity in the definition of var_weights because they can be used for averages with correctly specified likelihood as well as for variance adjustments in the quasi-likelihood case. We are currently not trying to match the likelihood specification. 
However, in the next section we show that likelihood ratio type tests still produce the same result for all aggregation versions when we assume that the underlying model is correctly specified.", "results_all = [res_o, res_f, res_e, res_a]\nnames = \"res_o res_f res_e res_a\".split()\n\npd.concat([r.params for r in results_all], axis=1, keys=names)\n\npd.concat([r.bse for r in results_all], axis=1, keys=names)\n\npd.concat([r.pvalues for r in results_all], axis=1, keys=names)\n\npd.DataFrame(\n np.column_stack([[r.llf, r.deviance, r.pearson_chi2] for r in results_all]),\n columns=names,\n index=[\"llf\", \"deviance\", \"pearson chi2\"],\n)", "Likelihood Ratio type tests\nWe saw above that likelihood and related statistics do not agree between the aggregated and original, individual data. We illustrate in the following that the likelihood ratio test and the difference in deviance agree across versions; however, Pearson chi-squared does not.\nAs before: This is not sufficiently clear yet and could change.\nAs a test case we drop the age variable and compute the likelihood ratio type statistics as the difference between the reduced (constrained) and the full (unconstrained) model.\noriginal observations and frequency weights", "glm = smf.glm(\n \"affairs ~ rate_marriage + yrs_married\", data=data, family=sm.families.Poisson()\n)\nres_o2 = glm.fit()\n# print(res_o2.summary())\nres_o2.pearson_chi2 - res_o.pearson_chi2, res_o2.deviance - res_o.deviance, res_o2.llf - res_o.llf\n\nglm = smf.glm(\n \"affairs ~ rate_marriage + yrs_married\",\n data=dc,\n family=sm.families.Poisson(),\n freq_weights=np.asarray(dc[\"freq\"]),\n)\nres_f2 = glm.fit()\n# print(res_f2.summary())\nres_f2.pearson_chi2 - res_f.pearson_chi2, res_f2.deviance - res_f.deviance, res_f2.llf - res_f.llf", "aggregated data: exposure and var_weights\nNote: LR test agrees with original observations, pearson_chi2 differs and has the wrong sign.", "glm = smf.glm(\n \"affairs_sum ~ rate_marriage + yrs_married\",\n data=df_a,\n 
family=sm.families.Poisson(),\n exposure=np.asarray(df_a[\"affairs_count\"]),\n)\nres_e2 = glm.fit()\nres_e2.pearson_chi2 - res_e.pearson_chi2, res_e2.deviance - res_e.deviance, res_e2.llf - res_e.llf\n\nglm = smf.glm(\n \"affairs_mean ~ rate_marriage + yrs_married\",\n data=df_a,\n family=sm.families.Poisson(),\n var_weights=np.asarray(df_a[\"affairs_count\"]),\n)\nres_a2 = glm.fit()\nres_a2.pearson_chi2 - res_a.pearson_chi2, res_a2.deviance - res_a.deviance, res_a2.llf - res_a.llf", "Investigating Pearson chi-square statistic\nFirst, we do some sanity checks that there are no basic bugs in the computation of pearson_chi2 and resid_pearson.", "res_e2.pearson_chi2, res_e.pearson_chi2, (res_e2.resid_pearson ** 2).sum(), (\n res_e.resid_pearson ** 2\n).sum()\n\nres_e._results.resid_response.mean(), res_e.model.family.variance(res_e.mu)[\n :5\n], res_e.mu[:5]\n\n(res_e._results.resid_response ** 2 / res_e.model.family.variance(res_e.mu)).sum()\n\nres_e2._results.resid_response.mean(), res_e2.model.family.variance(res_e2.mu)[\n :5\n], res_e2.mu[:5]\n\n(res_e2._results.resid_response ** 2 / res_e2.model.family.variance(res_e2.mu)).sum()\n\n(res_e2._results.resid_response ** 2).sum(), (res_e._results.resid_response ** 2).sum()", "One possible reason for the incorrect sign is that we are subtracting quadratic terms that are divided by different denominators. In some related cases, the recommendation in the literature is to use a common denominator. We can compare pearson chi-squared statistic using the same variance assumption in the full and reduced model. \nIn this case we obtain the same pearson chi2 scaled difference between reduced and full model across all versions. 
(Issue #3616 is intended to track this further.)", "(\n (res_e2._results.resid_response ** 2 - res_e._results.resid_response ** 2)\n / res_e2.model.family.variance(res_e2.mu)\n).sum()\n\n(\n (res_a2._results.resid_response ** 2 - res_a._results.resid_response ** 2)\n / res_a2.model.family.variance(res_a2.mu)\n * res_a2.model.var_weights\n).sum()\n\n(\n (res_f2._results.resid_response ** 2 - res_f._results.resid_response ** 2)\n / res_f2.model.family.variance(res_f2.mu)\n * res_f2.model.freq_weights\n).sum()\n\n(\n (res_o2._results.resid_response ** 2 - res_o._results.resid_response ** 2)\n / res_o2.model.family.variance(res_o2.mu)\n).sum()", "Remainder\nThe remainder of the notebook just contains some additional checks and can be ignored.", "np.exp(res_e2.model.exposure)[:5], np.asarray(df_a[\"affairs_count\"])[:5]\n\nres_e2.resid_pearson.sum() - res_e.resid_pearson.sum()\n\nres_e2.mu[:5]\n\nres_a2.pearson_chi2, res_a.pearson_chi2, res_a2.resid_pearson.sum(), res_a.resid_pearson.sum()\n\n(\n (res_a2._results.resid_response ** 2)\n / res_a2.model.family.variance(res_a2.mu)\n * res_a2.model.var_weights\n).sum()\n\n(\n (res_a._results.resid_response ** 2)\n / res_a.model.family.variance(res_a.mu)\n * res_a.model.var_weights\n).sum()\n\n(\n (res_a._results.resid_response ** 2)\n / res_a.model.family.variance(res_a2.mu)\n * res_a.model.var_weights\n).sum()\n\nres_e.model.endog[:5], res_e2.model.endog[:5]\n\nres_a.model.endog[:5], res_a2.model.endog[:5]\n\nres_a2.model.endog[:5] * np.exp(res_e2.model.exposure)[:5]\n\nres_a2.model.endog[:5] * res_a2.model.var_weights[:5]\n\nfrom scipy import stats\n\nstats.chi2.sf(27.19530754604785, 1), stats.chi2.sf(29.083798806764687, 1)\n\nres_o.pvalues\n\nprint(res_e2.summary())\nprint(res_e.summary())\n\nprint(res_f2.summary())\nprint(res_f.summary())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
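The likelihood-ratio comparisons in the notebook above can be checked by hand. As a minimal sketch (independent of the affairs data and of statsmodels — the counts and the two-group split below are invented for illustration), this computes a Poisson likelihood-ratio statistic from closed-form MLEs: the fitted Poisson mean of a group is simply its sample mean, so no optimizer is needed, and for a single restriction the chi-squared survival function reduces to `erfc(sqrt(x/2))`.

```python
import math

def poisson_llf(y, mu):
    # Poisson log-likelihood: sum over observations of y*log(mu) - mu - log(y!)
    return sum(yi * math.log(m) - m - math.lgamma(yi + 1)
               for yi, m in zip(y, mu))

# Hypothetical counts for two groups of observations
g1 = [0, 1, 2, 0, 3, 1, 0, 2]
g2 = [4, 3, 5, 6, 2, 4, 5, 3]
y = g1 + g2

# Reduced model: one common mean; full model: one mean per group.
# For a saturated-by-group Poisson model the MLE of each mean is the sample mean.
mu_reduced = [sum(y) / len(y)] * len(y)
mu_full = [sum(g1) / len(g1)] * len(g1) + [sum(g2) / len(g2)] * len(g2)

lr = 2 * (poisson_llf(y, mu_full) - poisson_llf(y, mu_reduced))
# One restriction (df=1): chi-squared survival function is erfc(sqrt(x/2))
p_value = math.erfc(math.sqrt(lr / 2))
print(lr, p_value)
```

The same statistic can be read off a pair of fitted GLMs as twice the difference in `llf`, or equivalently as the difference in deviance — which is why those two comparisons agree across the aggregation versions above, while differences in Pearson chi-squared need not.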
aimalz/qp
docs/notebooks/demo.ipynb
mit
[ "qp Demo\nAlex Malz & Phil Marshall\nIn this notebook we use the qp module to approximate some simple, standard, 1-D PDFs using sets of quantiles, samples, and histograms, and assess their relative accuracy. We also show how such analyses can be extended to use \"composite\" PDFs made up of mixtures of standard distributions.\nRequirements\nTo run qp, you will need to first install the module.", "import numpy as np\nimport scipy.stats as sps\nimport scipy.interpolate as spi\n\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport qp", "The qp.PDF Class\nThis is the basic element of qp - an object representing a probability density function. This class is stored in the module pdf.py. The PDF must be initialized with some representation of the distribution.", "# ! cat qp/pdf.py\nP = qp.PDF(vb=True)", "Approximating a Gaussian\nLet's summon a PDF object, and initialize it with a standard function - a Gaussian.", "dist = sps.norm(loc=0, scale=1)\nprint(type(dist))\ndemo_limits = (-5., 5.)\nP = qp.PDF(funcform=dist, limits=demo_limits)\nP.plot()", "Samples\nLet's sample the PDF to see how it looks. When we plot the PDF object, both the true and sampled distributions are displayed.", "np.random.seed(42)\n\nsamples = P.sample(1000, using='mix_mod', vb=False)\nS = qp.PDF(samples=samples, limits=demo_limits)\nS.plot()", "Quantile Parametrization\nNow, let's compute a set of evenly spaced quantiles. These will be carried by the PDF object as p.quantiles. We also demonstrate the initialization of a PDF object with quantiles and no truth function.", "quantiles = P.quantize(N=10)\nQ = qp.PDF(quantiles=quantiles, limits=demo_limits)\nQ.plot()", "Histogram Parametrization\nLet's also compute a histogram representation, that will be carried by the PDF object as p.histogram. The values in each bin are the integrals of the PDF over the range defined by bin ends. 
We can also initialize a PDF object with a histogram and no truth function.", "histogram = P.histogramize(N=10, binrange=demo_limits)\nH = qp.PDF(histogram=histogram, limits=demo_limits)\nH.plot()\nprint H.truth", "Evaluating the Approximate PDF by Interpolation\nOnce we have chosen a parametrization to approximate the PDF with, we can evaluate the approximate PDF at any point by interpolation (or extrapolation). qp uses scipy.intepolate.interp1d to do this, with linear as the default interpolation scheme. (Most other options do not enable extrapolation, nearest being the exception.)\nLet's test this interpolation by evaluating an approximation at a single point using the quantile parametrization.", "print P.approximate(np.array([0.314]), using='quantiles')\n\nP.mix_mod", "(We can also integrate any approximation.)", "print P.integrate([0., 1.], using='quantiles')", "We can also interpolate the function onto an evenly spaced grid with points within and out of the quantile range, as follows:", "grid = np.linspace(-3., 3., 100)\ngridded = P.approximate(grid, using='quantiles')", "We can also change the interpolation scheme:", "print P.scheme\nprint P.approximate(np.array([0.314]), using='quantiles', scheme='nearest')\nprint P.scheme", "The \"Evaluated\" or \"Gridded\" Parametrization\nA qp.PDF object may also be initialized with a parametrization of a function evaluated on a grid. This is also what is produced by the qp.PDF.approximate() method. So, let's take the output of a qp.PDF approximation evaluation, and use it to instantiate a new qp.PDF object. Note that the evaluate method can be used to return PDF evaluations from either the true PDF or one of its approximations, via the using keyword argument.", "grid = np.linspace(-3., 3., 20)\ngridded = P.evaluate(grid, using='mix_mod', vb=False)\n\nG = qp.PDF(gridded=gridded, limits=demo_limits)\nG.sample(100, vb=False)\nG.plot()", "Let's unpack this a little. 
The G PDF object has an attribute G.gridded which contains the initial gridded function. This lookup table is used when making further approximations. To check this, let's look at whether this G PDF object knows what the true PDF is, which approximation it's going to use, and then how it performs at making a new approximation to the PDF on a coarser grid:", "print G.truth\n\nprint G.last,'approximation, ', G.scheme, 'interpolation'\n\n# 10-point grid for a coarse approximation:\ncoarse_grid = np.linspace(-3.5, 3.5, 10)\ncoarse_evaluation = G.approximate(coarse_grid, using='gridded')\nprint coarse_evaluation", "Mixture Model Fit\nWe can fit a parametric mixture model to samples from any parametrization. Currently, only a Gaussian mixture model is supported.", "MM = qp.PDF(funcform=dist, limits=demo_limits)\nMM.sample(1000, vb=False)\nMM.mix_mod_fit(n_components=5)\nMM.plot()", "Comparing Parametrizations\nqp supports both qualitative and quantitative comparisons between different distributions, across parametrizations.\nQualitative Comparisons: Plotting\nLet's visualize the PDF object in order to compare the truth and the approximations. The solid, black line shows the true PDF evaluated between the bounds. The green rugplot shows the locations of the 1000 samples we took. The vertical, dotted, blue lines show the percentiles we asked for, and the horizontal, dotted, red lines show the 10 equally spaced bins we asked for. Note that the quantiles refer to the probability distribution between the bounds, because we are not able to integrate numerically over an infinite range. Interpolations of each parametrization are given as dashed lines in their corresponding colors. 
Note that the interpolations of the quantile and histogram parametrizations are so close to each other that the difference is almost imperceptible!", "P.plot()", "Quantitative Comparisons", "symm_lims = np.array([-1., 1.])\nall_lims = [symm_lims, 2.*symm_lims, 3.*symm_lims]", "Next, let's compare the different parametrizations to the truth using the Kullback-Leibler Divergence (KLD). The KLD is a measure of how close two probability distributions are to one another -- a smaller value indicates closer agreement. It is measured in units of bits of information, the information lost in going from the second distribution to the first distribution. The KLD calculator here takes in a shared grid upon which to evaluate the true distribution and the interpolated approximation of that distribution and returns the KLD of the approximation relative to the truth, which is not in general the same as the KLD of the truth relative to the approximation. Below, we'll calculate the KLD of the approximation relative to the truth over different ranges, showing that it increases as it includes areas where the true distribution and interpolated distributions diverge.", "for PDF in [Q, H, S]:\n D = []\n for lims in all_lims:\n D.append(qp.metrics.calculate_kld(P, PDF, limits=lims, vb=False))\n print(PDF.truth+' approximation: KLD over 1, 2, 3, sigma ranges = '+str(D))", "Holy smokes, does the quantile approximation blow everything else out of the water, thanks to using spline interpolation.\nThe progression of KLD values should follow that of the root mean square error (RMSE), another measure of how close two functions are to one another. The RMSE also increases as it includes areas where the true distribution and interpolated distribution diverge. 
Unlike the KLD, the RMSE is symmetric: it measures the distance between the two distributions rather than the divergence of one from the other.", "for PDF in [Q, H, S]:\n D = []\n for lims in all_lims:\n D.append(qp.metrics.calculate_rmse(P, PDF, limits=lims, vb=False))\n print(PDF.truth+' approximation: RMSE over 1, 2, 3, sigma ranges = '+str(D))", "Both the KLD and RMSE metrics suggest that the quantile approximation is better in the high density region, but samples work better when the tails are included. We might expect the answer to the question of which approximation to use to depend on the application, and whether the tails need to be captured or not.\nFinally, we can compare the moments of each approximation to those of the true distribution.", "pdfs = [P, Q, H, S]\nwhich_moments = range(3)\nall_moments = []\nfor pdf in pdfs:\n moments = []\n for n in which_moments:\n moments.append(qp.metrics.calculate_moment(pdf, n))\n all_moments.append(moments)\n \nprint('moments: '+str(which_moments))\nfor i in range(len(pdfs)):\n print(pdfs[i].first+': '+str(all_moments[i]))", "The first three moments have an interesting interpretation. The zeroth moment should always be 1 when calculated over the entire range of redshifts, but the quantile approximation is off by about $7\%$. We know the first moment in this case is 0, and indeed the evaluation of the first moment for the true distribution deviates from 0 by less than Python's floating point precision. The samples parametrization has a biased estimate for the first moment to the tune of $2\%$. The second moment for the true distribution is 1, and the quantile parametrization (and, to a lesser extent, the histogram parametrization) fails to provide a good estimate of it.\nAdvanced Usage\nComposite PDFs\nIn addition to individual scipy.stats.rv_continuous objects, qp can be initialized with true distributions that are linear combinations of scipy.stats.rv_continuous objects. 
To do this, one must create the component distributions and specify their relative weights. This can be done by running qp.PDF.mix_mod_fit() on an existing qp.PDF object once samples have been calculated, or it can be done by hand.", "component_1 = {}\ncomponent_1['function'] = sps.norm(loc=-2., scale=1.)\ncomponent_1['coefficient'] = 4.\ncomponent_2 = {}\ncomponent_2['function'] = sps.norm(loc=2., scale=1.)\ncomponent_2['coefficient'] = 1.\ndist_info = [component_1, component_2]\n\ncomposite_lims = (-5., 5.)\n\nC_dist = qp.composite(dist_info)\nC = qp.PDF(funcform=C_dist, limits=composite_lims)\nC.plot()", "We can calculate the quantiles for such a distribution.", "Cq = qp.PDF(funcform=C_dist, limits = composite_lims)\nCq.quantize(N=20, limits=composite_lims, vb=False)\nCq.plot()", "Similarly, the histogram parametrization is also supported for composite PDFs.", "Ch = qp.PDF(funcform=C_dist, limits = composite_lims)\nCh.histogramize(N=20, binrange=composite_lims, vb=True)\nCh.plot()", "Finally, samples from this distribution may also be taken, and a PDF may be reconstructed from them. Note: this uses scipy.stats.gaussian_kde, which determines its bandwidth/kernel size using Scott's Rule, Silverman's Rule, a fixed bandwidth, or a callable function that returns a bandwidth.", "Cs = qp.PDF(funcform=C_dist, limits = composite_lims)\nCs.sample(N=20, using='mix_mod', vb=False)\nCs.plot()\n\nqD = qp.metrics.calculate_kld(C, Cq, limits=composite_lims, dx=0.001, vb=True)\nhD = qp.metrics.calculate_kld(C, Ch, limits=composite_lims, dx=0.001, vb=True)\nsD = qp.metrics.calculate_kld(C, Cs, limits=composite_lims, dx=0.001, vb=True)\nprint(qD, hD, sD)", "PDF Ensembles\nqp also includes infrastructure for handling ensembles of PDF objects with shared metaparameters, such as histogram bin ends, but unique per-object parameters, such as histogram bin heights. 
A qp.Ensemble object takes as input the number of items in the ensemble and, optionally, a list, with contents corresponding to one of the built-in formats.\nLet's demonstrate on PDFs with a functional form, which means the list of information for each member of the ensemble is scipy.stats.rv_continuous or qp.composite objects.", "N = 10\nin_dists = []\nfor i in range(N):\n dist = sps.norm(loc=sps.uniform.rvs(), scale=sps.uniform.rvs())\n in_dists.append(dist)\n \nE = qp.Ensemble(N, funcform=in_dists, vb=True) ", "As with individual qp.PDF objects, we can evaluate the PDFs at given points, convert to other formats, and integrate.", "eval_range = np.linspace(-5., 5., 100)\nE.evaluate(eval_range, using='mix_mod', vb=False)\n\nE.quantize(N=10)\n\nE.integrate(demo_limits, using='mix_mod')", "Previous versions of qp included a built-in function for \"stacking\" the member PDFs of a qp.Ensemble object. This functionality has been removed to discourage use of this procedure in science applications. However, we provide a simple function one may use should this functionality be desired.", "def stack(ensemble, loc, using, vb=True):\n \"\"\"\n Produces an average of the PDFs in the ensemble\n\n Parameters\n ----------\n ensemble: qp.Ensemble\n the ensemble of PDFs to stack\n loc: ndarray, float or float\n location(s) at which to evaluate the PDFs\n using: string\n which parametrization to use for the approximation\n vb: boolean\n report on progress\n\n Returns\n -------\n stacked: tuple, ndarray, float\n pair of arrays for locations where approximations were evaluated\n and the values of the stacked PDFs at those points\n \"\"\"\n evaluated = ensemble.evaluate(loc, using=using, norm=True, vb=vb)\n stack = np.mean(evaluated[1], axis=0)\n stacked = (evaluated[0], stack)\n return stacked\n\nstacked = stack(E, eval_range, using='quantiles')\nplt.plot(stacked[0], stacked[-1])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
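The KLD metric used in the qp demo above can be reproduced without qp itself. Below is a dependency-free sketch (standard library only; the 10-bin histogram and the [-3, 3] evaluation range are chosen to mirror the demo, and `qp.metrics.calculate_kld` may differ in its interpolation details): it builds a histogram approximation of a standard normal from exact bin integrals, then accumulates p·log(p/q) on a fine grid.

```python
import math

def norm_pdf(x):
    # Standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Histogram approximation: 10 unit-width bins on [-5, 5]; each height is
# the exact bin probability divided by the bin width.
edges = [-5 + i for i in range(11)]
heights = [(norm_cdf(b) - norm_cdf(a)) / (b - a)
           for a, b in zip(edges, edges[1:])]

def hist_pdf(x):
    # Piecewise-constant lookup of the histogram density
    i = min(int(x - edges[0]), len(heights) - 1)
    return heights[i]

# KLD of the histogram approximation relative to the truth over [-3, 3],
# approximated as a Riemann sum at bin midpoints
dx = 0.001
kld = 0.0
for k in range(int(6.0 / dx)):
    x = -3.0 + (k + 0.5) * dx
    p, q = norm_pdf(x), hist_pdf(x)
    kld += p * math.log(p / q) * dx
print(kld)
```

A smaller value means the approximation tracks the truth more closely; restricting the integration range, as in the 1-, 2-, and 3-sigma comparisons above, changes which disagreements the metric can see.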
david148877/Where-Should-I-Go-
201509ETL-master/201509ETL-master/20150911 ETL.ipynb
mit
[ "import requests\nres = requests.get('https://www.python.org/')\nprint res", "https://zh.wikipedia.org/wiki/HTTP%E7%8A%B6%E6%80%81%E7%A0%81", "#print res.text\nprint dir(res)\n\nprint res.status_code\nprint res.headers['content-type']\n\nimport requests\npayload ={\n'StartStation':'977abb69-413a-4ccf-a109-0272c24fd490',\n'EndStation':'fbd828d8-b1da-4b06-a3bd-680cdca4d2cd',\n'SearchDate':'2015/09/11',\n'SearchTime':'14:30',\n'SearchWay':'DepartureInMandarin'\n}\nres = requests.post('https://www.thsrc.com.tw/tw/TimeTable/SearchResult', data = payload)\nprint res\n\nfrom datetime import datetime\ndatetime.strptime('Fri Sep 11 12:56:09 2015', '%y %b %d %H:%M:%S %Y',)\n\nimport requests\nres = requests.get('http://24h.pchome.com.tw/prod/DRAA0C-A90067G2U')\nprint res.text\n\nimport requests\nres = requests.get('http://ecapi.pchome.com.tw/ecshop/prodapi/v2/prod/button&id=DRAA0C-A90067G2U&fields=Seq,Id,Price,Qty,ButtonType,SaleStatus&_callback=jsonp_button?_callback=jsonp_button')\nprint res.text", "http://release.seleniumhq.org/selenium-ide/2.9.0/selenium-ide-2.9.0.xpi", "# -*- coding: utf-8 -*-\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.webdriver.support.ui import Select\nfrom selenium.common.exceptions import NoSuchElementException\nfrom selenium.common.exceptions import NoAlertPresentException\nimport time, re\n\ndriver = webdriver.Firefox()\ndriver.implicitly_wait(3)\nbase_url = \"http://www.agoda.com\"\n\ndriver.get(base_url + \"/zh-tw/city/taipei-tw.html\")\ndriver.find_element_by_id(\"CheckInMonthYear\").click()\n\ndriver.implicitly_wait(1)\nSelect(driver.find_element_by_id(\"CheckInMonthYear\")).select_by_visible_text(u\"2015年11月\")\ndriver.implicitly_wait(1)\ndriver.find_element_by_id(\"search-submit\").click()\ndriver.implicitly_wait(1)\ndriver.implicitly_wait(3)\ndriver.find_element_by_link_text(u\"下一頁\").click()", 
"http://phantomjs.org/download.html\nhttp://casperjs.org/", "from bs4 import BeautifulSoup \nhtml_sample = ' \\\n<html> \\\n<body> \\\n<h1 id=\"title\">Hello World</h1> \\\n<a href=\"#\" class=\"link\">This is link1</a> \\\n<a href=\"# link2\" class=\"link\">This is link2</a> \\ </body> \\\n</html>'\nsoup = BeautifulSoup(html_sample) \nprint soup.text\n\natag = soup.select('a')\nprint atag[0]\nprint atag[1]\n\nprint soup.select('#title') # id => #\nprint soup.select('#title')[0]\nprint soup.select('#title')[0].text\n\nprint soup.select('.link') # class => . \nprint soup.select('.link')[0]\nprint soup.select('.link')[0].text\n\n\n\nfor link in soup.select('.link'):\n print link.text\n\na = '<a href=\"#\" qoo=\"123\" abc=\"456\" class=\"link\"> </a>'\nsoup2 = BeautifulSoup(a)\nprint soup2.select('a')\nprint soup2.select('a')[0]\nprint soup2.select('a')[0]['href']\nprint soup2.select('a')[0]['class']\nprint soup2.select('a')[0]['qoo']\nprint soup2.select('a')[0]['abc']\n\nfor link in soup.select('.link'):\n print link['href']\n\nimport requests\nfrom bs4 import BeautifulSoup as bs\nres = requests.get('https://tw.stock.yahoo.com/q/h?s=4105')\nsoup = bs(res.text)\ntable = soup.select('table .yui-text-left')[0]\nfor tr in table.select('tr')[1:]:\n print tr.text.strip()", "https://chrome.google.com/webstore/detail/infolite/ipjbadabbpedegielkhgpiekdlmfpgal", "# -*- coding: utf-8 -*-\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.webdriver.support.ui import Select\nfrom selenium.common.exceptions import NoSuchElementException\nfrom selenium.common.exceptions import NoAlertPresentException\nimport time, re\nfrom bs4 import BeautifulSoup\n\ndriver = webdriver.Firefox()\ndriver.implicitly_wait(3)\n\ndriver.get('http://24h.pchome.com.tw/prod/DRAA0C-A90067G2U')\ndriver.implicitly_wait(1)\nsoup = BeautifulSoup(driver.page_source)\nprint 
soup.select('#PriceTotal')[0].text\ndriver.close()\n\nimport bs4\nprint dir(bs4)\n\nfrom bs4 import BeautifulSoup \nprint dir(BeautifulSoup) \n\nimport bs4\ndoup = bs4.BeautifulSoup(res.text)\n#print doup" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
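The CSS-selector calls in the notebook above rely on BeautifulSoup (and Python 2 print statements). For contrast, the same class-based extraction can be done with nothing but the standard library's `html.parser`; the sketch below is a Python 3 rewrite of the `html_sample` example, not a drop-in replacement for BeautifulSoup's richer `select()` syntax.

```python
from html.parser import HTMLParser

html_sample = '''
<html><body>
<h1 id="title">Hello World</h1>
<a href="#" class="link">This is link1</a>
<a href="# link2" class="link">This is link2</a>
</body></html>'''

class LinkCollector(HTMLParser):
    """Collects href and text of every <a> whose class list contains 'link'."""

    def __init__(self):
        super().__init__()
        self.hrefs = []
        self.texts = []
        self._in_link = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'a' and 'link' in (attrs.get('class') or '').split():
            self.hrefs.append(attrs.get('href'))
            self._in_link = True

    def handle_data(self, data):
        # Only keep text that sits inside a matching <a> element
        if self._in_link:
            self.texts.append(data)

    def handle_endtag(self, tag):
        if tag == 'a':
            self._in_link = False

parser = LinkCollector()
parser.feed(html_sample)
print(parser.hrefs)   # ['#', '# link2']
print(parser.texts)
```

For anything beyond toy markup, BeautifulSoup's `soup.select('.link')` remains far more convenient; the point here is only to show what the class selector is doing underneath.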
ekansa/open-context-jupyter
notebooks/EOL to GBIF.ipynb
mit
In order to widen Open Context's interoperability with other scientific information systems, we are starting to cross-reference Open Context's published biological taxonomy categories with GBIF (Global Biodiversity Information Facility, https://gbif.org) identifiers.\nTo start this process, this Jupyter notebook will find GBIF identifiers that correspond with EOL (Encyclopedia of Life, https://eol.org) identifiers already used by Open Context.\nThe datasets used and created by this notebook are stored in the /files/eol directory. The files used and created by this notebook include:\n\neol-gbif.csv.gz (The source of this data is https://opendata.eol.org/dataset/identifier-map, dated 2019-12-20. The data is filtered to only include records where the resource_id is 767, which corresponds to GBIF.)\noc-eol-uris.csv (This is a CSV dump, current as of 2020-01-15, from the Open Context link_entities model where URIs started with 'http://eol.org'. It represents all of the EOL entities that Open Context uses to cross-reference project-specific biological taxonomic concepts.)\noc-eol-gbif-with-missing.csv (This is the scratch, working data file that has oc-eol-uris.csv data, with joined records from eol-gbif.csv. Execution of this notebook creates this file and periodically updates it with names and new IDs resulting from requests to the GBIF API.)\noc-eol-gbif.csv (This notebook generates this file, which describes equivalences between the EOL items used by Open Context and corresponding GBIF identifiers.)\noc-eol-no-gbif.csv (This notebook generates this file, which describes EOL items used by Open Context that lack corresponding GBIF identifiers. 
These records will probably need manual curation.)", "import json\nimport os\nimport requests\nfrom time import sleep\n\nimport numpy as np\nimport pandas as pd\n\n# Get the root_path for this jupyter notebook repo.\nrepo_path = os.path.dirname(os.path.abspath(os.getcwd()))\n\n# Path for the (gzip compressed) CSV data dump from EOL \n# with GBIF names and EOL IDs.\neol_gbif_names_path = os.path.join(\n repo_path, 'files', 'eol', 'eol-gbif.csv.gz'\n)\n\n# Path for the CSV data from Open Context of all EOL\n# URIs and IDs currently referenced by Open Context.\noc_eol_path = os.path.join(\n repo_path, 'files', 'eol', 'oc-eol-uris.csv'\n)\n\n# Path for the CSV data that has EOL URIs used by Open Context\n# with GBIF URIs and missing GBIF URIs\noc_eol_gbif_w_missing_path = os.path.join(\n repo_path, 'files', 'eol', 'oc-eol-gbif-with-missing.csv'\n)\n\n# Path for CSV data that has EOL URIs used by Open Context and\n# corresponding GBIF URIs and Names.\noc_eol_gbif_path = os.path.join(\n repo_path, 'files', 'eol', 'oc-eol-gbif.csv'\n)\n\n# Path for CSV data that has EOL URIs used by Open Context\n# but no corresponding GBIF URIs.\noc_eol_no_gbif_path = os.path.join(\n repo_path, 'files', 'eol', 'oc-eol-no-gbif.csv'\n)", "Now define some functions that we'll be using over and over.", "def save_result_files(\n df,\n path_with_gbif=oc_eol_gbif_path, \n path_without_gbif=oc_eol_no_gbif_path\n):\n \"\"\"Saves files for outputs with and without GBIF ids\"\"\"\n # Save the interim results with matches\n gbif_index = ~df['gbif_id'].isnull()\n df_ok_gbif = df[gbif_index].copy().reset_index(drop=True)\n print('Saving EOL matches with GBIF...')\n df_ok_gbif.to_csv(path_with_gbif, index=False)\n \n no_gbif_index = df['gbif_id'].isnull()\n df_ok_gbif = df[no_gbif_index].copy().reset_index(drop=True)\n print('Saving EOL records without GBIF matches...')\n df_ok_gbif.to_csv(path_without_gbif, index=False)\n \n\ndef get_gbif_cannonical_name(gbif_id, sleep_secs=0.25):\n \"\"\"Get the 
cannonical name from the GBIF API for an ID\"\"\"\n sleep(sleep_secs)\n url = 'https://api.gbif.org/v1/species/{}'.format(gbif_id)\n print('Get URL: {}'.format(url))\n r = requests.get(url)\n r.raise_for_status()\n json_r = r.json()\n return json_r.get('canonicalName')\n\n\ndef get_gbif_vernacular_name(gbif_id, lang_code='eng', sleep_secs=0.25):\n \"\"\"Get the first vernacular name from the GBIF API for an ID\"\"\"\n sleep(sleep_secs)\n url = 'http://api.gbif.org/v1/species/{}/vernacularNames'.format(\n gbif_id\n )\n print('Get URL: {}'.format(url))\n r = requests.get(url)\n r.raise_for_status()\n json_r = r.json()\n vern_name = None\n for result in json_r.get('results', []):\n if result.get('language') != lang_code:\n continue\n vern_name = result.get(\"vernacularName\")\n if vern_name is not None:\n break\n return vern_name\n\n\ndef add_names_to_gbif_ids(\n df, \n limit_by_method=None, \n save_path=oc_eol_gbif_w_missing_path\n):\n \"\"\"Adds names to GBIF ids where those names are missing\"\"\"\n gbif_index = ~df['gbif_id'].isnull()\n df.loc[gbif_index, 'gbif_uri'] = df[gbif_index]['gbif_id'].apply(\n lambda x: 'https://www.gbif.org/species/{}'.format(int(x))\n )\n df.to_csv(save_path, index=False)\n\n # Now use the GBIF API to fetch cannonical names for GBIF items\n # where we do not yet have those names.\n need_can_name_index = (df['gbif_can_name'].isnull() & gbif_index)\n if limit_by_method:\n need_can_name_index &= (df['gbif_rel_method'] == limit_by_method)\n df.loc[need_can_name_index, 'gbif_can_name'] = df[need_can_name_index]['gbif_id'].apply(\n lambda x: get_gbif_cannonical_name(int(x))\n )\n df.to_csv(save_path, index=False)\n \n # Now use the GBIF API to fetch vernacular names for GBIF items\n # where we do not yet have those names.\n need_vern_name_index = (df['gbif_vern_name'].isnull() & gbif_index)\n if limit_by_method:\n need_vern_name_index &= (df['gbif_rel_method'] == limit_by_method)\n df.loc[need_vern_name_index, 'gbif_vern_name'] = 
df[need_vern_name_index]['gbif_id'].apply(\n lambda x: get_gbif_vernacular_name(int(x))\n )\n df.to_csv(save_path, index=False)\n \n return df\n \n\ndef get_gbif_id_by_name(name, sleep_secs=0.25, allow_alts=False):\n \"\"\"Get a GBIF ID by seatching a name via the GBIF API\"\"\"\n sleep(sleep_secs)\n if ' ' in name:\n # Only use the first 2 parts of a name with a space\n name_sp = name.split(' ')\n # This also turns the space into a '+', good for URL enconding.\n if len(name_sp[0]) <= 2 or len(name_sp[1]) <= 2:\n return np.nan\n name = name_sp[0] + '+' + name_sp[1]\n \n url = 'https://api.gbif.org/v1/species/match?verbose=true&dataset_key=d7dddbf4-2cf0-4f39-9b2a-bb099caae36c'\n url += '&name={}'.format(name)\n print('Get URL: {}'.format(url))\n r = requests.get(url)\n r.raise_for_status()\n json_r = r.json()\n id = json_r.get('usageKey')\n if id is not None:\n return int(id)\n elif not allow_alts:\n # We don't have an ID, but we're not yet allowing alternatives\n return np.nan\n # Below is for multiple equal matches\n if not allow_alts or json_r.get('matchType') != 'NONE':\n # We don't have an exact match\n return np.nan\n alts = json_r.get('alternatives', [])\n if len(alts) == 0:\n # We don't have alternatives\n return np.nan\n # Chose the first alternative.\n id = alts[0].get('usageKey')\n if not id:\n return np.nan\n return int(id)\n\n\nif not os.path.isfile(oc_eol_gbif_w_missing_path):\n # We don't have the oc_eol_gbif_with missing data\n # so we need to make it.\n df_eol_gbif_names = pd.read_csv(eol_gbif_names_path)\n df_oc_eol = pd.read_csv(oc_eol_path, encoding='utf-8')\n df_oc_eol.rename(columns={'id': 'page_id'}, inplace=True)\n df = df_oc_eol.merge(df_eol_gbif_names, on=['page_id'], how='left')\n print('We have {} rows of EOL uris in OC to relate to GBIF'.format(\n len(df.index)\n )\n )\n df.sort_values(by=['page_id'], inplace=True)\n # Now pull out the GBIF integer ID\n df['gbif_id'] = pd.to_numeric(\n df['resource_pk'], \n errors='coerce', \n 
downcast='integer'\n )\n df['gbif_rel_method'] = np.nan\n df['gbif_uri'] = np.nan\n df['gbif_can_name'] = np.nan\n df['gbif_vern_name'] = np.nan\n \n # Now note that the rows where the gbif_id is not null\n # come from the EOL-GBIF names dataset\n gbif_index = ~df['gbif_id'].isnull()\n df.loc[gbif_index, 'gbif_rel_method'] = 'EOL-GBIF-names'\n df.to_csv(oc_eol_gbif_w_missing_path, index=False)\n \n \n \n\n# Get our working dataframe, now that we know that it\n# must have been initially created.\ndf = pd.read_csv(oc_eol_gbif_w_missing_path)", "Now that we have a main working dataset, we need to add canonical and vernacular names to the GBIF IDs.", "# Use GBIF API calls to add names to records with GBIF IDs but currently\n# missing names.\ndf = add_names_to_gbif_ids(df, save_path=oc_eol_gbif_w_missing_path)\n\n", "Now that we have added GBIF names to rows that have GBIF IDs, we will save our interim results.", "# Save the Open Context EOL URIs with clear GBIF matches,\n# as well as a file without matches\nsave_result_files(df)", "At this point, we will still be missing GBIF IDs for many rows of EOL records. 
So now, we will use the GBIF search API to find related GBIF IDs.", "# Now try to look up GBIF items where we don't have\n# clear matches.\nlook_ups = [\n # Tuples are:\n # (field_for_name, allow_alts, gbif_rel_method,),\n ('preferred_canonical_for_page', False, 'EOL-pref-page-GBIF-exact-search',),\n ('preferred_canonical_for_page', True, 'EOL-pref-page-GBIF-search-w-alts',),\n ('label', False, 'EOL-OC-label-GBIF-exact-search',),\n ('label', True, 'EOL-OC-label-GBIF-search-w-alts',), \n]\n\n# Now iterate through these look_up configs.\nfor field_for_name, allow_alts, gbif_rel_method in look_ups:\n gbif_index = ~df['gbif_id'].isnull()\n ok_eol = df[gbif_index]['uri'].unique().tolist()\n no_gbif_index = (df['gbif_id'].isnull() & ~df['uri'].isin(ok_eol))\n\n # Get the index where there's a preferred_canonical_for_page (EOL) name, but\n # where we have no GBIF id yet.\n no_gbif_index_w_name = (~df[field_for_name].isnull() & no_gbif_index)\n # Use the GBIF API to lookup GBIF IDs.\n df.loc[no_gbif_index_w_name, 'gbif_id'] = df[no_gbif_index_w_name][field_for_name].apply(\n lambda x: get_gbif_id_by_name(x, allow_alts=allow_alts)\n )\n # The new GBIF IDs will have a gbif_rel_method of null. Make sure that we record\n # the gbif_rel_method at this point.\n new_gbif_id_index = (~df['gbif_id'].isnull() & df['gbif_rel_method'].isnull())\n df.loc[new_gbif_id_index, 'gbif_rel_method'] = gbif_rel_method\n \n # Save the interim results\n df.to_csv(oc_eol_gbif_w_missing_path, index=False)\n \n # Now add names to the rows where we just found new IDs.\n df = add_names_to_gbif_ids(\n df, \n limit_by_method=gbif_rel_method, \n save_path=oc_eol_gbif_w_missing_path\n )\n \n # Save the interim results, again.\n df.to_csv(oc_eol_gbif_w_missing_path, index=False)\n # Save the interim results with matches to a file\n # and without matches to another file.\n save_result_files(df)\n \n \n \n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
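One detail of `get_gbif_id_by_name` above that is easy to miss is the name pre-processing: only the first two space-separated parts of a taxon name are sent to the match endpoint, very short parts abort the lookup, and the space is turned into a '+' for the query string. Pulled out as a standalone function, that logic can be tested offline (this sketch returns None where the notebook uses np.nan, and the example taxa are just illustrations):

```python
def normalize_match_name(name):
    """Mirror the name handling in get_gbif_id_by_name: keep only the first
    two space-separated parts, refuse parts of 2 characters or fewer, and
    join with '+' so the result drops straight into the query string."""
    if ' ' not in name:
        return name
    parts = name.split(' ')
    if len(parts[0]) <= 2 or len(parts[1]) <= 2:
        return None  # stands in for the notebook's np.nan "skip lookup" result
    return parts[0] + '+' + parts[1]

print(normalize_match_name('Ovis aries'))              # Ovis+aries
print(normalize_match_name('Bos taurus primigenius'))  # Bos+taurus
print(normalize_match_name('Capra'))                   # Capra
print(normalize_match_name('X y'))                     # None
```

Isolating pure helpers like this from the functions that hit the API makes the rate-limited, hard-to-reproduce parts of the notebook much easier to re-run and check.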
y2ee201/Deep-Learning-Nanodegree
first-neural-network/DLND Your first neural network.ipynb
mit
[ "Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()", "Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.", "rides[:24*10].plot(x='dteday', y='cnt')", "Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. 
This is simple to do with Pandas thanks to get_dummies().", "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n    dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n    rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n                  'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()", "Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n    mean, std = data[each].mean(), data[each].std()\n    scaled_features[each] = [mean, std]\n    data.loc[:, each] = (data[each] - mean)/std", "Splitting the data into training, testing, and validation sets\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.", "# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]\n\n# input size (train_features isn't defined until the next cell, so inspect features here)\nprint(features.shape)", "We'll split the data into two sets, one for training and one for validating as the network is being trained. 
Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).", "# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n<img src=\"assets/neural_network.png\" width=300px>\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. 
Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.", "class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n # self.activation_function = lambda x : 0 # Replace 0 with your sigmoid calculation.\n \n ### If the lambda code above is not something you're familiar with,\n # You can uncomment out the following three lines and put your \n # implementation there instead.\n #\n def sigmoid(x):\n return 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation here\n self.activation_function = sigmoid\n \n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. 
\n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n for X, y in zip(features, targets):\n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer - Replace these values with your calculations.\n \n # X = np.reshape(X, (X.shape[0],1))\n \n # DEBUG 1\n # print('X')\n # print(X.shape)\n # print('self.weights_input_to_hidden.shape')\n # print(self.weights_input_to_hidden.shape)\n \n hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer - Replace these values with your calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error - Replace this value with your calculations.\n error = y - final_outputs # Output layer error is the difference between desired target and actual output.\n \n # TODO: Calculate the hidden layer's contribution to the error\n hidden_error = np.dot(self.weights_hidden_to_output, error)\n \n # TODO: Backpropagated error terms - Replace these values with your calculations.\n output_error_term = error\n hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)\n \n # print('hidden_error.shape')\n # print(hidden_error)\n # print('hidden_outputs.shape')\n # print(hidden_outputs)\n # print('1 - hidden_outputs.shape')\n # print(1 - hidden_outputs)\n \n # DEBUG 2\n # print('output_error_term.shape')\n # print(output_error_term)\n # print('hidden_outputs.shape')\n # print(hidden_outputs)\n 
# print('delta_weights_h_o.shape')\n # print(delta_weights_h_o)\n # print((output_error_term * hidden_outputs))\n \n # print('delta_weights_i_h.shape')\n # print(delta_weights_i_h.shape)\n # print('hidden_error_term.shape')\n # print(hidden_error_term.shape)\n # print('X.shape')\n # print(X.shape)\n \n # Weight step (input to hidden)\n delta_weights_i_h += hidden_error_term * X[:, None]\n # Weight step (hidden to output)\n delta_weights_h_o += np.reshape(output_error_term * hidden_outputs, delta_weights_h_o.shape)\n \n\n # TODO: Update the weights - Replace these values with your calculations.\n self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step\n \n def run(self, features):\n ''' Run a forward pass through the network with input features \n \n Arguments\n ---------\n features: 1D array of feature values\n '''\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer - replace these values with the appropriate calculations.\n hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer - Replace these values with the appropriate calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer \n \n return final_outputs\n\n# my debugging\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\n\ntestNN = NeuralNetwork(input_nodes=3, hidden_nodes=2, output_nodes=1, learning_rate=0.5)\ntestNN.train(features=inputs, targets=targets)\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)", "Unit tests\nRun these unit tests to check the correctness of your network implementation. 
This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.", "import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n                       [0.4, 0.5],\n                       [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n                       [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n    \n    ##########\n    # Unit tests for data loading\n    ##########\n    \n    def test_data_path(self):\n        # Test that file path to dataset has been unaltered\n        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n        \n    def test_data_loaded(self):\n        # Test that data frame loaded\n        self.assertTrue(isinstance(rides, pd.DataFrame))\n    \n    ##########\n    # Unit tests for network functionality\n    ##########\n\n    def test_activation(self):\n        network = NeuralNetwork(3, 2, 1, 0.5)\n        # Test that the activation function is a sigmoid\n        self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n    def test_train(self):\n        # Test that weights are updated correctly on training\n        network = NeuralNetwork(3, 2, 1, 0.5)\n        network.weights_input_to_hidden = test_w_i_h.copy()\n        network.weights_hidden_to_output = test_w_h_o.copy()\n        \n        network.train(inputs, targets)\n        self.assertTrue(np.allclose(network.weights_hidden_to_output, \n                                    np.array([[ 0.37275328], \n                                              [-0.03172939]])))\n        self.assertTrue(np.allclose(network.weights_input_to_hidden,\n                                    np.array([[ 0.10562014, -0.20185996], \n                                              [0.39775194, 0.50074398], \n                                              [-0.29887597, 0.19962801]])))\n\n    def test_run(self):\n        # Test correctness of run method\n        network = NeuralNetwork(3, 2, 1, 0.5)\n        network.weights_input_to_hidden = test_w_i_h.copy()\n        network.weights_hidden_to_output = test_w_h_o.copy()\n\n        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)", "Training the network\nHere you'll set the hyperparameters for 
the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. 
You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.", "import sys\n\n### Set the hyperparameters here ###\niterations = 30000\nlearning_rate = 0.0065\nhidden_nodes = 10\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()", "Check out your predictions\nHere, use the test data to view how well your network is modeling the data. 
If something is completely wrong here, make sure each step in your network is implemented correctly.", "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
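The forward and backward passes implemented cell by cell in the notebook above can also be written as a standalone, vectorized NumPy sketch. This is not the project's reference solution — the data, layer sizes, and learning rate below are made up for illustration — but the math (sigmoid hidden layer, linear output unit, batch-averaged weight steps) mirrors the `train()` method:

```python
import numpy as np

def sigmoid(x):
    # Logistic activation for the hidden layer
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, w_ih, w_ho):
    # Hidden layer: weighted sum then sigmoid
    hidden = sigmoid(X @ w_ih)
    # Output unit is linear (f(x) = x), so no activation on the way out
    return hidden @ w_ho, hidden

def train_step(X, y, w_ih, w_ho, lr):
    out, hidden = forward(X, w_ih, w_ho)
    error = y - out                        # linear output unit, so its derivative is 1
    hidden_error = error @ w_ho.T          # error propagated back through output weights
    hidden_term = hidden_error * hidden * (1.0 - hidden)  # times the sigmoid derivative
    n = X.shape[0]
    # Average the weight steps over the batch, as the notebook's train() does
    w_ho = w_ho + lr * hidden.T @ error / n
    w_ih = w_ih + lr * X.T @ hidden_term / n
    return w_ih, w_ho

# Tiny synthetic regression problem (sizes and learning rate are arbitrary choices)
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = X.sum(axis=1, keepdims=True)
w_ih = rng.normal(scale=3 ** -0.5, size=(3, 2))
w_ho = rng.normal(scale=2 ** -0.5, size=(2, 1))

mse_before = float(np.mean((forward(X, w_ih, w_ho)[0] - y) ** 2))
for _ in range(1000):
    w_ih, w_ho = train_step(X, y, w_ih, w_ho, lr=0.1)
mse_after = float(np.mean((forward(X, w_ih, w_ho)[0] - y) ** 2))
print("MSE before: %.4f, after: %.4f" % (mse_before, mse_after))
```

Because the whole batch is handled with matrix products instead of a per-sample Python loop, the same update rule runs in a fraction of the time on larger batches.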
gcgruen/homework
foundations-homework/05/homework-05-gruen-nyt_graded.ipynb
mit
[ "All APIs: http://developer.nytimes.com/\nArticle search API: http://developer.nytimes.com/article_search_v2.json\nBest-seller API: http://developer.nytimes.com/books_api.json#/Documentation\nTest/build queries: http://developer.nytimes.com/\nTip: Remember to include your API key in all requests! And their interactive web thing is pretty bad. You'll need to register for the API key.\n1) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?", "#API Key: 0c3ba2a8848c44eea6a3443a17e57448", "Graded = 8/8", "import requests\nbestseller_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/2009-05-10/hardcover-fiction?api-key=0c3ba2a8848c44eea6a3443a17e57448')\nbestseller_data = bestseller_response.json()\nprint(\"The type of bestseller_data is:\", type(bestseller_data))\nprint(\"The keys of bestseller_data are:\", bestseller_data.keys())\n\n# Exploring the data structure further\nbestseller_books = bestseller_data['results']\nprint(type(bestseller_books))\nprint(bestseller_books[0])\n\nfor book in bestseller_books:\n    #print(\"NEW BOOK!!!\")\n    #print(book['book_details'])\n    #print(book['rank'])\n    if book['rank'] == 1:\n        for element in book['book_details']:\n            print(\"The book that topped the hardcover fiction NYT Bestseller list on Mothers Day in 2009 was\", element['title'], \"written by\", element['author'])", "After writing code that returns a result, we now automate it for the various dates using a function:", "def bestseller(x, y):\n    bestsellerA_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/'+ x +'/hardcover-fiction?api-key=0c3ba2a8848c44eea6a3443a17e57448')\n    bestsellerA_data = bestsellerA_response.json()\n    bestsellerA_books = bestsellerA_data['results']\n    \n    for book in bestsellerA_books:\n        if book['rank'] == 1:\n            for element in book['book_details']:\n                print(\"The book that topped the hardcover fiction NYT Bestseller list on\", y, \"was\", \n
element['title'], \"written by\", element['author'])\n\nbestseller('2009-05-10', \"Mothers Day 2009\")\nbestseller('2010-05-09', \"Mothers Day 2010\")\nbestseller('2009-06-21', \"Fathers Day 2009\")\nbestseller('2010-06-20', \"Fathers Day 2010\")\n\n#Alternative solution would be, instead of putting this code into a function to loop it: \n#1) to create a dictionary called dates containing y as keys and x as values to these keys\n#2) to take the above code and nest it into a for loop that loops through the dates, each time using the next key:value pair\n # for date in dates:\n # replace value in URL and run the above code used inside the function\n # replace key in print statement", "2) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?", "# STEP 1: Exploring the data structure using just one of the dates from the question\nbookcat_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2009-06-06&api-key=0c3ba2a8848c44eea6a3443a17e57448')\nbookcat_data = bookcat_response.json()\nprint(type(bookcat_data))\nprint(bookcat_data.keys())\n\nbookcat = bookcat_data['results']\nprint(type(bookcat))\nprint(bookcat[0])\n\n# STEP 2: Writing a loop that runs the same code for both dates (no function, as only one variable)\ndates = ['2009-06-06', '2015-06-15']\nfor date in dates:\n bookcatN_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=' + date + '&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n bookcatN_data = bookcatN_response.json()\n bookcatN = bookcatN_data['results']\n \n category_listN = []\n for category in bookcatN:\n category_listN.append(category['display_name'])\n print(\" \")\n print(\"THESE WERE THE DIFFERENT BOOK CATEGORIES THE NYT RANKED ON\", date)\n for cat in category_listN:\n print(cat)\n\n# STEP 1a: EXPLORING THE DATA\n\ntest_response = 
requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gaddafi+Libya&api-key=0c3ba2a8848c44eea6a3443a17e57448')\ntest_data = test_response.json()\nprint(type(test_data))\nprint(test_data.keys())\n\ntest_hits = test_data['response']\nprint(type(test_hits))\nprint(test_hits.keys())\n\n# STEP 1b: EXPLORING THE META DATA\n\ntest_hits_meta = test_data['response']['meta']\nprint(\"The meta data of the search request is a\", type(test_hits_meta))\nprint(\"The dictionary despot_hits_meta has the following keys:\", test_hits_meta.keys())\nprint(\"The search requests with the TEST URL yields total:\")\ntest_hit_count = test_hits_meta['hits']\nprint(test_hit_count)\n\n# STEP 2: BUILDING THE CODE TO LOOP THROUGH DIFFERENT SPELLINGS\ndespot_names = ['Gadafi', 'Gaddafi', 'Kadafi', 'Qaddafi']\n\nfor name in despot_names:\n despot_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=' + name +'+Libya&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n despot_data = despot_response.json()\n \n despot_hits_meta = despot_data['response']['meta']\n despot_hit_count = despot_hits_meta['hits']\n print(\"The NYT has referred to the Libyan despot\", despot_hit_count, \"times using the spelling\", name)", "4) What's the title of the first story to mention the word 'hipster' in 1995? 
What's the first paragraph?", "hip_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=hipster&fq=pub_year:1995&api-key=0c3ba2a8848c44eea6a3443a17e57448')\nhip_data = hip_response.json()\nprint(type(hip_data))\nprint(hip_data.keys())\n\n# STEP 1: EXPLORING THE DATA STRUCTURE:\n\nhipsters = hip_data['response']\n#print(hipsters)\n#hipsters_meta = hipsters['meta']\n#print(type(hipsters_meta))\nhipsters_results = hipsters['docs']\nprint(hipsters_results[0].keys())\n#print(type(hipsters_results))\n\n#STEP 2: LOOPING FOR THE ANSWER:\n\nearliest_date = '1996-01-01'\nfor mention in hipsters_results:\n if mention['pub_date'] < earliest_date:\n earliest_date = mention['pub_date']\n print(\"This is the headline of the first text to mention 'hipster' in 1995:\", mention['headline']['main'])\n print(\"It was published on:\", mention['pub_date']) \n print(\"This is its lead paragraph:\")\n print(mention['lead_paragraph'])", "5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-2099, 2000-2009, and 2010-present?\nTip: You'll want to put quotes around the search term so it isn't just looking for \"gay\" and \"marriage\" in the same article.\nTip: Write code to find the number of mentions between Jan 1, 1950 and Dec 31, 1959.", "# data structure requested same as in task 3, just this time loop though different date ranges\n\ndef countmention(a, b, c):\n if b == ' ':\n marry_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=\"gay marriage\"&begin_date='+ a +'&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n else:\n marry_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=\"gay marriage\"&begin_date='+ a +'&end_date='+ b +'&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n \n marry_data = marry_response.json()\n\n marry_hits_meta = marry_data['response']['meta']\n marry_hit_count = marry_hits_meta['hits']\n print(\"The count for NYT 
articles mentioning 'gay marriage' between\", c, \"is\", marry_hit_count)\n\n#supposedly, there's a way to solve the following part in a more efficient way, but those I tried did not work, \n#so it ended up being more time-efficient just to type it:\ncountmention('19500101', '19591231', '1950 and 1959')\ncountmention('19600101', '19691231', '1960 and 1969')\ncountmention('19700101', '19791231', '1970 and 1979')\ncountmention('19800101', '19891231', '1980 and 1989')\ncountmention('19900101', '19991231', '1990 and 1999')\ncountmention('20000101', '20091231', '2000 and 2009')\ncountmention('20100101', ' ', '2010 and present')", "6) What section talks about motorcycles the most?\nTip: You'll be using facets", "moto_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcycle&facet_field=section_name&facet_filter=true&api-key=0c3ba2a8848c44eea6a3443a17e57448')\nmoto_data = moto_response.json()\n\n#STEP 1: EXPLORING DATA STRUCTURE\n#print(type(moto_data))\n#print(moto_data.keys())\n#print(moto_data['response'])\n#print(moto_data['response'].keys())\n#print(moto_data['response']['facets'])\n\n#STEP 2: Code to get to the answer\nmoto_facets = moto_data['response']['facets']\n#print(moto_facets)\n#print(moto_facets.keys())\nmoto_sections = moto_facets['section_name']['terms']\n#print(moto_sections)\n\n#this for loop is not necessary, but it's nice to know the counts \n#(also to check whether the next loop identifies the right section)\nfor section in moto_sections:\n print(\"The section\", section['term'], \"mentions motorcycles\", section['count'], \"times.\")\n\nmost_motorcycles = 0\nfor section in moto_sections:\n if section['count'] > most_motorcycles:\n most_motorcycles = section['count']\n print(\" \")\n print(\"That means the section\", section['term'], \"mentions motorcycles the most, namely\", section['count'], \"times.\")", "7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? 
The last 60?\nTip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.", "picks_offset_values = [0, 20, 40]\npicks_review_list = []\n\nfor value in picks_offset_values:\n picks_response = requests.get ('http://api.nytimes.com/svc/movies/v2/reviews/search.json?&offset=' + str(value) + '&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n picks_data = picks_response.json()\n\n#STEP 1: EXPLORING THE DATA STRUCTURE (without the loop)\n\n#print(picks_data.keys())\n#print(picks_data['num_results'])\n#print(picks_data['results'])\n#print(type(picks_data['results']))\n#print(picks_data['results'][0].keys())\n\n#STEP 2: After writing a test code (not shown) without the loop, now CODING THE LOOP\n\n last_reviews = picks_data['num_results']\n picks_results = picks_data['results']\n \n critics_pick_count = 0\n for review in picks_results:\n if review['critics_pick'] == 1:\n critics_pick_count = critics_pick_count + 1\n picks_new_count = critics_pick_count \n picks_review_list.append(picks_new_count)\n print(\"Out of the last\", last_reviews + value, \"movie reviews,\", sum(picks_review_list), \"were Critics' picks.\")", "8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews?", "#STEP 1: EXPLORING THE DATA STRUCTURE (without the loop)\n#critics_response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?&offset=0&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n#critics_data = critics_response.json()\n#print(critics_data.keys())\n#print(critics_data['num_results'])\n#print(critics_data['results'])\n#print(type(critics_data['results']))\n#print(critics_data['results'][0].keys())\n\n#STEP 2: CREATE A LOOP, THAT GOES THROUGH THE SEARCH RESULTS FOR EACH OFFSET VALUE AND STORES THE RESULTS IN THE SAME 
LIST\n#(That list is then passed on to step 3)\n\ncritics_offset_value = [0, 20]\ncritics_list = [ ]\nfor value in critics_offset_value:\n critics_response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?&offset=' + str(value) + '&api-key=0c3ba2a8848c44eea6a3443a17e57448')\n critics_data = critics_response.json()\n \n critics = critics_data['results']\n\n for review in critics:\n critics_list.append(review['byline'])\n #print(critics_list)\nunique_critics = set(critics_list)\n#print(unique_critics)\n \n#STEP 3: FOR EVERY NAME IN THE UNIQUE CRITICS LIST, LOOP THROUGH NON-UNIQUE LIST TO COUNT HOW OFTEN THEY OCCUR\n#STEP 4: SELECT THE ONE THAT HAS WRITTEN THE MOST (from the #print statement below, I know it's two people with same score)\n\nmax_count = 0\nfor name in unique_critics:\n name_count = 0\n for critic in critics_list:\n if critic == name:\n name_count = name_count + 1\n if name_count > max_count:\n max_count = name_count\n max_name = name\n if name_count == max_count:\n same_count = name_count\n same_name = name\n #print(name, \"has written\", name_count, \"reviews out of the last 40 reviews.\")\nprint(max_name, \"has written the most of the last 40 reviews:\", max_count)\nprint(same_name, \"has written the most of the last 40 reviews:\", same_count)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
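The byline-tallying in question 8 above (build a list, take a `set`, then loop over the loop) can be collapsed with `collections.Counter`. A small sketch on a made-up list of bylines — the names below are placeholders, not real output from the reviews endpoint:

```python
from collections import Counter

# Hypothetical bylines as they might come back from the movie reviews API
bylines = [
    "A. O. SCOTT", "MANOHLA DARGIS", "A. O. SCOTT",
    "GLENN KENNY", "MANOHLA DARGIS", "A. O. SCOTT",
]

# Counter tallies each distinct byline in one pass
counts = Counter(bylines)

# most_common(1) returns a list of (name, count) pairs, highest count first
top_name, top_count = counts.most_common(1)[0]
print(top_name, "has written the most reviews:", top_count)
```

With `most_common()` there is also no need for the separate tie-tracking variables; asking for `most_common(2)` and comparing the top two counts would reveal a tie directly.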
martinggww/lucasenlights
MachineLearning/DataScience-Python3/TopPages.ipynb
cc0-1.0
[ "Cleaning Your Data\nLet's take a web access log, and figure out the most-viewed pages on a website from it! Sounds easy, right?\nLet's set up a regex that lets us parse an Apache access log line:", "import re\n\nformat_pat= re.compile(\n r\"(?P<host>[\\d\\.]+)\\s\"\n r\"(?P<identity>\\S*)\\s\"\n r\"(?P<user>\\S*)\\s\"\n r\"\\[(?P<time>.*?)\\]\\s\"\n r'\"(?P<request>.*?)\"\\s'\n r\"(?P<status>\\d+)\\s\"\n r\"(?P<bytes>\\S*)\\s\"\n r'\"(?P<referer>.*?)\"\\s'\n r'\"(?P<user_agent>.*?)\"\\s*'\n)\n", "Here's the full path to the log file I'm analyzing; change this if you want to run this stuff yourself:", "logPath = \"E:\\\\sundog-consult\\\\Udemy\\\\DataScience\\\\access_log.txt\"", "Now we'll whip up a little script to extract the URL in each access, and use a dictionary to count up the number of times each one appears. Then we'll sort it and print out the top 20 pages. What could go wrong?", "URLCounts = {}\n\nwith open(logPath, \"r\") as f:\n for line in (l.rstrip() for l in f):\n match= format_pat.match(line)\n if match:\n access = match.groupdict()\n request = access['request']\n (action, URL, protocol) = request.split()\n if URLCounts.has_key(URL):\n URLCounts[URL] = URLCounts[URL] + 1\n else:\n URLCounts[URL] = 1\n\nresults = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)\n\nfor result in results[:20]:\n print(result + \": \" + str(URLCounts[result]))", "Hm. The 'request' part of the line is supposed to look something like this:\nGET /blog/ HTTP/1.1\nThere should be an HTTP action, the URL, and the protocol. But it seems that's not always happening. Let's print out requests that don't contain three items:", "URLCounts = {}\n\nwith open(logPath, \"r\") as f:\n for line in (l.rstrip() for l in f):\n match= format_pat.match(line)\n if match:\n access = match.groupdict()\n request = access['request']\n fields = request.split()\n if (len(fields) != 3):\n print(fields)\n", "Huh. In addition to empty fields, there's one that just contains garbage. 
Well, let's modify our script to check for that case:", "URLCounts = {}\n\nwith open(logPath, \"r\") as f:\n for line in (l.rstrip() for l in f):\n match= format_pat.match(line)\n if match:\n access = match.groupdict()\n request = access['request']\n fields = request.split()\n if (len(fields) == 3):\n URL = fields[1]\n if URLCounts.has_key(URL):\n URLCounts[URL] = URLCounts[URL] + 1\n else:\n URLCounts[URL] = 1\n\nresults = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)\n\nfor result in results[:20]:\n print(result + \": \" + str(URLCounts[result]))", "It worked! But, the results don't really make sense. What we really want is pages accessed by real humans looking for news from our little news site. What the heck is xmlrpc.php? A look at the log itself turns up a lot of entries like this:\n46.166.139.20 - - [05/Dec/2015:05:19:35 +0000] \"POST /xmlrpc.php HTTP/1.0\" 200 370 \"-\" \"Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)\"\nI'm not entirely sure what the script does, but it points out that we're not just processing GET actions. We don't want POSTS, so let's filter those out:", "URLCounts = {}\n\nwith open(logPath, \"r\") as f:\n for line in (l.rstrip() for l in f):\n match= format_pat.match(line)\n if match:\n access = match.groupdict()\n request = access['request']\n fields = request.split()\n if (len(fields) == 3):\n (action, URL, protocol) = fields\n if (action == 'GET'):\n if URLCounts.has_key(URL):\n URLCounts[URL] = URLCounts[URL] + 1\n else:\n URLCounts[URL] = 1\n\nresults = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)\n\nfor result in results[:20]:\n print(result + \": \" + str(URLCounts[result]))", "That's starting to look better. But, this is a news site - are people really reading the little blog on it instead of news pages? That doesn't make sense. Let's look at a typical /blog/ entry in the log:\n54.165.199.171 - - [05/Dec/2015:09:32:05 +0000] \"GET /blog/ HTTP/1.0\" 200 31670 \"-\" \"-\"\nHm. 
Why is the user agent blank? Seems like some sort of malicious scraper or something. Let's figure out what user agents we are dealing with:", "UserAgents = {}\n\nwith open(logPath, \"r\") as f:\n    for line in (l.rstrip() for l in f):\n        match = format_pat.match(line)\n        if match:\n            access = match.groupdict()\n            agent = access['user_agent']\n            if agent in UserAgents:\n                UserAgents[agent] = UserAgents[agent] + 1\n            else:\n                UserAgents[agent] = 1\n\nresults = sorted(UserAgents, key=lambda i: int(UserAgents[i]), reverse=True)\n\nfor result in results:\n    print(result + \": \" + str(UserAgents[result]))", "Yikes! In addition to '-', there are also a million different web robots accessing the site and polluting my data. Filtering out all of them is really hard, but getting rid of the ones significantly polluting my data in this case should be a matter of getting rid of '-', anything containing \"bot\" or \"spider\", and W3 Total Cache.", "URLCounts = {}\n\nwith open(logPath, \"r\") as f:\n    for line in (l.rstrip() for l in f):\n        match = format_pat.match(line)\n        if match:\n            access = match.groupdict()\n            agent = access['user_agent']\n            if (not('bot' in agent or 'spider' in agent or \n                    'Bot' in agent or 'Spider' in agent or\n                    'W3 Total Cache' in agent or agent == '-')):\n                request = access['request']\n                fields = request.split()\n                if (len(fields) == 3):\n                    (action, URL, protocol) = fields\n                    if (action == 'GET'):\n                        if URL in URLCounts:\n                            URLCounts[URL] = URLCounts[URL] + 1\n                        else:\n                            URLCounts[URL] = 1\n\nresults = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)\n\nfor result in results[:20]:\n    print(result + \": \" + str(URLCounts[result]))", "Now, our new problem is that we're getting a bunch of hits on things that aren't web pages. 
We're not interested in those, so let's filter out any URL that doesn't end in / (all of the pages on my site are accessed in that manner - again this is applying knowledge about my data to the analysis!)", "URLCounts = {}\n\nwith open(logPath, \"r\") as f:\n    for line in (l.rstrip() for l in f):\n        match = format_pat.match(line)\n        if match:\n            access = match.groupdict()\n            agent = access['user_agent']\n            if (not('bot' in agent or 'spider' in agent or \n                    'Bot' in agent or 'Spider' in agent or\n                    'W3 Total Cache' in agent or agent == '-')):\n                request = access['request']\n                fields = request.split()\n                if (len(fields) == 3):\n                    (action, URL, protocol) = fields\n                    if (URL.endswith(\"/\")):\n                        if (action == 'GET'):\n                            if URL in URLCounts:\n                                URLCounts[URL] = URLCounts[URL] + 1\n                            else:\n                                URLCounts[URL] = 1\n\nresults = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)\n\nfor result in results[:20]:\n    print(result + \": \" + str(URLCounts[result]))", "This is starting to look more believable! But if you were to dig even deeper, you'd find that the /feed/ pages are suspect, and some robots are still slipping through. However, it is accurate to say that Orlando news, world news, and comics are the most popular pages accessed by a real human on this day.\nThe moral of the story is - know your data! And always question and scrutinize your results before making decisions based on them. If your business makes a bad decision because you provided an analysis of bad source data, you could get into real trouble.\nBe sure the decisions you make while cleaning your data are justifiable too - don't strip out data just because it doesn't support the results you want!\nActivity\nThese results still aren't perfect; URLs that include \"feed\" aren't actually pages viewed by humans. Modify this code further to strip out URLs that include \"/feed\". Even better, extract some log entries for these pages and understand where these views are coming from." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
particle-physics-playground/playground
activities/codebkg_DownloadData.ipynb
mit
[ "This notebook provides a way to download data files using the <a href=\"http://docs.python-requests.org/en/latest/\">Python requests library</a>. You'll need to have this library installed on your system to do any work. \nThe first thing we do is import some local helper code that allows us to to download a data file(s), given the url(s).\nWe also define where those data files can be found. \nMake sure you execute the cell below before trying download any of the CLEO or CMS data!", "import pps_tools as pps\n\n#pps.download_drive_file()\n#pps.download_file()", "All of the data files for Particle Physics Playground are currently hosted in this Google Drive folder. To download them, you will need the download_drive_file function, which takes the file name (with proper file ending) as an argument, and downloads it as a file of the same name to the 'data' directory included when you cloned Playground.\nThe download_file function can be used to download files from the web that aren't on Google Drive. This function takes a url address as an argument. Though this functionality is provided, it should be unnecessary for all of the included activities.\n<a href = \"https://en.wikipedia.org/wiki/CLEO_(particle_detector)\">CLEO</a> data\nHere is a list of Monte Carlo (MC) and data files from CLEO. The MC files are for specific decays of $D$ mesons, both charged and neutral. For any given file, there are always (CHECK THIS!!!!!!) two D mesons produced. One decays according to the measured branching fractions, and the other decays through a very specific process. The specific decay is in the name of the file. 
For example, \nSingle_D0_to_Kpi_LARGE.dat\n\nwould be simulating the following process:\n$$e^+e^- \\rightarrow D^0 \\bar{D}^0$$\n$$D^0 \\rightarrow \\textrm{standard decays}$$\n$$\\bar{D}^0 \\rightarrow K^- \\pi^+$$\nwhere the $D^0$ and $\\bar{D}^0$ can be exchanged.", "cleo_MC_files = ['Single_D0B_to_KK_ISR_LARGE.dat',\n'Single_D0B_to_Kenu_ISR_LARGE.dat',\n'Single_D0B_to_Kpipi0_ISR_LARGE.dat',\n'Single_D0B_to_Kstenu_ISR_LARGE.dat',\n'Single_D0B_to_phigamma_ISR_LARGE.dat',\n'Single_D0B_to_pipi_ISR_LARGE.dat',\n'Single_D0_to_KK_ISR_LARGE.dat',\n'Single_D0_to_Kenu_ISR_LARGE.dat',\n'Single_D0_to_Kpi_LARGE.dat',\n'Single_D0_to_Kpipi0_ISR_LARGE.dat',\n'Single_D0_to_Kstenu_ISR_LARGE.dat',\n'Single_D0_to_phigamma_ISR_LARGE.dat',\n'Single_D0_to_pipi_ISR_LARGE.dat',\n'Single_Dm_to_Kpipi_ISR_LARGE.dat',\n'Single_Dp_to_Kpipi_ISR_LARGE.dat']\n\ncleo_data_files = ['data31_100k_LARGE.dat']", "Download the data here!\nThe snippet below can be used to download as much or as little of the extra data as you like. It is currently commented out and set up to download the first two CLEO MC files, but you can edit it to grab whatever data you like. 
\nHave fun!", "'''\nfor filename in cleo_MC_files[0:2]: \n pps.download_drive_file(filename)\n''';", "<a href = \"https://en.wikipedia.org/wiki/Compact_Muon_Solenoid\">CMS</a> data\nCMS dimuon data", "cms_data_files = ['dimuons_100k.dat']\n\n'''\npps.download_drive_file(cms_data_files[0])\n''';", "CMS data for top-quark reconstruction exercise", "cms_top_quark_files = ['data.zip',\n 'ttbar.zip',\n 'wjets.zip',\n 'dy.zip',\n 'ww.zip',\n 'wz.zip',\n 'zz.zip',\n 'single_top.zip',\n 'qcd.zip']\n\n'''\nfor filename in cms_top_quark_files[0:2]:\n pps.download_drive_file(filename)\n''';", "BaBar data", "babar_data_files = ['basicPID_R24-AllEvents-Run1-OnPeak-R24-38.hdf5',\n 'basicPID_R24-AllEvents-Run1-OnPeak-R24-388.hdf5',\n 'basicPID_R24-AllEvents-Run1-OnPeak-R24-1133.hdf5',\n 'basicPID_R24-AllEvents-Run1-OnPeak-R24-1552.hdf5',\n 'basicPID_R24-AllEvents-Run1-OnPeak-R24-1694.hdf5',\n 'basicPID_R24-AllEvents-Run1-OnPeak-R24-1920.hdf5',\n 'basicPID_R24-AllEvents-Run1-OnPeak-R24-2026.hdf5',\n 'basicPID_R24-AllEvents-Run1-OnPeak-R24-2781.hdf5',\n 'basicPID_R24-AllEvents-Run1-OnPeak-R24-2835.hdf5']\n\n'''\nfor filename in babar_data_files[0:2]:\n pps.download_drive_file(filename)\n''';" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bartdevylder/bikecity-tutorial
source/1/index.ipynb
mit
[ "Python for Data Science Workshop @VeloCity\n1.1 Jupyter Notebook\nJupyter notebook is often used by data scientists who work in Python. It is loosely based on Mathematica and combines code, text and visual output in one page.\nSome relevant short cuts:\n* SHIFT + ENTER executes 1 block of code called a cell\n* Tab-completion is omnipresent after the import of a package has been executed\n* SHIFT + TAB gives you extra information on what parameters a function takes\n* Repeating SHIFT + TAB multiple times gives you even more information\nTo get used to these short cuts try them out on the cell below.", "print 'Hello world!'\nprint range(5)", "1.2 Numpy arrays\nWe'll be working often with numpy arrays so here's a very short introduction.", "import numpy as np\n\n# This is a two-dimensional numpy array\narr = np.array([[1,2,3,4],[5,6,7,8]])\nprint arr\n\n# The shape is a tuple describing the size of each dimension\nprint \"shape=\" + str(arr.shape)\n\n# The numpy reshape method allows one to change the shape of an array, while keeping the underlying data.\n# One can leave one dimension unspecified by passing -1, it will be determined from the size of the data.\n\nprint \"As 4x2 matrix\" \nprint np.reshape(arr, (4,2))\n\nprint \nprint \"As 8x1 matrix\" \nprint np.reshape(arr, (-1,1))\n\nprint \nprint \"As 2x2x2 array\" \nprint np.reshape(arr, (2,2,-1))", "Basic arithmetical operations on arrays of the same shape are done elementwise:", "x = np.array([1.,2.,3.])\ny = np.array([4.,5.,6.])\n\nprint x + y\nprint x - y\nprint x * y\nprint x / y", "1.3 Parts to be implemented\nIn cells like the following example you are expected to implement some code. The remainder of the tutorial won't work if you skip these.\nSometimes assertions are added as a check.", "### BEGIN SOLUTION\nthree = 3\n### END SOLUTION\n# three = ?\nassert three == 3", "2. Anomaly Detection\n2.1 Load Data\nFirst we will load the data using a pickle format. 
(We use import cPickle as pickle because cPickle is faster.)\nThe data we use contains the pageviews of one of our own websites and for convenience there is only 1 data point per hour.", "import cPickle as pickle\n\npast = pickle.load(open('data/past_data.pickle'))\nall_data = pickle.load(open('data/all_data.pickle'))", "2.2 Plot past data\nTo plot the past data we will use matplotlib.pyplot. For convenience we import it as plt. \n% matplotlib inline makes sure you can see the output in the notebook. \n(Use % matplotlib notebook if you want to make it interactive. Don't forget to click the power button to finish the interaction and to be able to plot a new figure.)", "% matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(20,4)) # This creates a new figure with the dimensions of 20 by 4\nplt.plot(past) # This creates the actual plot\nplt.show() # This shows the plot", "2.3 Find the minimum and maximum\nUse np.nanmax() and np.nanmin() to find the minimum and maximum while ignoring the NaNs.", "import numpy as np\n\n### BEGIN SOLUTION\nmaximum = np.nanmax(past)\nminimum = np.nanmin(past)\n### END SOLUTION\n# maximum = ?\n# minimum = ?\nprint minimum, maximum", "And plot these together with the data using the plt.axhline() function.", "plt.figure(figsize=(20,4))\nplt.plot(past)\nplt.axhline(maximum, color='r')\nplt.axhline(minimum, color='r')\nplt.show()", "2.4 Testing the model on unseen data\nNow plot all the data instead of just the past data.", "plt.figure(figsize=(20,4))\nplt.plot(all_data, color='g')\nplt.plot(past, color='b')\nplt.axhline(maximum, color='r')\nplt.axhline(minimum, color='r')\nplt.show()", "You can clearly see now that this model does not detect any anomalies. 
However, the last day of data clearly looks different compared to the other days.\nIn what follows we will build a better model for anomaly detection that is able to detect these 'shape shifts' as well.\n2.5 Building a model with seasonality\nTo do this we are going to take a step-by-step approach. Maybe it won't be clear at first why every step is necessary, but it will become clear as we go.\nFirst we are going to reshape the past data to a 2-dimensional array with 24 columns. This will give us 1 row for each day and 1 column for each hour. For this we are going to use the np.reshape() function. The newshape parameter is a tuple which in this case should be (-1, 24). If you use a -1 the reshape function will automatically compute that dimension. Pay attention to the order in which the numbers are repositioned (the default ordering should work fine here).", "### BEGIN SOLUTION\nreshaped_past = past.reshape((-1, 24))\n### END SOLUTION\n# reshaped_past = ?\n\nassert len(reshaped_past.shape) == 2\nassert reshaped_past.shape[1] == 24", "Now we are going to compute the average over all days. For this we are going to use np.mean() with the axis parameter set to the first dimension (axis=0). Next we are going to plot this.", "### BEGIN SOLUTION\naverage_past = np.mean(reshaped_past, axis=0)\n### END SOLUTION\n# average_past = \n\nassert average_past.shape == (24,)\n\nplt.plot(average_past)\nplt.show()", "What you can see in the plot above is the average number of pageviews for each hour of the day.\nNow let's plot this together with the past data on 1 plot. Use a for loop and the np.concatenate() function to concatenate this average 6 times into the variable model.", "model = []\nfor i in range(6):\n### BEGIN SOLUTION\n    model = np.concatenate((model, average_past))\n### END SOLUTION\n# model = np.concatenate( ? 
)\n\nplt.figure(figsize=(20,4)) \nplt.plot(model, color='k')\nplt.plot(past, color='b')\nplt.show()", "In the next step we are going to compute the maximum (= positive) and minimum (= negative) deviations from the average to determine what kind of deviations are normal. (Just subtract the average/model from the past and take the min and the max of that.)", "### BEGIN SOLUTION\ndelta_max = np.nanmax(past - model)\ndelta_min = np.nanmin(past - model)\n### END SOLUTION\nprint delta_min, delta_max", "Now let's plot this.", "plt.figure(figsize=(20,4))\nplt.plot(model, color='k')\nplt.plot(past, color='b')\nplt.plot(model + delta_max, color='r')\nplt.plot(model + delta_min, color='r')\nplt.show()", "Now let's test this on all data.", "model_all = np.concatenate((model, average_past))\n\nplt.figure(figsize=(20,4))\nplt.plot(all_data, color='g')\nplt.plot(model_all, color='k')\nplt.plot(past, color='b')\nplt.plot(model_all + delta_max, color='r')\nplt.plot(model_all + delta_min, color='r')\nplt.show()", "Now you can clearly see where the anomaly is detected by this more advanced model. The code below gives you the exact indices where an anomaly is detected. The functions used are np.where() and np.logical_or().", "anomaly_timepoints = np.where(np.logical_or(all_data < model_all + delta_min, all_data > model_all + delta_max))[0]\n\nplt.figure(figsize=(20,4))\nplt.scatter(anomaly_timepoints, all_data[anomaly_timepoints], color='r', linewidth=8)\nplt.plot(all_data, color='g')\nplt.plot(model_all, color='k')\nplt.plot(past, color='b')\nplt.plot(model_all + delta_max, color='r')\nplt.plot(model_all + delta_min, color='r')\nplt.xlim(0, len(all_data))\nplt.show()\n\nprint 'The anomaly occurs at the following timestamps:', anomaly_timepoints", "3. Modeling\nIt is often desired to understand the relationship between different sources of information. As an example we'll consider the historical request rate of a web server and compare it to its CPU usage. 
We'll try to predict the CPU usage of the server based on the request rates of the different pages. First some imports:", "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport pylab\npylab.rcParams['figure.figsize'] = (13.0, 8.0)\n%matplotlib inline", "3.1 Data import and inspection\nPandas is a popular library for data wrangling; we'll use it to load and inspect a CSV file that contains the historical web request rates and CPU usage of a web server:", "data = pd.DataFrame.from_csv(\"data/request_rate_vs_CPU.csv\")", "The head command allows one to quickly see the structure of the loaded data:", "data.head()", "We can select the CPU column and plot the data:", "data.plot(figsize=(13,8), y=\"CPU\")", "Next we plot the request rates, leaving out the CPU column as it has another unit:", "data.drop('CPU',1).plot(figsize=(13,8))", "Now to continue and start to model the data, we'll work with basic numpy arrays. By doing this we also drop the time-information as shown in the plots above.\nWe extract the column labels as the request_names for later reference:", "request_names = data.drop('CPU',1).columns.values\nrequest_names", "We extract the request rates as a 2-dimensional numpy array:", "request_rates = data.drop('CPU',1).values", "and the cpu usage as a one-dimensional numpy array", "cpu = data['CPU'].values", "3.2 Simple linear regression\nFirst, we're going to work with the total request rate on the server, and compare it to the CPU usage. 
The numpy function np.sum can be used to calculate the total request rate when selecting the right direction (axis=1) for the summation.", "### BEGIN SOLUTION \ntotal_request_rate = np.sum(request_rates, axis=1) \n### END SOLUTION\n# total_request_rate =\n\nassert total_request_rate.shape == (288,)", "Let's plot the total request rate to check:", "plt.figure(figsize=(13,8))\nplt.plot(total_request_rate)", "We can make use of a PyPlot's scatter plot to understand the relation between the total request rate and the CPU usage:", "plt.figure(figsize=(13,8))\nplt.xlabel(\"Total request rate\")\nplt.ylabel(\"CPU usage\")\n### BEGIN SOLUTION\nplt.scatter(total_request_rate, cpu)\n### END SOLUTION\n# plt.scatter( ? , ? )", "There clearly is a strong correlation between the request rate and the CPU usage. Because of this correlation we can build a model to predict the CPU usage from the total request rate. If we use a linear model we get a formula like the following:\n$$ \\text{cpu} = c_0 + c_1 \\text{total_request_rate} $$\nSince we don't know the exact values for $c_0$ and $c_1$ we will have to compute them. For that we'll make use of the scikit-learn machine learning library for Python and use least-squares linear regression", "from sklearn import linear_model\nsimple_lin_model = linear_model.LinearRegression()", "Now we need to feed the data to the model to fit it. The model.fit(X,y) method in general takes an array X and vector y as arguments:\n```\n X = [[x_11, x_12, x_13, ...], y = [y_1,\n [x_21, x_22, x_23, ...], y_2,\n [x_31, x_32, x_33, ...], y_3,\n ...] ...]\n```\nand tries to find coefficients that allow to predict the y_i's from the x_ij's. In our case the matrix X will consist of only 1 column containing the total request rates. 
Our total_request_rate variable, however, is still only a one-dimensional vector, so we need to np.reshape it into a two-dimensional array:", "### BEGIN SOLUTION\ntotal_request_rate_M = total_request_rate.reshape((-1,1))\n### END SOLUTION\n# total_request_rate_M = \n\n# Test to see it's a two dimensional array\nassert len(total_request_rate_M.shape) == 2\n# Test to see it's got only 1 column\nassert total_request_rate_M.shape[1] == 1", "Then we fit our model using the total request rate and cpu. The coefficients found are automatically stored in the simple_lin_model object.", "### BEGIN SOLUTION\nsimple_lin_model.fit(total_request_rate_M, cpu)\n### END SOLUTION\n# simple_lin_model.fit( ? , ? ) ", "We can now inspect the coefficient $c_1$ and constant term (intercept) $c_0$ of the model:", "print \"Coefficient = %s, constant term = %f\" % (str(simple_lin_model.coef_), simple_lin_model.intercept_)", "So this means that each additional request adds about 0.11% CPU load to the server and all the other processes running on the server consume on average 0.72% CPU.\nOnce the model is trained we can use it to predict the outcome for a given input (or array of inputs). Note that the predict function requires a 2-dimensional array similar to the fit function.\nWhat is the expected CPU usage when we have 880 requests per second?", "### BEGIN SOLUTION\nsimple_lin_model.predict([[880]])\n### END SOLUTION\n# simple_lin_model.predict( [[ ? ]] )", "Now we plot the linear model together with our data to verify it captures the relationship correctly (the predict method can accept the entire total_request_rate_M array at once).", "plt.figure(figsize=(13,8))\n\nplt.scatter(total_request_rate, cpu, color='black')\nplt.plot(total_request_rate, simple_lin_model.predict(total_request_rate_M), color='blue', linewidth=3)\n\nplt.xlabel(\"Total request rate\")\nplt.ylabel(\"CPU usage\")\n\nplt.show()", "Our model can calculate a score indicating how well the linear model captures the data. 
A score of 1 means the data is perfectly linear; a score of 0 (or lower) means the data is not linear at all (and it does not make sense to try to model it that way). The score method takes the same arguments as the fit method:", "simple_lin_model.score(total_request_rate_M, cpu)", "3.3 Multiple linear regression\nNow let us consider the separate request rates instead and build a linear model for that. The model we try to fit takes the form:\n$$\\text{cpu} = c_0 + c_1 \\text{request_rate}_1 + c_2 \\text{request_rate}_2 + \\ldots + c_n \\text{request_rate}_n$$\nwhere the $\\text{request_rate}_i$'s correspond to our different requests:", "print request_names", "We start again by creating a LinearRegression model.", "multi_lin_model = linear_model.LinearRegression()", "Next we fit the model on the data, using multi_lin_model.fit(X,y). In contrast to the case above our request_rates variable already has the correct shape to pass as the X matrix: it has one column per request type.", "### BEGIN SOLUTION\nmulti_lin_model.fit(request_rates, cpu)\n### END SOLUTION\n# multi_lin_model.fit( ? , ? )", "Now, given the coefficients calculated by the model, which capture the contribution of each request type to the total CPU usage, we can start to answer some interesting questions. For example, \nwhich request causes the most CPU usage, on a per-visit basis? \nFor this we can generate a table of request names with their coefficients in descending order:", "# combine the requests and the output in a pandas data frame for easy printing\nresult_table = pd.DataFrame(zip(request_names, multi_lin_model.coef_), columns=['Request', 'Coef'])\n\n# sort the results in descending order\nresult_table = result_table.sort_values(by='Coef',ascending=False)\n\n# executing this as the last command returns a nice table\nresult_table", "From this table we see that 'resources/js/basket.js' consumes the most CPU per request. It generates about 0.30% CPU load for each additional request. 
'products/science.html' on the other hand is much leaner and only consumes about 0.04% CPU per request.\nNow let us investigate the constant term again.", "print 'The other processes on the server consume %.2f%%' % multi_lin_model.intercept_", "As you can see this term is very similar to the result achieved in single linear regression, but it is not entirely the same. This means that these models are not perfect. However, they seem to be able to give a reliable estimate.\n3.4 Multiple linear regression 'advanced'\nIn the previous section we have modeled how much load each individual request generates. But in some cases you might want to transfer one of the requests to another server. Now, suppose we want to minimize average CPU usage on this server by diverting traffic of only one webpage to another server, which page should we choose?\nFor this we simulate diverting the traffic of one page to another server. This means that for the request that is diverted the rate becomes 0, while for the other requests we use the average rate.\nWe implement this by first calculating the average_request_rates using np.mean. These average_request_rates are then fed to the multi_lin_model.predict() method while setting each individual request rate to 0 in turn.\n(For linear models you can also compute the result based on the coefficients, but this approach also works for non-linear models.)", "### BEGIN SOLUTION\naverage_request_rates = np.mean(request_rates, axis=0)\n### END SOLUTION\n# average_request_rates = \nassert average_request_rates.shape == (6,)\n\nresults = []\n\n# Loop over all requests\nfor i in range(len(request_names)):\n    # make a copy of the array to avoid overwriting\n    tweaked_load = np.copy(average_request_rates)\n### BEGIN SOLUTION\n    tweaked_load[i] = 0\n    resulting_cpu = multi_lin_model.predict([tweaked_load])\n### END SOLUTION\n    # tweaked_load[ ? 
] = ?\n    # resulting_cpu = ?\n    \n    results.append( (request_names[i], \n                     multi_lin_model.coef_[i], \n                     average_request_rates[i], \n                     resulting_cpu))\n\n# Now we store the results in a pandas dataframe for easier inspection.\nmlin_df = pd.DataFrame(results, columns=['Diverted request', 'Coef', 'Rate', 'Predicted CPU'])\nmlin_df = mlin_df.sort_values(by='Predicted CPU')\nmlin_df", "As you can see in the table above, it is best to divert the traffic of 'api/product/get.php' (Why is the result different from the table based on the coefficients?)\n4. Forecasting\nFor forecasting we are going to use page view data, very similar to the data used in the anomaly detection section; again it contains 1 sample per hour.", "train_set = pickle.load(open('data/train_set_forecasting.pickle'))\n\nplt.figure(figsize=(20,4))\nplt.plot(train_set)\nplt.show()", "In the graph above you can clearly see that there is a rising trend in the data.\n4.1 One-step ahead prediction\nThis forecasting section will describe the one-step ahead prediction. 
This means we will only predict the next data point: in this case, the number of pageviews in the next hour.\nNow let's first build a model that tries to predict the next data point from the previous one.", "import sklearn\nimport sklearn.linear_model\nimport sklearn.gaussian_process\n\nmodel = sklearn.linear_model.LinearRegression()\n\n# the input X contains all the data except the last data point\nX = train_set[ : -1].reshape((-1, 1)) # the reshape is necessary since sklearn requires a 2 dimensional array\n\n# the output y contains all the data except the first data point\ny = train_set[1 : ]\n\n# this code fits the model on the train data\nmodel.fit(X, y)\n\n# this score gives you how well it fits on the train set\n# higher is better and 1.0 is perfect\nprint 'The score of the linear model is', model.score(X, y)", "As you can see from the score above, the model is not perfect but it seems to get a relatively high score. Now let's make a prediction into the future and plot this.\nTo predict the data point after that we will use the predicted data to make a new prediction. The code below shows how this works for this data set using the linear model you used earlier. Don't forget to fill out the missing code.", "nof_predictions = 100\n\nimport copy\n# use the last data point as the first input for the predictions\nx_test = copy.deepcopy(train_set[-1]) # make a copy to avoid overwriting the training data\n\nprediction = []\nfor i in range(nof_predictions):\n    # predict the next data point\n    y_test = model.predict([[x_test]])[0] # sklearn requires a 2 dimensional array and returns a one-dimensional one\n    \n    ### BEGIN SOLUTION\n    prediction.append(y_test)\n    x_test = y_test\n    ### END SOLUTION\n    # prediction.append( ? 
)\n    # x_test = ?\n\nprediction = np.array(prediction)\n\nplt.figure(figsize=(20,4))\nplt.plot(np.concatenate((train_set, prediction)), 'g')\nplt.plot(train_set, 'b')\nplt.show()", "As you can see from the image above the model doesn't quite seem to fit the data well. Let's see how we can improve this.\n4.2 Multiple features\nIf your model is not smart enough, there is a simple trick in machine learning to make your model more intelligent (but also more complex): adding more features.\nTo make our model better we will use more than 1 sample from the past. To make your life easier there is a simple function below that will create a data set for you. The width parameter sets the number of hours in the past that will be used.", "def convert_time_series_to_Xy(ts, width):\n    X, y = [], []\n    for i in range(len(ts) - width - 1):\n        X.append(ts[i : i + width])\n        y.append(ts[i + width])\n    return np.array(X), np.array(y)\n\nwidth = 5\nX, y = convert_time_series_to_Xy(train_set, width)\n\nprint X.shape, y.shape", "As you can see from the print above, both X and y contain 303 data points. 
For X you see that there are now 5 features which contain the pageviews from the past 5 hours.\nSo let's have a look at what increasing from 1 to 5 features results in.", "width = 5\nX, y = convert_time_series_to_Xy(train_set, width)\nmodel = sklearn.linear_model.LinearRegression()\nmodel.fit(X,y)\nprint 'The score of the linear model with width =', width, 'is', model.score(X, y)", "Now change the width parameter to see if you can get a better score.\n4.3 Over-fitting\nNow execute the code below to see the prediction of this model.", "import copy\n\n# this is a helper function to make the predictions\ndef predict(model, train_set, width, nof_points):\n    prediction = []\n    # create the input data set for the first predicted output\n    # copy the data to make sure the original is not overwritten\n    x_test = copy.deepcopy(train_set[-width : ]) \n    for i in range(nof_points):\n        # predict only the next data point\n        prediction.append(model.predict(x_test.reshape((1, -1))))\n        # use the newly predicted data point as input for the next prediction\n        x_test[0 : -1] = x_test[1 : ]\n        x_test[-1] = prediction[-1]\n    return np.array(prediction)\n\nnof_predictions = 200\nprediction = predict(model, train_set, width, nof_predictions)\n\nplt.figure(figsize=(20,4))\nplt.plot(np.concatenate((train_set, prediction[:,0])), 'g')\nplt.plot(train_set, 'b')\nplt.show()", "As you can see in the image above the prediction is not what you would expect from a perfect model. What happened is that the model learned the training data by heart without 'understanding' what the data is really about. 
This phenomenon is called over-fitting and will eventually occur if you keep making your model more complex.\nNow play with the width variable below to see if you can find a more sensible width.", "width = 1 #find a better width\nX, y = convert_time_series_to_Xy(train_set, width)\nmodel = sklearn.linear_model.LinearRegression()\nmodel.fit(X,y)\nprint 'The score of the linear model with width =', width, 'is', model.score(X, y)\n\nprediction = predict(model, train_set, width, 200)\n\nplt.figure(figsize=(20,4))\nplt.plot(np.concatenate((train_set, prediction[:,0])), 'g')\nplt.plot(train_set, 'b')\nplt.show()", "As you will have noticed by now, a non-perfect training score can give you a much better outcome. Now try the same thing for the following models:\n* sklearn.linear_model.RidgeCV()\n* sklearn.linear_model.LassoCV()\n* sklearn.gaussian_process.GaussianProcess()\nThe first 2 models also estimate the noise that is present in the data to avoid overfitting. RidgeCV will keep the weights that are found small, but it won't put them to zero. LassoCV on the other hand will put several weights to 0. Execute model.coef_ to see the actual coefficients that have been found.\nGaussianProcess is a non-linear method. This makes this method a lot more complex and therefore it will need significantly fewer features to be able to learn the data by heart (and thus to over-fit). In many cases, however, this additional complexity allows it to better capture the data. Additionally it has the advantage that it can estimate confidence intervals similar to the red lines used in the anomaly detection.\n4.4 Automation\nWhat we have done up to now is manually selecting the best outcome based on the test result. This can be considered cheating because you have just created a self-fulfilling prophecy. Not only is it cheating, it is also hard to find the exact width that gives the best result by just visually inspecting it. 
So we need a more objective approach to solve this.\nTo automate this process you can use a validation set. In this case we will use the last 48 hours of the training set to validate the score and select the best parameter value. This means that we will have to use a subset of the training set to fit the model.", "model_generators = [sklearn.linear_model.LinearRegression, sklearn.linear_model.RidgeCV,\n sklearn.linear_model.LassoCV, sklearn.gaussian_process.GaussianProcess]\nbest_score = 0\n\n### BEGIN SOLUTION \nfor model_gen in model_generators:\n for width in range(1, 200):\n### END SOLUTION \n# for model_gen in ? :\n# for width in range( ? , ? ): \n X, y = convert_time_series_to_Xy(train_set, width)\n # train the model on everything except the last 48 hours\n X_train, y_train = X[ : -48, :], y[ : -48]\n # use the last 48 hours for validation\n X_val, y_val = X[-48 : ], y[-48 : ]\n \n ### BEGIN SOLUTION \n model = model_gen() \n ### END SOLUTION \n # model = \n \n # there is a try except clause here because some models do not converge for some data\n try:\n ### BEGIN SOLUTION \n model.fit(X_train, y_train)\n this_score = model.score(X_val, y_val)\n ### END SOLUTION \n # model.fit( ? , ? )\n # this_score = ?\n \n if this_score > best_score:\n best_score = this_score\n best_model_gen = model_gen\n best_width = width\n except:\n pass\n\nprint best_model_gen().__class__, 'was selected as the best model with a width of', best_width, 'and a validation score of', best_score", "If everything is correct, the LassoCV method was selected.\nNow we are going to train this best model on all the data. In this way we use all the available data to build a model.", "### BEGIN SOLUTION\nwidth = best_width\nmodel = best_model_gen()\n### END SOLUTION\n# width = ?\n# model = ?\n\nX, y = convert_time_series_to_Xy(train_set, width)\n\n### BEGIN SOLUTION\nmodel.fit(X,y) # train on the full data set\n### END SOLUTION\n# model.fit( ? , ? 
)\n\nnof_predictions = 200\nprediction = predict(model, train_set, width, nof_predictions)\n\nplt.figure(figsize=(20,4))\nplt.plot(np.concatenate((train_set, prediction[:,0])), 'g')\nplt.plot(train_set, 'b')\nplt.show()", "Although the optimal result found here might not be the best visually, it is a far better result than the one you selected manually, simply because there was no cheating involved ;-).\nSome additional info:\n* The noise level of RidgeCV and LassoCV is estimated by automatically performing training and validation within the method itself. This makes them much more robust against over-fitting. The actual method used is Cross-validation, which is a better approach than what we do here because it repeats the training and validation multiple times for different training and validation sets. The parameter that is set for these methods is often called the regularization parameter in literature and is well suited to avoid over-fitting.\n* Although sklearn supports estimating the noise level in Gaussian Processes it is not implemented within the method itself. Newer versions of sklearn seem to entail a lot of changes in this method so possibly it will be integrated in the (near) future. If you want to implement this noise level estimation yourself you can use their cross-validation tool to set the alpha parameter in version 0.18 of sklearn. (The version used here is 0.17.)\n5. Challenge\nDetails will be given near the end of the tutorial." ]
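The helper convert_time_series_to_Xy called throughout the section above is not defined in this excerpt. Based purely on how it is invoked, a minimal sliding-window sketch (its exact behaviour here is an assumption) could look like:

```python
import numpy as np

# Hypothetical sketch of the windowing helper used above: each row of X holds
# `width` consecutive past values and y holds the value that follows them.
def convert_time_series_to_Xy(series, width):
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = series[width:]
    return X, y

X, y = convert_time_series_to_Xy([1, 2, 3, 4, 5, 6], 3)
# X is [[1,2,3],[2,3,4],[3,4,5]] and y is [4,5,6]
```

With this layout, model.fit(X, y) learns to map the last `width` observations to the next one, which is exactly what the predict helper above assumes.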
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fullmetalfelix/ML-CSC-tutorial
NeuralNetwork - AtomicCharges.ipynb
gpl-3.0
[ "Atomic Charge Prediction\nIntroduction\nIn this notebook we will machine-learn the relationship between an atomic descriptor and its electron density using neural networks.\nThe atomic descriptor is a numerical representation of the chemical environment of the atom. Several choices are available for testing.\nReference Mulliken charges were calculated for 134k molecules at the CCSD level: each took 1-4 hours. \nThe problem is not trivial, even for humans.\n<table><tr>\n <td>\n <img src=\"./images/complex-CX.png\" width=\"350pt\">\n </td><td>\n <img src=\"./images/complex-CH3-X.png\" width=\"350pt\">\n </td></tr>\n</table>\n\nOn the left we see the distribution of s electron density on C atoms in the database. Different chemical environments are shown with different colours. The stacked histograms on the right show the details of charge density for CH$_3$-CX depending on the environment of CX. The total number of possible environments for C up to the second order exceeds 100, and the figure suggests the presence of third order effects. This is too complex to treat accurately with human intuition. 
\nLet's see if we can train neural networks to give accurate predictions in milliseconds!\nSetup\nHere we use the ANN to model the relationship between the descriptors of atoms in molecules and the partial atomic charge density.", "# --- INITIAL DEFINITIONS ---\nfrom sklearn.neural_network import MLPRegressor\nimport numpy, math, random\nfrom scipy.sparse import load_npz\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom ase import Atoms\nfrom visualise import view", "Let's pick a descriptor and an atomic type.\nAvailable descriptors are:\n* boba: bag of bonds (per atom)\n* boba2: bag of bonds - advanced (per atom)\n* acsf: atom centered symmetry functions - 40k atoms max for each type\n* gnn: graph based fingerprint from NanoLayers\n* soap: smooth overlap of atomic positions (per atom)\n* mbtr: manybody tensor representation (per atom)\nPossible atom types are:\n* 1 = Hydrogen\n* 6 = Carbon\n* 7 = Nitrogen\n* 8 = Oxygen\n* 9 = Fluorine", "# Z is the atom type: allowed values are 1, 6, 7, 8, or 9\nZ = 6\n\n# TYPE is the descriptor type\nTYPE = \"mbtr\"\n\n#show descriptor details\nprint(\"\\nDescriptor details\")\ndesc = open(\"./data/descriptor.\"+TYPE+\".txt\",\"r\").readlines()\nfor l in desc: print(l.strip())\nprint(\" \")", "Load the databases with the descriptors (input) and the correct charge densities (output). 
Databases are quite big, so we can decide how many samples to use for training.", "# load input/output data\ntrainIn = load_npz(\"./data/charge.\"+str(Z)+\".input.\"+TYPE+\".npz\").toarray()\ntrainOut = numpy.load(\"./data/charge.\"+str(Z)+\".output.npy\")\ntrainOut = trainOut[0:trainIn.shape[0]]\n\n# decide how many samples to take from the database\nsamples = min(trainIn.shape[0], 9000) # change the number here if needed!\n\nprint(\"training samples: \"+str(samples))\nprint(\"validation samples: \"+str(trainIn.shape[0]-samples))\nprint(\"number of features: {}\".format(trainIn.shape[1]))\n\n# split the data between training and validation\nvalidIn = trainIn[samples:]\nvalidOut = trainOut[samples:]\n\ntrainIn = trainIn[0:samples]\ntrainOut = trainOut[0:samples]\n\n# shift and scale the inputs - OPTIONAL\ntrain_mean = numpy.mean(trainIn, axis=0)\ntrain_std = numpy.std(trainIn, axis=0)\ntrain_std[train_std == 0] = 1\nfor a in range(trainIn.shape[1]):\n trainIn[:,a] -= train_mean[a]\n trainIn[:,a] /= train_std[a]\n\n# also for validation set\nfor a in range(validIn.shape[1]):\n validIn[:,a] -= train_mean[a]\n validIn[:,a] /= train_std[a]\n\n\n# show the first few descriptors\nprint(\"\\nDescriptors for the first 5 atoms:\")\nprint(trainIn[0:5])", "Next we set up a multilayer perceptron of suitable size. Our package of choice is scikit-learn, but more efficient ones are available.<br>\nCheck the scikit-learn <a href=\"http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html\">documentation</a> for a list of parameters.", "# setup the neural network\n# alpha is a regularisation parameter, explained later\n\nnn = MLPRegressor(hidden_layer_sizes=(10), activation='tanh', solver='adam', alpha=0.01, learning_rate='adaptive')", "Training\nNow comes the tough part! The idea of training is to evaluate the ANN with the training inputs and measure its error (since we know the correct outputs). 
It is then possible to compute the derivative (gradient) of the error w.r.t. each parameter (connections and biases). By shifting the parameters in the opposite direction of the gradient, we obtain a better set of parameters that should give a smaller error.\nThis procedure can be repeated until the error is minimised.<br><br>\nIt may take a while...", "# use this to change some parameters during training if needed\nnn.set_params(solver='lbfgs')\n\nnn.fit(trainIn, trainOut);", "Check the ANN quality with a regression plot, showing the mismatch between the exact and NN predicted outputs for the validation set.", "# evaluate the training and validation error\ntrainMLOut = nn.predict(trainIn)\nvalidMLOut = nn.predict(validIn)\n\nprint (\"Mean Abs Error (training) : \", (numpy.abs(trainMLOut-trainOut)).mean())\nprint (\"Mean Abs Error (validation): \", (numpy.abs(validMLOut-validOut)).mean())\n\nplt.plot(validOut,validMLOut,'o')\nplt.plot([Z-1,Z+1],[Z-1,Z+1]) # perfect fit line\nplt.xlabel('correct output')\nplt.ylabel('NN output')\nplt.show()\n\n# error histogram\nplt.hist(validMLOut-validOut,50)\nplt.xlabel(\"Error\")\nplt.ylabel(\"Occurrences\")\nplt.show()", "Exercises\n1. Compare descriptors\nTest the accuracy of different descriptors with the same NN size.", "# DIY code here...\n", "2. Optimal NN\nFind the smallest NN that gives good accuracy.", "# DIY code here...\n", "3. Training sample size issues\nCheck whether the descriptor fails because it does not contain enough information, or because there was not enough training data.", "# DIY code here...", "4. Combine with Principal Component Analysis - Advanced\nReduce the descriptor size with PCA (check the PCA.ipynb notebook) and train again. Can you get similar accuracy with much smaller networks?", "# DIY code here...", "5. 
Putting it all together\nAfter training NNs for each atomic species (MBTR), combine them into one program that predicts charges for all atoms in a molecule.\n\nCompute local MBTR for the molecule below.\nCompute all atomic charges with the NNs.\nIs the total charge zero? If not, normalise it.\n\nNote: Careful about the training: if the training data was transformed, the MBTR here should be as well.", "# atomic positions as matrix\nmolxyz = numpy.load(\"./data/molecule.coords.npy\")\n# atom types\nmoltyp = numpy.load(\"./data/molecule.types.npy\")\n\natoms_sys = Atoms(positions=molxyz, numbers=moltyp)\nview(atoms_sys)\n\n# compute MBTR descriptor for the molecule\n# ...\n\n\n# compute all atomic charges using previously trained NNs\n# ...\n", "6. Analyse the chemical environments\nTry to plot the ACSF/SOAP chemical environments of C using t-SNE. Can you identify clusters of similar C atoms? What about their partial charge?" ]
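The normalisation step in the exercise above can be done in several ways; a simple sketch is shown below. Spreading the excess charge evenly over all atoms is an assumption for illustration, not the course's prescribed method.

```python
import numpy as np

# Shift predicted partial charges so the molecule is neutral overall:
# the excess total charge is distributed evenly over all atoms (an assumption).
def normalise_charges(charges):
    charges = np.asarray(charges, dtype=float)
    return charges - charges.sum() / len(charges)

balanced = normalise_charges([0.12, -0.05, 0.02])
# balanced now sums to (numerically) zero
```

A per-atom weighting (e.g. proportional to each atom's predicted uncertainty) would be an equally valid design choice.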
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tuanavu/coursera-university-of-washington
machine_learning/3_classification/assigment/week2/module-4-linear-classifier-regularization-assignment-blank-graphlab.ipynb
mit
[ "Logistic Regression with L2 regularization\nThe goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following:\n\nExtract features from Amazon product reviews.\nConvert an SFrame into a NumPy array.\nWrite a function to compute the derivative of log likelihood function with an L2 penalty with respect to a single coefficient.\nImplement gradient ascent with an L2 penalty.\nEmpirically explore how the L2 penalty can ameliorate overfitting.\n\nFire up GraphLab Create\nMake sure you have the latest version of GraphLab Create. Upgrade by\npip install graphlab-create --upgrade\nSee this page for detailed instructions on upgrading.", "from __future__ import division\nimport graphlab", "Load and process review dataset\nFor this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.", "products = graphlab.SFrame('amazon_baby_subset.gl/')", "Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. 
We will also perform 2 simple data transformations:\n\nRemove punctuation using Python's built-in string functionality.\nCompute word counts (only for the important_words)\n\nRefer to Module 3 assignment for more details.", "# The same feature processing (same as the previous assignments)\n# ---------------------------------------------------------------\nimport json\nwith open('important_words.json', 'r') as f: # Reads the list of most frequent words\n important_words = json.load(f)\nimportant_words = [str(s) for s in important_words]\n\n\ndef remove_punctuation(text):\n import string\n return text.translate(None, string.punctuation) \n\n# Remove punctuation.\nproducts['review_clean'] = products['review'].apply(remove_punctuation)\n\n# Split out the words into individual columns\nfor word in important_words:\n products[word] = products['review_clean'].apply(lambda s : s.split().count(word))", "Now, let us take a look at what the dataset looks like (Note: This may take a few minutes).", "products", "Train-Validation split\nWe split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.\nNote: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. 
models with different parameters) should be on a validation set, while evaluation of the selected model should always be on a test set.", "train_data, validation_data = products.random_split(.8, seed=2)\n\nprint 'Training set : %d data points' % len(train_data)\nprint 'Validation set : %d data points' % len(validation_data)", "Convert SFrame to NumPy array\nJust like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. \nNote: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.", "import numpy as np\n\ndef get_numpy_data(data_sframe, features, label):\n data_sframe['intercept'] = 1\n features = ['intercept'] + features\n features_sframe = data_sframe[features]\n feature_matrix = features_sframe.to_numpy()\n label_sarray = data_sframe[label]\n label_array = label_sarray.to_numpy()\n return(feature_matrix, label_array)", "We convert both the training and validation sets into NumPy arrays.\nWarning: This may take a few minutes.", "feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')\nfeature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment') ", "Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)\nIt has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running the get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. 
To load the arrays, run the following commands:\narrays = np.load('module-4-assignment-numpy-arrays.npz')\nfeature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']\nfeature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']\nBuilding on logistic regression with no L2 penalty assignment\nLet us now build on the Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:\n$$\nP(y_i = +1 | \\mathbf{x}_i,\\mathbf{w}) = \\frac{1}{1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))},\n$$\nwhere the feature vector $h(\\mathbf{x}_i)$ is given by the word counts of important_words in the review $\\mathbf{x}_i$. \nWe will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)", "'''\nproduces probabilistic estimate for P(y_i = +1 | x_i, w).\nestimate ranges between 0 and 1.\n'''\ndef predict_probability(feature_matrix, coefficients):\n # Take dot product of feature_matrix and coefficients \n ## YOUR CODE HERE\n scores = np.dot(feature_matrix, coefficients)\n \n # Compute P(y_i = +1 | x_i, w) using the link function\n ## YOUR CODE HERE\n predictions = 1./(1 + np.exp(-scores))\n \n return predictions", "Adding L2 penalty\nLet us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. 
In this assignment, we will explore L2 regularization in detail.\nRecall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:\n$$\n\\frac{\\partial\\ell}{\\partial w_j} = \\sum_{i=1}^N h_j(\\mathbf{x}_i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right)\n$$\n Adding L2 penalty to the derivative \nIt takes only a small modification to add an L2 penalty. All terms indicated in red refer to terms that were added due to an L2 penalty.\n\nRecall from the lecture that the link function is still the sigmoid:\n$$\nP(y_i = +1 | \\mathbf{x}_i,\\mathbf{w}) = \\frac{1}{1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))},\n$$\nWe add the L2 penalty term to the per-coefficient derivative of log likelihood:\n$$\n\\frac{\\partial\\ell}{\\partial w_j} = \\sum_{i=1}^N h_j(\\mathbf{x}_i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right) \\color{red}{-2\\lambda w_j }\n$$\n\nThe per-coefficient derivative for logistic regression with an L2 penalty is as follows:\n$$\n\\frac{\\partial\\ell}{\\partial w_j} = \\sum_{i=1}^N h_j(\\mathbf{x}_i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right) \\color{red}{-2\\lambda w_j }\n$$\nand for the intercept term, we have\n$$\n\\frac{\\partial\\ell}{\\partial w_0} = \\sum_{i=1}^N h_0(\\mathbf{x}_i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right)\n$$\nNote: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.\nWrite a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. 
Unlike its counterpart in the last assignment, the function accepts five arguments:\n * errors vector containing $(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w}))$ for all $i$\n * feature vector containing $h_j(\\mathbf{x}_i)$ for all $i$\n * coefficient containing the current value of coefficient $w_j$.\n * l2_penalty representing the L2 penalty constant $\\lambda$\n * feature_is_constant telling whether the $j$-th feature is constant or not.", "def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant): \n \n # Compute the dot product of errors and feature\n ## YOUR CODE HERE\n derivative = np.dot(errors, feature)\n\n # add L2 penalty term for any feature that isn't the intercept.\n if not feature_is_constant: \n ## YOUR CODE HERE\n derivative -= (2 * l2_penalty * coefficient)\n \n return derivative", "Quiz question: In the code above, was the intercept term regularized?\nTo verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).\n$$\\ell\\ell(\\mathbf{w}) = \\sum_{i=1}^N \\Big( (\\mathbf{1}[y_i = +1] - 1)\\mathbf{w}^T h(\\mathbf{x}_i) - \\ln\\left(1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))\\right) \\Big) \\color{red}{-\\lambda\\|\\mathbf{w}\\|_2^2} $$", "def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):\n indicator = (sentiment==+1)\n scores = np.dot(feature_matrix, coefficients)\n \n lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)\n \n return lp", "Quiz question: Does the term with L2 regularization increase or decrease $\\ell\\ell(\\mathbf{w})$?\nThe logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. 
Fill in the code below to complete this modification.", "def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):\n coefficients = np.array(initial_coefficients) # make sure it's a numpy array\n for itr in xrange(max_iter):\n # Predict P(y_i = +1|x_i,w) using your predict_probability() function\n ## YOUR CODE HERE\n predictions = predict_probability(feature_matrix, coefficients)\n \n # Compute indicator value for (y_i = +1)\n indicator = (sentiment==+1)\n \n # Compute the errors as indicator - predictions\n errors = indicator - predictions\n for j in xrange(len(coefficients)): # loop over each coefficient\n is_intercept = (j == 0)\n # Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].\n # Compute the derivative for coefficients[j]. Save it in a variable called derivative\n ## YOUR CODE HERE\n derivative = feature_derivative_with_L2(errors, feature_matrix[:,j], coefficients[j], l2_penalty, is_intercept)\n \n # add the step size times the derivative to the current coefficient\n ## YOUR CODE HERE\n coefficients[j] += step_size * derivative\n \n # Checking whether log likelihood is increasing\n if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \\\n or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:\n lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)\n print 'iteration %*d: log likelihood of observed labels = %.8f' % \\\n (int(np.ceil(np.log10(max_iter))), itr, lp)\n return coefficients", "Explore effects of L2 regularization\nNow that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. 
As iterations pass, the log likelihood should increase.\nBelow, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.", "# run with L2 = 0\ncoefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=0, max_iter=501)\n\n# run with L2 = 4\ncoefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=4, max_iter=501)\n\n# run with L2 = 10\ncoefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=10, max_iter=501)\n\n# run with L2 = 1e2\ncoefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=1e2, max_iter=501)\n\n# run with L2 = 1e3\ncoefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=1e3, max_iter=501)\n\n# run with L2 = 1e5\ncoefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-6, l2_penalty=1e5, max_iter=501)", "Compare coefficients\nWe now compare the coefficients for each of the models that were trained above. 
We will create a table of features and learned coefficients associated with each of the different L2 penalty values.\nBelow is a simple helper function that will help us create this table.", "table = graphlab.SFrame({'word': ['(intercept)'] + important_words})\ndef add_coefficients_to_table(coefficients, column_name):\n table[column_name] = coefficients\n return table", "Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.", "add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')\nadd_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')\nadd_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')\nadd_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')\nadd_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')\nadd_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')", "Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.\nQuiz Question. Which of the following is not listed in either positive_words or negative_words?", "table[['word','coefficients [L2=0]']].sort('coefficients [L2=0]', ascending = False)[0:5]\n\npositive_words = table.sort('coefficients [L2=0]', ascending = False)[0:5]['word']\nprint positive_words\n\nnegative_words = table.sort('coefficients [L2=0]', ascending = True)[0:5]['word']\nprint negative_words", "Let us observe the effect of increasing L2 penalty on the 10 words just selected. 
We provide you with a utility function to plot the coefficient path.", "import matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = 10, 6\n\ndef make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):\n cmap_positive = plt.get_cmap('Reds')\n cmap_negative = plt.get_cmap('Blues')\n \n xx = l2_penalty_list\n plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')\n \n table_positive_words = table.filter_by(column_name='word', values=positive_words)\n table_negative_words = table.filter_by(column_name='word', values=negative_words)\n del table_positive_words['word']\n del table_negative_words['word']\n \n for i in xrange(len(positive_words)):\n color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))\n plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),\n '-', label=positive_words[i], linewidth=4.0, color=color)\n \n for i in xrange(len(negative_words)):\n color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))\n plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),\n '-', label=negative_words[i], linewidth=4.0, color=color)\n \n plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)\n plt.axis([1, 1e5, -1, 2])\n plt.title('Coefficient path')\n plt.xlabel('L2 penalty ($\\lambda$)')\n plt.ylabel('Coefficient value')\n plt.xscale('log')\n plt.rcParams.update({'font.size': 18})\n plt.tight_layout()", "Run the following cell to generate the plot. Use the plot to answer the following quiz question.", "make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])", "Quiz Question: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.\nQuiz Question: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. 
(For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)\nMeasuring accuracy\nNow, let us compute the accuracy of the classifier model. Recall that the accuracy is given by\n$$\n\\mbox{accuracy} = \\frac{\\mbox{# correctly classified data points}}{\\mbox{# total data points}}\n$$\nRecall from lecture that the class prediction is calculated using\n$$\n\\hat{y}_i = \n\\left\\{\n\\begin{array}{ll}\n +1 & h(\\mathbf{x}_i)^T\\mathbf{w} > 0 \\\\\n -1 & h(\\mathbf{x}_i)^T\\mathbf{w} \\leq 0 \\\\\n\\end{array} \n\\right.\n$$\nNote: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.\nBased on the above, we will use the same code that was used in the Module 3 assignment.", "def get_classification_accuracy(feature_matrix, sentiment, coefficients):\n scores = np.dot(feature_matrix, coefficients)\n apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)\n predictions = apply_threshold(scores)\n \n num_correct = (predictions == sentiment).sum()\n accuracy = num_correct / len(feature_matrix) \n return accuracy", "Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. 
We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.", "train_accuracy = {}\ntrain_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)\ntrain_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)\ntrain_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)\ntrain_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)\ntrain_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)\ntrain_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)\n\nvalidation_accuracy = {}\nvalidation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)\nvalidation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)\nvalidation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)\nvalidation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)\nvalidation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)\nvalidation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)\n\n# Build a simple report\nfor key in sorted(validation_accuracy.keys()):\n print \"L2 penalty = %g\" % key\n print \"train accuracy = %s, validation_accuracy = %s\" % (train_accuracy[key], validation_accuracy[key])\n print \"--------------------------------------------------------------------------------\"", "Quiz question: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the training data?\nQuiz 
question: Which model (L2 = 0, 4, 10, 100, 1e3, 1e5) has the highest accuracy on the validation data?\nQuiz question: Does the highest accuracy on the training data imply that the model is the best one?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
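The quiz questions above reduce to picking the argmax of each accuracy dictionary. A minimal sketch of that selection (Python 3 syntax; the accuracy numbers below are invented for illustration, not taken from the models above):

```python
# Hypothetical accuracies keyed by L2 penalty, standing in for the
# train_accuracy / validation_accuracy dicts built in the notebook.
train_accuracy = {0: 0.968, 4: 0.967, 10: 0.966, 1e2: 0.945, 1e3: 0.900, 1e5: 0.850}
validation_accuracy = {0: 0.940, 4: 0.945, 10: 0.947, 1e2: 0.938, 1e3: 0.910, 1e5: 0.846}

def best_penalty(accuracy):
    """Return the L2 penalty whose model scores highest."""
    return max(accuracy, key=accuracy.get)

print(best_penalty(train_accuracy))       # penalty with the best training accuracy
print(best_penalty(validation_accuracy))  # penalty with the best validation accuracy
```

Note that the two argmaxes need not agree, which is exactly the point of the last quiz question: the best training accuracy does not imply the best model.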
opensanca/trilha-python
01-python-intro/aula-05/Aula 05.ipynb
mit
[ "[Py-Intro] Lesson 05\nFunctions and files\nWhat will you learn in this lesson?\n\n\nFunctions\n\nDefining functions\nFunctions as objects\nDefault arguments\nCalling functions by the names of their arguments (keyword arguments)\nPacking and unpacking arguments\n\n\n\nFiles\n\nHow to read and write files\nWorking with CSV (Comma-separated values) files\n\n\n\nFunctions\nDefining functions\nIn previous lessons we have already used and defined many functions. In this lesson we review how that works and dig deeper into the subject.", "def dobra(x):\n return x * 2\n\ndobra(10)", "Note that Python does not perform type checking, so we can use our dobra() function with other argument types:", "dobra('do')\n\ndobra([1, 2, 3])", "A function can take more than one parameter:", "def soma(a, b, c, d):\n return a + b + c + d\n\nsoma(1, 2, 3, 4)\n\nsoma('h', 'o', 'j', 'e')", "Functions are documented using docstrings. A docstring is a string literal placed right after the function header that describes what the function does:", "def fatorial(n):\n \"\"\" Returns the factorial of n (n!)\"\"\"\n return 1 if n < 1 else n * fatorial(n - 1)\n\nfatorial(3), fatorial(4), fatorial(5)", "Functions as objects\nFunctions in Python can be treated like any other object (in formal jargon, functions are said to be first-class objects)", "fat = fatorial\nfat(3), fat(4), fat(5)\n\nfat\n\ntype(fat)", "We can access attributes of this function object:", "fat.__doc__", "To discover the attributes and methods of a function object we can use the dir() function, which returns an object's methods and attributes", "dir(fat)", "The methods and attributes wrapped in __ are known as magic methods or dunder (Double UNDERline) methods and will be covered in the Object Orientation mini-course", "fat.__name__\n\nfat.__doc__", "We can also access the metadata and bytecode of these functions:", "fat.__code__\n\ndir(fat.__code__)\n\nfat.__code__.co_name\n\nfat.__code__.co_varnames", "Bytecode:", "fat.__code__.co_code\n\nimport dis\ndis.dis(fat)", "As shown earlier, functions can be passed as arguments to other functions. Here we use that trick to change the default behavior of the sorting function sorted():", "bromas = {'z': 10, 'n': 5, 'm': 7}\nbromas\n\nsorted(bromas.items())\n\ndef pega_segundo(sequencia):\n return sequencia[1]\n\nsorted(bromas.items(), key=pega_segundo)", "In this example we defined the function pega_segundo and passed it as an argument to sorted()\nDefault argument values (default arguments)\nPython allows default values to be assigned to a function's arguments. When calling the function, these arguments become optional and the default value given in the function definition is used.\nFor example, let's create a function that converts a dollar amount to Brazilian reais, with the dollar price as an argument with a default value:", "def dolar_para_real(valor_real, dolar=3.53):\n return valor_real * dolar", "To compute the price of a product costing, say, US$89.00 we only need to pass that value:", "dolar_para_real(89)", "Suppose we want the product's price last year, when the dollar was cheaper:", "dolar_para_real(89, 2.8)", "Many standard library functions use default arguments to simplify and extend their use. Many functions seen in this course do this, for example str.split().\nTo show this, let's turn to its documentation, which is displayed by passing the function as an argument to help():", "help(str.split)", "As seen, split has two arguments with default values: the separator and the maximum number of splits. By default the separator is whitespace and the maximum number of splits is all possible ones, as we can observe in this example:", "'Frase sem sentido algum para ser usada como exemplo'.split()", "We can change this behavior by passing other arguments:", "frase = 'Frase sem sentido algum para ser usada como exemplo'\nfrase.split(' ', 1) # only 1 split is made, producing a two-element list\n\nurl = 'www.dominio.com.br'\nurl.split('.')\n\nurl.split('.', 1) # to split only the www from the rest", "Another function that also does this is open(), used to open files:", "arq = open('arq.txt', 'w') # pass the file name and the opening mode 'w' (write)\narq # the open file\n\narq.close() # closing the file\n\narq = open('arq.txt') # by default the opening mode is 'r' (read)\narq\n\narq.close()", "We will see more about this function later in this lesson.\nBeware of default arguments!\nDefault argument values are evaluated only once, which can cause some \"strange\" behavior.\nSuppose we want to create a function anexa() that appends an element to a list and, if no list is passed, creates a new one:", "def anexa(elemento, lista=[]):\n lista.append(elemento)\n return lista\n\nanexa(1)\n\nanexa(2)\n\nanexa(3)", "As said before, the default list argument [] (which creates a list) is evaluated only once, so the same list is used every time anexa() is called. To create a new list whenever none is passed we do:", "def anexa(elemento, lista=None):\n if not lista:\n lista = []\n lista.append(elemento)\n return lista", "This way a new list is created each time the function runs:", "lista = anexa(10)\nlista\n\nanexa(5)\n\nanexa(20, lista)\nlista", "As we have seen before (though it was not explained how), Python lets function arguments be passed by name and not only by position:", "anexa(elemento=100, lista=[1, 2, 3])\n\n'Exemplo de split chamado pelo nome dos argumentos'.split(sep=' ', maxsplit=-1)", "Example of named arguments: the datetime library\nA Python standard library function that makes extensive use of default arguments is timedelta() from the datetime library (which handles dates and times). It is used to represent durations, i.e. differences between dates or times:", "from datetime import date, timedelta\nhoje = date.today()\nhoje # an object of type date\n\nhoje.year, hoje.month, hoje.day # date attributes: day, month and year", "Adding and subtracting timedeltas from dates:", "hoje + timedelta(days=1) # tomorrow\n\nhoje - timedelta(days=1) # yesterday\n\nhoje + timedelta(days=2) # the day after tomorrow\n\nhoje - timedelta(days=2) # the day before yesterday\n\nhoje + timedelta(days=7) # next week\n\nhoje\n\nhoje + timedelta(days=30) # next month", "Since not every month has 30 days, you may need to know the actual next month. For that it is better to use the replace() method, which returns the same date with the given values swapped in:", "hoje.replace(month=hoje.month + 1) # same day and year in the next month\n\nhoje.replace(year=hoje.year + 1) # same day and month in the next year", "timedelta() can also be used with datetimes (date and time):", "from datetime import datetime\n\nagora = datetime.now()\nagora\n\nagora.year, agora.month, agora.day, agora.hour, agora.minute, agora.second, agora.microsecond\n\nagora + timedelta(hours=1) # one hour from now\n\nagora - timedelta(hours=1) # one hour ago\n\nagora + timedelta(hours=2, minutes=30) # 2.5 hours from now", "Subtracting dates and datetimes produces timedelta objects that represent the time difference between the two:", "daqui_a_pouco = agora + timedelta(minutes=15, seconds=45)\ndaqui_a_pouco - agora", "Calling functions by naming their arguments, together with good function and argument names, is a great way to make your code more readable.\nExercises\n\n\n\nPacking and unpacking function arguments\nFunctions with arbitrary arguments are created using the concept of argument packing. Some Python standard library functions use this concept:", "max(1, 2) # works with 2 arguments\n\nmax(1, 2, 3) # 3 arguments\n\nmax(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18) # many arguments", "What happens is that these several arguments are packed into a tuple, and the maximum is then found over that tuple of elements. The sum() function, which adds the elements of a sequence, does not support arbitrary arguments; let's write a version of it that does:", "sum(1, 2, 3, 4) # not supported", "Argument packing is done in the definition of the function's arguments using the * operator, as shown next. Let's start by inspecting what this argument holds before implementing the summing functionality:", "def soma(*numeros):\n print('The type is: {}'.format(type(numeros)))\n print('Values: {}'.format(numeros))", "Just to be clear: *numeros is not a pointer. What really happens is that the values received when soma() is called are packed into the numeros argument.", "soma(-1, 0, 1)", "In this example soma() was called with the values -1, 0, 1, which were packed into the numeros tuple. Knowing this, we can compute the sum of these elements:", "def soma(*numeros):\n soma = 0\n for num in numeros:\n soma += num\n return soma\n\nsoma(1, 2, 3, 4)\n\nsoma(-2, 0, 3)", "A function can also take some fixed arguments plus arbitrary ones:", "def foo(bar, *baz):\n print(type(bar), type(baz))\n print('bar: {}, baz: {}'.format(bar, baz))\n\nfoo(1, 2, 3, 4, 5)\n\nfoo([1, 2, 3], 10, 'aba', False, (1, 2, 3))", "Just as we pack arguments received in a function call, we can unpack arguments to send to functions:", "numeros = -1, 0, 10\nmax(*numeros)", "In the previous example we unpacked the tuple with the values -1, 0, 10 and, instead of sending one tuple, sent each value as a separate argument. To make this concept clear, let's create a function that receives unpacked arguments:", "def soma(a, b, c):\n \"\"\" Adds three numbers a, b and c (a + b + c) \"\"\"\n return a + b + c", "Normally we would do:", "soma(1, 2, 3)", "But we can also unpack a 3-element sequence and send its items to the function:", "números = -1, 0, 1\nsoma(*números)", "If the sequence is longer or shorter, an exception is raised:", "números = -1, 0\nsoma(*números)\n\nnúmeros = -1, 0, 1, 10\nsoma(*números)", "To make it even clearer, let's create a function that must receive three fixed arguments and, optionally, as many more values as are sent:", "def foo(a, b, c, *args):\n print('a: {} {}'.format(a, type(a)))\n print('b: {} {}'.format(b, type(b)))\n print('c: {} {}'.format(c, type(c)))\n print('*args: {} {}'.format(args, type(args)))\n\nargs = ['foobarbaz', False, 10]\nfoo(*args)", "By receiving packed arguments, as done in foo(a, b, c, *args), we allow an arbitrary number of arguments to be sent. This includes sending no extra argument at all, which is why the built-in functions min() and max() define two fixed arguments and only then accept more packed arguments:", "min(10) # raises an exception\n\nmin(10, -10) # correct\n\nmin(10, -10, 0) # also correct", "The same goes for our foo() function:", "foo(1)\n\nfoo(1, 2)\n\nfoo(1, 2, 3)\n\nfoo(1, 2, 3, 4, 5, 6, 7, 8, 9)", "Packing and unpacking arguments are important concepts, since they are used extensively in Python libraries and frameworks. Built-in functions such as format(), max() and min() use them, and so do many methods of the Class-Based Views in the Django web framework.\nBesides packing and unpacking arguments to/from sequences, the same can be done with dictionaries:", "def foo(a, b, c):\n print('a: {} {}'.format(a, type(a)))\n print('b: {} {}'.format(b, type(b)))\n print('c: {} {}'.format(c, type(c)))\n\nkwargs = {'a': 1.5, 'b': True, 'c': 'alo'}\nfoo(**kwargs) # unpacking the kwargs dictionary into the foo() call", "What happens behind the scenes is: the arguments named after the dictionary keys receive the corresponding values, so you must pay attention to the keys and argument names:", "kwargs = {'q': 10, 'x': 'foo', 'a': 123}\nfoo(**kwargs)", "Likewise, as seen before, when creating a function it is also possible to pack the arguments into dictionaries:", "def foo(a, b, c, **kwargs): # kwargs = KeyWord Arguments\n print('a: {} {}'.format(a, type(a)))\n print('b: {} {}'.format(b, type(b)))\n print('c: {} {}'.format(c, type(c)))\n print('kwargs: {} {}'.format(kwargs, type(kwargs)))\n\nfoo(1, 2, 3)\n\nfoo(1, 2, 3, nome='José', idade=100, vivo=True)", "The str.format() function takes arbitrary positional and keyword arguments, which must match the number of variables to be substituted in the format string:", "'{0}'.format(1)\n\n'{0} {1}'.format(1, 2)", "We can unpack a sequence and send it to the formatting function:", "numeros = [1, 2, 3, 4, 5]\n'{0} {1} {2} {3} {4}'.format(*numeros)\n\n'{nome} é {sexo} e tem {idade} anos de idade.'.format(nome='Joana', sexo='mulher', idade=35)", "We can also unpack a dictionary and send that information to the function:", "dados = {'nome': 'Joana', 'sexo': 'mulher', 'idade': 35}\n'{nome} é {sexo} e tem {idade} anos de idade.'.format(**dados)", "To create a function that accepts fully arbitrary arguments, we need a function that packs both the positional arguments (into a tuple) and the named ones (into a dictionary):", "def silverbullet(*args, **kwargs):\n print('args: {} {}'.format(args, type(args)))\n print('kwargs: {} {}'.format(kwargs, type(kwargs)))\n\nsilverbullet(1, 2, 3, 4, a=10, b=20, c=30)\n\nfoo = 1, 2, 3\nbar = {'abc': False, 'def': 'alololo'}\nsilverbullet(-1, -10, *foo, a=150, b='oi', **bar)", "Files\nWe have already seen an example of file usage; in this section we will work with files in more depth.\nTo open a file there is the built-in function open(), which takes, among other things, the file name and the opening mode. The supported modes are:\n<table>\n<thead>\n<th>Character</th>\n<th>Meaning</th>\n</thead>\n<tbody>\n<tr>\n<td>'r'</td><td>open for reading (default)</td>\n</tr>\n<tr>\n<td>'w'</td><td>open for writing, truncating the file first</td>\n</tr>\n<tr>\n<td>'x'</td><td>open for exclusive creation, failing if the file already exists</td>\n</tr>\n<tr>\n<td>'a'</td><td>open for writing, appending to the end of the file if it exists</td>\n</tr>\n<tr>\n<td>'b'</td><td>binary mode (can be combined with the opening modes)</td>\n</tr>\n<tr>\n<td>'t'</td><td>text mode (default)</td>\n</tr>\n<tr>\n<td>'+'</td><td>open a disk file for updating (reading and writing)</td>\n</tr>\n<tr>\n<td>'U'</td><td>universal newlines mode (deprecated)</td>\n</tr>\n</tbody>\n</table>\n\nLet's start by opening a text file for writing:", "arq = open('dados.txt', 'w')\narq\n\narq.write('você tem dado em casa?\\n')\narq.write('não teve graça... eu sei.')\narq.close()", "Now open the file dados.txt and look at its contents.\nNote that after closing the file we can no longer operate on it:", "arq.write('esqueci de escrever esta frase')", "Now let's use Python itself to read the file:", "arq = open('dados.txt')\n\nconteudo = arq.read()\nconteudo", "As seen, the file.read() function gives us the contents of the whole file as a single string. Later on we will see other reading and writing methods.", "arq.close() # we must not forget to close the file", "Now let's generate a more complex file with more useful data using the notorious faker library:", "from faker import Factory\nfaker = Factory.create('pt_BR') # creates a factory of fake pt-BR data\n\narq = open('dados.txt', 'w')\nfor _ in range(100):\n nome = faker.name()\n cargo = faker.job()\n empresa = faker.company()\n arq.write('\"{}\",\"{}\",\"{}\"\\n'.format(nome, cargo, empresa))\n \n \narq.close()", "Open the file dados.txt and it will contain 100 lines. Each line holds the data (name, job and company) separated by commas.\nThis data format is very popular and is called CSV (Comma-Separated Values); many companies use CSV to move data between systems that work with incompatible or proprietary formats. Later in this lesson we will see more about working with this data.\nThere are a few different ways to read a file. We can read all the lines at once into a list (or iterate over it directly) using file.readlines():", "arq = open('dados.txt') # open the file in the default mode, 'r' (read)\nlinhas = arq.readlines() # the whole file is read and each line becomes an element of the list linhas\nprint(type(linhas))\nprint(linhas)\narq.close()", "We can iterate over the list and work with the data coming from the file:", "for linha in linhas[:10]: # taking only the first 10 for brevity\n print(linha.strip()) # .strip() removes the trailing newline", "As you saw above, we open the file, work with it and then close it. Working this way can easily lead to errors, since it is common to forget to close the file, and keeping several files open can make the program slow/inefficient. (I myself forgot to close the files in the two examples above)\nA better way to handle files is to use context managers, which take responsibility for closing or finalizing the resources used; this can be used when working with files or to handle transactions when working with databases, for example.\nLet's show how to work with context managers by reading the file we created earlier:", "with open('dados.txt') as arq: # opens the file dados.txt and binds it to the variable arq\n for linha in arq.readlines()[-10:]: # taking the last 10 lines for brevity\n print(linha.strip())", "The file arq has already been closed; we can check this by trying to read a line from it:", "arq.readline()", "The file can also be read line by line using the file.readline() function:", "with open('dados.txt') as arq:\n linha = arq.readline()\n while linha:\n print(linha, end='')\n linha = arq.readline()", "Working with CSV files\nThere is no single standard for CSV files. It is common to see such files separated by characters other than the comma, such as ;, - and .. In some cases each column may be wrapped in single or double quotes. Because of these quirks Python provides a csv library that helps with handling CSV files.\nTo read CSV files we first import the library and then create a CSV reader with the csv.reader() function:", "import csv\n\nwith open(\"dados.txt\") as arq_csv: # opening the file\n leitor = csv.reader(arq_csv)\n for linha in leitor:\n print(type(linha), linha)", "The csv.reader() function already returns each line as a Python list with the quotes and line breaks removed.\nWe can make reading the file even nicer using sequence unpacking:", "with open('dados.txt') as arq_csv:\n leitor = csv.reader(arq_csv)\n for nome, cargo, empresa in leitor:\n print('{} works as {} at {}'.format(nome, cargo, empresa))", "Now let's create a CSV file in a different style using the csv.writer() function. Our columns will be separated by whitespace and each column will be wrapped in | instead of quotes:", "with open('mais-dados.csv', 'w') as arq_csv:\n escritor = csv.writer(arq_csv, delimiter=' ', quotechar='|')\n for _ in range(20):\n dados = faker.name(), faker.job(), faker.company()\n escritor.writerow(dados)", "To read the file, just use the same csv.reader() function as before, specifying the dialect:", "with open('mais-dados.csv') as arq_csv:\n for linha in csv.reader(arq_csv, delimiter=' ', quotechar='|'):\n print(linha)", "For more information on how to use the csv library, see its official documentation\nEnd of Lesson 05" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
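The custom CSV dialect used at the end of the lesson above (space delimiter, `|` as quote character) can be exercised entirely in memory with `io.StringIO`, with no files on disk. A small sketch with made-up rows:

```python
import csv
import io

# Two invented rows in the same dialect as 'mais-dados.csv':
# fields quoted with '|' and separated by spaces.
raw = '|Ana Souza| |Engenheira| |Acme Ltda|\n|Joao Lima| |Analista| |Beta SA|\n'
rows = list(csv.reader(io.StringIO(raw), delimiter=' ', quotechar='|'))
for nome, cargo, empresa in rows:  # sequence unpacking, as in the lesson
    print(nome, '-', cargo, '-', empresa)
```

Because the fields are quoted, the spaces inside names like `Ana Souza` survive even though space is the field delimiter.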
malogrisard/NTDScourse
toolkit/02_demo_exploitation.ipynb
mit
[ "A Python Tour of Data Science: Data Exploitation\nMichaël Defferrard, PhD student, EPFL LTS2\nThe data X.npy and y.npy can be obtained by running the data acquisition and exploration demo.", "# Cross-platform (Windows / Mac / Linux) paths.\nimport os.path\nfolder = os.path.join('..', 'data', 'credit_card_defaults')\n\nimport numpy as np\nX = np.load(os.path.join(folder, 'X.npy'))\ny = np.load(os.path.join(folder, 'y.npy'))\nn, d = X.shape\nprint('The data is a {} with {} samples of dimensionality {}.'.format(type(X), n, d))", "1 Pre-Processing\nBack to NumPy, the fundamental package for scientific computing with Python. It provides multi-dimensional arrays, data types and linear algebra routines. Note that scikit-learn provides many helpers for those tasks.\nPre-processing usually consists of:\n1. Data types transformation. The data does not necessarily come in the format the chosen learning algorithm expects. This was done in the previous notebook before doing statistics with statsmodels.\n1. Data normalization. Some algorithms expect data to be centered and scaled. Some will train faster.\n1. Data randomization. If the samples are presented in sequence, training will be faster if consecutive samples are not correlated.\n1. Train / test splitting. You may have to be careful here, e.g. not including future events in the training set.", "# Center and scale.\n# Note: on a serious project, should be done after train / test split.\nX = X.astype(np.float)\nX -= X.mean(axis=0)\nX /= X.std(axis=0)\n\n# Training and testing sets.\ntest_size = 10000\nprint('Split: {} testing and {} training samples'.format(test_size, y.size - test_size))\nperm = np.random.permutation(y.size)\nX_test = X[perm[:test_size]]\nX_train = X[perm[test_size:]]\ny_test = y[perm[:test_size]]\ny_train = y[perm[test_size:]]", "2 A first Predictive Model\nThe ingredients of a Machine Learning (ML) model are:\n1. A predictive function, e.g. the linear transformation $f(x) = x^Tw + b$.\n1. An error function, e.g. 
the least squares $E = \\sum_{i=1}^n \\left( f(x_i) - y_i \\right)^2 = \\| f(X) - y \\|_2^2$.\n1. An optional regularization, e.g. the Tikhonov regularization $R = \\|w\\|_2^2$.\n1. Which makes up the loss / objective function $L = E + \\alpha R$.\nOur model has a sole hyper-parameter, $\\alpha \\geq 0$, which controls the shrinkage.\nA Machine Learning (ML) problem can often be cast as a (convex or smooth) optimization problem whose objective is to find the parameters (here $w$ and $b$) that minimize the loss, e.g.\n$$\\hat{w}, \\hat{b} = \\operatorname{arg min}_{w,b} L = \\operatorname{arg min}_{w,b} \\| Xw + b - y \\|_2^2 + \\alpha \\|w\\|_2^2.$$\nIf the problem is convex and smooth, one can compute the gradients\n$$\\frac{\\partial L}{\\partial{w}} = 2 X^T (Xw+b-y) + 2\\alpha w,$$\n$$\\frac{\\partial L}{\\partial{b}} = 2 \\sum_{i=1}^n (x_i^Tw+b-y_i) = 2 \\sum_{i=1}^n (x_i^Tw-y_i) + 2n \\cdot b,$$\nwhich can be used in a gradient descent scheme or to form closed-form solutions:\n$$\\frac{\\partial L}{\\partial{w}} = 0 \\ \\rightarrow \\ 2 X^T X\\hat{w} + 2\\alpha \\hat{w} = 2 X^T y - 2 X^T b \\ \\rightarrow \\ \\hat{w} = (X^T X + \\alpha I)^{-1} X^T (y-b),$$\n$$\\frac{\\partial L}{\\partial{b}} = 0 \\ \\rightarrow \\ 2n\\hat{b} = 2\\sum_{i=1}^n (y_i) - \\underbrace{2\\sum_{i=1}^n (x_i^Tw)}_{=0 \\text{ if centered}} \\ \\rightarrow \\ \\hat{b} = \\frac1n \\mathbf{1}^T y = \\operatorname{mean}(y).$$\nWhat if the resulting problem is non-smooth ? See the PyUNLocBoX, a convex optimization toolbox which implements proximal splitting methods.\n2.1 Take a symbolic Derivative\nLet's verify our manually derived gradients ! 
SymPy is our computer algebra system (CAS) (like Mathematica, Maple) of choice.", "import sympy as sp\nsp.init_printing()\n\n# Use fresh symbol names so the data arrays X and y defined above are not shadowed.\nx, y_, w, b, a = sp.symbols('x y w b a')\nL = (x*w + b - y_)**2 + a*w**2\n\ndLdw = sp.diff(L, w)\ndLdb = sp.diff(L, b)\n\nfrom IPython.display import display\ndisplay(L)\ndisplay(dLdw)\ndisplay(dLdb)", "2.2 Build the Classifier\nRelying on the derived equations, we can implement our model using only the NumPy linear algebra capabilities (really wrappers to BLAS / LAPACK implementations such as ATLAS, OpenBLAS or MKL).\nA ML model is best represented as a class, with hyper-parameters and parameters stored as attributes, and is composed of two essential methods:\n1. y_pred = model.predict(X_test): return the predictions $y$ given the features $X$.\n1. model.fit(X_train, y_train): learn the model parameters such as to predict $y$ given $X$.", "class RidgeRegression(object):\n \"\"\"Our ML model.\"\"\"\n \n def __init__(self, alpha=0):\n \"The class' constructor. Initialize the hyper-parameters.\"\n self.a = alpha\n \n def predict(self, X):\n \"\"\"Return the predicted class given the features.\"\"\"\n return np.sign(X.dot(self.w) + self.b)\n \n def fit(self, X, y):\n \"\"\"Learn the model's parameters given the training data, the closed-form way.\"\"\"\n n, d = X.shape\n self.b = np.mean(y)\n Ainv = np.linalg.inv(X.T.dot(X) + self.a * np.identity(d))\n self.w = Ainv.dot(X.T).dot(y - self.b)\n\n def loss(self, X, y, w=None, b=None):\n \"\"\"Return the current loss.\n This method is not strictly necessary, but it provides\n information on the convergence of the learning process.\"\"\"\n w = self.w if w is None else w # The ternary conditional operator\n b = self.b if b is None else b # makes those tests concise.\n import autograd.numpy as np # See below for autograd.\n return np.linalg.norm(np.dot(X, w) + b - y)**2 + self.a * np.linalg.norm(w, 2)**2", "Now that our model can learn its parameters and predict targets, it's time to evaluate it. 
Our metric for binary classification is the accuracy, which gives the percentage of correctly classified test samples. Depending on the application, the time spent for inference or training might also be important.", "def accuracy(y_pred, y_true):\n \"\"\"Our evaluation metric, the classification accuracy.\"\"\"\n return np.sum(y_pred == y_true) / y_true.size\n\ndef evaluate(model):\n \"\"\"Helper function to instantiate, train and evaluate the model.\n It returns the classification accuracy, the loss and the execution time.\"\"\"\n import time\n t = time.process_time()\n model.fit(X_train, y_train)\n y_pred = model.predict(X_test)\n acc = accuracy(y_pred, y_test)\n loss = model.loss(X_test, y_test)\n t = time.process_time() - t\n print('accuracy: {:.2f}%, loss: {:.2f}, time: {:.2f}ms'.format(acc*100, loss, t*1000))\n return model\n\nalpha = 1e-2*n\nmodel = RidgeRegression(alpha)\nevaluate(model)\n\nmodels = []\nmodels.append(model)", "Okay we got around 80% accuracy with such a simple model ! Inference and training time looks good.\nFor those of you who don't know about numerical mathematics, solving a linear system of equations by inverting a matrix can be numerically unstable. Let's do it the proper way and use a proper solver.", "def fit_lapack(self, X, y):\n \"\"\"Better way (numerical stability): solve the linear system with LAPACK.\"\"\"\n n, d = X.shape\n self.b = np.mean(y)\n A = X.T.dot(X) + self.a * np.identity(d)\n b = X.T.dot(y - self.b)\n self.w = np.linalg.solve(A, b)\n\n# Let's monkey patch our object (Python is a dynamic language).\nRidgeRegression.fit = fit_lapack\n\n# Yeah just to be sure.\nmodels.append(evaluate(RidgeRegression(alpha)))\nassert np.allclose(models[-1].w, models[0].w)", "2.3 Learning as Gradient Descent\nDescending the gradient of our objective will lead us to a local minimum. If the objective is convex, that minimum will be global. 
Let's implement the gradient computed above and a simple gradient descent algorithm\n$$w^{(t+1)} = w^{(t)} - \\gamma \\frac{\\partial L}{\\partial w}$$\nwhere $\\gamma$ is the learning rate, another hyper-parameter.", "class RidgeRegressionGradient(RidgeRegression):\n \"\"\"This model inherits from `ridge_regression`. We overload the constructor, add a gradient\n function and replace the learning algorithm, but don't touch the prediction and loss functions.\"\"\"\n \n def __init__(self, alpha=0, rate=0.1, niter=1000):\n \"\"\"Here are new hyper-parameters: the learning rate and the number of iterations.\"\"\"\n super().__init__(alpha)\n self.rate = rate\n self.niter = niter\n \n def grad(self, X, y, w):\n A = X.dot(w) + self.b - y\n return 2 * X.T.dot(A) + 2 * self.a * w\n \n def fit(self, X, y):\n n, d = X.shape\n self.b = np.mean(y)\n \n self.w = np.random.normal(size=d)\n for i in range(self.niter):\n self.w -= self.rate * self.grad(X, y, self.w)\n \n # Show convergence.\n if i % (self.niter//10) == 0:\n print('loss at iteration {}: {:.2f}'.format(i, self.loss(X, y)))\n \nmodels.append(evaluate(RidgeRegressionGradient(alpha, 1e-6)))", "Tired of deriving gradients by hand ? Welcome autograd, our tool of choice for automatic differentiation. Alternatives are Theano and TensorFlow.", "class RidgeRegressionAutograd(RidgeRegressionGradient):\n \"\"\"Here we derive the gradient during construction and update the gradient function.\"\"\"\n def __init__(self, *args):\n super().__init__(*args)\n from autograd import grad\n self.grad = grad(self.loss, argnum=2)\n\nmodels.append(evaluate(RidgeRegressionAutograd(alpha, 1e-6)))", "2.4 Learning as generic Optimization\nSometimes we don't want to implement the optimization by hand and would prefer a generic optimization algorithm. Let's make use of SciPy, which provides high-level algorithms for, e.g. 
optimization, statistics, interpolation, signal processing, sparse matrices, advanced linear algebra.", "class RidgeRegressionOptimize(RidgeRegressionGradient):\n \n def __init__(self, alpha=0, method=None):\n \"\"\"Here's a new hyper-parameter: the optimization algorithm.\"\"\"\n super().__init__(alpha)\n self.method = method\n \n def fit(self, X, y):\n \"\"\"Fitted with a general purpose optimization algorithm.\"\"\"\n n, d = X.shape\n self.b = np.mean(y)\n \n # Objective and gradient w.r.t. the variable to be optimized.\n f = lambda w: self.loss(X, y, w)\n jac = lambda w: self.grad(X, y, w)\n \n # Solve the problem !\n from scipy.optimize import minimize\n w0 = np.random.normal(size=d)\n res = minimize(f, w0, method=self.method, jac=jac)\n self.w = res.x\n\nmodels.append(evaluate(RidgeRegressionOptimize(alpha, method='Nelder-Mead')))\nmodels.append(evaluate(RidgeRegressionOptimize(alpha, method='BFGS')))", "Accuracy may be lower (depending on the random initialization) as the optimization may not have converged to the global minimum. Training time is however much longer ! Especially for gradient-less optimizers such as Nelder-Mead.\n3 More interactivity\nInterlude: the interactivity of Jupyter notebooks can be pushed forward with IPython widgets. Below, we construct a slider for the model hyper-parameter $\\alpha$, which will train the model and print its performance at each change of the value. Handy when exploring the effects of hyper-parameters ! 
Although it's less useful if the required computations are long.", "import ipywidgets\nfrom IPython.display import clear_output, display\n\nslider = ipywidgets.widgets.FloatSlider(\n value=-2,\n min=-4,\n max=2,\n step=1,\n description='log(alpha) / n',\n)\n\ndef handle(change):\n \"\"\"Handler for value change: fit model and print performance.\"\"\"\n value = change['new']\n alpha = np.power(10, value) * n\n clear_output()\n print('alpha = {:.2e}'.format(alpha))\n evaluate(RidgeRegression(alpha))\n\nslider.observe(handle, names='value')\ndisplay(slider)\n\nslider.value = 1 # As if someone moved the slider.", "4 Machine Learning made easier\nTired of writing algorithms? Try scikit-learn, which provides many ML algorithms and related tools, e.g. metrics, cross-validation, model selection, feature extraction, pre-processing, for predictive modeling.", "from sklearn import linear_model, metrics\n\n# The previously developed model: Ridge Regression.\nmodel = linear_model.RidgeClassifier(alpha)\nmodel.fit(X_train, y_train)\ny_pred = model.predict(X_test)\nmodels.append(model)\n\n# Evaluate the predictions with a metric: the classification accuracy.\nacc = metrics.accuracy_score(y_test, y_pred)\nprint('accuracy: {:.2f}%'.format(acc*100))\n\n# It does indeed learn the same parameters.\nassert np.allclose(models[-1].coef_, models[0].w, rtol=1e-1)\n\n# Let's try another model!\nmodels.append(linear_model.LogisticRegression())\nmodels[-1].fit(X_train, y_train)\nacc = models[-1].score(X_test, y_test)\nprint('accuracy: {:.2f}%'.format(acc*100))", "5 Deep Learning (DL)\nOf course! We have two low-level Python libraries: (1) TensorFlow and (2) Theano. Both of them treat data as tensors and construct a computational graph (dataflow paradigm), composed of any mathematical expressions, that get evaluated on CPUs or GPUs. Theano is the pioneer and features an optimizing compiler which will turn the computational graph into efficient code.
TensorFlow has a cleaner API (no need to define expressions as strings) and does not require a compilation step (which is painful when developing models).\nWhile you can use Theano / TensorFlow directly to develop DL models, these are the higher-level libraries you'll typically use to define and test DL architectures on your problem:\n* Keras: TensorFlow & Theano backends\n* Lasagne: Theano backend\n* nolearn: sklearn-like abstraction of Lasagne\n* Blocks: Theano backend\n* TFLearn: TensorFlow backend", "import os\nos.environ['KERAS_BACKEND'] = 'theano'\nimport keras\n\nclass NeuralNet(object):\n \n def __init__(self):\n \"\"\"Define Neural Network architecture.\"\"\"\n self.model = keras.models.Sequential()\n self.model.add(keras.layers.Dense(output_dim=46, input_dim=23, activation='relu'))\n self.model.add(keras.layers.Dense(output_dim=1, activation='sigmoid'))\n self.model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])\n\n def fit(self, X, y):\n y = y / 2 + 0.5 # [-1,1] -> [0,1]\n self.model.fit(X, y, nb_epoch=5, batch_size=32)\n\n def predict(self, X):\n classes = self.model.predict_classes(X, batch_size=32)\n return classes[:,0] * 2 - 1\n \nmodels.append(NeuralNet())\nmodels[-1].fit(X_train, y_train)\n\nloss_acc = models[-1].model.evaluate(X_test, y_test/2+0.5, batch_size=32)\nprint('\\n\\nTesting set: {}'.format(loss_acc))", "6 Evaluation\nNow that we have tried several predictive models, it is time to evaluate them with our chosen metrics and choose the one best suited to our particular problem. Let's plot the classification accuracy and the prediction time for each classifier with matplotlib, the go-to 2D plotting library for scientific Python. Its API is similar to MATLAB's.\nResult: The NeuralNet gives the best accuracy, by a small margin over the much simpler logistic regression, but is the slowest method. Which to choose?
Again, it depends on your priorities.", "from matplotlib import pyplot as plt\nplt.style.use('ggplot')\n%matplotlib inline\n# Or notebook for interaction.\n\nnames, acc, times = [], [], []\nfor model in models:\n import time\n t = time.process_time()\n y_pred = model.predict(X_test)\n times.append((time.process_time()-t) * 1000)\n acc.append(accuracy(y_pred, y_test) * 100)\n names.append(type(model).__name__)\n\nplt.figure(figsize=(15,5))\nplt.subplot(121)\nplt.plot(acc, '.', markersize=20)\nplt.title('Accuracy [%]')\nplt.xticks(range(len(names)), names, rotation=90)\n\nplt.subplot(122)\nplt.plot(times, '.', markersize=20)\nplt.title('Prediction time [ms]')\nplt.xticks(range(len(names)), names, rotation=90)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
infilect/ml-course1
keras-notebooks/CNN/4.3 CIFAR10 CNN.ipynb
mit
[ "Convolutional Neural Network\nIn this second exercise-notebook we will play with a Convolutional Neural Network (CNN). \nAs you should have seen, a CNN is a feed-forward neural network typically composed of Convolutional, MaxPooling and Dense layers. \nIf the task implemented by the CNN is a classification task, the last Dense layer should use the Softmax activation, and the loss should be the categorical crossentropy.\nReference: https://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py\nTraining the network\nWe will train our network on the CIFAR10 dataset, which contains 50,000 32x32 color training images, labeled over 10 categories, and 10,000 test images. \nAs this dataset is also included in Keras datasets, we just ask the keras.datasets module for the dataset.\nTraining and test images are normalized to lie in the $\\left[0,1\\right]$ interval.", "from keras.datasets import cifar10\nfrom keras.utils import np_utils\n\nnb_classes = 10 # CIFAR10 has 10 categories\n\n(X_train, y_train), (X_test, y_test) = cifar10.load_data()\nY_train = np_utils.to_categorical(y_train, nb_classes)\nY_test = np_utils.to_categorical(y_test, nb_classes)\nX_train = X_train.astype(\"float32\")\nX_test = X_test.astype(\"float32\")\nX_train /= 255\nX_test /= 255", "To reduce the risk of overfitting, we also apply some image transformations, such as rotations, shifts and flips.
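Two of the operations used here have one-line numpy equivalents, which may help build intuition: `to_categorical` is a one-hot lookup, and a horizontal flip just reverses the width axis. The toy arrays below are illustrative, not the CIFAR data:

```python
import numpy as np

# One-hot encoding, as to_categorical does for the labels.
labels = np.array([0, 2, 1])
one_hot = np.eye(3)[labels]
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]

# A tiny fake 'image': 2 rows x 3 columns.
img = np.array([[1, 2, 3],
                [4, 5, 6]])
flipped = img[:, ::-1]  # reverse the column (width) axis
print(flipped)
# [[3 2 1]
#  [6 5 4]]
```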
All these can be easily implemented using the Keras Image Data Generator.\nWarning: The following cells may be computationally intensive.", "from keras.preprocessing.image import ImageDataGenerator\n\ngenerated_images = ImageDataGenerator(\n featurewise_center=True, # set input mean to 0 over the dataset\n samplewise_center=False, # set each sample mean to 0\n featurewise_std_normalization=True, # divide inputs by std of the dataset\n samplewise_std_normalization=False, # divide each input by its std\n zca_whitening=False, # apply ZCA whitening\n rotation_range=0, # randomly rotate images in the range (degrees, 0 to 180)\n width_shift_range=0.2, # randomly shift images horizontally (fraction of total width)\n height_shift_range=0.2, # randomly shift images vertically (fraction of total height)\n horizontal_flip=True, # randomly flip images\n vertical_flip=False) # randomly flip images\n\ngenerated_images.fit(X_train)", "Now we can start training. \nAt each iteration, a batch of 500 images is requested to the ImageDataGenerator object, and then fed to the network.", "X_train.shape\n\ngen = generated_images.flow(X_train, Y_train, batch_size=500, shuffle=True)\nX_batch, Y_batch = next(gen)\n\nX_batch.shape\n\nfrom keras.utils import generic_utils\n\n# Note: 'model' must be defined and compiled beforehand (see the referenced cifar10_cnn.py example).\nn_epochs = 2\nfor e in range(n_epochs):\n print('Epoch', e)\n print('Training...')\n progbar = generic_utils.Progbar(X_train.shape[0])\n \n seen = 0\n for X_batch, Y_batch in generated_images.flow(X_train, Y_train, batch_size=500, shuffle=True):\n loss = model.train_on_batch(X_batch, Y_batch)\n progbar.add(X_batch.shape[0], values=[('train loss', loss[0])])\n seen += X_batch.shape[0]\n if seen >= X_train.shape[0]: # the generator loops forever, so stop after one pass\n break" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
timothydmorton/usrp-sciprog
day2/exercises/Exercises_Numpy.ipynb
mit
[ "Exercise numpy\nThe ultimate goal of this exercise is to compare the position of stars in a patch of sky as measured in two different surveys. The main task at hand is to identify matching positions of stars between the surveys. For this, we will need to compare the positions of all stars in one survey to the positions of all stars in the other survey. This task can be extremely time-consuming if not implemented properly, so we will use it to compare different coding styles and their impact on computation time. \nIf time allows, we will move on to represent the results of our analysis in a meaningful way.", "import numpy as np \nimport matplotlib.pyplot as plt #We might need this\n\n\n#First, let us load the data\n#Catalog from HSC \ncat_hsc = np.loadtxt('./Catalog_HSC.csv')\nx_hsc = cat_hsc[:,0]\ny_hsc = cat_hsc[:,1]\n#Catalog from HST\ncat_hst = np.loadtxt('./Catalog_HST.csv')\nx_hst = cat_hst[:,0]\ny_hst = cat_hst[:,1]", "Check that the loaded data are consistent with what we expect: (ra, dec) coordinates of the same patch of sky", "#First, check the number of stars in each survey:\nns_hst = #fill in\nns_hsc = #...\n#Print the result\nprint()\n\n#This is a graphic representation of our data content:\n%matplotlib qt\nplt.title('star catalogs in COSMOS')\nplt.plot(x_hsc, y_hsc, 'or', label = 'hsc catalog')\nplt.plot(x_hst, y_hst, 'ob', label = 'hst catalog')\nplt.legend()\nplt.xlabel('ra')\nplt.ylabel('dec')\nplt.show()", "To begin with, let's write a function that returns the Euclidean distance between two points", "def distance(point1, point2):\n ''' Returns the distance between two points with coordinates (x,y).\n \n Parameters\n ----------\n point1: list\n 2D coordinates of a point \n point2: list\n 2D coordinates of a point \n \n Returns\n -------\n d: float\n the distance between point1 and point2\n '''\n \n return", "Now let's test it by comparing the distance between the first point of each dataset.", "point1 = [x_hst[0], y_hst[0]]\npoint2 = 
[x_hsc[0], y_hsc[0]]\nprint(distance(point1, point2))\n# Answer should be 0.6648994838877168", "Let's take it one step further and compare the distance between one point and a set of points", "def point_to_points_distance(point, coordinates):\n ''' Returns the distance between one point and all the points in coordinates.\n \n Parameters\n ----------\n point: list\n 2D coordinates of a point \n coordinates: list\n set of N 2D coordinates stored in a list with shape Nx2\n \n Returns\n -------\n d: list\n the distance between point and each point in coordinates in an array with size N\n '''\n #Declaring an empty list\n d = []\n for c in coordinates:\n # for each point in coordinates, take the distance to point and append it to d \n d.append(distance(point, c))\n #make d a numpy array and return it\n return np.array(d)", "Let's test it on the first 10 points in the HSC catalog and the first point of the HST catalog", "coords = np.concatenate((x_hsc[:10,None], y_hsc[:10,None]), axis = 1)\nprint(point_to_points_distance(point1, coords))\n# The answer should look like [0.66489948 0.4628197 0.39672485 0.43854084 0.32165335 0.30223269\n# 0.65765909 0.65411548 0.6474303 0.79301678]", "Now let's get to work. We would like to associate stars in one survey to their counterpart (if it exists) in the other survey.
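Looping point by point like this scales poorly. One common vectorized alternative uses numpy broadcasting to compute all pairwise distances at once; the toy coordinates below are illustrative, so as not to spoil the exercise that follows:

```python
import numpy as np

a = np.array([[0.0, 0.0], [1.0, 1.0]])               # 2 points
b = np.array([[0.0, 1.0], [3.0, 1.0], [1.0, 0.0]])   # 3 points

# (2,1,2) - (1,3,2) broadcasts to (2,3,2); sum the squares over the last axis.
d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1))
print(d.shape)  # (2, 3): one distance per pair
```

Each entry `d[i, j]` is the distance from `a[i]` to `b[j]`, with no Python-level loop.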
We will start by comparing the positions between each point of one survey to the position of each point in the other survey.\nFirst, write a function that takes two sets of coordinates (hsc and hst) and returns the distance from each point of one survey to each point of the other, such that the output should be an array of size (n_hst x n_hsc) or (n_hsc x n_hst).\nPS: if you have several (different) ideas about how to implement this, feel free to code them!", "def your_function(coord1, coord2): # Choose an adequate name for your function\n ''' Returns the distance between points in two sets of coordinates.\n \n Parameters\n coord1: array\n array of size Nx2 that contains the [x, y] positions of a catalog \n coord2: array\n array of size Mx2 that contains the [x, y] positions of a catalog \n \n Returns\n dist: array\n array of size NxM that contains the euclidean distances between points in the two datasets\n '''\n \n return\n", "Now, let us take a look at the computation times:", "# In order not to spend the whole evening here, let us reduce the dataset size:\n#Select stars in hsc in the frame: 150.0<x<150.1 and 2.0<y<2.1\nloc_hsc = #please fill these\nx_hsc_exp = x_hsc[loc_hsc]\ny_hsc_exp = y_hsc[loc_hsc]\n\nloc_hst = #And that\nx_hst_exp = x_hst[loc_hst]\ny_hst_exp = y_hst[loc_hst]\n#Once you are done with the exercise, feel free to try with larger selections to see how it impacts computation time\n\nimport distances as dt\n# Insert the names of your functions in the following array:\nmethods = [your_function, dt.double_loop, dt.with_indices, dt.one_loop, dt.one_loop_reverse, dt.scipy_version, dt.newaxis_magic]\n#An empty variable to store computation time\ntimers = []\n# Making sets of coordinates of size Nx2 to feed your functions with the right format\nc2 = np.concatenate((x_hst_exp[:,None], y_hst_exp[:,None]), axis = 1)\nc1 = np.concatenate((x_hsc_exp[:,None], y_hsc_exp[:,None]), axis = 1)\n\nfor f in methods:\n print(f.__name__)\n r = %timeit -o f(c1, c2)\n 
timers.append(r)\n\n#View the results:\nplt.figure(figsize=(10,6))\nplt.bar(np.arange(len(methods)), [r.best*1000 for r in timers], log=True) # Set log to True for logarithmic scale\nplt.xticks(np.arange(len(methods))+0.2, [f.__name__ for f in methods], rotation=30)\nplt.xlabel('Method')\nplt.ylabel('Time (ms)')\nplt.yscale('log')\nplt.show()", "Identifying matching stars (optional)\nNow that we know all the distances, let us find the stars in each dataset that correspond to one another.\nThis is done by finding, for each star, the minimum distance recorded between the two datasets.\nOne problem that arises with computing an array of all the distances is that we end up with a very LARGE array, which becomes impractical for fast computations. Instead, we will modify one of the previous functions so that it returns the coordinates of stars that have a match in both datasets along with their distance.\nBecause all stars in a given set do not have a counterpart in the other, we will only accept a match if the minimum distance between two points is smaller than 0.17 arcseconds (the size of an HSC pixel).\nIn other words, for each star in one dataset, find the star in the other dataset that is the closest (minimum distance), check whether that star is closer than 0.17 arcseconds, and if yes, store its coordinates along with the computed distance. At the end of the function, return arrays with the matching star coordinates and their distance to their match in the other dataset.", "#Let us compute the distances as we did before, but this time, with the whole dataset.\n#Of course, a fast method is to be preferred\n\nc1 = #Please fill these.
Same as before but with all the dataset\nc2 = #\n\n\ndef get_match(coord_ref, coord2, rad):\n '''\n matches coordinates of stars between two datasets and computes the distance between the position of the stars in the 2 datasets\n\n Parameters\n coord_ref: numpy array (Nx2)\n coordinates (ra, dec) of stars in a FoV from a given dataset\n coord2: numpy array (Mx2)\n coordinates (ra dec) of stars in the same FoV in an other dataset\n rad: float\n radius (deg) around stars in coord_ref where to find a corresponding star in coord2\n \n Returns\n modulus:numpy array (N')\n containing the distance between matching stars\n v_coord: numpy array(N',2)\n coordinates in the coord_ref set of matching stars\n \n\n '''\n #Declare two empty arrays to store the coordinates and distances.\n #...\n s = np.size(coord_ref[:,0])#This is just for representation\n print('number of points in reference catalog: {0}'.format(s))\n #for each star in coord_ref\n for i,c in enumerate(coord_ref):\n\n #This is just here to keep track of the algorithm's progression\n if i % 3000 == 0:\n print('point number {0} out of {1}'.format(i, s))\n #compute the distance from c to all stars in coord2\n r = #...\n #Find the closest star from coord 2 to c\n loc = #...\n\n #Make sure that there is only one star matching (it can happen that 2 match)\n #Here I just arbitrarily pick one, but you can find a way to discard these stars\n if np.size(loc) > 1:\n loc = loc[0]\n\n #record the distance between matching stars\n rmin = #...\n \n #Check whether the closest distance is smaller than rad\n if #...:\n #if yes, place the coordinates and the distance in an array\n #... tip: use append()\n\n return #...\n\n# Use your function \ncoord , r = get_match(c1, c2, 0.3/3600.)", "Now I would like to have a representation for the work we have done that informs me about what is in my datasets. Namely, what is the error on star positions between the two datasets? 
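For the global view, one simple option is a histogram of the matched distances. The values below are made up for illustration, not taken from the catalogs:

```python
import numpy as np

rng = np.random.RandomState(1)
# Fake matched distances in degrees, roughly at the sub-pixel scale.
r = np.abs(rng.normal(scale=0.05 / 3600.0, size=200))

# Histogram in arcseconds; every match falls into some bin.
counts, edges = np.histogram(r * 3600.0, bins=20)
print(counts.sum())  # 200
```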
We would like to have a global view of this error but also an impression of the error as a function of the position on the field. For the latter, I suggest you use the 'scatter' function from matplotlib.", "#Spatial distribution of distances\nplt.title('distribution of distances accross the FoV')\n#...\n\n#Global representation\n#..." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
konstantinstadler/pymrio
doc/source/notebooks/aggregation_examples.ipynb
gpl-3.0
[ "Using the aggregation functionality of pymrio\nPymrio offers various possibilities to achieve an aggregation of an existing MRIO system.\nThe following section will present all of them in turn, using the test MRIO system included in pymrio.\nThe same concept can be applied to real-life MRIOs.\nSome of the examples rely on the country converter coco. The minimum version required is coco >= 0.6.3 - install the latest version with\npip install country_converter --upgrade\nCoco can also be installed from the Anaconda Cloud - see the coco readme for further info.\nLoading the test mrio\nFirst, we load and explore the test MRIO included in pymrio:", "import numpy as np\nimport pymrio\n\nio = pymrio.load_test()\nio.calc_all()\n\nprint(\n \"Sectors: {sec},\\nRegions: {reg}\".format(\n sec=io.get_sectors().tolist(), reg=io.get_regions().tolist()\n )\n)", "Aggregation using a numerical concordance matrix\nThis is the standard way to aggregate MRIOs when you work in MATLAB.\nTo do so, we need to set up a concordance matrix in which the columns correspond to the original classification and the rows to the aggregated one.", "sec_agg_matrix = np.array(\n [[1, 0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 1, 1, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1]]\n)\n\nreg_agg_matrix = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]])\n\nio.aggregate(region_agg=reg_agg_matrix, sector_agg=sec_agg_matrix)\n\nprint(\n \"Sectors: {sec},\\nRegions: {reg}\".format(\n sec=io.get_sectors().tolist(), reg=io.get_regions().tolist()\n )\n)\n\nio.calc_all()\n\nio.emissions.D_cba", "To use custom names for the aggregated sectors or regions, pass a list of names in order of rows in the concordance matrix:", "io = (\n pymrio.load_test()\n .calc_all()\n .aggregate(\n region_agg=reg_agg_matrix,\n region_names=[\"World Region A\", \"World Region B\"],\n inplace=False,\n )\n)\n\nio.get_regions()", "Aggregation using a numerical vector\nPymrio also accepts the aggregation information as a numerical or string vector.\nFor these,
each entry in the vector assigns the sector/region to an aggregation group.\nThus the two aggregation matrices from above (sec_agg_matrix and reg_agg_matrix) can also be represented as numerical or string vectors/lists:", "sec_agg_vec = np.array([0, 1, 1, 1, 1, 2, 2, 2])\nreg_agg_vec = [\"R1\", \"R1\", \"R1\", \"R2\", \"R2\", \"R2\"]", "These can then be passed to the aggregation function:", "io_vec_agg = (\n pymrio.load_test()\n .calc_all()\n .aggregate(region_agg=reg_agg_vec, sector_agg=sec_agg_vec, inplace=False)\n)\n\nprint(\n \"Sectors: {sec},\\nRegions: {reg}\".format(\n sec=io_vec_agg.get_sectors().tolist(), reg=io_vec_agg.get_regions().tolist()\n )\n)\n\nio_vec_agg.emissions.D_cba_reg", "Regional aggregation using the country converter coco\nThe previous examples are best suited if you want to reuse existing aggregation information.\nFor new/ad hoc aggregation, the most user-friendly solution is to build the concordance with the country converter coco. The minimum version of coco required is 0.6.3.
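Before turning to coco, note that the vector form shown earlier maps mechanically to the matrix form: one row per target group, one column per original region. A plain-Python sketch, using the same toy vector as above:

```python
import numpy as np

reg_agg_vec = ["R1", "R1", "R1", "R2", "R2", "R2"]
groups = sorted(set(reg_agg_vec))  # ['R1', 'R2']

# Row g, column j is 1 when original region j belongs to group g.
matrix = np.array([[1 if v == g else 0 for v in reg_agg_vec] for g in groups])
print(matrix)
# [[1 1 1 0 0 0]
#  [0 0 0 1 1 1]]
```

This reproduces reg_agg_matrix from the concordance-matrix example.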
You can either use coco to build independent aggregations (first case below) or use the predefined classifications included in coco (second case - Example WIOD below).", "import country_converter as coco", "Independent aggregation", "io = pymrio.load_test().calc_all()\n\nreg_agg_coco = coco.agg_conc(\n original_countries=io.get_regions(),\n aggregates={\n \"reg1\": \"World Region A\",\n \"reg2\": \"World Region A\",\n \"reg3\": \"World Region A\",\n },\n missing_countries=\"World Region B\",\n)\n\nio.aggregate(region_agg=reg_agg_coco)\n\nprint(\n \"Sectors: {sec},\\nRegions: {reg}\".format(\n sec=io.get_sectors().tolist(), reg=io.get_regions().tolist()\n )\n)", "This can be passed directly to pymrio:", "io.emissions.D_cba_reg", "A pandas DataFrame corresponding to the output from coco can also be passed to sector_agg for aggregation.\nA sector aggregation package similar to the country converter is planned.\nUsing the built-in classifications - WIOD example\nThe country converter is most useful when you work with an MRIO which is included in coco. In that case you can just pass the desired country aggregation to coco and it returns the required aggregation matrix:\nFor the example here, we assume that a raw WIOD download is available at:", "wiod_raw = \"/tmp/mrios/WIOD2013\"", "We will parse the year 2000 and calculate the results:", "wiod_orig = pymrio.parse_wiod(path=wiod_raw, year=2000).calc_all()", "and then aggregate the database: first to the EU countries, then grouping the remaining countries based on OECD membership.
In the example below, we single out Germany (DEU) so that it is not included in the aggregation:", "wiod_agg_DEU_EU_OECD = wiod_orig.aggregate(\n region_agg=coco.agg_conc(\n original_countries=\"WIOD\",\n aggregates=[{\"DEU\": \"DEU\"}, \"EU\", \"OECD\"],\n missing_countries=\"Other\",\n merge_multiple_string=None,\n ),\n inplace=False,\n)", "We can then rename the regions to make the membership clearer:", "wiod_agg_DEU_EU_OECD.rename_regions({\"OECD\": \"OECDwoEU\", \"EU\": \"EUwoGermany\"})", "To see the result for the air emission footprints:", "wiod_agg_DEU_EU_OECD.AIR.D_cba_reg", "For further examples on the capabilities of the country converter see the coco tutorial notebook\nAggregation by renaming\nOne alternative method for aggregating the MRIO system is to rename specific regions and/or sectors to duplicated names. \nDuplicated sectors and regions can then be automatically aggregated. This makes most sense when you have categories of some kind (e.g. consumption categories) or a detailed classification that can easily be broadened (e.g. A01, A02, which could all be renamed to A).\nIn the example below, we will aggregate sectors to consumption categories using some predefined categories included in pymrio. Check the Adjusting, Renaming and Restructuring notebook for more details.", "mrio = pymrio.load_test()\n\nclass_info = pymrio.get_classification('test')\nrename_dict = class_info.get_sector_dict(orig=class_info.sectors.TestMrioName, new=class_info.sectors.Type)", "If we take a look at the rename_dict, we see that it maps several sectors of the original MRIO to combined sectors (technically a many-to-one mapping).", "rename_dict", "Using this dict to rename sectors leads to an index with overlapping labels.", "mrio.rename_sectors(rename_dict)\nmrio.Z", "Which can then be aggregated with", "mrio.aggregate_duplicates()\nmrio.Z", "This method also comes in handy when aggregating parts of the MRIO regions.
E.g.:", "region_convert = {'reg1': 'Antarctica', 'reg2': 'Antarctica'}\nmrio.rename_regions(region_convert).aggregate_duplicates()\nmrio.Z", "Which lets us calculate the footprint of the consumption category 'eat' in 'Antarctica':", "mrio.calc_all()\nmrio.emissions.D_cba.loc[:, ('Antarctica', 'eat')]", "Aggregation to one total sector / region\nBoth region_agg and sector_agg also accept a string as argument. This leads to the aggregation to one total region or sector for the full IO system.", "pymrio.load_test().calc_all().aggregate(\n region_agg=\"global\", sector_agg=\"total\"\n).emissions.D_cba\n\n", "Pre- vs post-aggregation account calculations\nIt is generally recommended to calculate MRIO accounts with the highest detail possible and aggregate the results afterwards (post-aggregation - see for example Steen-Olsen et al 2014, Stadler et al 2014 or Koning et al 2015).\nPre-aggregation, that is, aggregating MRIO sectors and regions before calculating the footprint accounts, might be necessary when dealing with MRIOs on computers with limited RAM resources. However, one should be aware that the results might change.\nPymrio can handle both cases and can be used to highlight the differences. To do so, we use the two concordance matrices defined at the beginning (sec_agg_matrix and reg_agg_matrix) and aggregate the test system before and after the calculation of the accounts:", "io_pre = (\n pymrio.load_test()\n .aggregate(region_agg=reg_agg_matrix, sector_agg=sec_agg_matrix)\n .calc_all()\n)\nio_post = (\n pymrio.load_test()\n .calc_all()\n .aggregate(region_agg=reg_agg_matrix, sector_agg=sec_agg_matrix)\n)\n\nio_pre.emissions.D_cba\n\nio_post.emissions.D_cba", "The same results as in io_pre are obtained for io_post, if we recalculate the footprint accounts based on the aggregated system:", "io_post.reset_all_full().calc_all().emissions.D_cba" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Stanford-BIS/syde556
SYDE 556 Lecture 5 Dynamics.ipynb
gpl-2.0
[ "SYDE 556/750: Simulating Neurobiological Systems\nAccompanying Readings: Chapter 8\nDynamics\n\nEverything we've looked at so far has been feedforward\nThere's some pattern of activity in one group of neurons representing $x$\nWe want that to cause some pattern of activity in another group of neurons to represent $y=f(x)$\nThese can be chained together to make more complex systems $z=h(f(x)+g(y))$\n\n\nWhat about recurrent networks?\nWhat happens when we connect a neural group back to itself?\n\n\n\n<img src=\"files/lecture5/recnet1.png\">\nRecurrent functions\n\nWhat if we do exactly what we've done so far in the past, but instead of connecting one group of neurons to another, we just connect it back to itself\nInstead of $y=f(x)$\nWe get $x=f(x)$ (???)\n\n\n\nAs written, this is clearly non-sensical\n\nFor example, if we do $f(x)=x+1$ then we'd have $x=x+1$, or $x-x=1$, or $0=1$\n\n\n\nBut don't forget about time\n\nWhat if it was $x_{t+1} = f(x_t)$\nWhich makes more sense because we're talking about a real physical system\nThis is a lot like a differential equation\nWhat would happen if we built this?\n\n\n\nTry it out\n\nLet's try implementing this kind of circuit\nStart with $x_{t+1}=x_t+1$", "%pylab inline\n\nimport nengo\n\nmodel = nengo.Network()\n\nwith model:\n ensA = nengo.Ensemble(100, dimensions=1)\n \n def feedback(x):\n return x+1\n \n conn = nengo.Connection(ensA, ensA, function=feedback, synapse = 0.1)\n\n ensA_p = nengo.Probe(ensA, synapse=.01)\n \nsim = nengo.Simulator(model)\nsim.run(.5)\n\nplot(sim.trange(), sim.data[ensA_p])\nylim(-1.5,1.5);", "That sort of makes sense\n$x$ increases quickly, then hits an upper bound\n\n\nHow quickly?\nWhat parameters of the system affect this?\n\n\n\nWhat are the precise dynamics?\n\n\nWhat about $f(x)=-x$?", "with model:\n def feedback(x):\n return -x\n \n conn.function = feedback\n\nsim = nengo.Simulator(model)\nsim.run(.5)\n\nplot(sim.trange(), sim.data[ensA_p])\nylim(-1.5,1.5);", "That also makes 
sense. What if we nudge it away from zero?", "from nengo.utils.functions import piecewise\n\nwith model:\n stim = nengo.Node(piecewise({0:1, .2:-1, .4:0}))\n nengo.Connection(stim, ensA)\n \nsim = nengo.Simulator(model)\nsim.run(.6)\n\nplot(sim.trange(), sim.data[ensA_p])\nylim(-1.5,1.5);", "With an input of 1, $x=0.5$\nWith an input of -1, $x=-0.5$\nWith an input of 0, it goes back to $x=0$\n\nDoes this make sense?\n\nWhy / why not?\nAnd why that particular timing/curvature?\n\n\n\nWhat about $f(x)=x^2$?", "with model:\n stim.output = piecewise({.1:.2, .2:.4, .4:0})\n def feedback(x):\n return x*x\n \n conn.function = feedback\n\nsim = nengo.Simulator(model)\nsim.run(.5)\n\nplot(sim.trange(), sim.data[ensA_p])\nylim(-1.5,1.5); ", "Well that's weird\nStable at $x=0$ with no input \nStable at .2 \nUnstable at .4, shoots up high\nSomething very strange happens around $x=1$ when the input is turned off\n\n\nWhy is this happening?\n\nMaking sense of dynamics\n\nLet's go back to something simple\nJust a single feed-forward neural population\nEncode $x$ into current, compute spikes, decode filtered spikes into $\\hat{x}$\n\n\nInstead of a constant input, let's change the input\nChange it suddenly from zero to one to get a sense of what's happening with changes", "import nengo\nfrom nengo.utils.functions import piecewise\n\nmodel = nengo.Network(seed=4)\n\nwith model:\n stim = nengo.Node(piecewise({.3:1}))\n ensA = nengo.Ensemble(100, dimensions=1)\n \n def feedback(x):\n return x\n \n nengo.Connection(stim, ensA)\n #conn = nengo.Connection(ensA, ensA, function=feedback)\n\n stim_p = nengo.Probe(stim)\n ensA_p = nengo.Probe(ensA, synapse=.1)\n \nsim = nengo.Simulator(model)\nsim.run(1)\n\nplot(sim.trange(), sim.data[ensA_p], label=\"$\\hat{x}$\")\nplot(sim.trange(), sim.data[stim_p], label=\"$x$\")\nlegend()\nylim(-.2,1.5);", "This was supposed to compute $f(x)=x$\nFor a constant input, that works\nBut we get something else when there's a change in the input\n\n\nWhat is 
this difference?\nWhat affects it?", "with model:\n ensA_p = nengo.Probe(ensA, synapse=.03)\n\nsim = nengo.Simulator(model)\nsim.run(1)\n\nplot(sim.trange(), sim.data[ensA_p], label=\"$\\hat{x}$\")\nplot(sim.trange(), sim.data[stim_p], label=\"$x$\")\nlegend()\nylim(-.2,1.5);", "The time constant of the post-synaptic filter\nWe're not getting $f(x)=x$\nInstead we're getting $f(x(t))=x(t)*h(t)$", "tau = 0.03\nwith model:\n ensA_p = nengo.Probe(ensA, synapse=tau)\n\nsim = nengo.Simulator(model)\nsim.run(1)\n\nstim_filt = nengo.synapses.filt(sim.data[stim_p], synapse=tau, dt=sim.dt)\n\nplot(sim.trange(), sim.data[ensA_p], label=\"$\\hat{x}$\")\nplot(sim.trange(), sim.data[stim_p], label=\"$x$\")\nplot(sim.trange(), stim_filt, label=\"$h(t)*x(t)$\")\nlegend()\nylim(-.2,1.5);", "So there are dynamics and filtering going on, since there is always a synaptic filter on a connection\nRecurrent connections are dynamic as well (i.e. passing past information to future state of the population)\nLet's take a look more carefully\n\nRecurrent connections\n\nSo a connection actually approximates $f(x(t))*h(t)$\n\nSo what does a recurrent connection do?\n\nAlso $x(t) = f(x(t))*h(t)$\n\n\n\nwhere $$\nh(t) = \\begin{cases}\n e^{-t/\\tau} &\\mbox{if } t > 0 \\ \n 0 &\\mbox{otherwise} \n \\end{cases}\n$$\n\n\nHow can we work with this?\n\n\nGeneral rule of thumb: convolutions are annoying, so let's get rid of them\n\nWe could do a Fourier transform\n$X(\\omega)=F(\\omega)H(\\omega)$\nBut, since we are studying the response of a system (rather than a continuous signal), there's a more general and appropriate transform that makes life even easier:\nLaplace transform (it is more general because $s = a + j\\omega$)\nThe Laplace transform of our equations are:\n$X(s)=F(s)H(s)$\n$H(s)={1 \\over {1+s\\tau}}$\nRearranging:\n\n$X(s)=F(s){1 \\over {1+s\\tau}}$\n$X(s)(1+s\\tau) = F(s)$\n$X(s) + X(s)s\\tau = F(s)$\n$sX(s) = {1 \\over \\tau} (F(s)-X(s))$\n\nConvert back into the time domain (inverse 
Laplace):\n\n${dx \\over dt} = {1 \\over \\tau} (f(x(t))-x(t))$\nDynamics\n\n\nThis says that if we introduce a recurrent connection, we end up implementing a differential equation\n\n\nSo what happened with $f(x)=x+1$?\n\n$\\dot{x} = {1 \\over \\tau} (x+1-x)$\n$\\dot{x} = {1 \\over \\tau}$\n\n\nWhat about $f(x)=-x$?\n$\\dot{x} = {1 \\over \\tau} (-x-x)$\n$\\dot{x} = {-2x \\over \\tau}$\n\n\n\nAnd $f(x)=x^2$? \n\n$\\dot{x} = {1 \\over \\tau} (x^2-x)$\n\n\n\nWhat if there's some differential equation we really want to implement?\n\nWe want $\\dot{x} = f(x)$\nSo we do a recurrent connection of $f'(x)=\\tau f(x)+x$\nThe resulting model will end up implementing $\\dot{x} = {1 \\over \\tau} (\\tau f(x)+x-x)=f(x)$\n\n\n\nInputs\n\n\nWhat happens if there's an input as well?\n\nWe'll call the input $u$ from another population, and it is also computing some function $g(u)$\n$x(t) = f(x(t))h(t)+g(u(t))h(t)$\n\n\n\nFollow the same derivation steps\n\n$\\dot{x} = {1 \\over \\tau} (f(x)-x + g(u))$\n\n\n\nSo if you have some input that you want added to $\\dot{x}$, you need to scale it by $\\tau$\n\n\nThis lets us do any differential equation of the form $\\dot{x}=f(x)+g(u)$\n\n\nA derivation\nLinear systems\n\nThe book shows that we can implement any equation of the form\n\n$\\dot{x}(t) = A x(t) + B u(t)$\n\nWhere $A$ and $x$ are a matrix and vector -- giving a standard control theoretic structure\n<img src=\"files/lecture5/control_sys.png\" width=\"400\">\n\nOur goal is to convert this to a structure which has $h(t)$ as the transfer function instead of the standard $\\int$\n<img src=\"files/lecture5/control_sysh.png\" width=\"400\">\n\n\nUsing Laplace on the standard form gives:\n\n\n$sX(s) = A X(s) + B U(s)$\n\nLaplace on the 'neural control' form gives (as before where $F(s) = A'X(s) + B'U(s)$):\n\n$X(s) = {1 \\over {1 + s\\tau}} (A'X(s) + B'U(s))$\n$X(s) + \\tau sX(s) = (A'X(s) + B'U(s))$\n$sX(s) = {1 \\over \\tau} (A'X(s) + B'U(s) - X(s))$\n$sX(s) = {1 \\over \\tau} ((A' 
- 1) X(s) + B'U(s))$\n\nMaking the 'standard' and 'neural' equations equal to one another, we find that for any system with a given A and B, the A' and B' of the equivalent neural system are given by:\n\n$A' = \\tau A + I$ and\n$B' = \\tau B$\n\n\nwhere $I$ is the identity matrix\n\n\nThis is nice because lots of engineers think of the systems they build in these terms (i.e. as linear control systems).\n\n\nNonlinear systems\n\nIn fact, these same steps can be taken to account for nonlinear control systems as well:\n\n$\\dot{x}(t) = F(x(t),u(t),t)$\n\nFor a neural system with transfer function $h(t)$:\n\n$X(s) = H(s)F'(X(s),U(s),s)$\n$X(s) = {1 \\over {1 + s\\tau}} F'(X(s),U(s),s)$\n$sX(s) = {1 \\over \\tau} (F'(X(s),U(s),s) - X(s))$\n\nThis gives the general result (slightly more general than what we saw earlier):\n\n$F'(X(s),U(s),s) = \\tau(F(X(s),U(s),s)) + X(s)$\nApplications\nEye control\n\nPart of the brainstem called the nuclei prepositus hypoglossi\nInput is eye velocity $v$\nOutput is eye position $x$\n\n$\\dot{x}=v$\n\nThis is an integrator ($x$ is the integral of $v$)\n\n\n\nIt's a linear system, so, to get it in the standard control form $\\dot{x}=Ax+Bu$ we have:\n\n$A=0$\n$B=1$\n\n\nSo that means we need $A'=\\tau 0 + I = 1$ and $B'=\\tau 1 = \\tau$\n<img src=\"files/lecture5/eye_sys.png\">", "import nengo\nfrom nengo.utils.functions import piecewise\n\ntau = 0.01\n\nmodel = nengo.Network('Eye control', seed=4)\n\nwith model:\n stim = nengo.Node(piecewise({.3:1, .6:0 }))\n velocity = nengo.Ensemble(100, dimensions=1)\n position = nengo.Ensemble(100, dimensions=1)\n \n def feedback(x):\n return 1*x\n \n conn = nengo.Connection(stim, velocity)\n conn = nengo.Connection(velocity, position, transform=tau, synapse=tau)\n conn = nengo.Connection(position, position, function=feedback, synapse=tau)\n\n stim_p = nengo.Probe(stim)\n position_p = nengo.Probe(position, synapse=.01)\n velocity_p = nengo.Probe(velocity, synapse=.01)\n \nsim = 
nengo.Simulator(model)\nsim.run(1)\n\nplot(sim.trange(), sim.data[stim_p], label = \"stim\")\nplot(sim.trange(), sim.data[position_p], label = \"position\")\nplot(sim.trange(), sim.data[velocity_p], label = \"velocity\")\nlegend(loc=\"best\");", "That's pretty good... the area under the input is about equal to the magnitude of the output.\nBut, in order to be a perfect integrator, we'd need exactly $x=1\times x$\nWe won't get exactly that\nNeural implementations are always approximations\n\n\nTwo forms of error:\n$E_{distortion}$, the decoding error\n$E_{noise}$, the random noise error\n\n\nWhat will they do?\n\nDistortion error\n<img src=\"files/lecture5/integrator_error.png\">\n\nWhat affects this?", "import nengo\nfrom nengo.dists import Uniform\nfrom nengo.utils.ensemble import tuning_curves\n\nmodel = nengo.Network(label='Neurons')\nwith model:\n neurons = nengo.Ensemble(200, dimensions=1, max_rates=Uniform(100,200))\n\n connection = nengo.Connection(neurons, neurons)\n \nsim = nengo.Simulator(model)\n\nd = sim.data[connection].weights.T\n\nx, A = tuning_curves(neurons, sim)\n\nxhat = numpy.dot(A, d)\n\nplot(x, xhat-x)\naxhline(0, color='k')\nxlabel('$x$')\nylabel('$\hat{x}-x$');", "So we can think of the distortion error as introducing a bunch of local attractors into the representation\nAny 'downward' x-crossing will be a stable point ('upwards' is unstable).\nThere will be a tendency to drift towards one of these even if the input is zero.\n\n\n\nNoise error\n\nWhat will random noise do?\nPush the representation back and forth\nWhat if it is small?\nWhat if it is large?\n\n\n\nWhat will changing the post-synaptic time constant $\tau$ do?\n\nHow does that interact with noise?\n\n\n\nBut real eyes aren't perfect integrators\n\nIf you get someone to look at something, then turn off the lights but tell them to keep looking in the same direction, their eye will drift back to centre\nHow do we implement that?\n\n\n\n$\dot{x}=-{1 \over \tau_c}x + 
v$\n\n\n$\\tau_c$ is the time constant of that return to centre\n\n\n$A'=\\tau {-1 \\over \\tau_c}+1$\n\n$B' = \\tau$", "import nengo\nfrom nengo.utils.functions import piecewise\n\ntau = 0.1\ntau_c = 2.0\n\nmodel = nengo.Network('Eye control', seed=5)\n\nwith model:\n stim = nengo.Node(piecewise({.3:1, .6:0 }))\n velocity = nengo.Ensemble(100, dimensions=1)\n position = nengo.Ensemble(200, dimensions=1)\n \n def feedback(x):\n return (-tau/tau_c + 1)*x\n \n conn = nengo.Connection(stim, velocity)\n conn = nengo.Connection(velocity, position, transform=tau, synapse=tau)\n conn = nengo.Connection(position, position, function=feedback, synapse=tau)\n\n stim_p = nengo.Probe(stim)\n position_p = nengo.Probe(position, synapse=.01)\n velocity_p = nengo.Probe(velocity, synapse=.01)\n \nsim = nengo.Simulator(model)\nsim.run(5)\n\nplot(sim.trange(), sim.data[stim_p], label = \"stim\")\nplot(sim.trange(), sim.data[position_p], label = \"position\")\nplot(sim.trange(), sim.data[velocity_p], label = \"velocity\")\nlegend(loc=\"best\");", "That also looks right. Note that as $\\tau_c \\rightarrow \\infty$ this will approach the integrator.\nHumans (a) and Goldfish (b)\nHumans have more neurons doing this than goldfish (~1000 vs ~40)\nThey also have slower decay (70 s vs. 10 s).\nWhy do these fit together?\n\n<img src=\"files/lecture5/integrator_decay.png\">\nControlled Integrator\n\nWhat if we want an integrator where we can adjust the decay on-the-fly?\nSeparate input telling us what the decay constant $d$ should be\n\n$\\dot{x} = -d x + v$\n\n\nSo there are two inputs: $v$ and $d$\n\n\nThis is no longer in the standard $Ax + Bu$ form. 
Sort of...\n\nLet $A = -d(t)$, so it's not a matrix\nBut it is of the more general form: ${dx \over dt}=f(x)+g(u)$\n\n\n\nWe need to compute a nonlinear function of an input ($d$) and the state variable ($x$)\n\nHow can we do this?\nGoing to 2D so we can compute the nonlinear function\nLet's have the state variable be $[x, d]$\n\n\n\n<img src=\"files/lecture5/controlled_integrator.png\" width = 400>", "import nengo\nfrom nengo.utils.functions import piecewise\n\ntau = 0.1\n\nmodel = nengo.Network('Controlled integrator', seed=1)\n\nwith model:\n vel = nengo.Node(piecewise({.2:1.5, .5:0 }))\n dec = nengo.Node(piecewise({.7:.2, .9:0 }))\n \n velocity = nengo.Ensemble(100, dimensions=1)\n decay = nengo.Ensemble(100, dimensions=1)\n position = nengo.Ensemble(400, dimensions=2)\n \n def feedback(x):\n return -x[1]*x[0]+x[0], 0\n \n conn = nengo.Connection(vel, velocity)\n conn = nengo.Connection(dec, decay)\n conn = nengo.Connection(velocity, position[0], transform=tau, synapse=tau)\n conn = nengo.Connection(decay, position[1], synapse=0.01)\n conn = nengo.Connection(position, position, function=feedback, synapse=tau)\n\n position_p = nengo.Probe(position, synapse=.01)\n velocity_p = nengo.Probe(velocity, synapse=.01)\n decay_p = nengo.Probe(decay, synapse=.01)\n \nsim = nengo.Simulator(model)\nsim.run(1)\n\nplot(sim.trange(), sim.data[decay_p])\nlineObjects = plot(sim.trange(), sim.data[position_p])\nplot(sim.trange(), sim.data[velocity_p])\nlegend(('decay','position (x)','position (d)','velocity'),loc=\"best\");\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"controlled_integrator.py.cfg\")", "Other fun functions\n\nOscillator\n$F = -kx = m \ddot{x}$ let $\omega = \sqrt{\frac{k}{m}}$\n$\frac{d}{dt} \begin{bmatrix}\n\omega x \\n\dot{x}\n\end{bmatrix}\n=\n\begin{bmatrix}\n0 & \omega \\n-\omega & 0\n\end{bmatrix}\n\begin{bmatrix}\n\omega x \\n\dot{x}\n\end{bmatrix}$\n$\dot{x}=[x_1, -x_0]$", "import nengo\n\nmodel = nengo.Network('Oscillator')\n\nfreq = 1\n\nwith model:\n stim = nengo.Node(lambda t: 
[.5,.5] if t<.02 else [0,0])\n \n osc = nengo.Ensemble(200, dimensions=2)\n \n def feedback(x):\n return x[0]+freq*x[1], -freq*x[0]+x[1]\n \n nengo.Connection(osc, osc, function=feedback, synapse=.01)\n nengo.Connection(stim, osc)\n \n osc_p = nengo.Probe(osc, synapse=.01)\n \nsim = nengo.Simulator(model)\nsim.run(.5)\n\nfigure(figsize=(12,4))\nsubplot(1,2,1)\nplot(sim.trange(), sim.data[osc_p]);\nxlabel('Time (s)')\nylabel('State value')\n \nsubplot(1,2,2)\nplot(sim.data[osc_p][:,0],sim.data[osc_p][:,1])\nxlabel('$x_0$')\nylabel('$x_1$');\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"oscillator.py.cfg\")", "Lorenz Attractor (a chaotic attractor)\n\n$\\dot{x}=[10x_1-10x_0, -x_0 x_2-x_1, x_0 x_1 - {8 \\over 3}(x_2+28)-28]$", "import nengo\n\nmodel = nengo.Network('Lorenz Attractor', seed=3)\n\ntau = 0.1\nsigma = 10\nbeta = 8.0/3\nrho = 28\n\ndef feedback(x):\n dx0 = -sigma * x[0] + sigma * x[1]\n dx1 = -x[0] * x[2] - x[1]\n dx2 = x[0] * x[1] - beta * (x[2] + rho) - rho\n return [dx0 * tau + x[0], \n dx1 * tau + x[1], \n dx2 * tau + x[2]]\n\nwith model:\n lorenz = nengo.Ensemble(2000, dimensions=3, radius=60)\n \n nengo.Connection(lorenz, lorenz, function=feedback, synapse=tau)\n \n lorenz_p = nengo.Probe(lorenz, synapse=tau)\n \nsim = nengo.Simulator(model)\nsim.run(14)\n\nfigure(figsize=(12,4))\nsubplot(1,2,1)\nplot(sim.trange(), sim.data[lorenz_p]);\nxlabel('Time (s)')\nylabel('State value')\n \nsubplot(1,2,2)\nplot(sim.data[lorenz_p][:,0],sim.data[lorenz_p][:,1])\nxlabel('$x_0$')\nylabel('$x_1$');\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"lorenz.py.cfg\")", "Note: This is not the original Lorenz attractor. 
\nThe original is $\\dot{x}=[10x_1-10x_0, x_0 (28-x_2)-x_1, x_0 x_1 - {8 \\over 3}(x_2)]$\nWhy change it to $\\dot{x}=[10x_1-10x_0, -x_0 x_2-x_1, x_0 x_1 - {8 \\over 3}(x_2+28)-28]$?\nWhat's being changed here?\n\n\n\nOscillators with different paths\n\nSince we can implement any function, we're not limited to linear oscillators\nWhat about a \"square\" oscillator?\nInstead of the value going in a circle, it traces out a square\n\n\n\n$$\n{\\dot{x}} = \\begin{cases}\n [r, 0] &\\mbox{if } |x_1|>|x_0| \\wedge x_1>0 \\ \n [-r, 0] &\\mbox{if } |x_1|>|x_0| \\wedge x_1<0 \\ \n [0, -r] &\\mbox{if } |x_1|<|x_0| \\wedge x_0>0 \\ \n [0, r] &\\mbox{if } |x_1|<|x_0| \\wedge x_0<0 \\ \n \\end{cases}\n$$", "import nengo\n\nmodel = nengo.Network('Square Oscillator')\n\ntau = 0.02\nr=4\n\ndef feedback(x): \n if abs(x[1])>abs(x[0]):\n if x[1]>0: dx=[r, 0]\n else: dx=[-r, 0]\n else:\n if x[0]>0: dx=[0, -r]\n else: dx=[0, r]\n return [tau*dx[0]+x[0], tau*dx[1]+x[1]] \n\nwith model:\n stim = nengo.Node(lambda t: [.5,.5] if t<.02 else [0,0])\n \n square_osc = nengo.Ensemble(1000, dimensions=2)\n \n nengo.Connection(square_osc, square_osc, function=feedback, synapse=tau)\n nengo.Connection(stim, square_osc)\n \n square_osc_p = nengo.Probe(square_osc, synapse=tau)\n \nsim = nengo.Simulator(model)\nsim.run(2)\n\nfigure(figsize=(12,4))\nsubplot(1,2,1)\nplot(sim.trange(), sim.data[square_osc_p]);\nxlabel('Time (s)')\nylabel('State value')\n \nsubplot(1,2,2)\nplot(sim.data[square_osc_p][:,0],sim.data[square_osc_p][:,1])\nxlabel('$x_0$')\nylabel('$x_1$');", "Does this do what you expect?\n\nHow is it affected by:\n\nNumber of neurons?\nPost-synaptic time constant?\nDecoding filter time constant?\nSpeed of oscillation (r)?\n\n\n\nWhat about this shape?", "import nengo\n\nmodel = nengo.Network('Heart Oscillator')\n\ntau = 0.02\nr=4\n\ndef feedback(x): \n return [-tau*r*x[1]+x[0], tau*r*x[0]+x[1]]\n\ndef heart_shape(x):\n theta = np.arctan2(x[1], x[0])\n r = 2 - 2 * np.sin(theta) + 
np.sin(theta)*np.sqrt(np.abs(np.cos(theta)))/(np.sin(theta)+1.4)\n return -r*np.cos(theta), r*np.sin(theta)\n\nwith model:\n stim = nengo.Node(lambda t: [.5,.5] if t<.02 else [0,0])\n \n heart_osc = nengo.Ensemble(1000, dimensions=2)\n heart = nengo.Ensemble(100, dimensions=2, radius=4)\n \n nengo.Connection(stim, heart_osc)\n nengo.Connection(heart_osc, heart_osc, function=feedback, synapse=tau)\n nengo.Connection(heart_osc, heart, function=heart_shape, synapse=tau)\n \n heart_p = nengo.Probe(heart, synapse=tau)\n \nsim = nengo.Simulator(model)\nsim.run(4)\n\nfigure(figsize=(12,4))\nsubplot(1,2,1)\nplot(sim.trange(), sim.data[heart_p]);\nxlabel('Time (s)')\nylabel('State value')\n \nsubplot(1,2,2)\nplot(sim.data[heart_p][:,0],sim.data[heart_p][:,1])\nxlabel('$x_0$')\nylabel('$x_1$');", "We are doing things differently here\nThe actual $x$ value is a normal circle oscillator\nThe heart shape is a function of $x$\nBut that's just a different decoder\n\n\nWould it be possible to do an oscillator where $x$ followed this shape?\nHow could we tell them apart in terms of neural behaviour?\n\n\n\nControlled Oscillator\n\n\nChange the frequency of the oscillator on-the-fly\n\n\n$\\dot{x}=[x_1 x_2, -x_0 x_2]$", "import nengo\nfrom nengo.utils.functions import piecewise\n\n\nmodel = nengo.Network('Controlled Oscillator')\n\ntau = 0.1\nfreq = 20\n\ndef feedback(x):\n return x[1]*x[2]*freq*tau+1.1*x[0], -x[0]*x[2]*freq*tau+1.1*x[1], 0\n\nwith model:\n stim = nengo.Node(lambda t: [20,20] if t<.02 else [0,0])\n freq = nengo.Node(piecewise({0:1, 2:.5, 6:-1}))\n \n ctrl_osc = nengo.Ensemble(500, dimensions=3)\n \n nengo.Connection(ctrl_osc, ctrl_osc, function=feedback, synapse=tau)\n nengo.Connection(stim, ctrl_osc[0:2])\n nengo.Connection(freq, ctrl_osc[2])\n \n ctrl_osc_p = nengo.Probe(ctrl_osc, synapse=0.01)\n \nsim = nengo.Simulator(model)\nsim.run(8)\n\nfigure(figsize=(12,4))\nsubplot(1,2,1)\nplot(sim.trange(), sim.data[ctrl_osc_p]);\nxlabel('Time (s)')\nylabel('State 
value')\n \nsubplot(1,2,2)\nplot(sim.data[ctrl_osc_p][:,0],sim.data[ctrl_osc_p][:,1])\nxlabel('$x_0$')\nylabel('$x_1$');\n\nfrom nengo_gui.ipython import IPythonViz\nIPythonViz(model, \"controlled_oscillator.py.cfg\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rsignell-usgs/notebook
ERDDAP/OOI-ERDDAP_Search.ipynb
mit
[ "Search OOI ERDDAP for Pioneer Glider Data\nUse ERDDAP's RESTful advanced search to try to find OOI Pioneer glider water temperatures from the OOI ERDDAP. Use case from Stace Beaulieu (sbeaulieu@whoi.edu)", "import pandas as pd", "First try just searching for \"glider\"", "url = 'http://ooi-data.marine.rutgers.edu/erddap/search/advanced.csv?page=1&itemsPerPage=1000&searchFor=glider'\ndft = pd.read_csv(url, usecols=['Title', 'Summary', 'Institution', 'Dataset ID']) \ndft.head()", "Now search for all temperature data in specified bounding box and temporal extent", "start = '2000-01-01T00:00:00Z'\nstop = '2017-02-22T00:00:00Z'\nlat_min = 39.\nlat_max = 41.5\nlon_min = -72.\nlon_max = -69.\nstandard_name = 'sea_water_temperature'\nendpoint = 'http://ooi-data.marine.rutgers.edu/erddap/search/advanced.csv'\n\nimport pandas as pd\n\n\nbase = (\n '{}'\n '?page=1'\n '&itemsPerPage=1000'\n '&searchFor='\n '&protocol=(ANY)'\n '&cdm_data_type=(ANY)'\n '&institution=(ANY)'\n '&ioos_category=(ANY)'\n '&keywords=(ANY)'\n '&long_name=(ANY)'\n '&standard_name={}'\n '&variableName=(ANY)'\n '&maxLat={}'\n '&minLon={}'\n '&maxLon={}'\n '&minLat={}'\n '&minTime={}'\n '&maxTime={}').format\n\nurl = base(\n endpoint,\n standard_name,\n lat_max,\n lon_min,\n lon_max,\n lat_min,\n start,\n stop\n)\n\nprint(url)\n\ndft = pd.read_csv(url, usecols=['Title', 'Summary', 'Institution','Dataset ID']) \n\nprint('Datasets Found = {}'.format(len(dft)))\nprint(url)\ndft", "Define a function that returns a Pandas DataFrame based on the dataset ID. The ERDDAP request variables (e.g. \"ctdpf_ckl_wfp_instrument_ctdpf_ckl_seawater_temperature\") are hard-coded here, so this routine should be modified for other ERDDAP endpoints or datasets. \nSince we didn't actually find any glider data, we just request the last temperature value from each dataset, using the ERDDAP orderByMax(\"time\") constraint. 
This way we can see when the data ends, and if the mooring locations look correct", "def download_df(glider_id):\n from pandas import DataFrame, read_csv\n# from urllib.error import HTTPError\n uri = ('http://ooi-data.marine.rutgers.edu/erddap/tabledap/{}.csv'\n '?trajectory,'\n 'time,latitude,longitude,'\n 'ctdpf_ckl_wfp_instrument_ctdpf_ckl_seawater_temperature'\n '&orderByMax(\"time\")'\n '&time>={}'\n '&time<={}'\n '&latitude>={}'\n '&latitude<={}'\n '&longitude>={}'\n '&longitude<={}').format\n url = uri(glider_id,start,stop,lat_min,lat_max,lon_min,lon_max)\n print(url)\n # Not sure if returning an empty df is the best idea.\n try:\n df = read_csv(url, index_col='time', parse_dates=True, skiprows=[1])\n except:\n df = pd.DataFrame()\n return df\n\ndf = pd.concat(list(map(download_df, dft['Dataset ID'].values)))\n\nprint('Total Data Values Found: {}'.format(len(df)))\n\ndf\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport cartopy.crs as ccrs\nfrom cartopy.feature import NaturalEarthFeature\n\nbathym_1000 = NaturalEarthFeature(name='bathymetry_J_1000',\n scale='10m', category='physical')\n\nfig, ax = plt.subplots(\n figsize=(9, 9),\n subplot_kw=dict(projection=ccrs.PlateCarree())\n)\nax.coastlines(resolution='10m')\nax.add_feature(bathym_1000, facecolor=[0.9, 0.9, 0.9], edgecolor='none')\n\ndx = dy = 0.5\nax.set_extent([lon_min-dx, lon_max+dx, lat_min-dy, lat_max+dy])\n\ng = df.groupby('trajectory')\nfor glider in g.groups:\n traj = df[df['trajectory'] == glider]\n ax.plot(traj['longitude'], traj['latitude'], 'o', label=glider)\n\ngl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,\n linewidth=2, color='gray', alpha=0.5, linestyle='--')\n\nax.legend();" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/inm/cmip6/models/inm-cm4-8/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: INM\nSource ID: INM-CM4-8\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:04\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'inm', 'inm-cm4-8', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Meteorological Forcings\n5. Key Properties --&gt; Resolution\n6. Key Properties --&gt; Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --&gt; Absorption\n12. Optical Radiative Properties --&gt; Mixtures\n13. Optical Radiative Properties --&gt; Impact Of H2o\n14. Optical Radiative Properties --&gt; Radiative Scheme\n15. Optical Radiative Properties --&gt; Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. 
Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. 
Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nThree dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTwo dimensional forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nFrequency with which meteorological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Resolution\nResolution in the aerosol model grid\n5.1. 
Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of transport in atmospheric aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. 
Convention\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTransport by convention", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of emissions in atmospheric aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCharacteristics of the &quot;other method&quot; used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of concentrations in atmospheric aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --&gt; Absorption\nAbsorption properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. 
Dust\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --&gt; Mixtures\n**\n12.1. External\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Internal\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixing rule", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --&gt; Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --&gt; Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Shortwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. 
Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --&gt; Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. Gas Phase Precursors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
WNoxchi/Kaukasos
misc/6.S094_DeepTraffic_par-search.ipynb
mit
[ "Wayne H Nixalo\nMIT 6.S094: Deep Learning for Self-Driving Cars\nParameter / Architecture Search for DeepTraffic.\n\nI. Basic Parameter Search\nI'll be going through:\n\nlaneside\npatchesAhead\npatchesBehind\nnum_neurons -- in FC layer\n\nI'm not sure yet if I'll touch:\n\ntrainIterations\notherAgents\nnumber of FC layers\nlearning_rate\nmomentum\nbatch_size\nl2_decay\n\n1. Stock Settings: 51.51 mph", "lanesSide = 0\npatchesAhead = 1\npatchesBehind = 0\ntrainIterations = 10000\n\notherAgents = 0; # max of 10\n\n# FC Layer 1\nnum_neurons = 1\n\nlearning_rate = 0.001\nmomentum = 0.0\nbatch_size = 64\nl2_decay = 0.01", "lanesSide = 1: 51.51\nlanesSide = 2: 51.63\nlanesSide = 4: 51.79\n```\nlanesSide = 1\npatchesAhead = 2\n```\nspeed: 51.51\nlanesSide = 1\npatchesAhead = 4\nspeed: 51.51\nlanesSide = 1\npatchesAhead = 8\nspeed: 51.57\n```\nlanesSide = 2\npatchesAhead = 2\n```\nspeed: 51.69\nlanesSide = 2\npatchesAhead = 4\nspeed: 52.28\nlanesSide = 2\npatchesAhead = 8\nspeed: 51.51\nI seem to be within signal noise a lot. I'm going to guess some \"good\" parameters and see how tweaking that works.\n2. Guessed Settings: 62.27\nI had to balance with training time. Looking at the simulation, it looked like it took some time for the network to make a turn decision. With this combined w/ wanting to cut down on train time, I thought I could get better performance by trading side-to-side view distance for forward view distance -- giving the network more time to act on a decision, even if it had less total input to make it. 
ie: being able to execute an Okay decision is better than being unable to make a decision at all, Great or not.", "lanesSide = 2\npatchesAhead = 16\npatchesBehind = 1\ntrainIterations = 10000\n\notherAgents = 0; # max of 10\n\n# FC Layer 1\nnum_neurons = network_size // num_actions # 355//5=71\n\nlearning_rate = 0.001\nmomentum = 0.0\nbatch_size = 64\nl2_decay = 0.01\n\nlanesSide = 2; patchesAhead=8; patchesBehind=0; num_actions=5\n\nnum_inputs = (lanesSide * 2 + 1) * (patchesAhead + patchesBehind)\ntemporal_window = 3\nnetwork_size = num_inputs * temporal_window + num_actions * temporal_window + num_inputs\nnetwork_size", "64.11 mph\n```\n//<![CDATA[\n// SELF-NOTE: dunno how to get a Conv layer up in this.. couldn't quickly\n// figure out how to format in/output tensor shape -- probably\n// why it wasn't working. Apparently someone who did well last\n// year basically brute-forced this? https://github.com/jordanott/Deep-Traffic/blob/master/deep_traffic.js\n// I guess I'll see what a bigger FullNet can do since I'm\n// already close to the passing 65mph. 
May change if I see\n// working syntax for Conv->Pool->Full somewhere for ConvNet.js.\n// -- WNixalo\n// a few things don't have var in front of them - they update already existing variables the game needs\nlanesSide = 3;\npatchesAhead = 17;\npatchesBehind = 1;\ntrainIterations = 35000;\n// the number of other autonomous vehicles controlled by your network\notherAgents = 0; // max of 10\nvar num_inputs = (lanesSide * 2 + 1) * (patchesAhead + patchesBehind);\nvar num_actions = 5;\nvar temporal_window = 3;\nvar network_size = num_inputs * temporal_window + num_actions * temporal_window + num_inputs;\nvar layer_defs = [];\n layer_defs.push({\n type: 'input',\n out_sx: 1,\n out_sy: 1,\n out_depth: network_size\n});\n// layer_defs.push({\n// type: 'conv',\n// filters: 5,\n// stride:1,\n// activation='relu'\n// });\n// layers_defs.push({\n// type:'pool',\n// sx:2,\n// stride:2})\nlayer_defs.push({\n type: 'fc',\n num_neurons: parseInt(network_size / num_actions / 1),\n activation: 'relu'\n});\n// Local Contrast Normalization https://cs.stanford.edu/people/karpathy/convnetjs/docs.html\nlayer_defs.push(\n {type:'lrn',\n k:1,\n n:3,\n alpha:0.1,\n beta:0.75});\nlayer_defs.push({\n type: 'fc',\n num_neurons: parseInt(network_size / num_actions / 1),\n activation: 'relu'\n});\nlayer_defs.push({\n type: 'regression',\n num_neurons: num_actions\n});\nvar tdtrainer_options = {\n learning_rate: 0.001,\n momentum: 0.0,\n batch_size: 128,\n l2_decay: 0.01\n};\nvar opt = {};\nopt.temporal_window = temporal_window;\nopt.experience_size = 3000;\nopt.start_learn_threshold = 500;\nopt.gamma = 0.7;\nopt.learning_steps_total = 10000;\nopt.learning_steps_burnin = 1000;\nopt.epsilon_min = 0.0;\nopt.epsilon_test_time = 0.0;\nopt.layer_defs = layer_defs;\nopt.tdtrainer_options = tdtrainer_options;\nbrain = new deepqlearn.Brain(num_inputs, num_actions, opt);\nlearn = function (state, lastReward) {\nbrain.backward(lastReward);\nvar action = 
brain.forward(state);\ndraw_net();\ndraw_stats();\nreturn action;\n}\n//]]>\n```\n65.13 mph" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/text_classification/labs/text_similarity.ipynb
apache-2.0
[ "Evaluating ROUGE-L Text Similarity Metric\nLearning objectives\n\nInstall TF.Text TensorFlow library.\nCompute LCS-based similarity score.\n\nOverview\nTensorFlow Text provides a collection of text-metrics-related classes and ops ready to use with TensorFlow 2.0. The library contains implementations of text-similarity metrics such as ROUGE-L, required for automatic evaluation of text generation models.\nThe benefit of using these ops in evaluating your models is that they are compatible with TPU evaluation and work nicely with TF streaming metric APIs.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.\nSetup", "# Install TF.Text TensorFlow library\n# TODO 1: Your code here", "Please ignore any incompatibility warnings and errors.\nRestart the kernel to use updated packages. (On the Notebook menu, select Kernel > Restart Kernel > Restart).", "import tensorflow as tf\nimport tensorflow_text as text", "ROUGE-L\nThe Rouge-L metric is a score from 0 to 1 indicating how similar two sequences are, based on the length of the longest common subsequence (LCS). 
In particular, Rouge-L is the weighted harmonic mean (or f-measure) combining the LCS precision (the percentage of the hypothesis sequence covered by the LCS) and the LCS recall (the percentage of the reference sequence covered by the LCS).\nSource: https://www.microsoft.com/en-us/research/publication/rouge-a-package-for-automatic-evaluation-of-summaries/\nThe TF.Text implementation returns the F-measure, Precision, and Recall for each (hypothesis, reference) pair.\nConsider the following hypothesis/reference pair:", "hypotheses = tf.ragged.constant([['captain', 'of', 'the', 'delta', 'flight'],\n ['the', '1990', 'transcript']])\nreferences = tf.ragged.constant([['delta', 'air', 'lines', 'flight'],\n ['this', 'concludes', 'the', 'transcript']])", "The hypotheses and references are expected to be tf.RaggedTensors of tokens. Tokens are required instead of raw sentences because no single tokenization strategy fits all tasks.\nNow we can call text.metrics.rouge_l and get our result back:", "result = text.metrics.rouge_l(hypotheses, references)\nprint('F-Measure: %s' % result.f_measure)\nprint('P-Measure: %s' % result.p_measure)\nprint('R-Measure: %s' % result.r_measure)", "ROUGE-L has an additional hyperparameter, alpha, which determines the weight of the harmonic mean used for computing the F-Measure. Values closer to 0 treat Recall as more important and values closer to 1 treat Precision as more important. alpha defaults to .5, which corresponds to equal weight for Precision and Recall.", "# Compute ROUGE-L with alpha=0\nresult = text.metrics.rouge_l(hypotheses, references, alpha=0)\nprint('F-Measure (alpha=0): %s' % result.f_measure)\nprint('P-Measure (alpha=0): %s' % result.p_measure)\nprint('R-Measure (alpha=0): %s' % result.r_measure)\n\n# Compute ROUGE-L with alpha=1\n# TODO 2: Your code here" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
wobiskai/anomaly-detection-in-Bitcoin
Bitcoin Anomaly Analysis.ipynb
apache-2.0
[ "Load data", "import numpy as np\nimport pandas as pd\n\ncolumn_names = ['txn_key', 'from_user', 'to_user', 'date', 'amount']\ndf = pd.read_csv('../data/bitcoin_uic_data_and_code_20130410/user_edges.txt', names=column_names)\n\ndf.head()", "Select transactions in or before 2010", "df[ df.date < 20110000000000 ].to_csv('../data/subset/user_edges_2010.csv', index=False)\n\ndf = pd.read_csv('../data/subset/user_edges_2010.csv')\n\ndf['date'] = pd.to_datetime(df.date, format='%Y-%m-%d %H:%M:%S')\n\ndf.to_csv('../data/subset/user_edges_2010.csv', index=False)", "Transaction Features to use\n\nNumber of transaction under this key \nTransaction amount, total amount under this key \nFrom equals to? \nNumber of unique from/to under this key \nTransaction date \nYear \nMonth \nDay \nDay of week \nDay of year \nHour \nMinute \n\nSecond \n\n\nFrom/to in/out (unique) degree \n\nFrom/to clustering coefficient \nFrom/to in/out transaction frequency \nAll \nWithin ±12 hours \nFrom/to transaction volume \nAll \nWithin ±12 hours \nFrom/to first transaction date \nFrom/to average in/out transaction amount \nFrom/to average time between in/out transactions \n\nBuild graphs from transaction data - Undirected, directed and multi-directed", "import networkx as nx\n\n# for features only defined in undirected graph\nG = nx.from_pandas_dataframe(df, \n source='from_user', target='to_user', \n edge_attr=['txn_key', 'amount', 'date'], \n create_using=nx.Graph()\n )\n\n# unique links between users\nG_di = nx.from_pandas_dataframe(df, \n source='from_user', target='to_user', \n edge_attr=['txn_key', 'amount', 'date'], \n create_using=nx.DiGraph()\n )\n\n# the full graph\nG_mdi = nx.from_pandas_dataframe(df, \n source='from_user', target='to_user', \n edge_attr=['txn_key', 'amount', 'date'], \n create_using=nx.MultiDiGraph()\n )\n\n# transaction feature maps\ncount_by_key = df.groupby('txn_key').size()\namount_by_key = df.groupby('txn_key').amount.sum()\nufrom_by_key = 
df.groupby('txn_key').from_user.agg(pd.Series.nunique)\nuto_by_key = df.groupby('txn_key').to_user.agg(pd.Series.nunique)\n\n# user feature maps\nin_txn_count = df.groupby('to_user').size()\nin_key_count = df.groupby('to_user').txn_key.agg(pd.Series.nunique)\n\nout_txn_count = df.groupby('from_user').size()\nout_key_count = df.groupby('from_user').txn_key.agg(pd.Series.nunique)\n\ntotal_in_txn_amt = df.groupby('to_user').amount.sum()\ntotal_out_txn_amt = df.groupby('from_user').amount.sum()\n\navg_in_txn_amt = df.groupby('to_user').amount.mean()\navg_out_txn_amt = df.groupby('from_user').amount.mean()\n\nfrom_fst_txn_date = df.groupby('from_user').date.min()\n\n\ndf_feat = df.assign(\n # transaction features\n count_by_key = df.txn_key.map(count_by_key), \n amount_by_key = df.txn_key.map(amount_by_key), \n from_eq_to = df.from_user == df.to_user, \n ufrom_by_key = df.txn_key.map(ufrom_by_key), \n uto_by_key = df.txn_key.map(uto_by_key), \n \n # transaction date features\n date_year = df.date.dt.year, \n date_month = df.date.dt.month, \n date_day = df.date.dt.day, \n date_dayofweek = df.date.dt.dayofweek, \n date_dayofyear = df.date.dt.dayofyear, \n date_hour = df.date.dt.hour, \n date_minute = df.date.dt.minute, \n date_second = df.date.dt.second, \n \n # user features\n from_in_txn_count = df.from_user.map(in_txn_count), \n from_in_key_count = df.from_user.map(in_key_count), \n from_out_txn_count = df.from_user.map(out_txn_count), \n from_out_key_count = df.from_user.map(out_key_count), \n \n to_in_txn_count = df.to_user.map(in_txn_count), \n to_in_key_count = df.to_user.map(in_key_count), \n to_out_txn_count = df.to_user.map(out_txn_count), \n to_out_key_count = df.to_user.map(out_key_count), \n \n from_total_in_txn_amt = df.from_user.map(total_in_txn_amt), \n from_total_out_txn_amt = df.from_user.map(total_out_txn_amt), \n \n to_total_in_txn_amt = df.to_user.map(total_in_txn_amt), \n to_total_out_txn_amt = df.to_user.map(total_out_txn_amt), \n \n 
from_avg_in_txn_amt = df.from_user.map(avg_in_txn_amt), \n from_avg_out_txn_amt = df.from_user.map(avg_out_txn_amt), \n \n to_avg_in_txn_amt = df.to_user.map(avg_in_txn_amt), \n to_avg_out_txn_amt = df.to_user.map(avg_out_txn_amt), \n \n from_in_deg = df.from_user.map(G_mdi.in_degree()), \n from_out_deg = df.from_user.map(G_mdi.out_degree()), \n from_in_udeg = df.from_user.map(G_di.in_degree()), \n from_out_udeg = df.from_user.map(G_di.out_degree()), \n \n to_in_deg = df.to_user.map(G_mdi.in_degree()), \n to_out_deg = df.to_user.map(G_mdi.out_degree()), \n to_in_udeg = df.to_user.map(G_di.in_degree()), \n to_out_udeg = df.to_user.map(G_di.out_degree()), \n \n from_cc = df.from_user.map(nx.clustering(G)), \n to_cc = df.to_user.map(nx.clustering(G))\n \n)\n\ndf_feat.fillna(0, inplace=True)", "Isolation Forest for anomaly detection", "from sklearn.ensemble import IsolationForest\n\nnot_train_cols = ['txn_key', 'from_user', 'to_user', 'date']\nX_train = df_feat[ [col for col in df_feat.columns if col not in not_train_cols] ].values\n\nclf = IsolationForest(n_estimators=100, \n contamination=0.01, \n n_jobs=-1, random_state=42)\nclf.fit(X_train)\n\nclf.threshold_\n\npred = clf.predict(X_train)\nanomalies = (pred != 1)", "Anomaly Scores", "import seaborn as sns\nimport matplotlib.pyplot as plt\n\n# anomaly score per transaction (the lower, the more abnormal)\nscores = clf.decision_function(X_train)\n\nplt.figure(figsize=(12, 8))\nsns.distplot(scores, kde=False)\nline = plt.vlines(clf.threshold_, 0, 30000, colors='r', linestyles='dotted')\nline.set_label('Threshold = -0.0948')\nplt.legend(loc='upper left', fontsize='medium')\nplt.title('Anomaly Scores returned by Isolation Forest', fontsize=16);", "Visualizing the Transactions", "from sklearn.manifold import TSNE\n\ntsne = TSNE(n_components=2, #perplexity=50, \n #n_iter=200, n_iter_without_progress=10, \n #angle=0.7, \n random_state=42)\nX_tsne = tsne.fit_transform(X_train[150000:155000])\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\noutlier = 
anomalies[150000:155000]\n\nplt.figure(figsize=(12,8))\nplt.scatter(X_tsne[~outlier][:,0], X_tsne[~outlier][:,1], marker='.', c='b', alpha=.2)\nplt.scatter(X_tsne[outlier][:,0], X_tsne[outlier][:,1], marker='o', c='r', alpha=1)\nplt.legend(['Normal Transactions', 'Abnormal Transactions'])\nplt.title('t-SNE Visualization of Normal Transactions vs Abnormal Transactions', fontsize=16);", "D3 Network Visualization", "json_date_format = '%Y-%m-%dT%H:%M:%SZ'\n# df['date'] = df.date.dt.strftime(json_date_format)\n\n# ids of users involved in anomalous transactions\nanomalies_id = np.unique(np.concatenate((df[anomalies].from_user, df[anomalies].to_user)))\n\nfor scc in nx.strongly_connected_components(G_di):\n if len(scc) > 5:\n intersect = np.intersect1d(list(scc), anomalies_id)\n if intersect.size > 0:\n G_sub = G_di.subgraph(scc)\n \n#G_sub_json = json_graph.node_link_data(G_sub)\n\n#with open('../d3/json/network.json', 'w') as json_file:\n# json.dump(G_sub_json, json_file)\n\nanomalies = (pred != 1)\nnp.concatenate((df[anomalies].from_user, df[anomalies].to_user))\n\nanomalies_pairs = zip(df[anomalies].from_user, df[anomalies].to_user)\n\nfor i, j in anomalies_pairs:\n neigh = G_di.neighbors(i)\n neigh += G_di.neighbors(j)\n nodes = neigh + [i, j]\n if len(nodes) > 10 and len(nodes) < 20:\n G_sub = nx.subgraph(G_di, nodes)\n\nfrom networkx.readwrite import json_graph\nimport json\n\nanomalies_pairs = zip(df[anomalies].from_user, df[anomalies].to_user)\n\nfor e in G_sub.edges_iter():\n if e[:2] in anomalies_pairs:\n G_sub.edge[e[0]][e[1]]['type'] = 'licensing'\n else:\n G_sub.edge[e[0]][e[1]]['type'] = 'suit'\n\nG_sub_json = json_graph.node_link_data(G_sub)\n\nwith open('../d3/json/network.json', 'w') as json_file:\n json.dump(G_sub_json, json_file)\n\nG_di = nx.from_pandas_dataframe(df, \n source='from_user', target='to_user', \n edge_attr=['txn_key', 'amount', 'date'], \n create_using=nx.DiGraph()\n )\n\nfrom IPython.display import IFrame\nIFrame('./d3/html/network.html', width=1000, height=500)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
awwong1/nd101
jupyter/keyboard-shortcuts.ipynb
mit
[ "Keyboard shortcuts\nIn this notebook, you'll get some practice using keyboard shortcuts. These are key to becoming proficient at using notebooks and will greatly increase your work speed.\nFirst up, switching between edit mode and command mode. Edit mode allows you to type into cells while command mode will use key presses to execute commands such as creating new cells and opening the command palette. When you select a cell, you can tell which mode you're currently working in by the color of the box around the cell. In edit mode, the box and thick left border are colored green. In command mode, they are colored blue. Also in edit mode, you should see a cursor in the cell itself.\nBy default, when you create a new cell or move to the next one, you'll be in command mode. To enter edit mode, press Enter/Return. To go back from edit mode to command mode, press Escape.\n\nExercise: Click on this cell, then press Shift + Enter to get to the next cell. Switch between edit and command mode a few times.", "# mode practice", "Help with commands\nIf you ever need to look up a command, you can bring up the list of shortcuts by pressing H in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now.\nCreating new cells\nOne of the most common commands is creating new cells. You can create a cell above the current cell by pressing A in command mode. Pressing B will create a cell below the currently selected cell.\n\nExercise: Create a cell above this cell using the keyboard command.\nExercise: Create a cell below this cell using the keyboard command.\n\nSwitching between Markdown and code\nWith keyboard shortcuts, it is quick and simple to switch between Markdown and code cells. To change from Markdown to code, press Y. 
To switch from code to Markdown, press M.\n\nExercise: Switch the cell below between Markdown and code cells.", "## Practice here\n\ndef fibo(n): # Recursive Fibonacci sequence!\n if n == 0:\n return 0\n elif n == 1:\n return 1\n return fibo(n-1) + fibo(n-2)", "Line numbers\nA lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing L (in command mode of course) on a code cell.\n\nExercise: Turn line numbers on and off in the above code cell.\n\nDeleting cells\nDeleting cells is done by pressing D twice in a row so D, D. This is to prevent accidental deletions: you have to press the button twice!\n\nExercise: Delete the cell below.", "# DELETE ME", "Saving the notebook\nNotebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the notebook, press S. So easy!\nThe Command Palette\nYou can easily access the command palette by pressing Shift + Control/Command + P. \n\nNote: This won't work in Firefox and Internet Explorer unfortunately. There is already a keyboard shortcut assigned to those keys in those browsers. However, it does work in Chrome and Safari.\n\nThis will bring up the command palette where you can search for commands that aren't available through the keyboard shortcuts. For instance, there are buttons on the toolbar that move cells up and down (the up and down arrows), but there aren't corresponding keyboard shortcuts. To move a cell down, you can open up the command palette and type in \"move\" which will bring up the move commands.\n\nExercise: Use the command palette to move the cell below down one position.", "# Move this cell down\n\n# below this cell", "Finishing up\nThere is plenty more you can do such as copying, cutting, and pasting cells. I suggest getting used to using the keyboard shortcuts; you’ll be much quicker at working in notebooks. 
When you become proficient with them, you'll rarely need to move your hands away from the keyboard, greatly speeding up your work.\nRemember, if you ever need to see the shortcuts, just press H in command mode." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
brianoleary15/Hands-On-Machine-Learning-with-ScikitLearn-and-TensorFlow
07_ensemble_learning_and_random_forests.ipynb
apache-2.0
[ "Chapter 7 – Ensemble Learning and Random Forests\nThis notebook contains all the sample code and solutions to the exercises in chapter 7.\nSetup\nFirst, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:", "# To support both python 2 and python 3\nfrom __future__ import division, print_function, unicode_literals\n\n# Common imports\nimport numpy as np\nimport os\n\n# to make this notebook's output stable across runs\nnp.random.seed(42)\n\n# To plot pretty figures\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nplt.rcParams['axes.labelsize'] = 14\nplt.rcParams['xtick.labelsize'] = 12\nplt.rcParams['ytick.labelsize'] = 12\n\n# Where to save the figures\nPROJECT_ROOT_DIR = \".\"\nCHAPTER_ID = \"ensembles\"\n\ndef image_path(fig_id):\n return os.path.join(PROJECT_ROOT_DIR, \"images\", CHAPTER_ID, fig_id)\n\ndef save_fig(fig_id, tight_layout=True):\n print(\"Saving figure\", fig_id)\n if tight_layout:\n plt.tight_layout()\n plt.savefig(image_path(fig_id) + \".png\", format='png', dpi=300)", "Voting classifiers", "heads_proba = 0.51\ncoin_tosses = (np.random.rand(10000, 10) < heads_proba).astype(np.int32)\ncumulative_heads_ratio = np.cumsum(coin_tosses, axis=0) / np.arange(1, 10001).reshape(-1, 1)\n\nplt.figure(figsize=(8,3.5))\nplt.plot(cumulative_heads_ratio)\nplt.plot([0, 10000], [0.51, 0.51], \"k--\", linewidth=2, label=\"51%\")\nplt.plot([0, 10000], [0.5, 0.5], \"k-\", label=\"50%\")\nplt.xlabel(\"Number of coin tosses\")\nplt.ylabel(\"Heads ratio\")\nplt.legend(loc=\"lower right\")\nplt.axis([0, 10000, 0.42, 0.58])\nsave_fig(\"law_of_large_numbers_plot\")\nplt.show()\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.datasets import make_moons\n\nX, y = make_moons(n_samples=500, noise=0.30, random_state=42)\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n\nfrom 
sklearn.ensemble import RandomForestClassifier\nfrom sklearn.ensemble import VotingClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC\n\nlog_clf = LogisticRegression(random_state=42)\nrnd_clf = RandomForestClassifier(random_state=42)\nsvm_clf = SVC(random_state=42)\n\nvoting_clf = VotingClassifier(\n estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],\n voting='hard')\nvoting_clf.fit(X_train, y_train)\n\nfrom sklearn.metrics import accuracy_score\n\nfor clf in (log_clf, rnd_clf, svm_clf, voting_clf):\n clf.fit(X_train, y_train)\n y_pred = clf.predict(X_test)\n print(clf.__class__.__name__, accuracy_score(y_test, y_pred))\n\nlog_clf = LogisticRegression(random_state=42)\nrnd_clf = RandomForestClassifier(random_state=42)\nsvm_clf = SVC(probability=True, random_state=42)\n\nvoting_clf = VotingClassifier(\n estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],\n voting='soft')\nvoting_clf.fit(X_train, y_train)\n\nfrom sklearn.metrics import accuracy_score\n\nfor clf in (log_clf, rnd_clf, svm_clf, voting_clf):\n clf.fit(X_train, y_train)\n y_pred = clf.predict(X_test)\n print(clf.__class__.__name__, accuracy_score(y_test, y_pred))", "Bagging ensembles", "from sklearn.ensemble import BaggingClassifier\nfrom sklearn.tree import DecisionTreeClassifier\n\nbag_clf = BaggingClassifier(\n DecisionTreeClassifier(random_state=42), n_estimators=500,\n max_samples=100, bootstrap=True, n_jobs=-1, random_state=42)\nbag_clf.fit(X_train, y_train)\ny_pred = bag_clf.predict(X_test)\n\nfrom sklearn.metrics import accuracy_score\nprint(accuracy_score(y_test, y_pred))\n\ntree_clf = DecisionTreeClassifier(random_state=42)\ntree_clf.fit(X_train, y_train)\ny_pred_tree = tree_clf.predict(X_test)\nprint(accuracy_score(y_test, y_pred_tree))\n\nfrom matplotlib.colors import ListedColormap\n\ndef plot_decision_boundary(clf, X, y, axes=[-1.5, 2.5, -1, 1.5], alpha=0.5, contour=True):\n x1s = np.linspace(axes[0], axes[1], 100)\n x2s = 
np.linspace(axes[2], axes[3], 100)\n x1, x2 = np.meshgrid(x1s, x2s)\n X_new = np.c_[x1.ravel(), x2.ravel()]\n y_pred = clf.predict(X_new).reshape(x1.shape)\n custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])\n plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap, linewidth=10)\n if contour:\n custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])\n plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)\n plt.plot(X[:, 0][y==0], X[:, 1][y==0], \"yo\", alpha=alpha)\n plt.plot(X[:, 0][y==1], X[:, 1][y==1], \"bs\", alpha=alpha)\n plt.axis(axes)\n plt.xlabel(r\"$x_1$\", fontsize=18)\n plt.ylabel(r\"$x_2$\", fontsize=18, rotation=0)\n\nplt.figure(figsize=(11,4))\nplt.subplot(121)\nplot_decision_boundary(tree_clf, X, y)\nplt.title(\"Decision Tree\", fontsize=14)\nplt.subplot(122)\nplot_decision_boundary(bag_clf, X, y)\nplt.title(\"Decision Trees with Bagging\", fontsize=14)\nsave_fig(\"decision_tree_without_and_with_bagging_plot\")\nplt.show()", "Random Forests", "bag_clf = BaggingClassifier(\n DecisionTreeClassifier(splitter=\"random\", max_leaf_nodes=16, random_state=42),\n n_estimators=500, max_samples=1.0, bootstrap=True, n_jobs=-1, random_state=42)\n\nbag_clf.fit(X_train, y_train)\ny_pred = bag_clf.predict(X_test)\n\nfrom sklearn.ensemble import RandomForestClassifier\n\nrnd_clf = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, n_jobs=-1, random_state=42)\nrnd_clf.fit(X_train, y_train)\n\ny_pred_rf = rnd_clf.predict(X_test)\n\nnp.sum(y_pred == y_pred_rf) / len(y_pred) # almost identical predictions\n\nfrom sklearn.datasets import load_iris\niris = load_iris()\nrnd_clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=42)\nrnd_clf.fit(iris[\"data\"], iris[\"target\"])\nfor name, score in zip(iris[\"feature_names\"], rnd_clf.feature_importances_):\n print(name, score)\n\nrnd_clf.feature_importances_\n\nplt.figure(figsize=(6, 4))\n\nfor i in range(15):\n tree_clf = DecisionTreeClassifier(max_leaf_nodes=16, 
random_state=42 + i)\n indices_with_replacement = np.random.randint(0, len(X_train), len(X_train))\n tree_clf.fit(X[indices_with_replacement], y[indices_with_replacement])\n plot_decision_boundary(tree_clf, X, y, axes=[-1.5, 2.5, -1, 1.5], alpha=0.02, contour=False)\n\nplt.show()", "Out-of-Bag evaluation", "bag_clf = BaggingClassifier(\n DecisionTreeClassifier(random_state=42), n_estimators=500,\n bootstrap=True, n_jobs=-1, oob_score=True, random_state=40)\nbag_clf.fit(X_train, y_train)\nbag_clf.oob_score_\n\nbag_clf.oob_decision_function_\n\nfrom sklearn.metrics import accuracy_score\ny_pred = bag_clf.predict(X_test)\naccuracy_score(y_test, y_pred)", "Feature importance", "from sklearn.datasets import fetch_mldata\nmnist = fetch_mldata('MNIST original')\n\nrnd_clf = RandomForestClassifier(random_state=42)\nrnd_clf.fit(mnist[\"data\"], mnist[\"target\"])\n\ndef plot_digit(data):\n image = data.reshape(28, 28)\n plt.imshow(image, cmap = matplotlib.cm.hot,\n interpolation=\"nearest\")\n plt.axis(\"off\")\n\nplot_digit(rnd_clf.feature_importances_)\n\ncbar = plt.colorbar(ticks=[rnd_clf.feature_importances_.min(), rnd_clf.feature_importances_.max()])\ncbar.ax.set_yticklabels(['Not important', 'Very important'])\n\nsave_fig(\"mnist_feature_importance_plot\")\nplt.show()", "AdaBoost", "from sklearn.ensemble import AdaBoostClassifier\n\nada_clf = AdaBoostClassifier(\n DecisionTreeClassifier(max_depth=1), n_estimators=200,\n algorithm=\"SAMME.R\", learning_rate=0.5, random_state=42)\nada_clf.fit(X_train, y_train)\n\nplot_decision_boundary(ada_clf, X, y)\n\nm = len(X_train)\n\nplt.figure(figsize=(11, 4))\nfor subplot, learning_rate in ((121, 1), (122, 0.5)):\n sample_weights = np.ones(m)\n for i in range(5):\n plt.subplot(subplot)\n svm_clf = SVC(kernel=\"rbf\", C=0.05, random_state=42)\n svm_clf.fit(X_train, y_train, sample_weight=sample_weights)\n y_pred = svm_clf.predict(X_train)\n sample_weights[y_pred != y_train] *= (1 + learning_rate)\n plot_decision_boundary(svm_clf, 
X, y, alpha=0.2)\n plt.title(\"learning_rate = {}\".format(learning_rate), fontsize=16)\n\nplt.subplot(121)\nplt.text(-0.7, -0.65, \"1\", fontsize=14)\nplt.text(-0.6, -0.10, \"2\", fontsize=14)\nplt.text(-0.5, 0.10, \"3\", fontsize=14)\nplt.text(-0.4, 0.55, \"4\", fontsize=14)\nplt.text(-0.3, 0.90, \"5\", fontsize=14)\nsave_fig(\"boosting_plot\")\nplt.show()\n\nlist(m for m in dir(ada_clf) if not m.startswith(\"_\") and m.endswith(\"_\"))", "Gradient Boosting", "np.random.seed(42)\nX = np.random.rand(100, 1) - 0.5\ny = 3*X[:, 0]**2 + 0.05 * np.random.randn(100)\n\nfrom sklearn.tree import DecisionTreeRegressor\n\ntree_reg1 = DecisionTreeRegressor(max_depth=2, random_state=42)\ntree_reg1.fit(X, y)\n\ny2 = y - tree_reg1.predict(X)\ntree_reg2 = DecisionTreeRegressor(max_depth=2, random_state=42)\ntree_reg2.fit(X, y2)\n\ny3 = y2 - tree_reg2.predict(X)\ntree_reg3 = DecisionTreeRegressor(max_depth=2, random_state=42)\ntree_reg3.fit(X, y3)\n\nX_new = np.array([[0.8]])\n\ny_pred = sum(tree.predict(X_new) for tree in (tree_reg1, tree_reg2, tree_reg3))\n\ny_pred\n\ndef plot_predictions(regressors, X, y, axes, label=None, style=\"r-\", data_style=\"b.\", data_label=None):\n x1 = np.linspace(axes[0], axes[1], 500)\n y_pred = sum(regressor.predict(x1.reshape(-1, 1)) for regressor in regressors)\n plt.plot(X[:, 0], y, data_style, label=data_label)\n plt.plot(x1, y_pred, style, linewidth=2, label=label)\n if label or data_label:\n plt.legend(loc=\"upper center\", fontsize=16)\n plt.axis(axes)\n\nplt.figure(figsize=(11,11))\n\nplt.subplot(321)\nplot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label=\"$h_1(x_1)$\", style=\"g-\", data_label=\"Training set\")\nplt.ylabel(\"$y$\", fontsize=16, rotation=0)\nplt.title(\"Residuals and tree predictions\", fontsize=16)\n\nplt.subplot(322)\nplot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label=\"$h(x_1) = h_1(x_1)$\", data_label=\"Training set\")\nplt.ylabel(\"$y$\", fontsize=16, 
rotation=0)\nplt.title(\"Ensemble predictions\", fontsize=16)\n\nplt.subplot(323)\nplot_predictions([tree_reg2], X, y2, axes=[-0.5, 0.5, -0.5, 0.5], label=\"$h_2(x_1)$\", style=\"g-\", data_style=\"k+\", data_label=\"Residuals\")\nplt.ylabel(\"$y - h_1(x_1)$\", fontsize=16)\n\nplt.subplot(324)\nplot_predictions([tree_reg1, tree_reg2], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label=\"$h(x_1) = h_1(x_1) + h_2(x_1)$\")\nplt.ylabel(\"$y$\", fontsize=16, rotation=0)\n\nplt.subplot(325)\nplot_predictions([tree_reg3], X, y3, axes=[-0.5, 0.5, -0.5, 0.5], label=\"$h_3(x_1)$\", style=\"g-\", data_style=\"k+\")\nplt.ylabel(\"$y - h_1(x_1) - h_2(x_1)$\", fontsize=16)\nplt.xlabel(\"$x_1$\", fontsize=16)\n\nplt.subplot(326)\nplot_predictions([tree_reg1, tree_reg2, tree_reg3], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label=\"$h(x_1) = h_1(x_1) + h_2(x_1) + h_3(x_1)$\")\nplt.xlabel(\"$x_1$\", fontsize=16)\nplt.ylabel(\"$y$\", fontsize=16, rotation=0)\n\nsave_fig(\"gradient_boosting_plot\")\nplt.show()\n\nfrom sklearn.ensemble import GradientBoostingRegressor\n\ngbrt = GradientBoostingRegressor(max_depth=2, n_estimators=3, learning_rate=1.0, random_state=42)\ngbrt.fit(X, y)\n\ngbrt_slow = GradientBoostingRegressor(max_depth=2, n_estimators=200, learning_rate=0.1, random_state=42)\ngbrt_slow.fit(X, y)\n\nplt.figure(figsize=(11,4))\n\nplt.subplot(121)\nplot_predictions([gbrt], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label=\"Ensemble predictions\")\nplt.title(\"learning_rate={}, n_estimators={}\".format(gbrt.learning_rate, gbrt.n_estimators), fontsize=14)\n\nplt.subplot(122)\nplot_predictions([gbrt_slow], X, y, axes=[-0.5, 0.5, -0.1, 0.8])\nplt.title(\"learning_rate={}, n_estimators={}\".format(gbrt_slow.learning_rate, gbrt_slow.n_estimators), fontsize=14)\n\nsave_fig(\"gbrt_learning_rate_plot\")\nplt.show()", "Gradient Boosting with Early stopping", "import numpy as np\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import mean_squared_error\n\nX_train, X_val, y_train, 
y_val = train_test_split(X, y, random_state=49)\n\ngbrt = GradientBoostingRegressor(max_depth=2, n_estimators=120, random_state=42)\ngbrt.fit(X_train, y_train)\n\nerrors = [mean_squared_error(y_val, y_pred)\n for y_pred in gbrt.staged_predict(X_val)]\nbst_n_estimators = np.argmin(errors)\n\ngbrt_best = GradientBoostingRegressor(max_depth=2,n_estimators=bst_n_estimators, random_state=42)\ngbrt_best.fit(X_train, y_train)\n\nmin_error = np.min(errors)\n\nplt.figure(figsize=(11, 4))\n\nplt.subplot(121)\nplt.plot(errors, \"b.-\")\nplt.plot([bst_n_estimators, bst_n_estimators], [0, min_error], \"k--\")\nplt.plot([0, 120], [min_error, min_error], \"k--\")\nplt.plot(bst_n_estimators, min_error, \"ko\")\nplt.text(bst_n_estimators, min_error*1.2, \"Minimum\", ha=\"center\", fontsize=14)\nplt.axis([0, 120, 0, 0.01])\nplt.xlabel(\"Number of trees\")\nplt.title(\"Validation error\", fontsize=14)\n\nplt.subplot(122)\nplot_predictions([gbrt_best], X, y, axes=[-0.5, 0.5, -0.1, 0.8])\nplt.title(\"Best model (%d trees)\" % bst_n_estimators, fontsize=14)\n\nsave_fig(\"early_stopping_gbrt_plot\")\nplt.show()\n\ngbrt = GradientBoostingRegressor(max_depth=2, warm_start=True, random_state=42)\n\nmin_val_error = float(\"inf\")\nerror_going_up = 0\nfor n_estimators in range(1, 120):\n gbrt.n_estimators = n_estimators\n gbrt.fit(X_train, y_train)\n y_pred = gbrt.predict(X_val)\n val_error = mean_squared_error(y_val, y_pred)\n if val_error < min_val_error:\n min_val_error = val_error\n error_going_up = 0\n else:\n error_going_up += 1\n if error_going_up == 5:\n break # early stopping\n\nprint(gbrt.n_estimators)", "Exercise solutions\nComing soon" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bzamecnik/ml
chord-recognition/notebooks/prepare_features.ipynb
mit
[ "%pylab inline\n\nimport glob\nimport matplotlib.pyplot as plt\nimport numexpr\nimport os\nimport pandas as pd\nimport seaborn as sns", "Preparing the data\nData loading and putting it into a suitable format\nParsing chord labels to binary pitch class sets\n\nfor chord label dataset\n[x] see tools/add_pitch_class_sets.sh\n\nJoining DataFrames from each album and track into a common DataFrame\nLoading key labels and cleaning them\n\nsilence/key\n[x] just root pitch class\n[x] convert minor keys to parallel majors\n[x] take care of trash such as D:dorian\n\nData sanity checks\n\nin each segment the end time is greater than the start time\nthe start time of each segment is greater or equal than the end time of previous segment\nkey segments are aligned with chord segments\nthe files for each song should match between chords and keys\neach chord and key file should span the same time interval\nie. end time of last chord and key segment in a song should be equal\n\n\n\nFeatures for key classification\n\nfor each chord segment row we need the key\nchoose the length of a chord context\neg. 
8 chords\nwith 12 pitch classes per row it makes a block of 12*16 = 192 columns per row\n\n\nsplit the dataset into blocks of adjacent chords and put each block into a wide row -> new dataframe\n\nMissing data\n\n10CD2_-_The_Beatles/CD2_-_12_-_Revolution_9.lab was missing in the original dataset\nwe added \"silence\" throughout the song instead\nbetter would be to perform the actual annotation and add the right data", "def impute_missing_key_files():\n path = 'data/beatles/keylab/The_Beatles/10CD2_-_The_Beatles/CD2_-_12_-_Revolution_9.lab'\n if not os.path.exists(path):\n df = pd.DataFrame.from_records(\n [(\"0.0\", \"502.204082\", \"Silence\", None)],\n columns=['start', 'end', 'key_indicator', 'key_label'])\n df.to_csv(path, sep='\\t', index=None, header=None)\n\nimpute_missing_key_files()", "Keys", "files = glob.glob('data/beatles/keylab/The_Beatles/*/*.lab')\nprint('key files count:', len(files))\nfiles\n\ndef read_key_file(path):\n return pd.read_csv(path, sep='\\t', header=None, names=['start','end','key_indicator','key_label'])\n\nread_key_file(files[0])\n\ndef add_track_id(df, track_id):\n df['track_id'] = track_id\n return df\n\nkeys = pd.concat(add_track_id(read_key_file(file), track_id) for (track_id, file) in enumerate(files))\n\nkeys\n\nkeys['duration'] = keys['end'] - keys['start']\n\nkeys['key_label'].value_counts()\n\nprint('total number of key segments:', len(keys))\n\nkeys['duration'].describe()\n\nsns.distplot(keys['duration'], bins=50)\ntitle('distribution of key segment duration (sec)');\n\nkeys['key_indicator'].value_counts()\n\n# Total duration of segments with some key and with silence:\nkeys.groupby('key_indicator').sum()['duration']\n\n# The same in percentage:\nkeys.groupby('key_indicator').sum()['duration'] / keys['duration'].sum() * 100", "Distribution of number of key segments among songs.", "keys.groupby('track_id').count()['start'].describe()", "We need to map symbolic key labels to pitch classes. 
Since the labels are not always referring to the diatonic key but sometimes to modes, we normalize the pitch class to represent the underlying diatonic key. This helps in our further classification since it reduces the number of classes. For key classification we can thus limit ourselves to not discriminating between modes.", "diatonic_pitch_classes = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}\naccidental_shifts = {'': 0, 'b': -1, '#': 1}\nmode_shifts = {\n '': 0,\n 'major': 0,\n 'ionian': 0,\n 'dorian': -2,\n 'phrygian': -4,\n 'lydian': -5,\n 'mixolydian': -7,\n 'aeolian': -9,\n 'minor': -9,\n 'locrian': -11}\n\ndef tone_label_to_pitch_class(label):\n base = label[0].upper()\n pitch_class = diatonic_pitch_classes[base]\n accidental = label[1] if len(label) > 1 else ''\n shift = accidental_shifts[accidental]\n return ((pitch_class + shift) + 12) % 12\n\ndef diatonic_root_for_key_label(key_label):\n if type(key_label) is not str:\n return\n parts = key_label.split(':')\n modal_root_label = parts[0]\n modal_root_pitch_class = tone_label_to_pitch_class(modal_root_label)\n mode_label = parts[1].lower() if len(parts) > 1 else ''\n mode_shift = mode_shifts[mode_label] if mode_label in mode_shifts else 0\n diatonic_root = modal_root_pitch_class + mode_shift\n# return (modal_root_label, mode_label, modal_root_pitch_class, mode_shift, diatonic_root)\n return (diatonic_root + 12) % 12\n \nunique_key_labels = list(keys['key_label'].value_counts().index)\n[(label, diatonic_root_for_key_label(label)) for label in unique_key_labels]\n\nkeys['key_diatonic_root'] = keys['key_label'].apply(diatonic_root_for_key_label)\n\ncanonic_pitch_class_labels = dict(enumerate(['C', 'Db', 'D', 'Eb', 'E', 'F', 'Gb', 'G', 'Ab', 'A', 'Bb', 'B']))\ndef label_for_pitch_class(pc):\n if pc in canonic_pitch_class_labels:\n return canonic_pitch_class_labels[pc]\n\nkeys['key_diatonic_root_label'] = keys['key_diatonic_root'].apply(label_for_pitch_class)\n\ndiatonic_keys_hist = 
keys.dropna()['key_diatonic_root_label'].value_counts()\ndiatonic_keys_hist\n\nplot(diatonic_keys_hist)\nxticks(np.arange(len(diatonic_keys_hist)), diatonic_keys_hist.index)\ntitle('diatonic key usage among all songs');\n\n# all tracks start at 0.0 time\nassert (keys.groupby('track_id').first()['start'] == 0).all()\n\n# duration of each segment should be positive\n# assert (keys.groupby('track_id')['duration'] < 0).all()\n\nkeys.groupby('track_id').last()['end']\n\nkeys", "Export the prepared DataFrame to TSV file.", "keys = keys[[\n 'track_id', 'start', 'end', 'duration', 'key_indicator',\n 'key_label', 'key_diatonic_root_label', 'key_diatonic_root']]\n\nkeys.to_csv('data/beatles/derived/all_keys.tsv', sep='\\t', index=False, float_format='%.3f')", "Chords", "files = glob.glob('data/beatles/chordlab/The_Beatles/*/*.lab.pcs.tsv')\nprint('chord files count:', len(files))\nfiles\n\ndef read_chord_file(path):\n return pd.read_csv(path, sep=' ', header=None, names=['start','end','chord_label'])\n\ndef read_chord_file_with_pitch_classes(path):\n return pd.read_csv(path, sep='\\t')\n\nread_chord_file_with_pitch_classes(files[0])\n\nchords = pd.concat(add_track_id(read_chord_file_with_pitch_classes(file), track_id) for (track_id, file) in enumerate(files))\n\nchords['duration'] = chords['end'] - chords['start']\n\nchords = chords.reindex_axis(['track_id', 'start', 'end', 'duration', 'label', 'root', 'bass', 'C','Db','D','Eb','E','F','Gb','G','Ab','A','Bb','B'], axis=1)\nchords\n\n# all tracks start at 0.0 time\nassert (chords.groupby('track_id').first()['start'] == 0).all()", "Chords vs. 
keys - alignment", "chords_keys_end_diff = chords.groupby('track_id').last()[['end']].rename(columns={'end': 'chords_end'}).join(keys.groupby('track_id').last()[['end']].rename(columns={'end': 'keys_end'}))\nchords_keys_end_diff['diff'] = chords_keys_end_diff['chords_end'] - chords_keys_end_diff['keys_end']\n\nchords_keys_end_diff['diff'].describe()", "Track lengths are not precisely aligned between chords and keys dataset. Roughly they're are alrigth, however.", "chords_keys_end_diff\n\nchords_keys_end_diff['diff'].hist(bins=50);", "Merge chords and keys", "chords.head()\n\nkeys.head()", "Let's try to merge keys to chords from a single example track.", "track_keys = keys[keys['track_id'] == 109]\ntrack_keys\n\ntrack_chords = chords[chords['track_id'] == 109]\ntrack_chords\n\ndef plot_time_intervals(starts, ends, **kwargs):\n x_lines = [el for (s, e) in zip(starts, ends) for el in (s, e, None)]\n y_lines = [el for i in range(len(starts)) for el in (i, i, None)]\n plot(x_lines, y_lines, **kwargs)\n\ndef plot_chords_and_keys(track_chords, track_keys):\n plot_time_intervals(track_chords['start'], track_chords['end'], label='chord segments')\n plot_time_intervals(track_keys['start'], track_keys['end'], label='key segments')\n title('chord and key segments in time')\n legend(loc='center right')\n xlabel('time (sec)')\n ylabel('segment index')\n\nplot_chords_and_keys(track_chords, track_keys);\n\ndef time_range(df, start, end):\n return df[(df['start'] >= start) & (df['end'] <= end)]\n\nplot_chords_and_keys(time_range(track_chords, 120, 220), time_range(track_keys, 120, 220))\n\ndef find_key(keys, track_id, start):\n \"Finds the first key segment within a track that the chord segment spans.\"\n possible_keys = keys.query('(start <= '+str(start)+') & (track_id == '+str(track_id)+')')\n if len(possible_keys) > 0:\n row = possible_keys.iloc[-1]\n# return pd.Series([start, ['start']])\n return row\n\nfind_key(keys, 109, 198.0)\n\ntrack_chords['start'].apply(lambda start: 
find_key(track_keys, 109, start)[['key_diatonic_root_label', 'key_diatonic_root']])\n\ntrack_keys\n\ntrack_chords\n\nchord = chords.iloc[0]\nkeys_for_track = keys[keys['track_id'] == chord.track_id]\nkeys_for_track[keys_for_track['start'] >= chord['start']]\n\n# this is very inefficient, it takes many seconds\n# TODO: optimize this!\nkey_labels_for_chords = chords[['track_id','start']].apply(\n lambda row: find_key(keys, row['track_id'], row['start']),\n axis=1)[['key_diatonic_root_label', 'key_diatonic_root']]\n\nkey_labels_for_chords[:10]", "Append the computed key for each chord segment.", "for col in ('key_diatonic_root_label', 'key_diatonic_root'):\n chords[col] = key_labels_for_chords[col]\n\nchords", "Key distribution in chord segments.", "chords.key_diatonic_root_label.value_counts()\n\nchords.to_csv('data/beatles/derived/all_chords_with_keys.tsv', sep='\\t', index=False, float_format='%.6f')", "We can see that the key labels are heavily skewed and this might not be good for our ML models.\nIn order to deskew the class distribution we can generate more data from the existing data by transposing each data point to all 12 keys. 
This way we'll have a 12x larger dataset and uniform classes.", "chords.head()\n\ndef add_pitch_classes(a, b):\n return ((a + b) + 12) % 12\n\ndef rotate_columns(cols, shift):\n return cols[shift:] + cols[:shift]\n\npcs_columns = ['C','Db','D','Eb','E','F','Gb','G','Ab','A','Bb','B']\n\n[rotate_columns(pcs_columns, i) for i in range(12)]\n\ndef rotate_binary_pcs_cols(df, cols, shift):\n return df.rename_axis(dict(zip(cols, rotate_columns(cols, shift))), axis=1)\n \nrotate_binary_pcs_cols(chords, pcs_columns, 2)[pcs_columns].head()\n\nchords[pcs_columns].head()\n\ndef transpose_col(df, shift):\n return df.apply(lambda pc: add_pitch_classes(pc, shift))\n\ntranspose_col(chords[['root', 'bass', 'key_diatonic_root']], 11).head()\n\ndef transpose_chords_df(chords, shift):\n chords_copy = chords.copy()\n chords_copy['synth_transposition'] = shift\n pc_cols = ['root', 'bass', 'key_diatonic_root']\n chords_copy[pc_cols] = transpose_col(chords_copy[pc_cols], shift)\n chords_copy['key_diatonic_root_label'] = chords_copy['key_diatonic_root'].apply(label_for_pitch_class)\n chords_copy = rotate_binary_pcs_cols(chords_copy, pcs_columns, shift)\n chords_copy = chords_copy.rename_axis({'label': 'orig_chord_label'}, axis=1)\n return chords_copy\n\ntranspose_chords_df(chords, 1).head()\n\ndef generate_transpositions(chords):\n df = pd.concat([transpose_chords_df(chords, shift) for shift in range(12)])\n df = df[['track_id', 'synth_transposition',\n 'start', 'end', 'duration',\n 'orig_chord_label', 'root', 'bass',\n 'C', 'Db', 'D', 'Eb', 'E', 'F', 'Gb', 'G', 'Ab', 'A', 'Bb', 'B',\n 'key_diatonic_root_label', 'key_diatonic_root']]\n return df\n\nchords_all_synth = generate_transpositions(chords)\n\nchords_all_synth.columns\n\nlen(chords_all_synth)", "All key classes are now of uniform probability.", "chords_all_synth['key_diatonic_root'].value_counts()\n\nchords_all_synth[pcs_columns].mean()\n\nchords_all_synth.head()\n\nchords_all_synth[abs(chords_all_synth['start'] - 2.612267) < 
1e-3]\n\nchords_all_synth.to_csv('data/beatles/derived/all_chords_with_keys_synth.tsv', sep='\\t', index=False, float_format='%.6f')", "Chord sequences", "chords_all = pd.read_csv('data/beatles/derived/all_chords_with_keys.tsv', sep='\\t')\nchords_all_synth = pd.read_csv('data/beatles/derived/all_chords_with_keys_synth.tsv', sep='\\t')\n\nchords_all = chords_all.dropna()\nchords_all_synth = chords_all_synth.dropna()\nchords_all.head()\n\ntrack_chords = chords_all[(chords_all['track_id'] == 40)].copy()\ntrack_chords.head(10)\n\nchords_all['key_index'] = ((chords_all['key_diatonic_root'].diff() != 0) | (chords_all['track_id'].diff() != 0)).cumsum()\n\nchords_all\n\n# track_chords.groupby(['track_id']).last()['key_index']\n(chords_all.groupby(['track_id', 'key_index']).count()['start'] > 16).mean()\n\nc = chords_all[:200]\nc\n\ndef pcs_block_cols(block_size):\n return ['%s_%02d'%(pc, i) for i in range(block_size) for pc in pcs_columns]\n\ndef block_columns(block_size): \n return pcs_block_cols(block_size) + ['key_diatonic_root']\n\ndef merge_chord_block(block_df):\n all_pcs = block_df.as_matrix(columns=pcs_columns).ravel()\n most_frequent_key = block_df['key_diatonic_root'].value_counts().index[0]\n return np.hstack([all_pcs, most_frequent_key])\n\ndef roll_chords(chords_df, window_size=4):\n blocks = (chords_df.iloc[start:start+window_size] for start in range(len(chords_df) - window_size + 1))\n c_rolling = (merge_chord_block(block) for block in blocks)\n df_rolling = pd.DataFrame(c_rolling, columns=block_columns(window_size)).astype(np.int16)\n return df_rolling", "Generate data points by reshaping input rows in the rolling window and selecting the most frequent output label. 
Do this for both original and synthetic data and for different window sizes.", "# TODO: optimize this, since it is not really efficient (~20 minutes for all the files...)\n\nfor postfix, chords in [('', chords_all), ('_synth', chords_all_synth)]:\n for window in (1,2,4,8,16):\n print('window:', window)\n chords_rolling = roll_chords(chords, window_size=window)\n print('shape:', chords_rolling.shape)\n chords_rolling.to_csv('data/beatles/derived/all_chords_with_keys'+postfix+'_rolling_'+str(window)+'.tsv', sep='\\t', index=False)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
balopat/pyquil
examples/getting_started.ipynb
apache-2.0
[ "Installation and Getting Started\n\nEnvironment Setup\nPrerequisites\nInstallation\nConnecting to a Rigetti QVM\n\n\nBasic Usage\nSome Program Construction Features\nFixing a Mistaken Instruction\nThe Standard Gate Set\nDefining New Gates\n\n\nAdvanced Usage\nQuantum Fourier Transform (QFT)\nClassical Control Flow\nParametric Depolarizing Noise\nPauli Operator Algebra\n\n\nExercises\nExercise 1 - Quantum Dice\nExercise 2 - Controlled Gates\nExercise 3 - Grover's Algorithm\n\n\n\nThis toolkit provides some simple libraries for writing quantum programs using the quantum instruction language Quil. pyQuil is part of the Forest suite of tools for quantum programming and is currently in private beta.\npython\nimport pyquil.quil as pq\nimport pyquil.forest as forest\nfrom pyquil.gates import *\nqvm = forest.Connection()\np = pq.Program()\np.inst(H(0), CNOT(0, 1))\n &lt;pyquil.pyquil.Program object at 0x101ebfb50&gt;\nqvm.wavefunction(p)\n [(0.7071067811865475+0j), 0j, 0j, (0.7071067811865475+0j)]\nIt comes with a few parts:\n\nQuil: The Quantum Instruction Language standard. Instructions written in Quil can be executed on any implementation of a quantum abstract machine, such as the quantum virtual machine (QVM), or on a real quantum processing unit (QPU). More details regarding Quil can be found in the whitepaper.\nQVM: A Quantum Virtual Machine, which is an implementation of the quantum abstract machine on classical hardware. The QVM lets you use a regular computer to simulate a small quantum computer. You can access the Rigetti QVM running in the cloud with your API key. Sign up here to get your key.\npyQuil: A Python library to help write and run Quil code and quantum programs.\n\nEnvironment Setup\nPrerequisites\nBefore starting, ensure that you have an installation of Python 2.7 (version 2.7.10 or greater) and the Python package manager pip. We recommend installing Anaconda for an all-in-one installation of Python 2.7. 
If pip is not installed, it can be installed with easy_install pip.\nInstallation\nAfter obtaining pyQuil from GitHub or from a source distribution, navigate into its directory in a terminal and run\npip install pyquil\nThe library will now be available globally.\nConnecting to a Rigetti QVM\nIn order to connect to a Rigetti QVM you need to configure your pyQuil installation with your QVM API key. For permanent one-time setup, you can do this by creating a file in your home directory with the following lines:\n[Rigetti Forest]\nurl: &lt;FOREST_URL&gt;\nkey: &lt;YOUR_FOREST_API_KEY&gt;\nLook here to see more information about setting up your connection to Forest.\nIf this configuration is not set, pyQuil will default to looking for a local QVM at https://api.rigetti.com/qvm.\nBasic Usage\nTo ensure that your installation is working correctly, try running the following Python commands interactively. First, import the quil module (which constructs quantum programs) and the forest module (which allows connections to the Rigetti QVM). We'll also import some basic gates for pyQuil.", "import pyquil.quil as pq\nimport pyquil.forest as forest\nfrom pyquil.gates import *", "Next, we want to open a connection to the QVM.", "qvm = forest.Connection()", "Now we can make a program by adding some Quil instruction using the inst method on a Program object.", "p = pq.Program()\np.inst(X(0)).measure(0, 0)", "This program simply applies the $X$-gate to the zeroth qubit, measures that qubit, and stores the measurement result in the zeroth classical register. 
We can look at the Quil code that makes up this program simply by printing it.", "print p", "Most importantly, of course, we can see what happens if we run this program on the QVM:", "classical_regs = [0] # A list of which classical registers to return the values of.\n\nqvm.run(p, classical_regs)", "We see that the result of this program is that the classical register [0] now stores the state of qubit 0, which should be $\\left\\vert 1\\right\\rangle$ after an $X$-gate. We can of course ask for more classical registers:", "qvm.run(p, [0, 1, 2])", "The classical registers are initialized to zero, so registers [1] and [2] come out as zero. If we stored the measurement in a different classical register we would obtain:", "p = pq.Program() # clear the old program\np.inst(X(0)).measure(0, 1)\nqvm.run(p, [0, 1, 2])", "We can also run programs multiple times and accumulate all the results in a single list:", "coin_flip = pq.Program().inst(H(0)).measure(0, 0)\nnum_flips = 5\nqvm.run(coin_flip, [0], num_flips)", "Try running the above code several times. You will, with very high probability, get different results each time.\nAs the QVM is a virtual machine, we can also inspect the wavefunction of a program directly, even without measurements:", "coin_flip = pq.Program().inst(H(0))\nqvm.wavefunction(coin_flip)", "It is important to remember that this wavefunction method is just a useful debugging tool for small quantum systems, and it cannot be feasibly obtained on a quantum processor.\nSome Program Construction Features\nMultiple instructions can be applied at once or chained together. 
The following are all valid programs:", "print \"Multiple inst arguments with final measurement:\"\nprint pq.Program().inst(X(0), Y(1), Z(0)).measure(0, 1)\n\nprint \"Chained inst with explicit MEASURE instruction:\"\nprint pq.Program().inst(X(0)).inst(Y(1)).measure(0, 1).inst(MEASURE(1, 2))\n\nprint \"A mix of chained inst and measures:\"\nprint pq.Program().inst(X(0)).measure(0, 1).inst(Y(1), X(0)).measure(0, 0)\n\nprint \"A composition of two programs:\"\nprint pq.Program(X(0)) + pq.Program(Y(0))", "Fixing a Mistaken Instruction\nIf an instruction was appended to a program incorrectly, one can pop it off.", "p = pq.Program().inst(X(0))\np.inst(Y(1))\nprint \"Oops! We have added Y 1 by accident:\"\nprint p\n\nprint \"We can fix by popping:\"\np.pop()\nprint p\n\nprint \"And then add it back:\"\np += pq.Program(Y(1))\nprint p", "The Standard Gate Set\nThe following gate methods come standard with Quil and gates.py:\n\n\nPauli gates I, X, Y, Z\n\n\nHadamard gate: H\n\n\nPhase gates: PHASE( $\\theta$ ), S, T\n\n\nControlled phase gates: CPHASE00( $\\alpha$ ), CPHASE01( $\\alpha$ ), CPHASE10( $\\alpha$ ), CPHASE( $\\alpha$ )\n\n\nCartesian rotation gates: RX( $\\theta$ ), RY( $\\theta$ ), RZ( $\\theta$ )\n\n\nControlled $X$ gates: CNOT, CCNOT\n\n\nSwap gates: SWAP, CSWAP, ISWAP, PSWAP( $\\alpha$ )\n\n\nThe parameterized gates take a real or complex floating point number as an argument.\nDefining New Gates\nNew gates can be easily added inline to Quil programs. All you need is a matrix representation of the gate.
For example, below we define a $\\sqrt{X}$ gate.", "import numpy as np\n\n# First we define the new gate from a matrix\nx_gate_matrix = np.array(([0.0, 1.0], [1.0, 0.0]))\nsqrt_x = np.array([[ 0.5+0.5j, 0.5-0.5j],\n [ 0.5-0.5j, 0.5+0.5j]])\np = pq.Program().defgate(\"SQRT-X\", sqrt_x)\n\n# Then we can use the new gate,\np.inst((\"SQRT-X\", 0))\nprint p\n\nqvm.wavefunction(p)", "Quil in general supports defining parametric gates, though right now only static gates are supported by pyQuil. Below we show how we can define $X_1\\otimes \\sqrt{X_0} $ as a single gate.", "# A multi-qubit defgate example\nx_gate_matrix = np.array(([0.0, 1.0], [1.0, 0.0]))\nsqrt_x = np.array([[ 0.5+0.5j, 0.5-0.5j],\n [ 0.5-0.5j, 0.5+0.5j]])\nx_sqrt_x = np.kron(x_gate_matrix, sqrt_x)\np = pq.Program().defgate(\"X-SQRT-X\", x_sqrt_x)\n\n# Then we can use the new gate\np.inst((\"X-SQRT-X\", 0, 1))\nqvm.wavefunction(p)", "Advanced Usage\nQuantum Fourier Transform (QFT) <a id='Quantum-Fourier-Transform'></a>\nLet's do an example that includes multi-qubit parameterized gates.\nHere we wish to compute the discrete Fourier transform of [0, 1, 0, 0, 0, 0, 0, 0]. We do this in three steps:\n\nWrite a function called qft3 to make a 3-qubit QFT quantum program.\nWrite a state preparation quantum program.\nExecute state preparation followed by the QFT on the QVM.\n\nFirst we define a function to make a 3-qubit QFT quantum program. 
This is a mix of Hadamard and CPHASE gates, with a final bit reversal correction at the end consisting of a single SWAP gate.", "from math import pi\n\ndef qft3(q0, q1, q2):\n p = pq.Program()\n p.inst( H(q2),\n CPHASE(pi/2.0, q1, q2),\n H(q1),\n CPHASE(pi/4.0, q0, q2),\n CPHASE(pi/2.0, q0, q1),\n H(q0),\n SWAP(q0, q2) )\n return p", "There is a very important detail to recognize here: The function qft3 doesn't compute the QFT, but rather it makes a quantum program to compute the QFT on qubits q0, q1, and q2.\nWe can see what this program looks like in Quil notation by doing the following:", "print qft3(0, 1, 2)", "Next, we want to prepare a state that corresponds to the sequence we want to compute the discrete Fourier transform of. Fortunately, this is easy, we just apply an $X$-gate to the zeroth qubit.", "state_prep = pq.Program().inst(X(0))", "We can verify that this works by computing its wavefunction. However, we need to add some \"dummy\" qubits, because otherwise wavefunction would return a two-element vector.", "add_dummy_qubits = pq.Program().inst(I(2))\nqvm.wavefunction(state_prep + add_dummy_qubits)", "If we have two quantum programs a and b, we can concatenate them by doing a + b. Using this, all we need to do is compute the QFT after state preparation to get our final result.", "qvm.wavefunction(state_prep + qft3(0, 1, 2))", "We can verify this works by computing the (inverse) FFT from NumPy.", "from numpy.fft import ifft\nifft([0,1,0,0,0,0,0,0], norm=\"ortho\")", "Classical Control Flow\nHere are a couple quick examples that show how much richer the classical control of a Quil program can be. In this first example, we have a register called classical_flag_register which we use for looping. Then we construct the loop in the following steps:\n\n\nWe first initialize this register to 1 with the init_register program so our while loop will execute. 
This is often called the loop preamble or loop initialization.\n\n\nNext, we write the body of the loop in a program itself. This will be a program that computes an $X$ followed by an $H$ on our qubit.\n\n\nLastly, we put it all together using the while_do method.", "# Name our classical registers:\nclassical_flag_register = 2\n\n# Write out the loop initialization and body programs:\ninit_register = pq.Program(TRUE([classical_flag_register]))\nloop_body = pq.Program(X(0), H(0)).measure(0, classical_flag_register)\n\n# Put it all together in a loop program:\nloop_prog = init_register.while_do(classical_flag_register, loop_body)\n\nprint loop_prog", "Notice that the init_register program applied a Quil instruction directly to a classical register. There are several classical commands that can be used in this fashion:\n\nTRUE which sets a single classical bit to be 1\nFALSE which sets a single classical bit to be 0\nNOT which flips a classical bit\nAND which operates on two classical bits\nOR which operates on two classical bits\nMOVE which moves the value of a classical bit at one classical address into another\nEXCHANGE which swaps the value of two classical bits\n\nIn this next example, we show how to do conditional branching in the form of the traditional if construct as in many programming languages. Much like the last example, we construct programs for each branch of the if, and put it all together by using the if_then method.", "# Construct each branch of our if-statement.
We can have empty branches\n# simply by having empty programs.\n\n# Name our classical registers:\ntest_register = 1\nanswer_register = 0\n\nthen_branch = pq.Program(X(0))\nelse_branch = pq.Program()\n\n# Make a program that will put a 0 or 1 in test_register with 50% probability:\nbranching_prog = pq.Program(H(1)).measure(1, test_register)\n\n# Add the conditional branching:\nbranching_prog.if_then(test_register, then_branch, else_branch)\n\n# Measure qubit 0 into our answer register:\nbranching_prog.measure(0, answer_register)\n\nprint branching_prog", "We can run this program a few times to see what we get in the answer_register.", "qvm.run(branching_prog, [answer_register], 10)", "Parametric Depolarizing Noise\nThe Rigetti QVM has support for emulating certain types of noise models. One such model is parametric depolarizing noise, which is defined by a set of 6 probabilities:\n\n\nThe probabilities $P_X$, $P_Y$, and $P_Z$ which define respectively the probability of a Pauli $X$, $Y$, or $Z$ gate getting applied to each qubit after every gate application. These probabilities are called the gate noise probabilities.\n\n\nThe probabilities $P_X'$, $P_Y'$, and $P_Z'$ which define respectively the probability of a Pauli $X$, $Y$, or $Z$ gate getting applied to the qubit being measured before it is measured. These probabilities are called the measurement noise probabilities.\n\n\nWe can instantiate a noisy QVM by creating a new connection with these probabilities specified.", "# 20% chance of a X gate being applied after gate applications and before measurements.\ngate_noise_probs = [0.2, 0.0, 0.0]\nmeas_noise_probs = [0.2, 0.0, 0.0]\nnoisy_qvm = forest.Connection(gate_noise=gate_noise_probs, measurement_noise=meas_noise_probs)", "We can test this by applying an $X$ gate and measuring.
Nominally, we should always measure 1.", "p = pq.Program().inst(X(0)).measure(0, 0)\nprint \"Without Noise:\", qvm.run(p, [0], 10)\nprint \"With Noise :\", noisy_qvm.run(p, [0], 10)", "Parametric Programs\nA big advantage of working in pyQuil is that you are able to leverage all the functionality of Python to generate Quil programs. In quantum/classical hybrid algorithms this often leads to situations where complex classical functions are used to generate Quil programs. pyQuil provides a convenient construction to allow you to use Python functions to generate templates of Quil programs, called ParametricPrograms:", "# This function returns a quantum circuit with different rotation angles on a gate on qubit 0\ndef rotator(angle):\n return pq.Program(RX(angle, 0))\n\nfrom pyquil.parametric import ParametricProgram\npar_p = ParametricProgram(rotator) # This produces a new type of parameterized program object", "The parametric program par_p now takes the same arguments as rotator:", "print par_p(0.5)", "We can think of ParametricPrograms as a sort of template for Quil programs. They cache computations\nthat happen in Python functions so that templates in Quil can be efficiently substituted.\nPauli Operator Algebra\nMany algorithms require manipulating sums of Pauli combinations, such as \\[\\sigma = \\tfrac{1}{2}I - \\tfrac{3}{4}X_0Y_1Z_3 + (5-2i)Z_1X_2,\\] where $G_n$ indicates the gate $G$ acting on qubit $n$. We can represent such sums by constructing PauliTerm and PauliSum. 
The above sum can be constructed as follows:", "from pyquil.paulis import ID, sX, sY, sZ\n\n# Pauli term takes an operator \"X\", \"Y\", \"Z\", or \"I\"; a qubit to act on, and\n# an optional coefficient.\na = 0.5 * ID\nb = -0.75 * sX(0) * sY(1) * sZ(3)\nc = (5-2j) * sZ(1) * sX(2)\n\n# Construct a sum of Pauli terms.\nsigma = a + b + c\nprint \"sigma =\", sigma", "There are two primary things one can do with Pauli terms and sums:\n\n\nA Pauli sum's fully \"tensored up\" form can be computed with the tensor_up function.\n\n\nQuil code can be generated to compute the exponentiation of a Pauli term, i.e., $\\exp[-i\\sigma]$.\n\n\nWhen arithmetic is done with Pauli sums, simplification is automatically done.\nThe following shows an instructive example of all three.", "import pyquil.paulis as pl\n\n# Simplification\nsigma_cubed = sigma * sigma * sigma\nprint \"Simplified :\", sigma_cubed\nprint\n\n#Produce Quil code to compute exp[iX]\nH = -1.0 * sX(0)\nprint \"Quil to compute exp[iX] on qubit 0:\"\nprint pl.exponential_map(H)(1.0)", "Exercises\nExercise 1 - Quantum Dice\nWrite a quantum program to simulate throwing an 8-sided die. The Python function you should produce is:\ndef throw_octahedral_die():\n # return the result of throwing an 8 sided die, an int between 1 and 8, by running a quantum program\nNext, extend the program to work for any kind of fair die:\ndef throw_polyhedral_die(num_sides):\n # return the result of throwing a num_sides sided die by running a quantum program\nExercise 2 - Controlled Gates\nWe can use the full generality of NumPy and SciPy to construct new gate matrices.\n\n\nWrite a function controlled which takes a $2\\times 2$ matrix $U$ representing a single qubit operator, and makes a $4\\times 4$ matrix which is a controlled variant of $U$, with the first argument being the control qubit.\n\n\nWrite a Quil program to define a controlled-$Y$ gate in this manner. 
Find the wavefunction when applying this gate to qubit 1 controlled by qubit 0.\n\n\nExercise 3 - Grover's Algorithm\nWrite a quantum program for the single-shot Grover's algorithm. The Python function you should produce is:\n```\ndata is an array of 0's and 1's such that there are exactly three times as many\n0's as 1's\ndef single_shot_grovers(data):\n # return an index that contains the value 1\n```\nAs an example: single_shot_grovers([0,0,1,0]) should return 2.\nHINT - Remember that the Grover's diffusion operator is:\n$$\n\\begin{pmatrix}\n2/N - 1 & 2/N & \\cdots & 2/N \\\n2/N & & &\\\n\\vdots & & \\ddots & \\\n2/N & & & 2/N-1\n\\end{pmatrix}\n$$" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/ai-for-finance/practice/freestyle.ipynb
apache-2.0
[ "Machine Learning for Finance Freestyle\nIn this lab you'll be given the opportunity to apply everything you have learned to build a trading strategy for SP500 stocks. First, let's introduce the dataset you'll be using.\nThe Data\nUse BigQuery's magic function to pull data as follows:\nDataset Name: ml4f\nTable Name: percent_change_sp500\n\nThe following query will pull 10 rows of data from the table:", "%%bigquery df\nSELECT \n *\nFROM\n `cloud-training-prod-bucket.ml4f.percent_change_sp500`\nLIMIT\n 10\n\ndf.head()", "As you can see, the table contains daily open and close data for SP500 stocks. The table also contains some features that have been generated for you using navigation functions and analytic functions. Let's dig into the schema a bit more.", "%%bigquery \nSELECT\n * EXCEPT(is_generated, generation_expression, is_stored, is_updatable)\nFROM\n `cloud-training-prod-bucket.ml4f`.INFORMATION_SCHEMA.COLUMNS\nWHERE\n table_name = \"percent_change_sp500\"", "Most of the features, like open and close are pretty straightforward. The features generated using analytic functions, such as close_MIN_prior_5_days are best described using an example. Let's take the 6 most recent rows of data for IBM and reproduce the close_MIN_prior_5_days column.", "%%bigquery\nSELECT \n *\nFROM\n `cloud-training-prod-bucket.ml4f.percent_change_sp500`\nWHERE\n symbol = 'IBM'\nORDER BY \n Date DESC\nLIMIT 6", "For Date = 2013-02-01 how did we arrive at close_MIN_prior_5_days = 0.989716? The minimum close over the past five days was 203.07. This is normalized by the current day's close of 205.18 to get close_MIN_prior_5_days = 203.07 / 205.18 = 0.989716. The other features utilizing analytic functions were generated in a similar way. Here are explanations for some of the other features:\n\nscaled_change: tomo_close_m_close / close\ns_p_scaled_change: This value is calculated the same way as scaled_change but for the S&P 500 index. 
\nnormalized_change: scaled_change - s_p_scaled_change The normalization using the S&P index fund helps ensure that the future price of a stock is not due to larger market effects. Normalization helps us isolate the factors contributing to the performance of a stock.\n\ndirection: This is the target variable we're trying to predict. The logic for this variable is as follows: \nsql\nCASE \n WHEN normalized_change &lt; -0.01 THEN 'DOWN'\n WHEN normalized_change &gt; 0.01 THEN 'UP'\n ELSE 'STAY'\nEND AS direction\n\n\nCreate classification model for direction\nIn this example, your job is to create a classification model to predict the direction of each stock. Be creative! You can do this in any number of ways. For example, you can use BigQuery, Scikit-Learn, or AutoML. Feel free to add additional features, or use time series models. \nEstablish a Simple Benchmark\nOne way to assess the performance of a model is to compare it to a simple benchmark. We can do this by seeing what kind of accuracy we would get using the naive strategy of just predicting the majority class. Across the entire dataset, the majority class is 'STAY'. Using the following query we can see how this naive strategy would perform.", "%%bigquery\nWITH subset as (\n SELECT \n Direction\n FROM\n `cloud-training-prod-bucket.ml4f.percent_change_sp500`\n WHERE\n tomorrow_close IS NOT NULL\n)\nSELECT \n Direction,\n 100.0 * COUNT(*) / (SELECT COUNT(*) FROM subset) as percentage\nFROM\n subset\nGROUP BY\n Direction", "So, the naive strategy of just guessing the majority class would have accuracy of around 54% across the entire dataset. See if you can improve on this. \nTrain Your Own Model", "# TODO: Write code to build a model to predict Direction" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
poppy-project/community-notebooks
tutorials-education/poppy_ergo_jr__decouverte_du_robot/TP2_mouvement_et_cartes_cor_prof.ipynb
lgpl-3.0
[ "from poppy.creatures import PoppyErgoJr\n\npoppy = PoppyErgoJr()\n\n", "One more instruction for moving\nQUESTIONS \n\nWhen the list pos contains 6 angles in degrees, what does the following set of instructions do? \nThe following set of instructions sends each motor in the list poppy.motors to the corresponding position in the list pos in 0.5 seconds, and waits for the movement to finish before moving on to the next instruction.\n\n\nWhat is the difference with m.goal_position = 30, for example? \nHere, we have the option of waiting for the movement to finish before moving on to the next one. The movement does not happen at the speed m.moving_speed.\nMoreover, the resulting movements are smoother.", "pos = [-20, -20, 40, -30, 40, 20]\ni = 0\nfor m in poppy.motors:\n m.compliant = False\n m.goto_position(pos[i], 0.5, wait = True)\n i = i + 1\n\n\n# import the necessary tools \nimport cv2\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom hampy import detect_markers\n\n# display the captured image\nimg = poppy.camera.frame\nplt.imshow(img)\n# gather in a list the markers found in the image\nmarkers = detect_markers(img)\n\nvaleur = 0 \nfor m in markers:\n print('Found marker {} at {}'.format(m.id, m.center))\n m.draw_contour(img)\n valeur = m.id\n print(valeur)\n\nmarkers\n", "A few remarks:\n\nmarkers is a list; it contains the identifiers of the markers found and the position of their centers.\nseveral markers can be found in a single captured image. \nm is an iterator that here walks through the list of markers.\nthe instruction m.draw_contour(img) draws the outlines of the markers in the image img.", "import time\nRIGH = 82737172\n\nLEFT = 76697084\n\nNEXT = 78698884\n\nPREV = 80826986\n# the variable liste_moteur means the name of the robot's\n# container only has to be changed once,\n# for instance if it was not instantiated as poppy\nliste_moteur = [m for m in poppy.motors]\nnum_moteur = 0\n# turn off all the motor LEDs\nfor i in range (0,6): \n liste_moteur[i].led = 'pink'\n# while the last motor has not been reached \nwhile num_moteur < 6: \n # capture an image and detect whether it contains a marker\n img = poppy.camera.frame\n markers = detect_markers(img)\n valeur = 0 \n \n for m in markers:\n print 'Found marker {} at {}'.format(m.id, m.center)\n m.draw_contour(img)\n valeur = m.id\n print(valeur)\n # set the current motor's LED to red\n liste_moteur[num_moteur].led = 'red'\n # perform the action matching the detected marker\n if valeur == RIGH: \n liste_moteur[num_moteur].led = 'green'\n liste_moteur[num_moteur].goto_position(\n liste_moteur[num_moteur].present_position - 5, \n 0.5, \n wait = True)\n liste_moteur[num_moteur].led = 'pink'\n valeur = 0\n\n if valeur == PREV: \n if num_moteur != 0: \n liste_moteur[num_moteur].led = 'pink'\n num_moteur = num_moteur - 1\n liste_moteur[num_moteur].led = 'red'\n time.sleep(2.0)\n valeur = 0\n \n if valeur == LEFT: \n liste_moteur[num_moteur].led = 'green'\n liste_moteur[num_moteur].goto_position(\n liste_moteur[num_moteur].present_position + 5,\n 0.5,\n wait = True)\n liste_moteur[num_moteur].led = 'pink'\n valeur = 0\n \n if valeur == NEXT:\n if num_moteur != 6: \n liste_moteur[num_moteur].led = 'pink'\n num_moteur = num_moteur + 1\n if num_moteur != 6:\n liste_moteur[num_moteur].led = 'red'\n time.sleep(2.0)\n valeur = 0 \n \n\n\n", "Author: Georges Saliba, Lycée Victor Louis, Talence, under a CC BY SA license" ]
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
hasadna/knesset-data-pipelines
jupyter-notebooks/committee meeting attendees.ipynb
mit
[ "Example flow for processing and aggregating stats about committee meeting attendees and protocol parts\nSee the DataFlows documentation for more details regarding the Flow object and processing functions.\nFeel free to modify and commit changes which demonstrate additional functionality or relevant data.\nConstants", "# Limit processing of protocol parts for development\nPROCESS_PARTS_LIMIT = 500\n\n# Enable caching of protocol parts data (not efficient, should only be used for local development with sensible PROCESS_PARTS_LIMIT)\nPROCESS_PARTS_CACHE = True\n\n# Filter the meetings to be processed, these kwargs are passed along to DataFlows filter_rows processor for meetings resource\nMEETINGS_FILTER_ROWS_KWARGS = {'equals': [{'KnessetNum': 20}]}\n\n# Don't use local data - loads everything from knesset data remote storage\n# When set to False - also enables caching, so you won't download from remote storage on 2nd run.\nUSE_DATA = False", "Load source data", "from dataflows import filter_rows, cache\nfrom datapackage_pipelines_knesset.common_flow import load_knesset_data, load_member_names\n\n# Loads a dict containing mapping between knesset member id and the member name\nmember_names = load_member_names(use_data=USE_DATA)\n\n# define flow steps for loading the source committee meetings data\n# the actual loading is done later in the Flow\nload_steps = (\n load_knesset_data('people/committees/meeting-attendees/datapackage.json', USE_DATA),\n filter_rows(**MEETINGS_FILTER_ROWS_KWARGS)\n)\n\nif not USE_DATA:\n # when loading from URL - enable caching which will skip loading on 2nd run\n load_steps = (cache(*load_steps, cache_path='.cache/people-committee-meeting-attendees-knesset-20'),)", "Inspect the datapackages which will be loaded\nLast command's output log should contain urls to datapackage.json files, open them and check the table schema to see the resource metadata and available fields which you can use in the processing functions.\nCheck the
frictionlessdata docs for more details about the datapackage file format.\nMain processing functions", "from collections import defaultdict\nfrom dataflows import Flow\n\nstats = defaultdict(int)\nmember_attended_meetings = defaultdict(int)\n\ndef process_meeting_protocol_part(row):\n stats['processed parts'] += 1\n if row['body'] and 'אנחנו ככנסת צריכים להיות ערוכים' in row['body']:\n stats['meetings contain text: we as knesset need to be prepared'] += 1\n\ndef process_meeting(row):\n stats['total meetings'] += 1\n if row['attended_mk_individual_ids']:\n for mk_id in row['attended_mk_individual_ids']:\n member_attended_meetings[mk_id] += 1\n parts_filename = row['parts_parsed_filename']\n if parts_filename:\n if PROCESS_PARTS_LIMIT and stats['processed parts'] < PROCESS_PARTS_LIMIT:\n steps = (load_knesset_data('committees/meeting_protocols_parts/' + parts_filename, USE_DATA),)\n if not USE_DATA and PROCESS_PARTS_CACHE:\n steps = (cache(*steps, cache_path='.cache/committee-meeting-protocol-parts/' + parts_filename),)\n steps += (process_meeting_protocol_part,)\n Flow(*steps).process()\n\nprocess_steps = (process_meeting,)", "Run the flow", "from dataflows import Flow, dump_to_path\n\nFlow(*load_steps, *process_steps, dump_to_path('data/committee-meeting-attendees-parts')).process()", "Aggregate and print stats", "from collections import deque\nimport yaml\n\ntop_attended_member_names = [member_names[mk_id] for mk_id, num_attended in\n deque(sorted(member_attended_meetings.items(), key=lambda kv: kv[1]), maxlen=5)]\nprint('\\n')\nprint('-- top attended members --')\nprint(top_attended_member_names)\nprint('\\n')\nprint('-- stats --')\nprint(yaml.dump(dict(stats), default_flow_style=False, allow_unicode=True))", "Get output data\nOutput data is available in the left sidebar under data directory, you can check the datapackage.json and created csv file to explore the data and schema." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bsmithyman/zephyr
Demo 2 - Remote parallel computation [distributed].ipynb
mit
[ "Demo 2 - Remote parallel computation [distributed]\nDemo for site visit | Brendan Smithyman | April 8, 2015\nChoice of IPython / jupyter cluster profile", "# profile = 'phobos' # remote workstation\n# profile = 'pantheon' # remote cluster\nprofile = 'mpi' # local machine", "Importing libraries\n\nnumpy is the de facto standard Python numerical computing library\nzephyr.Dispatcher is zephyr's primary parallel remote problem interface\nIPython.parallel provides parallel task control (nominally, this is to be handled inside the Dispatcher object)", "import numpy as np\nfrom zephyr.Dispatcher import SeisFDFDDispatcher\nfrom IPython.parallel import Reference", "Plotting configuration\nThese lines import matplotlib, which is a standard Python plotting library, and configure the output formats for figures.", "import matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nimport matplotlib\n%matplotlib inline\n\nimport mpld3\nmpld3.enable_notebook()", "These lines define some plotting functions, which are used later.", "lclip = 2000\nhclip = 3000\nclipscale = 0.1\nsms = 0.5\nrms = 0.5\n\ndef plotField(u):\n clip = clipscale*abs(u).max()\n plt.imshow(u.real, cmap=cm.bwr, vmin=-clip, vmax=clip)\n\ndef plotModel(v):\n plt.imshow(v.real, cmap=cm.jet, vmin=lclip, vmax=hclip)\n\ndef plotGeometry(geom):\n \n srcpos = geom['src'][:,::2]\n recpos = geom['rec'][:,::2]\n \n axistemp = plt.axis()\n plt.plot(srcpos[:,0], srcpos[:,1], 'kx', markersize=sms)\n plt.plot(recpos[:,0], recpos[:,1], 'kv', markersize=rms)\n plt.axis(axistemp)", "System / modelling configuration\nThis code sets up the seismic problem; see the comments inline. 
In a live inversion problem this would most likely be read from a configuration file (but could be defined interactively for development purposes).\nProperties of the grid and forward modelling", "cellSize = 1 # m\nnx = 164 # count\nnz = 264 # count\nfreqs = [1e2] # Hz\nfreeSurf = [False, False, False, False] # t r b l\nnPML = 32 # number of PML points\nnky = 80 # number of y-directional plane-wave components", "Properties of the model", "velocity = 2500 # m/s\nvanom = 500 # m/s\ndensity = 2700 # units of density\nQ = 500 # can be inf", "Array geometry", "srcs = np.array([np.ones(101)*32, np.zeros(101), np.linspace(32, 232, 101)]).T\nrecs = np.array([np.ones(101)*132, np.zeros(101), np.linspace(32, 232, 101)]).T\nnsrc = len(srcs)\nnrec = len(recs)\nrecmode = 'fixed'\n\ngeom = {\n 'src': srcs,\n 'rec': recs,\n 'mode': 'fixed',\n}", "Numerical / parallel parameters", "cache = False # whether to cache computed wavefields for a given source\ncacheDir = '.'\n\nparFac = 2\nchunksPerWorker = 0.5 # NB: parFac * chunksPerWorker = number of source array subsets\nensembleClear = False", "Computed properties", "dims = (nx,nz) # tuple\nrho = np.fliplr(np.ones(dims) * density)\nnfreq = len(freqs) # number of frequencies\nnsp = nfreq * nky # total number of 2D subproblems\ncPert = np.zeros(dims)\ncPert[(nx/2)-20:(nx/2)+20,(nz/2)-20:(nz/2)+20] = vanom\nc = np.fliplr(np.ones(dims) * velocity)\ncFlat = c\nc += np.fliplr(cPert)\ncTrue = c", "Problem geometry", "fig = plt.figure()\n\nax1 = fig.add_subplot(1,2,1)\nplotModel(c.T)\nplotGeometry(geom)\nax1.set_title('Velocity Model')\nax1.set_xlabel('X')\nax1.set_ylabel('Z')\n\nfig.tight_layout()", "Configuration dictionary\n(assembled from previous sections)", "# Base configuration for all subproblems\nsystemConfig = {\n 'dx': cellSize, # m\n 'dz': cellSize, # m\n 'c': c.T, # m/s\n 'rho': rho.T, # density\n 'Q': Q, # can be inf\n 'nx': nx, # count\n 'nz': nz, # count\n 'freeSurf': freeSurf, # t r b l\n 'nPML': nPML,\n 'geom': geom,\n 
'cache': cache,\n 'cacheDir': cacheDir,\n 'freqs': freqs,\n 'nky': nky,\n 'parFac': parFac,\n 'chunksPerWorker': chunksPerWorker,\n 'profile': profile,\n 'ensembleClear': ensembleClear,\n# 'MPI': False,\n# 'Solver': Reference('SimPEG.SolverWrapD(scipy.sparse.linalg.splu)'),#Solver,\n}", "Parallel computations\nThis section runs each of the parallel computations on the remote worker nodes.\nSet up problem\n\nCreate the Dispatcher object using the systemConfig dictionary as input\nSpawn survey and problem interfaces, which implement the SimPEG standard properties\nGenerate a set of \"transmitter\" objects, each of which knows about its respective \"receivers\" (in seismic parlance, these would be \"sources\" and \"receivers\"; the term \"transmitter\" is more common in EM and potential fields geophysics)\nTell the dispatcher object about the transmitters", "%%time\nsp = SeisFDFDDispatcher(systemConfig)\nsurvey, problem = sp.spawnInterfaces()\nsxs = survey.genSrc()\nsp.srcs = sxs", "Forward modelling and backpropagation\nExample (commented out) showing how to generate synthetic data using the SimPEG-style survey and problem interfaces. In this implementation, both are essentially expressions of the Dispatcher. The Dispatcher API has yet to be merged into SimPEG.", "# d = survey.projectFields()\n# uF = problem.fields()", "This code runs the forward modelling on the [remote] workers. 
It returns asynchronously, so the code can run in the background.", "%%time\nsp.forward()\n\nsp.forwardGraph", "However, it will block if we ask for the data or wavefields:", "%%time\nd = sp.dPred\nuF = sp.uF\n\nd[0].shape", "Results\nWe show the resulting data and wavefield properties.\nData selection:", "freqNum = 0\nsrcNum = 0\n\nfrt = uF[freqNum]\ndrt = d[freqNum]\nclipScaleF = 1e-1 * abs(frt[srcNum]).max()", "Geometry, data and forward wavefield:", "fig = plt.figure()\n\nax1 = fig.add_subplot(1,3,1)\nplotModel(c.T)\nplotGeometry(geom)\nax1.set_title('Velocity Model')\nax1.set_xlabel('X')\nax1.set_ylabel('Z')\n\nax2 = fig.add_subplot(1,3,2)\nplt.imshow(drt.real, cmap=cm.bwr)\nax2.set_title('Real part of d: $\\omega = %0.1f$'%(freqs[freqNum],))\nax2.set_xlabel('Receiver #')\nax2.set_ylabel('Source #')\n\nax3 = fig.add_subplot(1,3,3)\nplt.imshow(frt[srcNum].real, vmin=-clipScaleF, vmax=clipScaleF, cmap=cm.bwr)\nplt.title('uF: $\\omega = %0.1f$, src. %d'%(freqs[freqNum], srcNum))\n\nfig.tight_layout()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Open-Power-System-Data/conventional_power_plants
download_and_process_EU.ipynb
mit
[ "<div style=\"width:100%; background-color: #D9EDF7; border: 1px solid #CFCFCF; text-align: left; padding: 10px;\">\n <b>Conventional Power Plants: Power Plants in Europe</b>\n <ul>\n <li><a href=\"main.ipynb\">Main Notebook</a></li>\n <li><a href=\"download_and_process_DE.ipynb\">Processing notebook for German power plants</a></li>\n <li>Processing notebook for European power plants</li>\n </ul>\n <br>This notebook is part of the <a href=\"http://data.open-power-system-data.org/DATA PACKAGE NAME HERE\"> Data package name here Data Package</a> of <a href=\"http://open-power-system-data.org\">Open Power System Data</a>.\n</div>\n\nTable of Contents\n\n1. Script setup\n2. Data import\n2.1 Data sources\n2.2 Functions\n2.3 Definition of harmonized output scheme\n\n\n3. Data processing per country\n3.1 Belgium BE\n3.2 The Netherlands NL\n3.3 France FR\n3.4 Poland PL\n3.5 Czech Republic CZ\n3.6 Switzerland CH\n3.7 Italy IT\n3.8 Finland FI\n3.9 Spain ES\n3.10 United Kingdom UK\n3.11 Norway NO\n3.12 Sweden SE\n3.13 Slovakia SK\n3.14 Slovenia SI\n3.15 Austria AT\n3.16 Denmark DK\n\n\n4. Consolidation of processed country data\n4.1 Implementation of energy source levels\n4.2 Definition of structure and data types\n\n\n5. Result export\n\n1. Script setup\nImport of Python modules needed to process the data and creation of required output folders.", "import numpy as np\nimport pandas as pd\nimport os\nimport yaml\nimport json\nimport sqlite3\nimport hashlib\nfrom download_and_process_functions import get_sha_hash\nfrom download_and_process_functions import add_location_and_EIC\n\n\n# create output folder if it does not exist\nos.makedirs(os.path.join('output'), exist_ok=True)\n\n# set data & input directory\ndata_directory = os.path.join('input','data')\nlocations_directory = os.path.join('input', 'locations')", "2. 
Data import\n2.1 Data sources\nUnlike the previous releases of this package, where the data was partially downloaded within this script, all data in the current release is pre-downloaded and provided. The following states all relevant data sources.", "meta_data = \"\"\"\n\n BE:\n filename: ProductionParkOverview.xls\n source: https://www.elia.be/en/grid-data/power-generation/generating-facilities#\n source_file: https://griddata.elia.be/eliabecontrols.prod/interface/fdn/download/generatingfacilities/xls\n filetype: xls\n date_of_access: Feb 2020\n manually_assembled: no\n provider: Elia\n institution: TSO\n \n NL:\n filename: export.csv\n source: https://www.tennet.org/english/operational_management/export_data.aspx\n source_file: www.tennet.org/english/operational_management/export_data.aspx?exporttype=installedcapacity&format=csv&quarter=2019-4&submit=3\n filetype: csv\n date_of_access: Feb 2020\n manually_assembled: no\n provider: Tennet\n institution: TSO\n\n FR:\n filename: Production_Capacities.csv\n source: https://www.services-rte.com/en/view-data-published-by-rte/production-installed-capacity.html\n source_file: NA\n filetype: csv\n date_of_access: Feb 2020\n manually_assembled: no\n provider: RTE\n institution: TSO\n \n PL:\n filename: units_list_2019_11_29_PL.csv\n source: http://gpi.tge.pl/en/wykaz-jednostek\n source_file: http://gpi.tge.pl/en/wykaz-jednostek\n date_of_access: Dec 2019\n manually_assembled: no\n provider: GPI Power Market Data\n institution: Information platform\n \n CZ:\n filename: 21915_2019.pdf\n source: https://www.ceps.cz/cs/priprava-provozu\n source_file: https://www.ceps.cz/cs/priprava-provozu\n date_of_access: Feb 2020\n manually_assembled: no\n provider: Ceps\n institution: TSO\n \n CH:\n filename: 2018 Statistik der Wasserkraftanlagen der Schweiz 31.12.2018.csv\n source: https://www.bfe.admin.ch/bfe/de/home/versorgung/statistik-und-geodaten/geoinformation/geodaten/wasser/statistik-der-wasserkraftanlagen.html\n source_file: 
https://www.bfe.admin.ch/bfe/de/home/versorgung/statistik-und-geodaten/geoinformation/geodaten/wasser/statistik-der-wasserkraftanlagen.html\n date_of_access: Dec 2019\n manually_assembled: no\n provider: Swiss Federal Office of Energy\n institution: Federal Administration\n \n IT:\n filename: 18.xlsx\n source: http://www2018.terna.it/it-it/sistemaelettrico/transparencyreport/generation/installedgenerationcapacity.aspx\n source_file: http://download.terna.it/terna/0000/0216/18.XLSX\n date_of_access: Feb 2020\n manually_assembled: no\n provider: Terna\n institution: TSO\n\n FI:\n filename: Energiaviraston voimalaitosrekisteri.csv\n source: https://energiavirasto.fi/toimitusvarmuus\n source_file: https://energiavirasto.fi/toimitusvarmuus\n date_of_access: Feb 2020\n manually_assembled: no\n provider: energiavirasto\n institution: energy agency\n \n ES:\n filename: Registro_16_12_2019.csv \n source: https://sede.minetur.gob.es/en-US/datosabiertos/catalogo/registro-productores-electrica\n source_file: https://sede.minetur.gob.es/en-US/datosabiertos/catalogo/registro-productores-electrica\n date_of_access: Dec 2019 \n manually_assembled: no \n provider: \n institution: \n \n UK:\n filename: DUKES_5.11_UK.csv\n source: https://www.gov.uk/government/statistics/electricity-chapter-5-digest-of-united-kingdom-energy-statistics-dukes#content\n source_file: https://www.gov.uk/government/statistics/electricity-chapter-5-digest-of-united-kingdom-energy-statistics-dukes#content\n date_of_access: Dec 2019\n manually_assembled: no\n provider: UK gov\n institution: UK statistics\n\n NO:\n filename_thermal: termiske-kraftverk-i-norge-2019.xlsx\n filename_hydro: Vannkraftverk.csv\n source: https://www.nve.no/\n source_file_thermal: https://www.nve.no/media/8967/termiske-kraftverk-i-norge-2019.xlsx\n source_file_hydro: https://www.nve.no/energiforsyning/kraftproduksjon/vannkraft/vannkraftdatabase/#\n date_of_access: Feb 2020\n manually_assembled: no\n provider: Norwegian Water Resources 
and Energy Directorate\n institution: Ministry\n\n SE:\n filename: input_plant-list_SE.csv\n source: https://www.nordpoolgroup.com/\n source_file: NA\n date_of_access: 2014\n manually_assembled: no\n provider: Nordpool Group\n institution: Market operator\n\n SK:\n filename: input_plant-list_SK.csv\n source: https://www.seas.sk/thermal-power-plants\n source_file: https://www.seas.sk/thermal-power-plants\n date_of_access: Feb 2020\n manually_assembled: yes \n provider: Slovenské elektrárne\n institution: joint-stock company\n \n SI:\n filename: input_plant-list_SI.csv\n source: multiples (in document)\n source_file: \n date_of_access: Dec 2019\n manually_assembled: yes\n provider: multiples (in document)\n institution: Private company\n \n AT:\n filename_thermal: input_plant-list_AT_thermal.csv\n filename_hydro: input_plant-list_AT_hydro.csv\n source: multiples (in document)\n source_file:\n date_of_access: Feb 2020\n manually_assembled: yes\n provider: multiples (in document)\n institution: multiples (in document) \n \n DK:\n filename: input_plant-list_DK.csv \n source: multiples (in document)\n source_file: \n date_of_access: Jan 2020\n manually_assembled: yes\n provider: multiples (in document)\n institution: multiples (in document)\n \n\"\"\"\n\n# Conversion to JSON (if needed)\n# meta_data = yaml.load(meta_data, Loader=yaml.BaseLoader)", "2.3 Definition of harmonized output scheme\nTo provide a standardized set of power plant information among all national data sources, a set of required columns is defined which is subsequently filled with available data. The following columns and their structure are the basis for all national data sources. \nNote: If information for specific columns is not available, the data entry is empty. 
On the other hand, if a national data source provides other information than required by the scheme, this information is not processed.", "columns_sorted = ['name',\n 'company',\n 'street',\n 'postcode',\n 'city',\n 'country',\n 'capacity',\n 'energy_source',\n 'technology',\n 'chp',\n 'commissioned',\n 'type',\n 'lat',\n 'lon',\n 'eic_code',\n 'additional_info',\n 'comment',\n 'source']", "3. Data processing per country\n3.1 Belgium BE\n3.1.1 Data import\nThe data is provided by the Belgian transmission network operator ELIA. It encompasses a detailed list of Belgian generation units with comprehensive information on technologies and energy fuels.", "filepath_BE = os.path.join(data_directory, 'BE','ProductionParkOverview.xls')\ndata_BE = pd.read_excel(filepath_BE,\n sheet_name='ProductionParkOverview',\n skiprows=1)\n\ndata_BE.head()", "3.1.2 Translation and harmonization of columns\nThe imported data is standardized with respect to the columns as defined in section 2.3. In a first step, existing and output-relevant columns are translated and remaining columns are deleted in a second step. 
Columns which do not exist in the data set, but are required for the output, are added in this process.", "# Translate columns\ndict_columns_BE = {'ARP': 'company',\n 'Generation plant': 'name',\n 'Plant Type': 'technology',\n 'Technical Nominal Power (MW)': 'capacity',\n 'Remarks': 'comment',\n 'Fuel': 'energy_source',\n 'Country': 'country',\n 'Source': 'source'}\n\n\n# Apply general template of columns\ndata_BE = data_BE.rename(columns=dict_columns_BE).reindex(columns=columns_sorted)\n\n# Drop rows without capacity entries, so that the row with \n# \"Unit connected to Distribution Grid\" is dropped\ndata_BE.dropna(subset=['capacity'], inplace=True)\n\n# Adjust types of entries in all columns\ndata_BE.capacity = data_BE.capacity.astype(float)", "3.1.3 Definition of generation type\nThe generation type provides information on the 'usage' of the power plants (besides electricity generation), i.e. whether the plant is an industrial power plant or provides thermal heat for district heating. \nThe Belgian data source provides only general information on the heat supply (here: WKK). 
Thus, due to this general information, we classify the corresponding plants as both industrial and combined heat and power plants, since we cannot distinguish between the two types.", "# Generate entries in column \"type\" according to technology \"WKK\"\ndata_BE.loc[data_BE['technology'] == 'WKK', 'type'] = 'CHP/IPP'\ndata_BE.loc[data_BE['name'].str.contains('WKK'), 'type'] = 'CHP/IPP'\n\n# Generate entries in column \"CHP\" according to column \"type\"\ndata_BE.loc[(data_BE['type'] == 'CHP') |\n (data_BE['type'] == 'IPP') |\n (data_BE['type'] == 'CHP/IPP'), 'chp'] = 'Yes'", "3.1.4 Definition of generation technology types\nOverall translation of all technology types mentioned in the column \"technology\".", "# Translate technologies\ndict_technology_BE = {'GT': 'Gas turbine',\n 'BG': np.nan,\n 'CL': 'Steam turbine',\n 'WKK': np.nan,\n 'CCGT': 'Combined cycle',\n 'D': np.nan,\n 'HU': np.nan,\n 'IS': np.nan,\n 'NU': 'Steam turbine',\n 'TJ': 'Gas turbine',\n 'WT': np.nan,\n ' ': np.nan,\n 'nan': np.nan,\n }\ndata_BE[\"technology\"].replace(dict_technology_BE, inplace=True)\n\n# add technology parameter for steam and gas turbines\ndata_BE.loc[data_BE['name'].str.contains('ST') &\n data_BE['technology'].isna(), 'technology'] = 'Steam turbine'\n\ndata_BE.loc[data_BE['name'].str.contains('GT') &\n data_BE['technology'].isna(), 'technology'] = 'Gas turbine'\n\ndata_BE.head()", "3.1.5 Definition of energy sources\nOverall translation of all energy source types mentioned in the column \"energy_source\" and subsequent translation check. 
Deletion of rows containing \"wind\" as energy source.", "# Translate energy sources\ndict_energysources_BE = {'BIO': 'Biomass and biogas',\n 'BF': 'Other fossil fuels',\n 'CL': 'Lignite',\n 'CP': 'Hard coal',\n 'CG': 'Other fossil fuels',\n 'GO': 'Oil',\n 'LF': 'Oil',\n 'LV': 'Oil',\n 'CP/BF': 'Mixed fossil fuels',\n 'CP/CG': 'Mixed fossil fuels',\n 'FA/BF': 'Mixed fossil fuels',\n 'NG/BF': 'Mixed fossil fuels',\n 'NG': 'Natural gas',\n 'NU': 'Nuclear',\n 'WR': 'Non-renewable waste',\n 'WA': 'Hydro',\n 'WI': 'Wind',\n 'WP': 'Biomass and biogas'}\ndata_BE[\"energy_source\"].replace(dict_energysources_BE, inplace=True)\ndata_BE[\"energy_source\"].replace('NaN', np.nan, inplace=True)\n\n\n# Delete unwanted energy source\ndata_BE = data_BE[data_BE.energy_source != 'Wind']", "3.1.6 Additional information on geographic coordinates and EIC codes\nIn this section a manually compiled list is used to define the geographic coordinates of individual power plants.", "data_BE = add_location_and_EIC('BE', data_BE)\n\n# add source\ndata_BE[\"source\"] = \"https://www.elia.be/en/grid-data/power-generation/generating-facilities#\"\n\ndata_BE.head()", "3.2 The Netherlands NL\n3.2.1 Data import and merger\nThe data is provided by the Dutch transmission network operator TenneT. It encompasses the daily available generation capacity, i.e. a list of Dutch generation units operational on a specific day. The data is downloaded for all four quarters in 2018.\nImport of quarterly data", "filepath_NL = os.path.join(data_directory, 'NL', 'export.csv')\ndata_NL = pd.read_csv(filepath_NL, encoding='utf-8')\n\ndata_NL.head()", "3.2.2 Translation and harmonization of columns\nThe imported data is standardized with respect to the columns as defined in section 2.3. In a first step, existing and output-relevant columns are translated and remaining columns are deleted in a second step. 
Columns which do not exist in the data set, but are required for the output, are added in this process.", "# Merge columns \"street\" and \"Number\" to one column called \"Street\"\ndata_NL['street'] = data_NL[['street', 'Number']].apply(\n lambda x: '{} {}'.format(x[0], x[1]), axis=1)\n\n# Drop columns not needed anymore\ncolsToDrop = ['Location', 'Date', 'Number']\ndata_NL = data_NL.drop(colsToDrop, axis=1)\n\n# Rename columns\ndict_columns_NL = {'Connected body': 'company',\n 'Entity': 'name',\n 'Fuel': 'energy_source',\n 'Capacity': 'capacity',\n 'zipcode': 'postcode',\n 'place-name': 'city'}\ndata_NL.rename(columns=dict_columns_NL, inplace=True)\n\n# Adjust types of entries in all columns\ndata_NL.capacity = data_NL.capacity.astype(float)", "3.2.3 Definition of energy sources\nOverall translation of all energy source types mentioned in the column \"energy_source\". Generation of entries for the column \"technology\" according to information given in the column \"energy_source\" by TenneT.", "# Rename types of energy sources\ndict_energysources_NL = {'E01': 'Solar',\n 'E02': 'Wind',\n 'E03': 'Hydro',\n 'E04': 'Biomass and biogas',\n 'E05': 'Hard coal',\n 'E06': 'Natural gas',\n 'E07': 'Oil',\n 'E08': 'Nuclear',\n 'E09': 'Other or unspecified energy sources'}\n\ndata_NL[\"energy_source\"].replace(dict_energysources_NL, inplace=True)\n\n# Generate technology entry according to energy source\ndata_NL.loc[data_NL['energy_source'] == 'Nuclear',\n 'technology'] = 'Steam turbine'\ndata_NL.loc[data_NL['energy_source'] == 'Hard coal',\n 'technology'] = 'Steam turbine'\n\n# Delete unwanted energy sources in column \"energy_source\"\ndata_NL = data_NL[data_NL.energy_source != 'Solar']\ndata_NL = data_NL[data_NL.energy_source != 'Wind']", "3.2.4 Select daily entry with highest available capacity\nWe estimate the installed capacity by the highest available daily capacity for each unit.", "# Filter rows by considering \"name\" and maximum \"capacity\"\ndata_NL = 
data_NL.sort_values(\n 'capacity', ascending=False).groupby('name', as_index=False).first()\n\n# Apply general template of columns\ndata_NL = data_NL.reindex(columns=columns_sorted)", "3.2.5 Additional information on geographic coordinates and EIC codes\nIn this section a manually compiled list is used to define the geographic coordinates of individual power plants.", "data_NL = add_location_and_EIC('NL', data_NL)\n\n# add source\ndata_NL[\"source\"] = \"https://www.tennet.org/english/operational_management/export_data.aspx\"\n\ndata_NL.head()", "3.3 France FR\n3.3.1 Data import\nThe data is provided by the French transmission network operator RTE. It encompasses a detailed list of French generation units with a capacity of more than 100 MW.", "filepath_FR = os.path.join(data_directory, 'FR', 'Production_Capacities.csv')\ndata_FR = pd.read_csv(filepath_FR)\n\ndata_FR.head()", "3.3.2 Translation and harmonization of columns\nThe imported data is standardized with respect to the columns as defined in section 2.3. In a first step, existing and output-relevant columns are translated and remaining columns are deleted in a second step. 
Columns which do not exist in the data set, but are required for the output, are added in this process.", "# Translate columns\ndict_columns_FR = {'Type': 'energy_source',\n 'Name': 'name',\n 'Installed capacity (MW)': 'capacity',\n 'Start date of the current version': 'commissioned',\n 'Location': 'country'\n }\ndata_FR.rename(columns=dict_columns_FR, inplace=True)\n\n# Apply general template of columns\ndata_FR = data_FR.reindex(columns=columns_sorted)\n\n# Delete place holder datetime\ndata_FR[\"commissioned\"].replace('01/01/2000', np.nan, inplace=True)\n\n# Map commissioned year to Timestamp col\ndata_FR['commissioned_year'] = pd.to_datetime(data_FR['commissioned'], format='%d/%m/%Y')\n# Reassign commissioned col with year only\nmask = data_FR['commissioned_year'].notna()\ndata_FR.loc[mask, 'commissioned'] = data_FR.loc[mask].commissioned_year.apply(lambda x: x.year)\n# Drop not needed col\ndata_FR.drop('commissioned_year', axis=1, inplace=True)\n\n# Adjust types of entries in all columns\ndata_FR.capacity = data_FR.capacity.astype(float)", "3.3.4 Definition of energy sources and generation of technology types\nGeneration of entries for technologies. 
Overall translation of all energy source types mentioned in the column \"energy_source\" and subsequent translation check.", "# Generate technology entries according to energy sources\ndata_FR.loc[data_FR['energy_source'] == 'Pumping',\n 'technology'] = 'Pumped storage'\ndata_FR.loc[data_FR['energy_source'] == 'Hydraulic over water / guided through',\n 'technology'] = 'Run-of-river'\ndata_FR.loc[data_FR['energy_source'] == 'Hydraulic lakes',\n 'technology'] = 'Reservoir'\ndata_FR.loc[data_FR['energy_source'] == 'Nuclear',\n 'technology'] = 'Steam turbine'\ndata_FR.loc[data_FR['energy_source'] == 'Hard coal',\n 'technology'] = 'Steam turbine'\n\n# Translate types of energy sources\ndict_energysources_FR = {'Other': 'Other or unspecified energy sources',\n 'Gas': 'Natural gas',\n 'Pumping': 'Hydro',\n 'Hydraulic over water / guided through': 'Hydro',\n 'Hydraulic lakes': 'Hydro',\n 'Biomass': 'Biomass and biogas'}\ndata_FR[\"energy_source\"].replace(dict_energysources_FR, inplace=True)\n\n\n# Delete unwanted energy sources in column \"energy_source\"\ndata_FR = data_FR[data_FR.energy_source != 'Wind']\ndata_FR = data_FR[data_FR.energy_source != 'Solar']\ndata_FR = data_FR[data_FR.energy_source != 'Marine']", "3.3.5 Additional information on geographic coordinates and EIC codes\nIn this section a manually compiled list is used to define the geographic coordinates of individual power plants.", "data_FR = add_location_and_EIC('FR', data_FR)\n\n# add source\ndata_FR[\"source\"] = \"https://www.services-rte.com/en/view-data-published-by-rte/production-installed-capacity.html\"\n\ndata_FR.head()", "3.4 Poland PL\n3.4.1 Data import\nThe data is provided by the Polish Power Exchange GPI. 
It encompasses a detailed list of large Polish generation units with information on individual power plant blocks.", "filepath_PL = os.path.join(data_directory, 'PL', 'units_list_2019_11_29_PL.csv')\ndata_PL = pd.read_csv(filepath_PL, sep=';')\n\ndata_PL.head()", "3.4.2 Translation and harmonization of columns\nThe imported data is standardized with respect to the columns as defined in section 2.3. In a first step, existing and output-relevant columns are translated and remaining columns are deleted in a second step. Columns which do not exist in the data set, but are required for the output, are added in this process.", "# Rename first column\ndata_PL.columns.values[0] = 'company'\n\n# Rename columns\ndict_columns_PL = {'Generating unit name': 'name',\n 'Comments': 'comment',\n 'Available capacity [MW]': 'capacity',\n 'Basic fuel': 'energy_source',\n 'Country': 'country',\n 'Source': 'source'}\ndata_PL = data_PL.rename(columns=dict_columns_PL)\n\n# Fill columns \"energy_source\" and \"company\" with the corresponding entries\ncols = ['energy_source', 'company']\ndata_PL[cols] = data_PL[cols].ffill()\n\n# Delete empty and therefore unwanted rows by referring to column \"Generating unit code\"\ndata_PL = data_PL.dropna(subset=['Generating unit code'])\n\n# Apply general template of columns\ndata_PL = data_PL.reindex(columns=columns_sorted)\n\n# Adjust types of entries in all columns\ndata_PL.capacity = data_PL.capacity.astype(float)", "3.4.3 Definition of energy sources\nOverall translation of all energy source types mentioned in the column \"energy_source\".", "# Rename energy source types\ndict_energysources_PL = {'Brown coal': 'Lignite',\n 'Black coal': 'Hard coal',\n 'Water': 'Hydro',\n 'Natural gas': 'Natural gas',\n 
}\ndata_PL[\"energy_source\"].replace(dict_energysources_PL, inplace=True)", "3.4.4 Definition of generation technology types\nGeneration of entries for the column \"technology\" according to information given in the column \"energy_source\".", "# Generate entries in column \"technology\" according to energy source \"hydro\"\ndata_PL.loc[data_PL['energy_source'] == 'Hydro', 'technology'] = 'Pumped storage'", "3.4.5 Additional information on further power plants, geographic coordinates and EIC codes\nIn this section a manually compiled list is used to define the geographic coordinates of individual power plants.", "data_PL = add_location_and_EIC('PL', data_PL)\n\n# add source\ndata_PL[\"source\"] = \"http://gpi.tge.pl/en/wykaz-jednostek\"\n\ndata_PL.head()", "3.5 Czech Republic CZ\n3.5.1 Data import\nThe data is provided by the Czech transmission network operator CEPS. It encompasses the daily available capacity reported by the transmission system operator.", "filepath_CZ = os.path.join(data_directory, 'CZ', '21915_2019.csv')\ndata_CZ = pd.read_csv(filepath_CZ, encoding='utf-8')\n\ndata_CZ.head()", "3.5.2 Translation and harmonization of columns\nThe imported data is standardized with respect to the columns as defined in section 2.3. In a first step, existing and output-relevant columns are translated and remaining columns are deleted in a second step. 
Columns which do not exist in the data set, but are required for the output, are added in this process.", "# Remove white space from names\ndata_CZ['název elektrárny'] = data_CZ['název elektrárny'].str.strip()\ndata_CZ['oznacení bloku'] = data_CZ['oznacení bloku'].str.strip()\ndata_CZ['Typ'] = data_CZ['Typ'].str.strip()\ndata_CZ['Palivo'] = data_CZ['Palivo'].str.strip()\n\n# Insert dummy G1 where plant block is NA\ndata_CZ.loc[data_CZ['oznacení bloku'].isna(), 'oznacení bloku'] = 'G1'\n\n# Merge name and block to one column called \"name\"\ndata_CZ['name'] = data_CZ['název elektrárny'].map(str) + ' ' + data_CZ['oznacení bloku']\n\n# Rename columns\ndict_columns_CZ = {'Typ': 'technology',\n 'Palivo': 'energy_source',\n 'výkon instalovaný (MW)': 'capacity'\n }\ndata_CZ.rename(columns=dict_columns_CZ, inplace=True)\n\n# Apply general template of columns\ndata_CZ = data_CZ.reindex(columns=columns_sorted)\n\n# Adjust types of entries in all columns\ndata_CZ.capacity = data_CZ.capacity.astype(float)", "3.5.3 Definition of generation technology types\nOverall translation of all technology types mentioned in the column \"technology\".", "# Translate energy source\ndict_energy_source_CZ = {'VODA': 'Hydro',\n 'PLYN': 'Natural gas',\n 'OLEJ': 'Oil',\n 'URAN': 'Nuclear',\n 'HU': 'Lignite',\n 'CU': 'Hard coal',\n 'BIO': 'Bioenergy'}\ndata_CZ[\"energy_source\"].replace(dict_energy_source_CZ, inplace=True)\n\n# Translate technologies\ndict_technologies_CZ = {'PE': 'Steam turbine',\n 'PPE': 'Combined cycle',\n 'PSE': 'Combined cycle',\n 'JE': 'Steam turbine',\n 'VE': np.nan,\n 'PVE': 'Pumped storage'}\ndata_CZ[\"technology\"].replace(dict_technologies_CZ, inplace=True)", "3.5.4 Additional information on further power plants, geographic coordinates and EIC codes\nIn this section a manually compiled list is used to define the geographic coordinates of individual power plants.", "data_CZ = add_location_and_EIC('CZ', data_CZ)\n\n# add source\ndata_CZ[\"source\"] = 
\"https://www.ceps.cz/cs/priprava-provozu\"\n \ndata_CZ.head()", "3.6 Switzerland CH\n3.6.1 Data import\nThe data is provided by the Swiss Federal Office of Energy (BFE). It encompasses a detailed list of Swiss hydro generation units with comprehensive information on technical specifications. The list of nuclear generators is manually assembled and provided separately.", "filepath_CH_hydro = os.path.join(data_directory, 'CH', '2018 Statistik der Wasserkraftanlagen der Schweiz 31.12.2018.csv')\ndata_CH = pd.read_csv(filepath_CH_hydro, error_bad_lines=False, sep=';',decimal=',')\n\nfilepath_CH_nuclear = os.path.join(data_directory, 'CH', 'input_plant-list_CH_nuclear.csv')\ndata_nuclear_CH = pd.read_csv(filepath_CH_nuclear, encoding='utf-8', header=0, index_col=None)", "3.6.2 Processing of Hydro generator list\nIn this section, the imported generator list of hydro generators is standardized.\nConsolidation of columns", "# Merge columns \"ZE-Erste Inbetriebnahme\" and \"ZE-Letzte Inbetriebnahme\" to one column called \"Commissioned\"\ndata_CH['commissioned'] = data_CH[\n ['ZE-Erste-Inbetriebnahme', 'ZE-Letzte-Inbetriebnahme']].apply(\n lambda x: max(x[0], x[1]), axis=1)\n\n# Merge columns \"Bemerkung (1) - (10)\" to one column \"Comment\"\ndata_CH['comment'] = data_CH[['Bemerkung (1)',\n 'Bemerkung (2)',\n 'Bemerkung (3)',\n 'Bemerkung (4)',\n 'Bemerkung (5)',\n 'Bemerkung (6)',\n 'Bemerkung (7)',\n 'Bemerkung (8)',\n 'Bemerkung (9)',\n 'Bemerkung (10)']].apply(\n lambda x:\n '{}; {}; {}; {}; {}; {}; {}; {}; {}; {}'.format(\n x[0],\n x[1],\n x[2],\n x[3],\n x[4],\n x[5],\n x[6],\n x[7],\n x[8],\n x[9]), axis=1)\n\ndata_CH['comment'] = data_CH['comment'].str.replace('nan; ', '')\ndata_CH['comment'] = data_CH['comment'].str.replace('nan', '')", "Translation and harmonization of columns\nThe imported data is standardized with respect to the columns as defined in section 2.3. 
In a first step, existing and output-relevant columns are translated and remaining columns are deleted in a second step. Columns which do not exist in the data set but are required for the output are added in this process.", "# Translate columns\ndict_columns_CH = {'WKA-Name': 'name',\n                   'ZE-Standort': 'city',\n                   'WKA-Typ': 'technology',\n                   'ZE-Status': 'availability',\n                   'Inst. Turbinenleistung': 'capacity'}\ndata_CH.rename(columns=dict_columns_CH, inplace=True)\n\n# Adjust type of entries in column \"capacity\"\ndata_CH.capacity = data_CH.capacity.astype(float)\n\n# Adjust availabilities\ndict_availabilities_CH = {'im Normalbetrieb': '1',\n                          'im Bau': '0',\n                          'im Umbau': '0',\n                          'stillgelegt': '0'}\ndata_CH[\"availability\"].replace(dict_availabilities_CH, inplace=True)\n\n# List only operating plants\ndata_CH = data_CH[data_CH.availability != '0']\n\n# Apply general template of columns\ndata_CH = data_CH.reindex(columns=columns_sorted)", "Definition of generation technology types", "# Set energy source to \"Hydro\"\ndata_CH['energy_source'] = 'Hydro'\n\n# Adjust technologies\ndict_technologies_CH = {'L': 'Run-of-river',\n                        'S': 'Reservoir',\n                        'P': 'Pumped storage with natural inflow',\n                        'U': 'Pumped storage'}\ndata_CH[\"technology\"].replace(dict_technologies_CH, inplace=True)", "3.6.3 Merge hydro and nuclear power plant data", "# add source for hydro\ndata_CH[\"source\"] = \"https://www.bfe.admin.ch/bfe/de/home/versorgung/statistik-und-geodaten/geoinformation/geodaten/wasser/statistik-der-wasserkraftanlagen.html\"\n\n# Concat dataframes\ndata_CH = pd.concat([data_CH, data_nuclear_CH], ignore_index=True, sort=False)\n\ndata_CH.head()", "3.6.4 Additional information on geographic coordinates and EIC codes\nIn this section a manually compiled list is used to define the geographic coordinates of individual hydro power plants.", "data_CH = add_location_and_EIC('CH', data_CH)\n\ndata_CH.head()", "3.7 Italy IT\n3.7.1 Data import\nThe data is provided by the Italian 
transmission network operator TERNA. It encompasses a detailed list of Italian generation units of more than 100 MW.", "filepath_IT = os.path.join(data_directory, 'IT', '18.XLSX')\ndata_IT = pd.read_excel(filepath_IT, sheet_name='UPR PmaxOver 100MW') \n\ndata_IT.head()", "3.7.2 Translation and harmonization of columns\nThe imported data is standardized with respect to the columns as defined in section 2.3. In a first step, existing and output-relevant columns are translated and remaining columns are deleted in a second step. Columns which do not exist in the data set but are required for the output are added in this process.", "# Translate columns\ndict_columns_IT = {'Descrizione Impianto': 'name',\n                   'TIPOLOGIA': 'energy_source',\n                   'Comune': 'city',\n                   'PMAX [MW]': 'capacity',\n                   'Country': 'country',\n                   'Source': 'source',\n                   'Zona': 'additional_info'}\ndata_IT.rename(columns=dict_columns_IT, inplace=True)\n\n# Apply general template of columns\ndata_IT = data_IT.reindex(columns=columns_sorted)\n\n# Keep geographical information in column \"additional_info\"\ndata_IT['additional_info'] = data_IT[['additional_info']].apply(\n    lambda x: 'Zone: {}'.format(x[0]), axis=1)\n\n# Adjust types of entries in all columns\ndata_IT.capacity = data_IT.capacity.astype(float)", "3.7.3 Definition of energy sources\nOverall translation of all energy source types mentioned in the column \"energy_source\". 
Deletion of rows containing \"Wind\" and \"Geothermal\" as energy source.", "# Translate types of energy sources\ndict_energysources_IT = {'GEOTERMICO': 'Geothermal',\n                         'TERMOELETTRICO': 'Mixed fossil fuels',\n                         'IDROELETTRICO': 'Hydro',\n                         'EOLICO': 'Wind'}\ndata_IT[\"energy_source\"].replace(dict_energysources_IT, inplace=True)\n\n# Delete unwanted energy sources in column \"energy_source\"\ndata_IT = data_IT[data_IT.energy_source != 'Wind']\ndata_IT = data_IT[data_IT.energy_source != 'Geothermal']", "3.7.4 Additional information on geographic coordinates and EIC codes\nIn this section a manually compiled list is used to define the geographic coordinates of individual power plants.", "data_IT = add_location_and_EIC('IT', data_IT)\n\n# add source\ndata_IT[\"source\"] = \"http://www2018.terna.it/it-it/sistemaelettrico/transparencyreport/generation/installedgenerationcapacity.aspx\"\n\ndata_IT.head()", "3.8 Finland FI\n3.8.1 Data import\nThe data is provided by the Finnish Energy Authority. It encompasses a detailed list of Finnish generation units of at least one megavolt ampere [1 MVA].", "filepath_FI = os.path.join(data_directory, 'FI', 'Energiaviraston voimalaitosrekisteri.csv')\ndata_FI = pd.read_csv(filepath_FI, sep=';') \n\ndata_FI.head()", "3.8.2 Translation and harmonization of columns\nThe imported data is standardized with respect to the columns as defined in section 2.3. In a first step, existing and output-relevant columns are translated and remaining columns are deleted in a second step. 
Columns which do not exist in the data set but are required for the output are added in this process.", "# Generate entries in column \"chp\"\ndata_FI.loc[data_FI[\n    'Combined Heat and Power Production, Industry,Maximum, Total, MW'] > 0,\n            'chp'] = 'Yes'\ndata_FI.loc[data_FI[\n    'Combined Heat and Power Production, District Heating, Total, MW'] > 0,\n            'chp'] = 'Yes'\n\n# Rename columns\ndict_columns_FI = {'Name': 'name',\n                   'Company': 'company',\n                   'Type': 'type',\n                   'Address': 'street',\n                   'Town': 'city',\n                   'Postal code': 'postcode',\n                   'Maximum, total, MW': 'capacity',\n                   'Main fuel': 'energy_source',\n                   'Country': 'country',\n                   'Source': 'source'}\ndata_FI.rename(columns=dict_columns_FI, inplace=True)\n\n# Apply general template of columns\ndata_FI = data_FI.reindex(columns=columns_sorted)\n\n# Adjust types of entries in all columns\ndata_FI.capacity = data_FI.capacity.astype(float)", "3.8.3 Definition of energy sources\nOverall translation of all energy source types mentioned in the column \"energy_source\". 
Generation of entries for the column \"energy_source\" according to information given in the column \"type\".", "# Rename types of energy sources\ndict_energysources_FI = {'Biogas': 'Biomass and biogas',\n                         'Black liquor and concentrated liquors': 'Biomass and biogas',\n                         'Blast furnace gas': 'Other fossil fuels',\n                         'By-products from wood processing industry': 'Biomass and biogas',\n                         'Exothermic heat from industry': 'Other or unspecified energy sources',\n                         'Forest fuelwood': 'Biomass and biogas',\n                         'Gasified waste': 'Non-renewable waste',\n                         'Hard coal and anthracite': 'Hard coal',\n                         'Heavy distillates': 'Oil',\n                         'Industrial wood residues': 'Biomass and biogas',\n                         'Light distillates': 'Oil',\n                         'Medium heavy distillates': 'Oil',\n                         'Mixed fuels': 'Mixed fossil fuels',\n                         'Natural gas': 'Natural gas',\n                         'Nuclear energy': 'Nuclear',\n                         'Other by-products and wastes used as fuel': 'Other fossil fuels',\n                         'Other non-specified energy sources': 'Other or unspecified energy sources',\n                         'Peat': 'Biomass and biogas',\n                         ' ': 'Other or unspecified energy sources',\n                         np.nan: 'Other or unspecified energy sources'}\ndata_FI[\"energy_source\"].replace(dict_energysources_FI, inplace=True)\ndata_FI[\"energy_source\"].replace('NaN', np.nan, inplace=True)\n\n# Generate entries in column \"energy_source\" for hydro and wind stations according to column \"type\"\ndata_FI.loc[data_FI['type'] == 'Hydro power', 'energy_source'] = 'Hydro'\ndata_FI.loc[data_FI['type'] == 'Wind power', 'energy_source'] = 'Wind'", "3.8.4 Definition of generation technology types\nGeneration of entries for the column \"technology\" according to information given in the column \"energy_source\". 
Deletion of rows containing \"Wind\" as energy source.", "# Generate entries in column \"technology\" according to column \"energy_source\"\ndata_FI.loc[data_FI['energy_source'] == 'Nuclear',\n            'technology'] = 'Steam turbine'\ndata_FI.loc[data_FI['energy_source'] == 'Hard coal',\n            'technology'] = 'Steam turbine'\n\n# Delete unwanted energy source (wind) in column \"energy_source\"\ndata_FI = data_FI[data_FI.energy_source != 'Wind']", "3.8.5 Definition of generation type\nOverall translation of all types mentioned in the column \"type\" and subsequent translation check.", "# Rename types\ndict_types_FI = {'District heating CHP': 'CHP',\n                 'Hydro power': 'NaN',\n                 'Industry CHP': 'IPP',\n                 'Nuclear energy': 'NaN',\n                 'Separate electricity production': 'NaN',\n                 'Wind power': 'NaN'}\ndata_FI[\"type\"].replace(dict_types_FI, inplace=True)\ndata_FI[\"type\"].replace('NaN', np.nan, inplace=True)\n\n# drop solar generator (redundant)\ndata_FI = data_FI[data_FI.type != 'Solar']", "3.8.6 Additional information on geographic coordinates and EIC codes\nIn this section a manually compiled list is used to define the geographic coordinates of individual power plants.", "data_FI = add_location_and_EIC('FI', data_FI)\n\n# add source\ndata_FI[\"source\"] = \"https://energiavirasto.fi/toimitusvarmuus\"\n\ndata_FI.head()", "3.9 Spain ES\n3.9.1 Data import\nThe data is provided by the Spanish SEDE - Ministry of Industry, Energy and Tourism. It encompasses a detailed list of Spanish generation units with comprehensive information on technologies and energy fuels.", "filepath_ES = os.path.join(data_directory, 'ES', 'Registro_16_12_2019.csv')\ndata_ES = pd.read_csv(filepath_ES, error_bad_lines=False, sep=';',decimal=',') \n\ndata_ES.head()", "3.9.2 Translation and harmonization of columns\nOverall adjustment of all columns within the dataframe. Translation, addition, deletion, sorting of columns as well as adjustment of the column entries' types. 
Adjustment of the entries' units from kW to MW in the column \"capacity\" (corresponding to the net capacity in the original data set).", "# Select value of 'Potencia Instalada en MW' if gross capacity is empty\ndata_ES['Potencia Bruta Total en MW'] = np.where(data_ES['Potencia Bruta Total en MW'].isnull(),data_ES['Potencia Instalada en MW'],data_ES['Potencia Bruta Total en MW'])\n\n# Rename columns\ndict_columns_ES = {'Nombre del Titular de la Unidad de Producción': 'company',\n                   'Nombre de la Unidad de Producción': 'name',\n                   'Municipio de la Unidad de Producción': 'city',\n                   'CPostal del Titular': 'postcode',\n                   'Tecnología de la Unidad de Producción': 'technology',\n                   'Comment': 'comment',\n                   'Potencia Bruta Total en MW': 'capacity',\n                   'Tipo de Unidad de Producción': 'energy_source',\n                   'Fecha de la puesta en servicio de la Unidad de Producción': 'commissioned',\n                   'Country': 'country',\n                   'Source': 'source'}\ndata_ES.rename(columns=dict_columns_ES, inplace=True)\n\n#Fix capacity entries to float\nnumeric_capacity = []\nfor cap in data_ES.capacity:\n    if isinstance(cap, float):\n        numeric_capacity.append(cap)\n    else:\n        split_cap = cap.split(',')\n        if len(split_cap) == 1:\n            numeric_capacity.append(int(split_cap[0]))\n        elif len(split_cap) == 2:\n            numeric_capacity.append(int(split_cap[0]) + float('.' + split_cap[1]))\n        elif len(split_cap) == 3:\n            numeric_capacity.append(int(split_cap[0])*1000 + int(split_cap[1]) + float('.' 
+ split_cap[2]))\n else:\n numeric_capacity.append(np.nan)\n \ndata_ES[\"capacity\"] = numeric_capacity \n\n# Apply general template of columns\ndata_ES = data_ES.reindex(columns=columns_sorted)", "3.9.3 Definition of energy sources\nOverall translation of all energy sources types mentioned in the column \"energy_sources\".", "dict_energysources_ES = {'Biocombustibles liquidos': 'Biomass and biogas',\n 'Biogas': 'Biomass and biogas',\n 'Biogas de digestion': 'Biomass and biogas',\n 'Biogas de vertedero': 'Biomass and biogas',\n 'Biomasa industrial agricola': 'Biomass and biogas',\n 'Biomasa industrial forestal': 'Biomass and biogas',\n 'Biomasa primaria': 'Biomass and biogas',\n 'Calor residual': 'Other or unspecified energy sources',\n 'Carbon': 'Hard coal',\n 'CARBON IMPORTADO': 'Hard coal',\n 'Cultivos energeticos agricolas o forestales': 'Biomass and biogas',\n 'DIESEL': 'Oil',\n 'Energias residuales': 'Non-renewable waste',\n 'Fuel': 'Oil',\n 'FUEL-OIL 0,3': 'Oil',\n 'FUELOLEO': 'Oil',\n 'GAS DE REFINERIA': 'Natural gas',\n 'Gas natural': 'Natural gas',\n 'GAS NATURAL': 'Natural gas',\n 'Gas residual': 'Natural gas',\n 'Gasoleo': 'Oil',\n 'GASOLEO': 'Oil',\n 'HULLA+ANTRACITA': 'Hard coal',\n 'Licores negros': 'Biomass and biogas',\n 'LIGNITO NEGRO': 'Lignite',\n 'LIGNITO PARDO': 'Lignite',\n 'NUCLEAR': 'Nuclear',\n 'Propano': 'Natural gas',\n 'Residuo aprovechamiento forestal o selvicola': 'Other bioenergy and renewable waste',\n 'Residuos': 'Non-renewable waste',\n 'Residuos actividad agricolas o jardineria': 'Other bioenergy and renewable waste',\n 'Residuos industriales': 'Non-renewable waste',\n 'Residuos solidos urbanos': 'Non-renewable waste',\n 'RESIDUOS SOLIDOS URBANOS': 'Non-renewable waste',\n ' ': 'Other or unspecified energy sources',\n np.nan: 'Other or unspecified energy sources',\n 'HIDRÁULICA': 'Hydro',\n 'TERMONUCLEAR': 'Nuclear',\n 'TÉRMICA': 'Hard coal',\n 'TÉRMICA CLÁSICA': 'Hard 
coal'}\n\ndata_ES[\"energy_source\"].replace(dict_energysources_ES, inplace=True)\ndata_ES[\"energy_source\"].replace('NaN', np.nan, inplace=True)", "3.9.4 Definition of generation technology types\nOverall translation of all technology types mentioned in the column \"technology\".", "# Translate technologies\ndata_ES.loc[data_ES.technology == \"COGENERACIÓN\", \"chp\"] = \"yes\"\n\ndata_ES[\"technology\"].replace('COGENERACIÓN','chp', inplace=True)\n\ndict_technologies_ES = {'FLUYENTE': 'Run-of-river',\n 'EMBALSE': 'Reservoir',\n 'BOMBEO MIXTO': 'Pumped storage with natural inflow',\n 'CT CARBÓN': '',\n 'CN PWR': '',\n 'CN BWR': '',\n 'COGENERACIÓN': 'chp',\n 'Turbinas de Vapor de Fuel': '',\n 'CICLO COMBINADO': 'Combined cycle',\n 'Ciclo combinado configuración 2x1': 'Combined cycle',\n 'RESÍDUOS SÓLIDOS URBANOS': '',\n 'Turbinas de vapor de Carbón': 'Steam turbine',\n 'Ciclo combinado configuración 3x1': 'Combined cycle',\n 'Turbinas de gas aeroderivadas': 'Gas turbine',\n 'TURBINA DE GAS Y DE VAPOR': 'Gas turbine',\n 'BOMBEO PURO': 'Pumped storage',\n 'CT FUELOLEO': '',\n 'Grupos Diésel - 4T': '',\n 'MOTORES DIESEL': '',\n 'Turbinas de gas heavy duty': 'Gas turbine',\n 'Grupos Diésel - 2T': '',\n 'BOMBEO+ EOLICA': '',\n 'TURBINA DE GAS': 'Gas turbine',\n 'CT FUEL-GAS': '',\n }\ndata_ES.loc[:, \"technology\"] = data_ES[\"technology\"].replace(dict_technologies_ES)\n\ndata_ES.loc[data_ES.technology == \"chp\", \"chp\"] = \"yes\"", "3.9.5 Delete unwanted energy sources/power stations with no names and adjust commissioning year\nExclude renewable energy sources, delete power stations with no names and capacities, and adjust the format of commissioning year.", "# Delete unwanted energy source in column \"energy_source\" & \"technology\"\ndata_ES = data_ES[data_ES.energy_source != 'TERMOELÉCTRICA']\ndata_ES = data_ES[data_ES.energy_source != 'COGENERACIÓN']\ndata_ES = data_ES[data_ES.energy_source != 'EXPERIMENTAL']\ndata_ES = data_ES[data_ES.name != '']\n\n# Delete 
power stations with no name and no capacities\ndata_ES = data_ES[data_ES.name.notna()]\n\n# Map commissioned year to Timestamp col\ndata_ES['commissioned_year'] = pd.to_datetime(data_ES['commissioned'], format='%d.%m.%Y %H:%M')\n# Reassign commissioned col with year only\nmask = data_ES['commissioned_year'].notna()\ndata_ES.loc[mask, 'commissioned'] = data_ES.loc[mask].commissioned_year.apply(lambda x: x.year)\n# Drop the helper column\ndata_ES.drop('commissioned_year', axis=1, inplace=True)", "3.9.6 Additional information on geographic coordinates and EIC codes\nIn this section a manually compiled list is used to define the geographic coordinates of individual power plants.", "data_ES = add_location_and_EIC('ES', data_ES)\n\n# add source\ndata_ES[\"source\"] = \"https://sede.minetur.gob.es/en-US/datosabiertos/catalogo/registro-productores-electrica\"\n\ndata_ES.head()", "3.10 United Kingdom UK\n3.10.1 Data import\nThe data is provided by the British government's Statistical Office. It encompasses a detailed list of British generation units with comprehensive information on technologies and energy fuels.", "filepath_UK = os.path.join(data_directory, 'UK', 'DUKES_5.11_UK.csv')\ndata_UK = pd.read_csv(filepath_UK, sep=';') \n\ndata_UK.head()", "3.10.2 Translation and harmonization of columns\nThe imported data is standardized with respect to the columns as defined in section 2.3. In a first step, existing and output-relevant columns are translated and remaining columns are deleted in a second step. 
Columns which do not exist in the data set but are required for the output are added in this process.", "# Rename sixth column\ndata_UK.columns.values[6] = 'Location'\n\n\n# Rename columns\ndict_columns_UK = {'Company Name': 'company',\n                   'Station Name': 'name',\n                   'Installed Capacity (MW)': 'capacity',\n                   'Country': 'country',\n                   'Location': 'location',\n                   'Fuel': 'energy_source',\n                   'Year of commission or year generation began': 'commissioned',\n                   'Source': 'source'}\ndata_UK.rename(columns=dict_columns_UK, inplace=True)\n\n \ndict_regions_UK = {'East': 'England',\n                   'East Midlands': 'England',\n                   'London': 'England',\n                   'North East': 'England',\n                   'North West': 'England',\n                   'South East': 'England',\n                   'South West': 'England',\n                   'West Midlands': 'England',\n                   'Yorkshire and the Humber': 'England',\n                   'N Ireland': 'Northern Ireland'}\ndata_UK[\"location\"].replace(dict_regions_UK, inplace=True)\n\n# Store region information in column \"additional_info\"\ndata_UK['additional_info'] = data_UK[['location']].apply(\n    lambda x: 'Region: {}'.format(x[0]), axis=1)\n\n# Drop column \"Location\" after merger\ncolsToDrop = ['location']\ndata_UK = data_UK.drop(colsToDrop, axis=1)\n\n# Apply general template of columns\ndata_UK = data_UK.reindex(columns=columns_sorted)\n\n#Set specific territory to \"additional_info\"\ndata_UK['additional_info'] = data_UK['country']\n\n# Solve comma problem in capacity column and convert to float\ndata_UK.capacity = data_UK.capacity.str.replace(',', '').astype(float)", "3.10.3 Definition of generation technology types\nGeneration of entries for the column \"technology\" according to information given in the column \"energy_source\".", "# Generate entries in column \"technology\" according to column \"energy_source\"\ndata_UK.loc[data_UK['energy_source'] == 'Hydro / pumped storage', \n            'technology'] = 'Pumped storage'\ndata_UK.loc[data_UK['energy_source'] == 'Pumped storage',\n            'technology'] = 'Pumped 
storage'\ndata_UK.loc[data_UK['energy_source'] == 'Wind',\n            'technology'] = 'Onshore'\ndata_UK.loc[data_UK['energy_source'] == 'Wind (offshore)',\n            'technology'] = 'Offshore'\ndata_UK.loc[data_UK['energy_source'] == 'Nuclear',\n            'technology'] = 'Steam turbine'\ndata_UK.loc[data_UK['energy_source'] == 'CCGT',\n            'technology'] = 'Combined cycle'\ndata_UK.loc[data_UK['energy_source'] == 'OCGT',\n            'technology'] = 'Gas turbine'", "3.10.4 Definition of energy sources\nOverall translation of all energy source types mentioned in the column \"energy_source\" and subsequent translation check. Deletion of rows containing \"Wind\" or \"Solar\" as energy source.", "dict_energysources_UK = {'Biomass': 'Biomass and biogas',\n                         'Biomass / gas / waste derived fuel': 'Mixed fossil fuels',\n                         'Natural Gas': 'Natural gas',\n                         'CCGT': 'Natural gas',\n                         'Sour gas': 'Natural gas',\n                         'Coal': 'Hard coal',\n                         'Coal / biomass': 'Mixed fossil fuels',\n                         'Coal / biomass / gas / waste derived fuel': 'Mixed fossil fuels',\n                         'Coal / oil': 'Mixed fossil fuels',\n                         'Coal/oil': 'Mixed fossil fuels',\n                         'Diesel': 'Oil',\n                         'Gas': 'Natural gas',\n                         'Gas / oil': 'Mixed fossil fuels',\n                         'Diesel/gas oil': 'Mixed fossil fuels',\n                         'Gas oil': 'Oil',\n                         'Gas oil / kerosene': 'Oil',\n                         'Hydro': 'Hydro',\n                         'Hydro / pumped storage': 'Hydro',\n                         'Pumped Storage': 'Hydro',\n                         'Light oil': 'Oil',\n                         'Meat & bone meal': 'Other bioenergy and renewable waste',\n                         'Nuclear': 'Nuclear',\n                         'OCGT': 'Natural gas',\n                         'Oil': 'Oil',\n                         'Light oil ': 'Oil',\n                         'Pumped storage': 'Hydro',\n                         'Straw': 'Biomass and biogas',\n                         'Biomass (wood pellets, sunflower/oat husk pellets)': 'Biomass and biogas',\n                         'Biomass (woodchip)': 'Biomass and biogas',\n                         'Biomass (litter, woodchip)': 'Biomass and biogas',\n                         'Biomass (meat and bone meal)': 'Biomass and biogas',\n                         'Biomass (poultry litter, waste wood)': 'Biomass and biogas',\n                         'Biomass (straw)': 'Biomass and biogas',\n                         'Biomass (recycled wood)': 'Biomass and biogas',\n                         'Biomass (poultry litter, woodchip)': 'Biomass and 
biogas',\n                         'Biomass (wood pellets)': 'Biomass and biogas',\n                         'Waste (municipal solid waste)': 'Non-renewable waste',\n                         'Biomass (recycled wood, virgin wood)': 'Biomass and biogas',\n                         'Biomass (virgin wood)': 'Biomass and biogas',\n                         'Waste': 'Non-renewable waste',\n                         'Waste (anaerobic digestion)': 'Non-renewable waste',\n                         'Wind': 'Wind',\n                         'Wind (offshore)': 'Wind',\n                         'Wind (onshore)': 'Wind',\n                         'Solar': 'Solar'}\ndata_UK[\"energy_source\"].replace(dict_energysources_UK, inplace=True)\n\n# Delete unwanted energy sources\ndata_UK = data_UK[data_UK.energy_source != 'Wind']\ndata_UK = data_UK[data_UK.energy_source != 'Solar']", "3.10.5 Additional information on geographic coordinates and EIC codes\nIn this section a manually compiled list is used to define the geographic coordinates of individual power plants.", "data_UK = add_location_and_EIC('UK', data_UK)\n\n# add source\ndata_UK[\"source\"] = \"https://www.gov.uk/government/statistics/electricity-chapter-5-digest-of-united-kingdom-energy-statistics-dukes#content\"\n\ndata_UK.head()", "3.11 Norway NO\nThe data is provided by the Norwegian Water Resources and Energy Directorate. It encompasses a database on the installed capacity of thermal generators as well as on hydro generators.\n3.11.1 Data import", "filepath_NO_hydro = os.path.join(data_directory, 'NO', 'Vannkraftverk.csv')\ndata_NO_hydro = pd.read_csv(filepath_NO_hydro, \n                            skiprows=2,\n                            sep=\";\",\n                            decimal=\",\",\n                            header=0,\n                            index_col=False,\n                            encoding='latin-1')\n\nfilepath_NO_thermal = os.path.join(data_directory, 'NO', 'termiske-kraftverk-i-norge-2019.xlsx')\ndata_NO_thermal = pd.read_excel(filepath_NO_thermal,\n                                sheet_name='Ark1')", "3.11.2 Hydro generators\nTranslation and harmonization of columns\nThe imported data is standardized with respect to the columns as defined in section 2.3. In a first step, existing and output-relevant columns are translated and remaining columns are deleted in a second step. 
Columns which do not exist in the data set but are required for the output are added in this process.", "# Translate columns\ndict_columns_NO_hydro = {'Navn': 'name',\n                         'Type': 'technology',\n                         'Kommune': 'city',\n                         'Kommunenr.': 'postcode',\n                         'Maks ytelse [MW]': 'capacity',\n                         'Hovedeier': 'company',\n                         'Dato for første utnyttelse av fallet': 'commissioned',\n                         'Elspotområde': 'additional_info'}\ndata_NO_hydro.rename(columns=dict_columns_NO_hydro, inplace=True)\n\n# Apply general template of columns\ndata_NO_hydro = data_NO_hydro.reindex(columns=columns_sorted)\n\n# Fill with general information\ndata_NO_hydro['country'] = 'NO'\ndata_NO_hydro['energy_source'] = 'Hydro'\ndata_NO_hydro['additional_info'] = 'Zone: NO' + data_NO_hydro['additional_info'].astype(str)\n# Change commissioning date to year only\ndata_NO_hydro['commissioned'] = data_NO_hydro['commissioned'].apply(lambda x: x[0:4]).astype(int)", "Definition of generation technology types\nOverall translation of all technology types mentioned in the column \"technology\".", "# Add comment for plants with pump only (later categorized as pumped storage)\ndata_NO_hydro.loc[data_NO_hydro['technology'] == 'Pumpe', 'comment'] = 'Pump only'\n# Take absolute of negative capacity of plants with pump only\nmask = data_NO_hydro['technology'] == 'Pumpe'\ndata_NO_hydro.loc[mask, 'capacity'] = data_NO_hydro.loc[mask].capacity.apply(lambda x: abs(x))\n\n# Translate technologies\ndict_technologies_NO_hydro = {\n    'Kraftverk': 'Reservoir',\n    'Pumpekraftverk': 'Pumped storage',\n    'Pumpe': 'Pumped storage'\n    }\ndata_NO_hydro['technology'].replace(dict_technologies_NO_hydro, inplace=True)\n\ndata_NO_hydro.head()", "3.11.3 Thermal generators\nTranslation and harmonization of columns\nThe imported data is standardized with respect to the columns as defined in section 2.3. In a first step, existing and output-relevant columns are translated and remaining columns are deleted in a second step. 
Columns which do not exist in the data set but are required for the output are added in this process.", "# Translate columns\ndict_columns_NO_thermal = {'Kraftverk': 'name',\n                           'Kommentar': 'comment',\n                           'Brensel': 'energy_source',\n                           'Kommune': 'city',\n                           'Kommunenr.': 'postcode',\n                           'Installert effekt [MW]': 'capacity',\n                           'Idriftsettelsesår': 'commissioned',\n                           'Elspotområde': 'additional_info'}\ndata_NO_thermal.rename(columns=dict_columns_NO_thermal, inplace=True)\n\n# Apply general template of columns\ndata_NO_thermal = data_NO_thermal.reindex(columns=columns_sorted)\n\n# Fill with general information\ndata_NO_thermal['country'] = 'NO'\ndata_NO_thermal['additional_info'] = 'Zone: ' + data_NO_thermal['additional_info'].astype(str)", "Definition of energy sources\nOverall translation of all energy source types mentioned in the column \"energy_source\".", "# Translate energy sources\ndict_energy_sources_NO_thermal = {\n    'Avfallsforbrenning': 'Non-renewable waste',\n    'Varmegjenvinning': 'Other fossil fuels',\n    'Naturgass': 'Natural gas',\n    'Biogass fra avfall': 'Biomass and biogas',\n    'Ukjent': np.nan,\n    'Bark, returfiberavfall, slam, rivningsvirke og olje ': 'Mixed fossil fuels',\n    'Flis fra impregnert tre, avfallsforbrenning': 'Other bioenergy and renewable waste',\n    'Biogass': 'Biomass and biogas',\n    'CO gass': 'Other fossil fuels'\n    }\ndata_NO_thermal['energy_source'].replace(dict_energy_sources_NO_thermal,\n                                         inplace=True)\n\n# Delete unwanted energy sources\ndata_NO_thermal = data_NO_thermal[data_NO_thermal.energy_source != 'Biomass and biogas']\ndata_NO_thermal = data_NO_thermal[data_NO_thermal.energy_source != 'Other bioenergy and renewable waste']\n\ndata_NO_thermal.head()", "3.11.4 Combine hydro and thermal data frames", "# add sources\ndata_NO_hydro[\"source\"] = \"https://www.nve.no/energiforsyning/kraftproduksjon/vannkraft/vannkraftdatabase/#\"\ndata_NO_thermal[\"source\"] = 
\"https://www.nve.no/media/8967/termiske-kraftverk-i-norge-2019.xlsx\"\n\ndata_NO = pd.concat([data_NO_hydro, data_NO_thermal], ignore_index=True)\n\ndata_NO.head()", "3.12 Sweden SE\nThe data is provided by the power exchange Nordpool. It encompasses a detailed list of Swedish generation units with a capacity of more than 100 MW for 2014. Since no newer data on the Swedish generators has been found, the list from 2014 is still used in this release.", "filepath_SE = os.path.join(data_directory, 'SE', 'input_plant-list_SE.csv')\ndata_SE = pd.read_csv(filepath_SE, encoding='utf-8', header=0, index_col=None)\n\ndata_SE.head()", "3.13 Slovakia SK\nThe data is provided by the Slovakian utility Slovenské elektrárne a.s. (SEAS). It encompasses a detailed list of Slovak generation units with comprehensive information on technologies and energy fuels.", "filepath_SK = os.path.join(data_directory, 'SK', 'input_plant-list_SK.csv')\ndata_SK = pd.read_csv(filepath_SK, encoding='utf-8', header=0, index_col=None) \n\ndata_SK.head()", "3.14 Slovenia SI\nThe data is provided by several Slovenian utilities. The respective data links are given in the column \"source\". This list encompasses Slovenian generation units with comprehensive information on technologies and energy fuels.", "filepath_SI = os.path.join(data_directory, 'SI', 'input_plant-list_SI.csv')\ndata_SI = pd.read_csv(filepath_SI, encoding='utf-8')\n\ndata_SI.head()", "3.15 Austria AT\nThe data for conventional power plants is provided by several Austrian utilities. The respective data links are given in the column \"source\". The specifications of Austrian hydro power plants, however, are based solely on data from Verbund AG. 
The resulting list encompasses Austrian generation units with comprehensive information on technologies and energy fuels.\n3.15.1 Data import", "filepath_AT_hydro = os.path.join(data_directory, 'AT', 'input_plant-list_AT_hydro.csv')\ndata_AT_hydro = pd.read_csv(filepath_AT_hydro, encoding=\"latin1\")\n\nfilepath_AT_thermal = os.path.join(data_directory, 'AT', 'input_plant-list_AT_thermal.csv')\ndata_AT_thermal = pd.read_csv(filepath_AT_thermal, encoding=\"latin1\")", "3.15.2 Translation and harmonization of columns\nThe imported data is standardized with respect to the columns as defined in section 2.3. In a first step, existing and output-relevant columns are translated and remaining columns are deleted in a second step. Columns which do not exist in the data set but are required for the output are added in this process.", "#Delete MW in capacity column\ndata_AT_hydro.capacity = data_AT_hydro.capacity.apply(lambda x: x.replace('MW',''))\n\n#Apply general template of columns\ndata_AT_hydro = data_AT_hydro.reindex(columns=columns_sorted)\ndata_AT_thermal = data_AT_thermal.reindex(columns=columns_sorted)", "3.15.3 Combine hydro and thermal data frames", "data_AT = pd.concat([data_AT_hydro, data_AT_thermal], ignore_index=True)\n\ndata_AT.head()", "3.16 Denmark DK\nThe data is assembled using information from several websites. The sources can be found within the document. It encompasses a detailed list of Danish generation units with comprehensive information on technologies and energy fuels.\n3.16.1 Data import", "filepath_DK = os.path.join(data_directory, 'DK', 'input_plant-list_DK.csv')\ndata_DK = pd.read_csv(filepath_DK, encoding='utf-8', header=0, index_col=None)", "3.16.2 Translation and harmonization of columns\nAll generators that are not available or only partly available are dropped. 
The imported data is then standardized with respect to the columns as defined in section 2.3.", "# List only operating plants\ndata_DK = data_DK[data_DK.availability != '0']\ndata_DK = data_DK[data_DK.availability != 'partly']\n\n#Drop unwanted columns\ndata_DK = data_DK.drop('availability', axis=1)\n\n# Apply general template of columns\ndata_DK=data_DK.reindex(columns=columns_sorted)\n\ndata_DK.head()", "4. Consolidation of processed country data\nIn the following, the national datasets are consolidated to a single European dataset. Unfortunately, the Belgian dataset cannot be integrated due to the copyright by the data owner ELIA.", "dataframes = [data_BE,\n data_NL,\n data_FR,\n data_PL,\n data_CZ,\n data_CH,\n data_IT,\n data_FI,\n data_ES,\n data_UK,\n data_NO,\n data_SE,\n data_SK,\n data_SI,\n data_AT,\n data_DK]\n\ndata_EU = pd.concat(dataframes, sort=False)\n\ndata_EU.head()", "4.1 Implementation of energy source levels", "# Import energy source level definition\nenergy_source_levels = pd.read_csv(\n os.path.join('input', 'energy_source_levels.csv'), index_col=None, header=0)\n\n# Merge energy source levels to data set\ndata_EU = data_EU.reset_index().merge(\n energy_source_levels,\n how='left',\n left_on='energy_source',\n right_on='energy_source_level_1').drop_duplicates(\n subset=['name',\n 'city',\n 'country',\n 'capacity'], keep='first').set_index('name')\n\ndata_EU = data_EU.reset_index().merge(\n energy_source_levels,\n how='left',\n left_on='energy_source',\n right_on='energy_source_level_2').drop_duplicates(\n subset=['name',\n 'city',\n 'country',\n 'capacity'], keep='first').set_index('name')\n\ndata_EU = data_EU.reset_index().merge(\n energy_source_levels,\n how='left',\n left_on='energy_source',\n right_on='energy_source_level_3').drop_duplicates(\n subset=['name',\n 'city',\n 'country',\n 'capacity'], keep='first').set_index('name')\n\n# Combine different energy source levels created by merge\ndata_EU['energy_source_level_1'] = data_EU[\n 
['energy_source_level_1',\n 'energy_source_level_1_x',\n 'energy_source_level_1_y']].fillna('').sum(axis=1)\n\ndata_EU['energy_source_level_2'] = data_EU[\n ['energy_source_level_2',\n 'energy_source_level_2_y']].fillna('').sum(axis=1)\n\ndata_EU['energy_source_level_3'] = data_EU[\n ['energy_source_level_3']].fillna('').sum(axis=1)\n\n# Drop auxiliary columns due to merge\ncolsToDrop = ['energy_source_level_1_y',\n 'energy_source_level_2_y',\n 'energy_source_level_3_y',\n 'energy_source_level_1_x',\n 'energy_source_level_2_x',\n 'energy_source_level_3_x']\ndata_EU = data_EU.drop(colsToDrop, axis=1)\n\n# replace false energy source levels for plants without energy source in original data\nindex_with_NAN = data_EU[data_EU.energy_source.isna()].index\n\ndata_EU.loc[index_with_NAN,['energy_source_level_1', \n 'energy_source_level_2', \n 'energy_source_level_3']] = np.NaN\n\ndata_EU.loc[index_with_NAN, 'comment'] = 'Energy source not in original data'\n\n\ndata_EU.head()", "4.2 Definition of structure and data types\nFirst, we define the ordering of the columns. Secondly, the data types are redefined. 
At the moment, this has the drawback that empty columns are redefined as float instead of object.", "columns_sorted_output = ['name',\n 'company',\n 'street',\n 'postcode',\n 'city',\n 'country',\n 'capacity',\n 'energy_source',\n 'technology',\n 'chp',\n 'commissioned',\n 'type',\n 'lat',\n 'lon',\n 'eic_code',\n 'energy_source_level_1',\n 'energy_source_level_2',\n 'energy_source_level_3',\n 'additional_info',\n 'comment',\n 'source']\n\n# Set ordering of columns\ndata_EU = data_EU.reset_index()\ndata_EU = data_EU.reindex(columns=columns_sorted_output)\n\n# Set data types for columns\ndata_EU = data_EU.astype(str)\ndata_EU[['capacity', 'commissioned', 'lat', 'lon']] = data_EU[\n ['capacity', 'commissioned', 'lat', 'lon']].astype(float)\n\ndata_EU.replace('nan', np.nan, inplace=True)\n\n# data_EU.dtypes\n\n# Set index\ndata_EU = data_EU.set_index('name')\n\ndata_EU.head()", "5. Result export\nObtain the DE list and concatenate. Note that the stand-alone DE list contains more information.", "# call DE script if not already executed\n# %run ./download_and_process_DE.ipynb\n\ndata_DE = pd.read_csv('output/conventional_power_plants_DE.csv', index_col=0)\n\ndata_DE = data_DE.rename(columns={'eic_code_plant': 'eic_code',\n 'capacity_net_bnetza': 'capacity'})\ndata_DE[\"source\"] = \"BNetzA/UBA\"\ndata_DE[\"additional_info\"] = \"\"\n\ndata_EU = pd.concat([data_EU, data_DE.loc[:, data_EU.columns]])", "Write the final list to file.", "output_path = 'output'\n\ndata_EU.to_csv(os.path.join(\n output_path, 'conventional_power_plants_EU.csv'),\n encoding='utf-8',\n index_label='name')\n\ndata_EU.to_excel(\n os.path.join(output_path, 'conventional_power_plants_EU.xlsx'),\n sheet_name='plants',\n index_label='name')\n\ndata_EU.to_sql(\n 'conventional_power_plants_EU',\n sqlite3.connect(os.path.join(output_path, 'conventional_power_plants.sqlite')),\n if_exists="replace",\n index_label='name')", "End of script." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
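The export cells in the row above write `data_EU` out with pandas' `to_csv`, `to_excel`, and `to_sql`. As a minimal stdlib-only sketch of what the SQLite step amounts to — with a hypothetical three-column schema standing in for the full plant table, not the notebook's real columns:

```python
import sqlite3

def export_plants(rows, db_path=":memory:"):
    """Write plant records to a SQLite table, mirroring what pandas'
    DataFrame.to_sql(..., if_exists="replace") does for data_EU."""
    con = sqlite3.connect(db_path)
    # if_exists="replace" corresponds to dropping and recreating the table
    con.execute("DROP TABLE IF EXISTS conventional_power_plants_EU")
    con.execute(
        "CREATE TABLE conventional_power_plants_EU "
        "(name TEXT, country TEXT, capacity REAL)"
    )
    con.executemany(
        "INSERT INTO conventional_power_plants_EU VALUES (?, ?, ?)", rows
    )
    con.commit()
    return con

con = export_plants([("Plant A", "DK", 410.0), ("Plant B", "FR", 900.0)])
total = con.execute(
    "SELECT SUM(capacity) FROM conventional_power_plants_EU"
).fetchone()[0]
print(total)  # 1310.0
```

Queries against the resulting file then work exactly as against the notebook's `conventional_power_plants.sqlite` output.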
Danghor/Formal-Languages
Ply/Symbolic-Calculator.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open (\"../style.css\", \"r\") as file:\n css = file.read()\nHTML(css)", "A Simple Symbolic Calculator\nThis file shows how a simple symbolic calculator can be implemented using Ply.\nSpecification of the Scanner", "import ply.lex as lex\n\ntokens = [ 'NUMBER', 'IDENTIFIER', 'ASSIGN_OP' ]", "The token NUMBER specifies a fully featured floating point number.", "def t_NUMBER(t):\n r'0|[1-9][0-9]*(\\.[0-9]+)?(e[+-]?([1-9][0-9]*))?'\n t.value = float(t.value)\n return t", "The token IDENTIFIER specifies the name of a variable.", "def t_IDENTIFIER(t):\n r'[a-zA-Z][a-zA-Z0-9_]*'\n return t", "The token ASSIGN_OP specifies the assignment operator.", "def t_ASSIGN_OP(t):\n r':='\n return t\n\nliterals = ['+', '-', '*', '/', '(', ')', ';']\n\nt_ignore = ' \\t'\n\ndef t_newline(t):\n r'\\n+'\n t.lexer.lineno += t.value.count('\\n')\n\ndef t_error(t):\n print(f\"Illegal character '{t.value[0]}'\")\n t.lexer.skip(1)\n\n__file__ = 'main'\n\nlexer = lex.lex()", "Specification of the Parser", "import ply.yacc as yacc", "The start variable of our grammar is stmnt.", "start = 'stmnt'", "There are two grammar rules for stmnts:\nstmnt : IDENTIFIER \":=\" expr \";\"\n | expr ';'\n ;\n- If a stmnt is an assignment, the expression on the right hand side of the assignment operator is \n evaluated and the value is stored in the dictionary Names2Values. The key used in this dictionary\n is the name of the variable on the left hand side of the assignment operator.\n- If a stmnt is an expression, the expression is evaluated and the result of this evaluation is printed.\nIt is <b>very important</b> that in the grammar rules below the : is surrounded by space characters, for otherwise Ply will throw mysterious error messages at us!\nBelow, Names2Values is a dictionary mapping variable names to their values. 
It will be defined later.", "def p_stmnt_assign(p):\n \"stmnt : IDENTIFIER ASSIGN_OP expr ';'\"\n Names2Values[p[1]] = p[3]\n\ndef p_stmnt_expr(p):\n \"stmnt : expr ';'\"\n print(p[1])", "An expr is a sequence of prods that are combined with the operators + and -.\nThe corresponding grammar rules are:\nexpr : expr '+' prod\n | expr '-' prod\n | prod\n ;", "def p_expr_plus(p):\n \"expr : expr '+' prod\"\n p[0] = p[1] + p[3]\n \ndef p_expr_minus(p):\n \"expr : expr '-' prod\"\n p[0] = p[1] - p[3]\n \ndef p_expr_prod(p):\n \"expr : prod\"\n p[0] = p[1]", "A prod is a sequence of factors that are combined with the operators * and /.\nThe corresponding grammar rules are:\nprod : prod '*' factor\n | prod '/' factor\n | factor\n ;", "def p_prod_mult(p):\n \"prod : prod '*' factor\"\n p[0] = p[1] * p[3]\n \ndef p_prod_div(p):\n \"prod : prod '/' factor\"\n p[0] = p[1] / p[3]\n \ndef p_prod_factor(p):\n \"prod : factor\"\n p[0] = p[1]", "A factor is either an expression in parentheses, a number, or an identifier.\nfactor : '(' expr ')'\n | NUMBER\n | IDENTIFIER\n ;", "def p_factor_group(p):\n \"factor : '(' expr ')'\"\n p[0] = p[2]\n\ndef p_factor_number(p):\n \"factor : NUMBER\"\n p[0] = p[1]\n\ndef p_factor_id(p):\n \"factor : IDENTIFIER\"\n p[0] = Names2Values.get(p[1], float('nan'))\n\ndef p_error(p):\n if p:\n print(f'Syntax error at {p.value} in line {p.lexer.lineno}.')\n else:\n print('Syntax error at end of input.')", "Setting the optional argument write_tables to False <B style=\"color:red\">is required</B> to prevent an obscure bug where the parser generator tries to read an empty parse table.", "parser = yacc.yacc(write_tables=False, debug=True)", "Let's look at the action table that is generated.", "!cat parser.out\n\nNames2Values = {}\n\ndef main():\n while True:\n s = input('calc > ')\n if s == '':\n break\n yacc.parse(s)\n\nmain()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
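The yacc rules in the notebook above encode operator precedence structurally (expr → prod → factor). If Ply is not installed, the same three-level grammar can be sanity-checked with a hand-written recursive-descent evaluator — a sketch for illustration, not part of the notebook:

```python
import re

# a token is either a number or any single non-space character
TOKEN = re.compile(r"\s*(?:(\d+(?:\.\d+)?)|(.))")

def tokenize(s):
    for number, op in TOKEN.findall(s):
        yield float(number) if number else op

def evaluate(s):
    """Evaluate expr/prod/factor with the same precedence as the yacc grammar."""
    tokens = list(tokenize(s)) + ["$"]   # "$" marks end of input
    pos = 0
    def peek(): return tokens[pos]
    def advance():
        nonlocal pos
        pos += 1
    def expr():                          # expr : expr ('+'|'-') prod | prod
        value = prod()
        while peek() in ("+", "-"):
            op = peek(); advance()
            value = value + prod() if op == "+" else value - prod()
        return value
    def prod():                          # prod : prod ('*'|'/') factor | factor
        value = factor()
        while peek() in ("*", "/"):
            op = peek(); advance()
            value = value * factor() if op == "*" else value / factor()
        return value
    def factor():                        # factor : '(' expr ')' | NUMBER
        tok = peek(); advance()
        if tok == "(":
            value = expr()
            advance()                    # consume ")"
            return value
        return tok                       # a number
    return expr()

print(evaluate("2 + 3 * 4"))    # 14.0
print(evaluate("(2 + 3) * 4"))  # 20.0
```

Because `prod` sits below `expr` in the call chain, `*` and `/` bind tighter than `+` and `-`, which is exactly what the layered grammar achieves in the Ply version.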
Serulab/Py4Bio
notebooks/Chapter 6 - Code Modularizing.ipynb
mit
[ "Python for Bioinformatics\n\nThis Jupyter notebook is intended to be used alongside the book Python for Bioinformatics\nChapter 6: Code Modularizing", "len('Hello')", "Listing 6.1: netchargefn: Function to calculate the net charge of a protein", "def protcharge(aa_seq):\n \"\"\"Returns the net charge of a protein sequence\"\"\"\n protseq = aa_seq.upper()\n charge = -0.002\n aa_charge = {'C':-.045, 'D':-.999, 'E':-.998, 'H':.091,\n 'K':1, 'R':1, 'Y':-.001}\n for aa in protseq:\n charge += aa_charge.get(aa,0)\n return charge\n\nprotcharge('EEARGPLRGKGDQKSAVSQKPRSRGILH')\n\nprotcharge()", "Listing 6.2: netchargefn: Function that returns two values", "def charge_and_prop(aa_seq):\n \"\"\" Returns the net charge of a protein sequence\n and proportion of charged amino acids\n \"\"\"\n protseq = aa_seq.upper()\n charge = -0.002\n cp = 0\n aa_charge = {'C':-.045, 'D':-.999, 'E':-.998, 'H':.091,\n 'K':1, 'R':1, 'Y':-.001}\n for aa in protseq:\n charge += aa_charge.get(aa,0)\n if aa in aa_charge:\n cp += 1\n prop = 100.*cp/len(aa_seq)\n return (charge,prop)\n\ncharge_and_prop('EEARGPLRGKGDQKSAVSQKPRSRGILH')\n\ncharge_and_prop('EEARGPLRGKGDQKSAVSQKPRSRGILH')[1]", "Listing 6.3: convertlist.py: Converts a list into a text file", "def save_list(input_list, file_name):\n \"\"\"A list (input_list) is saved in a file (file_name)\"\"\"\n with open(file_name, 'w') as fh:\n for item in input_list:\n fh.write('{0}\\n'.format(item))\n return None\n\ndef duplicate(x):\n y = 1\n print('y = {0}'.format(y))\n return(2*x)\n\nduplicate(5)\n\ny\n\ndef duplicate(x):\n print('y = {0}'.format(y))\n return(2*x)\n\nduplicate(5)\n\ny = 3\ndef duplicate(x):\n print('y = {0}'.format(y))\n return(2*x)\n\nduplicate(5)\n\ny = 3\ndef duplicate(x):\n y = 1\n print('y = {0}'.format(y))\n return(2*x)\n\nduplicate(5)\n\ndef test(x):\n global z\n z = 10\n print('z = {0}'.format(z))\n return x*2\n\nz = 1\ntest (4)\n\nz", "Listing 6.4: list2textdefault.py: Function with a default parameter", "def 
save_list(input_list, file_name='temp.txt'):\n \"\"\"A list (input_list) is saved in a file (file_name)\"\"\"\n with open(file_name, 'w') as fh:\n for item in input_list:\n fh.write('{0}\\n'.format(item))\n return None\n\n save_list(['MS233','MS772','MS120','MS93','MS912'])", "Listing 6.5: getaverage.py: Function to calculate the average of values entered\nas parameters", "def average(*numbers):\n if len(numbers)==0:\n return None\n else:\n total = sum(numbers)\n return total / len(numbers)\n\naverage(2,3,4,3,2)\n\naverage(2,3,4,3,2,1,8,10)", "Listing 6.6: list2text2.py: Converts a list into a text file, using print and *", "def save_list(input_list, file_name='temp.txt'):\n \"\"\"A list (input_list) is saved to a file (file_name)\"\"\"\n with open(file_name, 'w') as fh:\n print(*input_list, sep='\\n', file=fh)\n return None", "Listing 6.7: list2text2.py: Function that accepts a variable number of arguments", "def commandline(name, **parameters):\n line = ''\n for item in parameters:\n line += ' -{0} {1}'.format(item, parameters[item])\n return name + line\n\ncommandline('formatdb', t='Caseins', i='indata.fas')\n\n commandline('formatdb', t='Caseins', i='indata.fas', p='F')", "Listing 6.8: allprimes.py: Function that returns all prime numbers up to a given\nvalue", "def is_prime(n):\n \"\"\"Returns True if n is prime, False if not\"\"\"\n for i in range(2,n-1):\n if n%i == 0:\n return False\n return True\n\ndef all_primes(n):\n primes = []\n for number in range(1,n):\n if is_prime(number):\n primes.append(number)\n return primes", "Listing 6.9: allprimesg.py: Generator that replaces all_primes() in code 6.8.", "def g_all_primes(n):\n for number in range(1,n):\n if is_prime(number):\n yield number", "Modules and Packages", "# utils.py file\ndef save_list(input_list, file_name='temp.txt'):\n \"\"\"A list (input_list) is saved to a file (file_name)\"\"\"\n with open(file_name, 'w') as fh:\n print(*input_list, sep='\\n', file=fh)\n return None", "Since utils.py is not present in 
this shell, the following command will retrieve this file from GitHub and store it in the local shell so it is available for importing by Python", "!curl https://raw.githubusercontent.com/Serulab/Py4Bio/master/code/ch6/utils.py -o utils.py\n\nimport utils\nutils.save_list([1,2,3])\n\n!cat temp.txt" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
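The chapter's listings 6.5, 6.7, and 6.9 above can be condensed into a few self-contained lines. This sketch deviates slightly from the book's code (the prime generator starts at 2, so 1 is not reported as prime) and relies on `**kwargs` preserving call order, which holds from Python 3.7 on:

```python
def average(*numbers):
    """Listing 6.5-style: any number of positional arguments."""
    return sum(numbers) / len(numbers) if numbers else None

def commandline(name, **parameters):
    """Listing 6.7-style: keyword arguments become CLI flags."""
    return name + "".join(f" -{k} {v}" for k, v in parameters.items())

def g_all_primes(n):
    """Listing 6.9-style generator, with an inline trial-division test."""
    for number in range(2, n):
        if all(number % i for i in range(2, number)):
            yield number

print(average(2, 3, 4, 3, 2))                          # 2.8
print(commandline("formatdb", t="Caseins", i="indata.fas"))
# formatdb -t Caseins -i indata.fas
print(list(g_all_primes(12)))                          # [2, 3, 5, 7, 11]
```

The generator version never materializes the whole list, which is the point of replacing the list-building `all_primes` with `yield`.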
eyaltrabelsi/my-notebooks
Lectures/Debugging Notebooks.ipynb
mit
[ "Debugging Notebooks\nNaive Way - print", "import random\ndef find_max (values):\n max = 0\n print(f\"Initial max is {max}\")\n for val in values:\n if val > max:\n max = val\n return max\n\nsample = random.sample(range(100), 10)\nfind_max(sample)", "Advantages\n\nEasy \nNo installation required\n\nDisadvantages:\n\nHard to pinpoint error-prone locations\nCan be spammy\n\n\nClassical Way - PDB\nPDB is used in the following ways:\n\nusing \"breakpoint()\" since python 3.7.\nusing \"from IPython.core.debugger import set_trace;set_trace()\" for python notebooks.\nusing \"import pdb; pdb.set_trace()\".", "import random\ndef find_max (values):\n max = 0\n import pdb; pdb.set_trace()\n for val in values:\n if val > max:\n max = val\n return max\n\nsample = random.sample(range(100), 10)\nfind_max(sample[:-3])", "A nice cheatsheet:\n\nAdvantages\n\nNo installation required\nDynamic\nMature:\nMore features\nDocumentation (Stackoverflow ftw)\nLess bugs\n\n\n\nDisadvantages:\n\nVery scary\nLearning curve\n\nAdditional resources:\n\nIntroduction to PDB\nThe Glory of pdb's set trace \n\n\nBetter Way Jupyter Notebooks - pixie_debugger", "import contextlib\n\nwith contextlib.redirect_stdout(None):\n import pixiedust\n\n%%pixie_debugger\nimport random\ndef find_max (values):\n max = 0\n for val in values:\n if val > max:\n max = val\n return max\nfind_max(random.sample(range(100), 10))", "Advantages\n\nEasy \nDynamic\n\nDisadvantages:\n\nAdditional installation required\nNot mature:\nDocumentation\nSupported evaluation\nWorking on Jupyter notebooks but not in jupyterlab\n\n\n\nSetup:\n\npip install pixiedust \n\nAdditional resources:\n\nblog post\n\n\nBetter Way Jupyter lab - xpython", "import random\ndef find_max (values):\n max = 0\n for val in values:\n if val > max:\n max = val\n return max\nfind_max(random.sample(range(100), 10))", "Advantages\n\nEasy \nDynamic\nNo additional code\n\nDisadvantages:\n\nAdditional installation required\nRequires a different interpreter\nNot 
mature:\nDocumentation\nNo evaluation change\nInstallation problematic for some environments\n\n\n\nSetup:\nFor each conda environment:\n * conda install -y -c conda-forge xeus-python=0.6.12 notebook=6 ptvsd\n * jupyter labextension install @jupyterlab/debugger\n\nHonorable mentions\n\nPDB On steroids-ipdb\nPDB With gui-pudb" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
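A detail worth noticing in every `find_max` variant of the notebook above: the running maximum is initialised as `max = 0`, so the function silently returns 0 for all-negative input — exactly the kind of bug a `pdb` or PixieDust session would surface. A sketch of the fix (seeding from the data itself; renaming the variable also stops shadowing the builtin `max`):

```python
import random

def find_max(values):
    """Fixed find_max: start from the first element instead of 0,
    so all-negative sequences are handled correctly."""
    if not values:
        raise ValueError("find_max() needs a non-empty sequence")
    best = values[0]
    for val in values[1:]:
        if val > best:
            best = val
    return best

print(find_max([-7, -3, -12]))          # -3 (the buggy version returns 0 here)

sample = random.sample(range(100), 10)
print(find_max(sample) == max(sample))  # True
```

Stepping through the buggy version with `p max, val` at the breakpoint makes the faulty initial value immediately visible.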
jdossgollin/CWC_ANN
Week05/notebooks/edward_getting_started.ipynb
mit
[ "Your first Edward program\nnote this is just copied from edward: see https://github.com/blei-lab/edward/blob/master/notebooks/getting_started.ipynb\nProbabilistic modeling in Edward uses a simple language of random variables. Here we will show a Bayesian neural network. It is a neural network with a prior distribution on its weights.\nA webpage version is available at \nhttp://edwardlib.org/getting-started.", "%matplotlib inline\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport edward as ed\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf\n\nfrom edward.models import Normal\n\nplt.style.use('ggplot')\n\ndef build_toy_dataset(N=50, noise_std=0.1):\n x = np.linspace(-3, 3, num=N)\n y = np.cos(x) + np.random.normal(0, noise_std, size=N)\n x = x.astype(np.float32).reshape((N, 1))\n y = y.astype(np.float32)\n return x, y\n\n\ndef neural_network(x, W_0, W_1, b_0, b_1):\n h = tf.tanh(tf.matmul(x, W_0) + b_0)\n h = tf.matmul(h, W_1) + b_1\n return tf.reshape(h, [-1])", "First, simulate a toy dataset of 50 observations with a cosine relationship.", "ed.set_seed(42)\n\nN = 50 # number of data points\nD = 1 # number of features\n\nx_train, y_train = build_toy_dataset(N)", "Next, define a two-layer Bayesian neural network. Here, we define the neural network manually with tanh nonlinearities.", "W_0 = Normal(loc=tf.zeros([D, 2]), scale=tf.ones([D, 2]))\nW_1 = Normal(loc=tf.zeros([2, 1]), scale=tf.ones([2, 1]))\nb_0 = Normal(loc=tf.zeros(2), scale=tf.ones(2))\nb_1 = Normal(loc=tf.zeros(1), scale=tf.ones(1))\n\nx = x_train\ny = Normal(loc=neural_network(x, W_0, W_1, b_0, b_1),\n scale=0.1 * tf.ones(N))", "Next, make inferences about the model from data. We will use variational inference. 
Specify a normal approximation over the weights and biases.", "qW_0 = Normal(loc=tf.Variable(tf.random_normal([D, 2])),\n scale=tf.nn.softplus(tf.Variable(tf.random_normal([D, 2]))))\nqW_1 = Normal(loc=tf.Variable(tf.random_normal([2, 1])),\n scale=tf.nn.softplus(tf.Variable(tf.random_normal([2, 1]))))\nqb_0 = Normal(loc=tf.Variable(tf.random_normal([2])),\n scale=tf.nn.softplus(tf.Variable(tf.random_normal([2]))))\nqb_1 = Normal(loc=tf.Variable(tf.random_normal([1])),\n scale=tf.nn.softplus(tf.Variable(tf.random_normal([1]))))", "Defining tf.Variable allows the variational factors’ parameters to vary. They are initialized randomly. The standard deviation parameters are constrained to be greater than zero according to a softplus transformation.", "# Sample functions from variational model to visualize fits.\nrs = np.random.RandomState(0)\ninputs = np.linspace(-5, 5, num=400, dtype=np.float32)\nx = tf.expand_dims(inputs, 1)\nmus = tf.stack(\n [neural_network(x, qW_0.sample(), qW_1.sample(),\n qb_0.sample(), qb_1.sample())\n for _ in range(10)])\n\n# FIRST VISUALIZATION (prior)\n\nsess = ed.get_session()\ntf.global_variables_initializer().run()\noutputs = mus.eval()\n\nfig = plt.figure(figsize=(10, 6))\nax = fig.add_subplot(111)\nax.set_title(\"Iteration: 0\")\nax.plot(x_train, y_train, 'ks', alpha=0.5, label='(x, y)')\nax.plot(inputs, outputs[0].T, 'r', lw=2, alpha=0.5, label='prior draws')\nax.plot(inputs, outputs[1:].T, 'r', lw=2, alpha=0.5)\nax.set_xlim([-5, 5])\nax.set_ylim([-2, 2])\nax.legend()\nplt.show()", "Now, run variational inference with the Kullback-Leibler divergence in order to infer the model’s latent variables with the given data. We specify 1000 iterations.", "inference = ed.KLqp({W_0: qW_0, b_0: qb_0,\n W_1: qW_1, b_1: qb_1}, data={y: y_train})\ninference.run(n_iter=1000, n_samples=5)", "Finally, criticize the model fit. Bayesian neural networks define a distribution over neural networks, so we can perform a graphical check. 
Draw neural networks from the inferred model and visualize how well it fits the data.", "# SECOND VISUALIZATION (posterior)\n\noutputs = mus.eval()\n\nfig = plt.figure(figsize=(10, 6))\nax = fig.add_subplot(111)\nax.set_title(\"Iteration: 1000\")\nax.plot(x_train, y_train, 'ks', alpha=0.5, label='(x, y)')\nax.plot(inputs, outputs[0].T, 'r', lw=2, alpha=0.5, label='posterior draws')\nax.plot(inputs, outputs[1:].T, 'r', lw=2, alpha=0.5)\nax.set_xlim([-5, 5])\nax.set_ylim([-2, 2])\nax.legend()\nplt.show()", "The model has captured the cosine relationship between $x$ and $y$ in the observed domain.\nTo learn more about Edward, delve in!\nIf you prefer to learn via examples, then check out some\ntutorials." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
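Running the Edward cells above requires TF1-era dependencies. The prior-predictive idea, however — draw weights from the N(0, 1) priors and push inputs through the two-layer tanh network — can be illustrated dependency-free. This sketch assumes the notebook's shapes (D=1 input, two hidden units, one output):

```python
import math
import random

def neural_network(x, W_0, W_1, b_0, b_1):
    # two hidden tanh units feeding one linear output, as in the Edward model
    h = [math.tanh(x * w + b) for w, b in zip(W_0, b_0)]
    return sum(hj * w for hj, w in zip(h, W_1)) + b_1

random.seed(42)
# one draw from the N(0, 1) prior over all weights and biases
W_0 = [random.gauss(0, 1) for _ in range(2)]
W_1 = [random.gauss(0, 1) for _ in range(2)]
b_0 = [random.gauss(0, 1) for _ in range(2)]
b_1 = random.gauss(0, 1)

xs = [i / 10 for i in range(-30, 31)]            # 61 points on [-3, 3]
ys = [neural_network(x, W_0, W_1, b_0, b_1) for x in xs]

# tanh is bounded by 1, so every prior draw of the function is bounded too
bound = abs(W_1[0]) + abs(W_1[1]) + abs(b_1)
print(all(abs(y) <= bound for y in ys))          # True
```

Each fresh draw of `W_0 … b_1` is one red curve in the notebook's "prior draws" plot; inference then shifts those draws toward the data.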
StefanoAllesina/ISC
python/solutions/Lahti2014_solution.ipynb
gpl-2.0
[ "Solution of Lahti et al. 2014\nWrite a function that takes as input a dictionary of constraints and returns a dictionary tabulating the BMI group for all the records matching the constraints. For example, calling:\nget_BMI_count({'Age': '28', 'Sex': 'female'}) \nshould return:\n{'NA': 3, 'lean': 8, 'overweight': 2, 'underweight': 1}\nImport csv for reading the file.", "import csv", "Now write the function. For each row in the file, you need to make sure all the constraints are matching the desired ones. If so, keep track of the BMI group using a dictionary.", "def get_BMI_count(dict_constraints):\n \"\"\" Take as input a dictionary of constraints\n for example, {'Age': '28', 'Sex': 'female'}\n And return the count of the various groups of BMI\n \"\"\"\n # We use a dictionary to store the results\n BMI_count = {}\n # Open the file, build a csv DictReader\n with open('../data/Lahti2014/Metadata.tab') as f:\n csvr = csv.DictReader(f, delimiter = '\\t')\n # For each row\n for row in csvr:\n # check that all conditions are met\n matching = True\n for e in dict_constraints:\n if row[e] != dict_constraints[e]:\n # The constraint is not met. Move to the next record\n matching = False\n break\n # matching is True only if all the constraints have been met\n if matching == True:\n # extract the BMI_group\n my_BMI = row['BMI_group']\n if my_BMI in BMI_count.keys():\n # If we've seen it before, add one record to the count\n BMI_count[my_BMI] = BMI_count[my_BMI] + 1\n else:\n # If not, initialize at 1\n BMI_count[my_BMI] = 1\n return BMI_count\n\nget_BMI_count({'Nationality': 'US', 'Sex': 'female'})", "Write a function that takes as input the constraints (as above), and a bacterial \"genus\". The function returns the average abundance (in logarithm base 10) of the genus for each group of BMI in the sub-population. 
For example, calling:\nget_abundance_by_BMI({'Time': '0', 'Nationality': 'US'}, 'Clostridium difficile et rel.')\nshould return:\n```\n\nAbundance of Clostridium difficile et rel. In sub-population:\n\nNationality -> US\nTime -> 0\n\n3.08 NA\n3.31 underweight\n3.84 lean\n2.89 overweight\n3.31 obese\n3.45 severeobese\n\n```", "import scipy # For log10\n\ndef get_abundance_by_BMI(dict_constraints, genus = 'Aerococcus'):\n # We use a dictionary to store the results\n BMI_IDs = {}\n # Open the file, build a csv DictReader\n with open('../data/Lahti2014/Metadata.tab') as f:\n csvr = csv.DictReader(f, delimiter = '\\t')\n # For each row\n for row in csvr:\n # check that all conditions are met\n matching = True\n for e in dict_constraints:\n if row[e] != dict_constraints[e]:\n # The constraint is not met. Move to the next record\n matching = False\n break\n # matching is True only if all the constraints have been met\n if matching == True:\n # extract the BMI_group\n my_BMI = row['BMI_group']\n if my_BMI in BMI_IDs.keys():\n # If we've seen it before, add the SampleID\n BMI_IDs[my_BMI] = BMI_IDs[my_BMI] + [row['SampleID']]\n else:\n # If not, initialize\n BMI_IDs[my_BMI] = [row['SampleID']]\n # Now let's open the other file, and keep track of the abundance of the genus for each \n # BMI group\n abundance = {}\n with open('../data/Lahti2014/HITChip.tab') as f:\n csvr = csv.DictReader(f, delimiter = '\\t')\n # For each row\n for row in csvr:\n # check whether we need this SampleID\n matching = False\n for g in BMI_IDs:\n if row['SampleID'] in BMI_IDs[g]:\n if g in abundance.keys():\n abundance[g][0] = abundance[g][0] + float(row[genus])\n abundance[g][1] = abundance[g][1] + 1\n \n else:\n abundance[g] = [float(row[genus]), 1]\n # we have found it, so move on\n break\n # Finally, calculate means, and print results\n print(\"____________________________________________________________________\")\n print(\"Abundance of \" + genus + \" In sub-population:\")\n 
print(\"____________________________________________________________________\")\n for key, value in dict_constraints.items():\n print(key, \"->\", value)\n print(\"____________________________________________________________________\")\n for ab in ['NA', 'underweight', 'lean', 'overweight', \n 'obese', 'severeobese', 'morbidobese']:\n if ab in abundance.keys():\n abundance[ab][0] = scipy.log10(abundance[ab][0] / abundance[ab][1])\n print(round(abundance[ab][0], 2), '\\t', ab)\n print(\"____________________________________________________________________\")\n print(\"\")\n\nget_abundance_by_BMI({'Time': '0', 'Nationality': 'US'}, \n 'Clostridium difficile et rel.')", "Repeat this analysis for all genera, and for the records having Time = 0.\nA simple function for extracting all the genera in the database:", "def get_all_genera():\n with open('../data/Lahti2014/HITChip.tab') as f:\n header = f.readline().strip()\n genera = header.split('\\t')[1:]\n return genera", "Testing:", "get_all_genera()[:6]", "Now use the function we wrote above to print the results for all genera:", "for g in get_all_genera()[:5]:\n get_abundance_by_BMI({'Time': '0'}, g)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
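The constraint-matching loop in the solution above (a `matching` flag plus `break`) can be collapsed into a single `all(...)` expression. This sketch substitutes a small inline sample for `Metadata.tab`, so the column names are assumptions carried over from the solution:

```python
import csv
import io

SAMPLE = (
    "SampleID\tSex\tAge\tBMI_group\n"
    "S1\tfemale\t28\tlean\n"
    "S2\tmale\t28\tobese\n"
    "S3\tfemale\t28\tlean\n"
    "S4\tfemale\t31\toverweight\n"
)

def get_BMI_count(dict_constraints, fh):
    """Same logic as the solution, with the row-matching loop
    reduced to one all(...) test over the constraint items."""
    counts = {}
    for row in csv.DictReader(fh, delimiter="\t"):
        if all(row[k] == v for k, v in dict_constraints.items()):
            counts[row["BMI_group"]] = counts.get(row["BMI_group"], 0) + 1
    return counts

print(get_BMI_count({"Sex": "female", "Age": "28"}, io.StringIO(SAMPLE)))
# {'lean': 2}
```

Passing an empty constraint dictionary degenerates to a plain tabulation of all rows, which is a handy way to check the reader setup.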
AndreySheka/dl_ekb
hw10/Bonus-handcrafted-rnn.ipynb
mit
[ "import numpy as np\nimport theano\nimport theano.tensor as T\nimport lasagne\nimport os", "Generate names\n\nStruggle to find a name for the variable? Let's see how you'll come up with a name for your son/daughter. Surely no human has expertize over what is a good child name, so let us train NN instead.\nDataset contains ~8k human names from different cultures[in latin transcript]\nObjective (toy problem): learn a generative model over names.", "start_token = \" \"\n\nwith open(\"names\") as f:\n names = f.read()[:-1].split('\\n')\n names = [start_token+name for name in names]\n \n\nprint 'n samples = ',len(names)\nfor x in names[::1000]:\n print x", "Text processing", "#all unique characters go here\ntoken_set = set()\nfor name in names:\n for letter in name:\n token_set.add(letter)\n\ntokens = list(token_set)\n\nprint 'n_tokens = ',len(tokens)\n\n\n#!token_to_id = <dictionary of symbol -> its identifier (index in tokens list)>\ntoken_to_id = {t:i for i,t in enumerate(tokens) }\n\n#!id_to_token = < dictionary of symbol identifier -> symbol itself>\nid_to_token = {i:t for i,t in enumerate(tokens)}\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.hist(map(len,names),bins=25);\n\n# truncate names longer than MAX_LEN characters. 
\nMAX_LEN = min([60,max(list(map(len,names)))])\n#ADJUST IF YOU ARE UP TO SOMETHING SERIOUS\n", "Cast everything from symbols into identifiers", "names_ix = list(map(lambda name: list(map(token_to_id.get,name)),names))\n\n\n#crop long names and pad short ones\nfor i in range(len(names_ix)):\n names_ix[i] = names_ix[i][:MAX_LEN] #crop too long\n \n if len(names_ix[i]) < MAX_LEN:\n names_ix[i] += [token_to_id[\" \"]]*(MAX_LEN - len(names_ix[i])) #pad too short\n \nassert len(set(map(len,names_ix)))==1\n\nnames_ix = np.array(names_ix)", "Input variables", "from agentnet import Recurrence\nfrom lasagne.layers import *\nfrom agentnet.memory import *\nfrom agentnet.resolver import ProbabilisticResolver\n\nsequence = T.matrix('token sequence','int64')\n\ninputs = sequence[:,:-1]\ntargets = sequence[:,1:]\n\n\nl_input_sequence = InputLayer(shape=(None, None),input_var=inputs)\n", "Build NN\nYou'll be building a model that takes token sequence and predicts next tokens at each tick\nThis is basically equivalent to how rnn step was described in the lecture", "###One step of rnn\nclass step:\n \n #inputs\n inp = InputLayer((None,),name='current character')\n h_prev = InputLayer((None,10),name='previous rnn state')\n \n #recurrent part\n emb = EmbeddingLayer(inp, len(tokens), 30,name='emb')\n \n h_new = RNNCell(h_prev,emb,name=\"rnn\") #just concat -> denselayer\n \n next_token_probas = DenseLayer(h_new,len(tokens),nonlinearity=T.nnet.softmax)\n \n #pick next token from predicted probas\n next_token = ProbabilisticResolver(next_token_probas)\n \n\n\ntraining_loop = Recurrence(\n state_variables={step.h_new:step.h_prev},\n input_sequences={step.inp:l_input_sequence},\n tracked_outputs=[step.next_token_probas,],\n unroll_scan=False,\n)\n\n\n# Model weights\nweights = lasagne.layers.get_all_params(training_loop,trainable=True)\nprint weights\n\npredicted_probabilities = lasagne.layers.get_output(training_loop[step.next_token_probas])\n#If you use dropout do not forget to create 
deterministic version for evaluation\n\n\nloss = lasagne.objectives.categorical_crossentropy(predicted_probabilities.reshape((-1,len(tokens))),\n targets.reshape((-1,))).mean()\n#<Loss function - a simple categorical crossentropy will do, maybe add some regularizer>\n\nupdates = lasagne.updates.adam(loss,weights)", "Compiling it", "\n#training\ntrain_step = theano.function([sequence], loss,\n updates=training_loop.get_automatic_updates()+updates)\n", "generation\nhere we re-wire the recurrent network so that it's output is fed back to it's input", "n_steps = T.scalar(dtype='int32')\nfeedback_loop = Recurrence(\n state_variables={step.h_new:step.h_prev,\n step.next_token:step.inp},\n tracked_outputs=[step.next_token_probas,],\n batch_size=theano.shared(1),\n n_steps=n_steps,\n unroll_scan=False,\n)\n\n\ngenerated_tokens = get_output(feedback_loop[step.next_token])\n\ngenerate_sample = theano.function([n_steps],generated_tokens,updates=feedback_loop.get_automatic_updates())\n\ndef generate_string(length=MAX_LEN):\n output_indices = generate_sample(length)[0]\n \n return ''.join(tokens[i] for i in output_indices)\n \n\ngenerate_string()", "Model training\nHere you can tweak parameters or insert your generation function\nOnce something word-like starts generating, try increasing seq_length", "def sample_batch(data, batch_size):\n \n rows = data[np.random.randint(0,len(data),size=batch_size)]\n \n return rows\n\n\nprint(\"Training ...\")\n\n\n#total N iterations\nn_epochs=100\n\n# how many minibatches are there in the epoch \nbatches_per_epoch = 500\n\n#how many training sequences are processed in a single function call\nbatch_size=10\n\n\nfor epoch in xrange(n_epochs):\n\n avg_cost = 0;\n for _ in range(batches_per_epoch):\n \n avg_cost += train_step(sample_batch(names_ix,batch_size))\n \n print(\"\\n\\nEpoch {} average loss = {}\".format(epoch, avg_cost / batches_per_epoch))\n\n print \"Generated names\"\n for i in range(10):\n print generate_string(),\n", "And 
now,\n\ntry lstm/gru\ntry several layers\ntry mtg cards\ntry your own dataset of any kind" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
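The text-processing cells above build `token_to_id`/`id_to_token` from an unordered `set()`, so the ids differ between runs. A reproducible, framework-free sketch of the same encode-and-pad step, on a hypothetical three-name sample standing in for the `names` file:

```python
# hypothetical sample, each name prefixed with the start token
start_token = " "
names = [start_token + n for n in ("Abe", "Abbey", "Alexandrina")]

# sorting the character set gives a reproducible token -> id mapping
tokens = sorted(set("".join(names)))
token_to_id = {t: i for i, t in enumerate(tokens)}
id_to_token = {i: t for i, t in enumerate(tokens)}

MAX_LEN = max(map(len, names))
names_ix = []
for name in names:
    ix = [token_to_id[c] for c in name[:MAX_LEN]]    # crop too long
    ix += [token_to_id[" "]] * (MAX_LEN - len(ix))   # pad too short
    names_ix.append(ix)

# every encoded row has the same length, as the notebook asserts
print(len(set(map(len, names_ix))) == 1)  # True

# decoding strips the trailing space-padding back off
decoded = "".join(id_to_token[i] for i in names_ix[0]).rstrip()
print(repr(decoded))  # ' Abe'
```

Using the start token itself as padding is what the notebook does too; the network then learns that a run of spaces means "name over".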
konstantinstadler/pymrio
doc/source/notebooks/working_with_eora26.ipynb
gpl-3.0
[ "Parsing the Eora26 EE MRIO database\nGetting Eora26\nThe Eora 26 database is available at http://www.worldmrio.com . \nYou need to register there and can then download the files from http://www.worldmrio.com/simplified .\nParse\nTo parse a single year do:", "import pymrio\n\neora_storage = '/tmp/mrios/eora26'\n\neora = pymrio.parse_eora26(year=2005, path=eora_storage)", "Explore\nEora includes (almost) all countries:", "eora.get_regions()", "This can easily be aggregated to, for example, the OECD/NON_OECD countries with the help of the country converter coco.", "import country_converter as coco\n\neora.aggregate(region_agg = coco.agg_conc(original_countries='Eora',\n aggregates=['OECD'],\n missing_countries='NON_OECD')\n )\n\neora.get_regions()\n\neora.calc_all()\n\nimport matplotlib.pyplot as plt\nwith plt.style.context('ggplot'):\n eora.Q.plot_account(('Total cropland area', 'Total'), figsize=(8,5))\n plt.show()", "See the other notebooks for further information on aggregation and file io." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
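Under the hood, `eora.aggregate` multiplies the accounts by a concordance matrix built from `coco.agg_conc`. The grouping idea alone can be sketched with a plain dictionary — the member set below is an illustrative subset, not the real OECD concordance used by country_converter:

```python
# hypothetical stand-in for coco.agg_conc(original_countries='Eora',
# aggregates=['OECD'], missing_countries='NON_OECD')
OECD_MEMBERS = {"AUT", "DEU", "DNK", "FRA", "JPN", "USA"}  # illustrative subset

def aggregate(values_by_region, members=OECD_MEMBERS):
    """Sum a per-region account into OECD / NON_OECD buckets."""
    out = {"OECD": 0.0, "NON_OECD": 0.0}
    for region, value in values_by_region.items():
        key = "OECD" if region in members else "NON_OECD"
        out[key] += value
    return out

cropland = {"DEU": 12.0, "USA": 30.0, "CHN": 55.0, "IND": 40.0}
print(aggregate(cropland))  # {'OECD': 42.0, 'NON_OECD': 95.0}
```

In pymrio the same mapping is applied consistently across Z, Y, and all satellite accounts, which is why the library route is preferable to summing columns by hand.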
mismosmi/idea2birds
src/evaluate.ipynb
mit
[ "Bird Simulation Evaluation Script\nImports & Preparations", "import numpy as np\nimport scipy as sp\nimport birds\nimport argparse\nimport matplotlib.pyplot as plt\nfrom matplotlib.path import Path\nfrom matplotlib.animation import FuncAnimation\nfrom matplotlib.collections import PathCollection\nfrom IPython.display import HTML\nfrom scipy.optimize import curve_fit\n#%matplotlib ipympl\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\nfrom matplotlib import rcParams\nrcParams.update({'figure.autolayout': True})", "Figure output Settings", "figpath = '../img/'\nfigwidth = 4 #figure width in inches\nfigsize = (figwidth,figwidth*2.5/4)", "Run with default Settings", "frames = 1000\nbirds.param_record = False\nbirds.trace = None\nbirds.flock = birds.Flock()\nfig = plt.figure(figsize=(5, 5*birds.flock.args['height']/birds.flock.args['width']), facecolor=\"white\")\nax = fig.add_axes([0.0, 0.0, 1.0, 1.0], aspect=1, frameon=False)\nbirds.collection = birds.MarkerCollection(birds.flock.args['n'])\nax.add_collection(birds.collection._collection)\nax.set_xlim(0, birds.flock.args['width'])\nax.set_ylim(0, birds.flock.args['height'])\nax.set_xticks([])\nax.set_yticks([])\n\nanimation = FuncAnimation(fig, birds.update, interval=10, frames=frames)\nHTML(animation.to_html5_video())\n", "Find moving phase\nRun with varying Eta", "def avg_with_error(f, avg_time, va_t = False, eta_index = 0, prerun_time = 500):\n var = np.zeros(avg_time)\n va_avg = 0\n for t in range(avg_time):\n va_tmp = f.get_va()\n va_avg += va_tmp\n var[t] = va_tmp\n f.run()\n va_avg /= avg_time\n if va_t is not False:\n va_t[eta_index, prerun_time:] = var\n var = np.sum((var-va_avg)**2 / avg_time)\n return va_avg, var\n\nres = 30\nprerun_time = 500\naveraging_time = 1000\nrepeat = 3\nrho=4\nEta = np.linspace(0.,7.,res)\nN = [50,100,400]\n\nva_t = np.zeros((6, prerun_time+averaging_time)) \n\nva = np.zeros((len(N),res))\nvas = np.zeros(repeat)\nerrorbars = np.zeros_like(va)\nvariance = 
np.zeros(repeat)\nfor c,n in enumerate(N):\n    for i,eta in enumerate(Eta):\n        for j in range(repeat):\n            f = birds.Flock(n=n,eta=eta,rho=rho)\n            record_time = (n==100 and i%5==0 and j==0)\n            for t in range(prerun_time):\n                f.run()\n                if record_time:\n                    va_t[int(i/5),t]=f.get_va()\n            \n            if record_time:\n                va_avg, vari = avg_with_error(f, averaging_time, va_t, int(i/5), prerun_time)\n            else:\n                va_avg, vari = avg_with_error(f, averaging_time)\n            vas[j] = va_avg\n            variance[j] = vari\n        va[c][i] = vas.sum()/repeat\n        errorbars[c][i] = np.sqrt(variance.sum()/repeat)\n\nprint('Has been run.')\n\nplt.figure(figsize=(10,8))\nx = np.linspace(0,1500,1500)\nfor e,vt in enumerate(va_t):\n    plt.plot(x, vt, label='Eta = '+ '%1.2f' % Eta[e*5])\nplt.legend()\nplt.xlim([0, 40])\nplt.xlabel('$t$')\nplt.ylabel('$v_a$')\nplt.show()\nlen(x)", "Fit $\\eta_c$", "prec = 0.05\n\n# Initial square/lin fit to determine p0-parameters\ndef finvsquare(x, sqwidth):\n    return 1-(x*sqwidth)**2\ndef flin(x, m, b):\n    return m*x+b\n\n# find array index of eta_c as first approx to split fit in linear- and phase-relation-parts\nvamin = np.argmin(va[0])\n\n# initial linear fit for better guess of eta_c\nlinparams = sp.optimize.curve_fit(flin,Eta[vamin:len(Eta)],va[0][vamin:len(va[0])],p0=[0.2,-0.5],sigma=errorbars[0][vamin:len(errorbars[0])])\nvalin = flin(Eta,*linparams[0])\n\n# set fit parameters for square/lin fit\neta_c_ind = np.where(np.absolute(va[0] - valin) > prec)[0][-1]\nx = Eta[0:eta_c_ind]\ny = va[0][0:eta_c_ind]\nerr = errorbars[0][0:eta_c_ind]\n\n# fit it with square + line\nsqwidth = sp.optimize.curve_fit(finvsquare,x,y,p0=1/6,sigma=err)\nlinparams = sp.optimize.curve_fit(flin,Eta[eta_c_ind:len(Eta)],va[0][eta_c_ind:len(va[0])],p0=[0.2,-0.5],sigma=errorbars[0][eta_c_ind:len(errorbars[0])])\n\n# calculate second approx for eta_c from fit\np,q = linparams[0][0]/sqwidth[0][0]**2,(linparams[0][1]-1)/sqwidth[0][0]**2\neta_c_0 = -p/2 + np.sqrt((p/2)**2 - q)\n\n# recalculate fit parameters\neta_c_ind = 
np.where(Eta < eta_c_0)[0][-1]\nx = Eta[0:eta_c_ind]\ny = va[0][0:eta_c_ind]\nerr = errorbars[0][0:eta_c_ind]\n\n# fit beta\nxlog = np.log(eta_c_0-x)\nylog = np.log(y)\nerrlog = np.log(err)\n\n# logarithmic fit\ndef fphase_temp_log(x, beta, offset):\n    return beta * x + offset\ntempparams = sp.optimize.curve_fit(fphase_temp_log,xlog,ylog,p0=[0.5,0],sigma=errlog)\n\n# final fit for eta_c and beta using former fits as start values\ndef fphase(x, eta_c, beta, offset):\n    return (eta_c - x)**beta * np.e**offset\ndef fphaselog(x, eta_c, beta, offset):\n    return np.log(eta_c - x)*beta + offset\nphaseparams = sp.optimize.curve_fit(fphaselog,x,ylog,p0=[eta_c_0,*tempparams[0]],sigma=errlog)", "Plot $v_a$ over $\\eta$", "plt.figure(figsize=figsize)\nfor c,n in enumerate(N):\n    plt.errorbar(Eta, va[c], yerr=errorbars[c], fmt='.', label=\"N=\"+str(n))\nx = np.linspace(Eta[0],Eta[-1])\nxphase = x[np.where(phaseparams[0][0] > x)]\nplt.plot(x,finvsquare(x,sqwidth[0]), label='Square Fit', linewidth=0.5)\nplt.plot(xphase,fphase(xphase,*phaseparams[0]), label='Phase relation Fit')\nplt.plot(x,flin(x,*linparams[0]), label='linear Fit', linewidth=0.5)\n\nplt.xlabel(\"$\\\\eta$\")\nplt.ylabel(\"$v_a$\")\nplt.xlim([0,5.5])\nplt.ylim([0,1])\nplt.legend()\n\nplt.savefig(figpath+'va_over_eta.eps')", "Plot $v_a$ over $(\\eta_c - \\eta)/\\eta_c$", "plt.figure(figsize=figsize)\nplt.ylim(ymin=0.2)\nplt.xlim(xmin=0.01)\neta_c = 4.5 # This is a guess of the critical eta value. 
There must be a better way of determining it\nfor c,n in enumerate(N):\n    plt.plot( (eta_c-Eta)/eta_c, va[c],'.',label=\"N=\"+str(n))\nca = plt.gca()\nca.set_xscale('log')\nca.set_yscale('log')\nplt.xlabel(\"$\\\\frac{\\\\eta_c - \\\\eta}{\\\\eta_c}$\")\nplt.ylabel(\"$v_a$\")\n\nx = (eta_c-Eta)/eta_c\nselect = x > 0\nx = np.log(x[select])\ny = np.log(va[-1][select])\ncoef = np.polyfit(x,y,1)\nplt.plot(np.logspace(-1,0,10), np.logspace(-1,0,10)**coef[0], label=\"$\\\\beta=$\"+str(coef[0]))\nplt.legend();\n\n# if you come up with a better naming scheme PLEASE change this\nplt.savefig(figpath+'va_over_etac_minus_eta_over_etac.eps')\nprint('eta_c='+str(eta_c))", "Run with varying density", "res = 15\ntime = 1000\naveraging_time = 500\nrepeat = 5\neta = .3\nRho = np.logspace(-3,-0, res)\nN = [100]\n\nva = np.zeros((len(N), res))\nvas = np.zeros(repeat)\nerrorbars = np.zeros_like(va)\nvariance = np.zeros(repeat)\nfor c,n in enumerate(N):\n    for i,rho in enumerate(Rho):\n        for j in range(repeat):\n            f = birds.Flock(n=n, eta=eta, rho=rho)\n            for t in range(time):\n                f.run()\n            va_avg, vari = avg_with_error(f, averaging_time)\n            vas[j] = va_avg\n            variance[j] = vari\n        va[c][i] = vas.sum()/repeat\n        errorbars[c][i] = np.sqrt(variance.sum()/repeat)\n\nplt.figure(figsize=figsize)\nfor c,n in enumerate(N):\n    plt.errorbar(Rho, va[c], yerr=errorbars[c], fmt='.', label=\"N=\"+str(n))\n\nplt.xlabel(\"$\\\\rho$\")\nplt.ylabel(\"$v_a$\")\nplt.legend()\nplt.title(\"Alignment dependence on density\");\n\nplt.savefig(figpath+'va_over_rho.eps')", "Run with varying angle\nThis has to be done for low rho and eta. 
This should be evident from the graphs above, as a high value of eta reduces the alignment to almost zero.\nA high density causes more alignment, and thus if we run with higher densities, the birds all align anyway.", "res = 20\ntime = 1000\naveraging_time = 1000\nrepeat = 3\neta = 0\nAngle = np.linspace(1,180,res,dtype=int)\nRho = [0.01,0.1,1] # np.logspace(-3, 0, 5)\nn = 100\n\nva = np.zeros((len(Rho), res))\nvas = np.zeros(repeat)\nerrorbars = np.zeros_like(va)\nvariance = np.zeros(repeat)\nfor c,rho in enumerate(Rho):\n    for i,angle in enumerate(Angle):\n        for j in range(repeat):\n            f = birds.Flock(n=n, eta=eta, rho=rho, angle=angle)\n            for t in range(time):\n                f.run()\n            va_avg, vari = avg_with_error(f, averaging_time)\n            vas[j] = va_avg\n            variance[j] = vari\n        va[c][i] = vas.sum()/repeat\n        errorbars[c][i] = np.sqrt(variance.sum()/repeat)\n\nplt.figure()\nfor c,rho in enumerate(Rho):\n    plt.errorbar(Angle/360*2*np.pi, va[c], yerr=errorbars[c], fmt='.', label=\"$\\\\rho$=\"+str(np.round(rho,decimals=4)))\nplt.xlabel(\"$\\\\theta_{cone}$\")\nplt.ylabel(\"$v_a$\")\nplt.legend()\nplt.title(\"Alignment dependence on angle\");\n\nplt.savefig(figpath+'va_over_angle.eps')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
huajianmao/learning
coursera/deep-learning/1.neural-networks-deep-learning/week2/pa.1.Python_Basics_With_Numpy_v2.ipynb
mit
[ "Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Python-Basics-with-Numpy-(optional-assignment)\" data-toc-modified-id=\"Python-Basics-with-Numpy-(optional-assignment)-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Python Basics with Numpy (optional assignment)</a></div><div class=\"lev2 toc-item\"><a href=\"#About-iPython-Notebooks\" data-toc-modified-id=\"About-iPython-Notebooks-11\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>About iPython Notebooks</a></div><div class=\"lev2 toc-item\"><a href=\"#1---Building-basic-functions-with-numpy\" data-toc-modified-id=\"1---Building-basic-functions-with-numpy-12\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>1 - Building basic functions with numpy</a></div><div class=\"lev3 toc-item\"><a href=\"#1.1---sigmoid-function,-np.exp()\" data-toc-modified-id=\"1.1---sigmoid-function,-np.exp()-121\"><span class=\"toc-item-num\">1.2.1&nbsp;&nbsp;</span>1.1 - sigmoid function, np.exp()</a></div><div class=\"lev3 toc-item\"><a href=\"#1.2---Sigmoid-gradient\" data-toc-modified-id=\"1.2---Sigmoid-gradient-122\"><span class=\"toc-item-num\">1.2.2&nbsp;&nbsp;</span>1.2 - Sigmoid gradient</a></div><div class=\"lev3 toc-item\"><a href=\"#1.3---Reshaping-arrays\" data-toc-modified-id=\"1.3---Reshaping-arrays-123\"><span class=\"toc-item-num\">1.2.3&nbsp;&nbsp;</span>1.3 - Reshaping arrays</a></div><div class=\"lev3 toc-item\"><a href=\"#1.4---Normalizing-rows\" data-toc-modified-id=\"1.4---Normalizing-rows-124\"><span class=\"toc-item-num\">1.2.4&nbsp;&nbsp;</span>1.4 - Normalizing rows</a></div><div class=\"lev3 toc-item\"><a href=\"#1.5---Broadcasting-and-the-softmax-function\" data-toc-modified-id=\"1.5---Broadcasting-and-the-softmax-function-125\"><span class=\"toc-item-num\">1.2.5&nbsp;&nbsp;</span>1.5 - Broadcasting and the softmax function</a></div><div class=\"lev2 toc-item\"><a href=\"#2)-Vectorization\" data-toc-modified-id=\"2)-Vectorization-13\"><span 
class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>2) Vectorization</a></div><div class=\"lev3 toc-item\"><a href=\"#2.1-Implement-the-L1-and-L2-loss-functions\" data-toc-modified-id=\"2.1-Implement-the-L1-and-L2-loss-functions-131\"><span class=\"toc-item-num\">1.3.1&nbsp;&nbsp;</span>2.1 Implement the L1 and L2 loss functions</a></div>\n\n# Python Basics with Numpy (optional assignment)\n\nWelcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need. \n\n**Instructions:**\n- You will be using Python 3.\n- Avoid using for-loops and while-loops, unless you are explicitly told to do so.\n- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.\n- After coding your function, run the cell right below it to check if your result is correct.\n\n**After this assignment you will:**\n- Be able to use iPython Notebooks\n- Be able to use numpy functions and numpy matrix/vector operations\n- Understand the concept of \"broadcasting\"\n- Be able to vectorize code\n\nLet's get started!\n\n## About iPython Notebooks ##\n\niPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing \"SHIFT\"+\"ENTER\" or by clicking on \"Run Cell\" (denoted by a play symbol) in the upper bar of the notebook. \n\nWe will often specify \"(≈ X lines of code)\" in the comments to tell you about how much code you need to write. 
It is just a rough estimate, so don't feel bad if your code is longer or shorter.\n\n**Exercise**: Set test to `\"Hello World\"` in the cell below to print \"Hello World\" and run the two cells below.", "### START CODE HERE ### (≈ 1 line of code)\ntest = \"Hello World\"\n### END CODE HERE ###\n\nprint (\"test: \" + test)", "Expected output:\ntest: Hello World\n<font color='blue'>\nWhat you need to remember:\n- Run your cells using SHIFT+ENTER (or \"Run cell\")\n- Write code in the designated areas using Python 3 only\n- Do not modify the code outside of the designated areas\n1 - Building basic functions with numpy\nNumpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.\n1.1 - sigmoid function, np.exp()\nBefore using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().\nExercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.\nReminder:\n$sigmoid(x) = \\frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.\nTo refer to a function belonging to a specific package you could call it using package_name.function(). 
Run the code below to see an example with math.exp().", "# GRADED FUNCTION: basic_sigmoid\n\nimport math\n\ndef basic_sigmoid(x):\n \"\"\"\n Compute sigmoid of x.\n\n Arguments:\n x -- A scalar\n\n Return:\n s -- sigmoid(x)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n s = 1.0 / (1 + math.exp(-x))\n ### END CODE HERE ###\n \n return s\n\nbasic_sigmoid(3)", "Expected Output: \n<table style = \"width:40%\">\n <tr>\n <td>** basic_sigmoid(3) **</td> \n <td>0.9525741268224334 </td> \n </tr>\n\n</table>\n\nActually, we rarely use the \"math\" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.", "### One reason why we use \"numpy\" instead of \"math\" in Deep Learning ###\n# x = [1, 2, 3]\n# basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.", "In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$", "import numpy as np\n\n# example of np.exp\nx = np.array([1, 2, 3])\nprint(np.exp(x)) # result is (exp(1), exp(2), exp(3))", "Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \\frac{1}{x}$ will output s as a vector of the same size as x.", "# example of vector operation\nx = np.array([1, 2, 3])\nprint (x + 3)", "Any time you need more info on a numpy function, we encourage you to look at the official documentation. \nYou can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.\nExercise: Implement the sigmoid function using numpy. \nInstructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. 
You don't need to know more for now.\n$$ \\text{For } x \\in \\mathbb{R}^n \\text{, } sigmoid(x) = sigmoid\\begin{pmatrix}\n x_1 \\\n x_2 \\\n ... \\\n x_n \\\n\\end{pmatrix} = \\begin{pmatrix}\n \\frac{1}{1+e^{-x_1}} \\\n \\frac{1}{1+e^{-x_2}} \\\n ... \\\n \\frac{1}{1+e^{-x_n}} \\\n\\end{pmatrix}\\tag{1} $$", "# GRADED FUNCTION: sigmoid\n\nimport numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()\n\ndef sigmoid(x):\n \"\"\"\n Compute the sigmoid of x\n\n Arguments:\n x -- A scalar or numpy array of any size\n\n Return:\n s -- sigmoid(x)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n s = 1 / (1 + np.exp(-x))\n ### END CODE HERE ###\n \n return s\n\nx = np.array([1, 2, 3])\nsigmoid(x)", "Expected Output: \n<table>\n <tr> \n <td> **sigmoid([1,2,3])**</td> \n <td> array([ 0.73105858, 0.88079708, 0.95257413]) </td> \n </tr>\n</table>\n\n1.2 - Sigmoid gradient\nAs you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.\nExercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid_derivative(x) = \\sigma'(x) = \\sigma(x) (1 - \\sigma(x))\\tag{2}$$\nYou often code this function in two steps:\n1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.\n2. 
Compute $\\sigma'(x) = s(1-s)$", "# GRADED FUNCTION: sigmoid_derivative\n\ndef sigmoid_derivative(x):\n    \"\"\"\n    Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.\n    You can store the output of the sigmoid function into variables and then use it to calculate the gradient.\n    \n    Arguments:\n    x -- A scalar or numpy array\n\n    Return:\n    ds -- Your computed gradient.\n    \"\"\"\n    \n    ### START CODE HERE ### (≈ 2 lines of code)\n    s = sigmoid(x)\n    ds = s * (1 - s)\n    ### END CODE HERE ###\n    \n    return ds\n\nx = np.array([1, 2, 3])\nprint (\"sigmoid_derivative(x) = \" + str(sigmoid_derivative(x)))", "Expected Output: \n<table>\n    <tr> \n        <td> **sigmoid_derivative([1,2,3])**</td> \n        <td> [ 0.19661193 0.10499359 0.04517666] </td> \n    </tr>\n</table>\n\n1.3 - Reshaping arrays\nTwo common numpy functions used in deep learning are np.shape and np.reshape(). \n- X.shape is used to get the shape (dimension) of a matrix/vector X. \n- X.reshape(...) is used to reshape X into some other dimension. \nFor example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you \"unroll\", or reshape, the 3D array into a 1D vector.\n<img src=\"images/image2vector_kiank.png\" style=\"width:500px;height:300px;\">\nExercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:\npython\nv = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c\n- Please don't hardcode the dimensions of image as a constant. 
Instead look up the quantities you need with image.shape[0], etc.", "# GRADED FUNCTION: image2vector\ndef image2vector(image):\n \"\"\"\n Argument:\n image -- a numpy array of shape (length, height, depth)\n \n Returns:\n v -- a vector of shape (length*height*depth, 1)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n v = image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)\n ### END CODE HERE ###\n \n return v\n\n# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values\nimage = np.array([[[ 0.67826139, 0.29380381],\n [ 0.90714982, 0.52835647],\n [ 0.4215251 , 0.45017551]],\n\n [[ 0.92814219, 0.96677647],\n [ 0.85304703, 0.52351845],\n [ 0.19981397, 0.27417313]],\n\n [[ 0.60659855, 0.00533165],\n [ 0.10820313, 0.49978937],\n [ 0.34144279, 0.94630077]]])\n\nprint (\"image2vector(image) = \" + str(image2vector(image)))", "Expected Output: \n<table style=\"width:100%\">\n <tr> \n <td> **image2vector(image)** </td> \n <td> [[ 0.67826139]\n [ 0.29380381]\n [ 0.90714982]\n [ 0.52835647]\n [ 0.4215251 ]\n [ 0.45017551]\n [ 0.92814219]\n [ 0.96677647]\n [ 0.85304703]\n [ 0.52351845]\n [ 0.19981397]\n [ 0.27417313]\n [ 0.60659855]\n [ 0.00533165]\n [ 0.10820313]\n [ 0.49978937]\n [ 0.34144279]\n [ 0.94630077]]</td> \n </tr>\n\n\n</table>\n\n1.4 - Normalizing rows\nAnother common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. 
Here, by normalization we mean changing x to $ \\frac{x}{\\| x\\|} $ (dividing each row vector of x by its norm).\nFor example, if $$x = \n\\begin{bmatrix}\n 0 & 3 & 4 \\\n 2 & 6 & 4 \\\n\\end{bmatrix}\\tag{3}$$ then $$\\| x\\| = np.linalg.norm(x, axis = 1, keepdims = True) = \\begin{bmatrix}\n 5 \\\n \\sqrt{56} \\\n\\end{bmatrix}\\tag{4} $$and $$ x_normalized = \\frac{x}{\\| x\\|} = \\begin{bmatrix}\n 0 & \\frac{3}{5} & \\frac{4}{5} \\\n \\frac{2}{\\sqrt{56}} & \\frac{6}{\\sqrt{56}} & \\frac{4}{\\sqrt{56}} \\\n\\end{bmatrix}\\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.\nExercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).", "# GRADED FUNCTION: normalizeRows\n\ndef normalizeRows(x):\n \"\"\"\n Implement a function that normalizes each row of the matrix x (to have unit length).\n \n Argument:\n x -- A numpy matrix of shape (n, m)\n \n Returns:\n x -- The normalized (by row) numpy matrix. You are allowed to modify x.\n \"\"\"\n \n ### START CODE HERE ### (≈ 2 lines of code)\n norm = np.linalg.norm(x, axis=1, keepdims=True)\n x = x / norm\n ### END CODE HERE ###\n\n return x\n\nx = np.array([\n [0, 3, 4],\n [2, 6, 4]])\nprint(\"normalizeRows(x) = \" + str(normalizeRows(x)))", "Expected Output: \n<table style=\"width:60%\">\n <tr> \n <td> **normalizeRows(x)** </td> \n <td> [[ 0. 0.6 0.8 ]\n [ 0.13736056 0.82416338 0.54944226]\n ]</td> \n </tr>\n</table>\n\nNote:\nIn normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? 
This is called broadcasting and we'll talk about it now! \n1.5 - Broadcasting and the softmax function\nA very important concept to understand in numpy is \"broadcasting\". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.\nExercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.\nInstructions:\n- $ \\text{for } x \\in \\mathbb{R}^{1\\times n} \\text{, } softmax(x) = softmax(\\begin{bmatrix}\n x_1 &&\n x_2 &&\n ... &&\n x_n\n\\end{bmatrix}) = \\begin{bmatrix}\n \\frac{e^{x_1}}{\\sum_{j}e^{x_j}} &&\n \\frac{e^{x_2}}{\\sum_{j}e^{x_j}} &&\n ... &&\n \\frac{e^{x_n}}{\\sum_{j}e^{x_j}} \n\\end{bmatrix} $ \n\n$\\text{for a matrix } x \\in \\mathbb{R}^{m \\times n} \\text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\\begin{bmatrix}\n x_{11} & x_{12} & x_{13} & \\dots & x_{1n} \\\n x_{21} & x_{22} & x_{23} & \\dots & x_{2n} \\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\n x_{m1} & x_{m2} & x_{m3} & \\dots & x_{mn}\n\\end{bmatrix} = \\begin{bmatrix}\n \\frac{e^{x_{11}}}{\\sum_{j}e^{x_{1j}}} & \\frac{e^{x_{12}}}{\\sum_{j}e^{x_{1j}}} & \\frac{e^{x_{13}}}{\\sum_{j}e^{x_{1j}}} & \\dots & \\frac{e^{x_{1n}}}{\\sum_{j}e^{x_{1j}}} \\\n \\frac{e^{x_{21}}}{\\sum_{j}e^{x_{2j}}} & \\frac{e^{x_{22}}}{\\sum_{j}e^{x_{2j}}} & \\frac{e^{x_{23}}}{\\sum_{j}e^{x_{2j}}} & \\dots & \\frac{e^{x_{2n}}}{\\sum_{j}e^{x_{2j}}} \\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\n \\frac{e^{x_{m1}}}{\\sum_{j}e^{x_{mj}}} & \\frac{e^{x_{m2}}}{\\sum_{j}e^{x_{mj}}} & \\frac{e^{x_{m3}}}{\\sum_{j}e^{x_{mj}}} & \\dots & \\frac{e^{x_{mn}}}{\\sum_{j}e^{x_{mj}}}\n\\end{bmatrix} = \\begin{pmatrix}\n 
softmax\\text{(first row of x)} \\\n softmax\\text{(second row of x)} \\\n ... \\\n softmax\\text{(last row of x)} \\\n\\end{pmatrix} $$", "# GRADED FUNCTION: softmax\n\ndef softmax(x):\n \"\"\"Calculates the softmax for each row of the input x.\n\n Your code should work for a row vector and also for matrices of shape (n, m).\n\n Argument:\n x -- A numpy matrix of shape (n,m)\n\n Returns:\n s -- A numpy matrix equal to the softmax of x, of shape (n,m)\n \"\"\"\n \n ### START CODE HERE ### (≈ 3 lines of code)\n expx = np.exp(x)\n expsum = np.sum(expx, axis=1, keepdims=True)\n s = expx / expsum\n\n ### END CODE HERE ###\n \n return s\n\nx = np.array([\n [9, 2, 5, 0, 0],\n [7, 5, 0, 0 ,0]])\nprint(\"softmax(x) = \" + str(softmax(x)))", "Expected Output:\n<table style=\"width:60%\">\n\n <tr> \n <td> **softmax(x)** </td> \n <td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04\n 1.21052389e-04]\n [ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04\n 8.01252314e-04]]</td> \n </tr>\n</table>\n\nNote:\n- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting.\nCongratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.\n<font color='blue'>\nWhat you need to remember:\n- np.exp(x) works for any np.array x and applies the exponential function to every coordinate\n- the sigmoid function and its gradient\n- image2vector is commonly used in deep learning\n- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs. \n- numpy has efficient built-in functions\n- broadcasting is extremely useful\n2) Vectorization\nIn deep learning, you deal with very large datasets. 
Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.", "import time\n\nx1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]\nx2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]\n\n### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###\ntic = time.process_time()\ndot = 0\nfor i in range(len(x1)):\n dot+= x1[i]*x2[i]\ntoc = time.process_time()\nprint (\"dot = \" + str(dot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### CLASSIC OUTER PRODUCT IMPLEMENTATION ###\ntic = time.process_time()\nouter = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros\nfor i in range(len(x1)):\n for j in range(len(x2)):\n outer[i,j] = x1[i]*x2[j]\ntoc = time.process_time()\nprint (\"outer = \" + str(outer) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### CLASSIC ELEMENTWISE IMPLEMENTATION ###\ntic = time.process_time()\nmul = np.zeros(len(x1))\nfor i in range(len(x1)):\n mul[i] = x1[i]*x2[i]\ntoc = time.process_time()\nprint (\"elementwise multiplication = \" + str(mul) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###\nW = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array\ntic = time.process_time()\ngdot = np.zeros(W.shape[0])\nfor i in range(W.shape[0]):\n for j in range(len(x1)):\n gdot[i] += W[i,j]*x1[j]\ntoc = time.process_time()\nprint (\"gdot = \" + str(gdot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\nx1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]\nx2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]\n\n### VECTORIZED DOT PRODUCT OF VECTORS ###\ntic = time.process_time()\ndot = np.dot(x1,x2)\ntoc = 
time.process_time()\nprint (\"dot = \" + str(dot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### VECTORIZED OUTER PRODUCT ###\ntic = time.process_time()\nouter = np.outer(x1,x2)\ntoc = time.process_time()\nprint (\"outer = \" + str(outer) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### VECTORIZED ELEMENTWISE MULTIPLICATION ###\ntic = time.process_time()\nmul = np.multiply(x1,x2)\ntoc = time.process_time()\nprint (\"elementwise multiplication = \" + str(mul) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")\n\n### VECTORIZED GENERAL DOT PRODUCT ###\ntic = time.process_time()\ndot = np.dot(W,x1)\ntoc = time.process_time()\nprint (\"gdot = \" + str(dot) + \"\\n ----- Computation time = \" + str(1000*(toc - tic)) + \"ms\")", "As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger. \nNote that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.\n2.1 Implement the L1 and L2 loss functions\nExercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.\nReminder:\n- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \\hat{y} $) are from the true values ($y$). 
In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.\n- L1 loss is defined as:\n$$\\begin{align} & L_1(\\hat{y}, y) = \\sum_{i=0}^m|y^{(i)} - \\hat{y}^{(i)}| \\end{align}\\tag{6}$$", "# GRADED FUNCTION: L1\n\ndef L1(yhat, y):\n    \"\"\"\n    Arguments:\n    yhat -- vector of size m (predicted labels)\n    y -- vector of size m (true labels)\n    \n    Returns:\n    loss -- the value of the L1 loss function defined above\n    \"\"\"\n    \n    ### START CODE HERE ### (≈ 1 line of code)\n    loss = np.sum(np.abs(y - yhat))\n    ### END CODE HERE ###\n    \n    return loss\n\nyhat = np.array([.9, 0.2, 0.1, .4, .9])\ny = np.array([1, 0, 0, 1, 1])\nprint(\"L1 = \" + str(L1(yhat,y)))", "Expected Output:\n<table style=\"width:20%\">\n\n    <tr> \n        <td> **L1** </td> \n        <td> 1.1 </td> \n    </tr>\n</table>\n\nExercise: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x,x) = $\\sum_{j=0}^n x_j^{2}$. \n\nL2 loss is defined as $$\\begin{align} & L_2(\\hat{y},y) = \\sum_{i=0}^m(y^{(i)} - \\hat{y}^{(i)})^2 \\end{align}\\tag{7}$$", "# GRADED FUNCTION: L2\n\ndef L2(yhat, y):\n    \"\"\"\n    Arguments:\n    yhat -- vector of size m (predicted labels)\n    y -- vector of size m (true labels)\n    \n    Returns:\n    loss -- the value of the L2 loss function defined above\n    \"\"\"\n    \n    ### START CODE HERE ### (≈ 1 line of code)\n    loss = np.sum((y - yhat) ** 2)\n    ### END CODE HERE ###\n    \n    return loss\n\nyhat = np.array([.9, 0.2, 0.1, .4, .9])\ny = np.array([1, 0, 0, 1, 1])\nprint(\"L2 = \" + str(L2(yhat,y)))", "Expected Output: \n<table style=\"width:20%\">\n    <tr> \n        <td> **L2** </td> \n        <td> 0.43 </td> \n    </tr>\n</table>\n\nCongratulations on completing this assignment. 
We hope that this little warm-up exercise helps you in the future assignments, which will be more exciting and interesting!\n<font color='blue'>\nWhat to remember:\n- Vectorization is very important in deep learning. It provides computational efficiency and clarity.\n- You have reviewed the L1 and L2 loss.\n- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc..." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
metpy/MetPy
dev/_downloads/0eff36d3fdf633f2a71ae3e92fdeb5b8/Simple_Sounding.ipynb
bsd-3-clause
[ "%matplotlib inline", "Simple Sounding\nUse MetPy as straightforward as possible to make a Skew-T LogP plot.", "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nimport metpy.calc as mpcalc\nfrom metpy.cbook import get_test_data\nfrom metpy.plots import add_metpy_logo, SkewT\nfrom metpy.units import units\n\n# Change default to be better for skew-T\nplt.rcParams['figure.figsize'] = (9, 9)\n\n# Upper air data can be obtained using the siphon package, but for this example we will use\n# some of MetPy's sample data.\n\ncol_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']\n\ndf = pd.read_fwf(get_test_data('jan20_sounding.txt', as_file_obj=False),\n skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)\n\n# Drop any rows with all NaN values for T, Td, winds\ndf = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed'\n ), how='all').reset_index(drop=True)", "We will pull the data out of the example dataset into individual variables and\nassign units.", "p = df['pressure'].values * units.hPa\nT = df['temperature'].values * units.degC\nTd = df['dewpoint'].values * units.degC\nwind_speed = df['speed'].values * units.knots\nwind_dir = df['direction'].values * units.degrees\nu, v = mpcalc.wind_components(wind_speed, wind_dir)\n\nskew = SkewT()\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\nskew.plot_barbs(p, u, v)\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\nskew.ax.set_ylim(1000, 100)\n\n# Add the MetPy logo!\nfig = plt.gcf()\nadd_metpy_logo(fig, 115, 100)\n\n# Example of defining your own vertical barb spacing\nskew = SkewT()\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 
'r')\nskew.plot(p, Td, 'g')\n\n# Set spacing interval--Every 50 mb from 1000 to 100 mb\nmy_interval = np.arange(100, 1000, 50) * units('mbar')\n\n# Get indexes of values closest to defined interval\nix = mpcalc.resample_nn_1d(p, my_interval)\n\n# Plot only values nearest to defined interval values\nskew.plot_barbs(p[ix], u[ix], v[ix])\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\nskew.ax.set_ylim(1000, 100)\n\n# Add the MetPy logo!\nfig = plt.gcf()\nadd_metpy_logo(fig, 115, 100)\n\n# Show the plot\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code" ]
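The sounding record above leans on `mpcalc.wind_components` to turn speed/direction pairs into the u/v components that `plot_barbs` needs. For readers browsing this dump without MetPy installed, here is a plain-NumPy sketch of that conversion (meteorological convention: direction is where the wind blows *from*) — an illustration, not MetPy's actual implementation:

```python
import numpy as np

def wind_components(speed, direction_deg):
    """u (eastward) and v (northward) components from wind speed and
    meteorological direction in degrees (the direction the wind blows FROM)."""
    theta = np.deg2rad(direction_deg)
    u = -speed * np.sin(theta)   # wind FROM the west (270 deg) -> positive u
    v = -speed * np.cos(theta)   # wind FROM the south (180 deg) -> positive v
    return u, v

# A 10-unit wind from due west is a pure eastward flow.
u, v = wind_components(10.0, 270.0)
```

MetPy's version additionally carries `pint` units through the computation, which is why the notebook attaches `units.knots` and `units.degrees` first.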
mne-tools/mne-tools.github.io
0.17/_downloads/01fb0f5b44af7b68840573c40d1eec05/plot_read_and_write_raw_data.ipynb
bsd-3-clause
[ "%matplotlib inline", "Reading and writing raw files\nIn this example, we read a raw file, plot a segment of MEG data\nrestricted to MEG channels, and save these data in a new\nraw file.", "# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\n\nfname = data_path + '/MEG/sample/sample_audvis_raw.fif'\n\nraw = mne.io.read_raw_fif(fname)\n\n# Set up pick list: MEG + STI 014 - bad channels\nwant_meg = True\nwant_eeg = False\nwant_stim = False\ninclude = ['STI 014']\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bad channels + 2 more\n\npicks = mne.pick_types(raw.info, meg=want_meg, eeg=want_eeg, stim=want_stim,\n include=include, exclude='bads')\n\nsome_picks = picks[:5] # take 5 first\nstart, stop = raw.time_as_index([0, 15]) # read the first 15s of data\ndata, times = raw[some_picks, start:(stop + 1)]\n\n# save 150s of MEG data in FIF file\nraw.save('sample_audvis_meg_trunc_raw.fif', tmin=0, tmax=150, picks=picks,\n overwrite=True)", "Show MEG data", "raw.plot()" ]
[ "code", "markdown", "code", "markdown", "code" ]
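The record above converts seconds to sample indices with `raw.time_as_index([0, 15])`. The underlying idea is simply index ≈ time × sampling frequency; here is a toy stand-in (a hypothetical helper, not MNE's code) using an illustrative 600 Hz rate:

```python
def time_as_index(times, sfreq):
    """Map times in seconds to integer sample indices at a fixed
    sampling rate sfreq (Hz) - a rough sketch of the idea."""
    return [int(round(t * sfreq)) for t in times]

# With an illustrative 600 Hz rate, a 15 s window spans samples 0..9000.
start, stop = time_as_index([0, 15], sfreq=600.0)
```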
fazzolini/fast_ai
deeplearning1/nbs/lesson3.ipynb
apache-2.0
[ "Training a better model", "from __future__ import division, print_function\n%matplotlib inline\nfrom importlib import reload # Python 3\nimport utils; reload(utils)\nfrom utils import *\n\n#path = \"data/dogscats/sample/\"\npath = \"data/dogscats/\"\nmodel_path = path + 'models/'\nif not os.path.exists(model_path): os.mkdir(model_path)\n\n#batch_size=1\nbatch_size=64", "Are we underfitting?\nOur validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:\n\nHow is this possible?\nIs this desirable?\n\nThe answer to (1) is that this is happening because of dropout. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability p (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.\nThe purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.\nSo the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!\n(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. 
But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.)\nRemoving dropout\nOur high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:\n- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)\n- Split the model between the convolutional (conv) layers and the dense layers\n- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch\n- Create a new model with just the dense layers, and dropout p set to zero\n- Train this new model using the output of the conv layers as training data.\nAs before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent...", "model = vgg_ft(2)", "...and load our fine-tuned weights.", "model.load_weights(model_path+'finetune3.h5')", "We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer:", "layers = model.layers\n\nlast_conv_idx = [index for index,layer in enumerate(layers) \n if type(layer) is Convolution2D][-1]\n\nlast_conv_idx\n\nlayers[last_conv_idx]\n\nconv_layers = layers[:last_conv_idx+1]\nconv_model = Sequential(conv_layers)\n# Dense layers - also known as fully connected or 'FC' layers\nfc_layers = layers[last_conv_idx+1:]", "Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. 
As you're seeing, there's a fairly small number of \"recipes\" that can get us a long way!", "batches = get_batches(path+'train', shuffle=False, batch_size=batch_size)\nval_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)\nsteps_per_epoch = int(np.ceil(batches.samples/batch_size))\nvalidation_steps = int(np.ceil(val_batches.samples/batch_size))\n\nval_classes = val_batches.classes\ntrn_classes = batches.classes\nval_labels = onehot(val_classes)\ntrn_labels = onehot(trn_classes)\n\nval_features = conv_model.predict_generator(val_batches, validation_steps)\n\ntrn_features = conv_model.predict_generator(batches, steps_per_epoch)\n\nsave_array(model_path + 'train_convlayer_features.bc', trn_features)\nsave_array(model_path + 'valid_convlayer_features.bc', val_features)\n\ntrn_features = load_array(model_path+'train_convlayer_features.bc')\nval_features = load_array(model_path+'valid_convlayer_features.bc')\n\ntrn_features.shape", "For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. 
However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.", "# SINCE KERAS MAKES USE OF INVERTED DROPOUT WE \"NEUTRALIZE\" proc_wgts(layer):\ndef proc_wgts(layer): return [o for o in layer.get_weights()]\n\n# Such a finely tuned model needs to be updated very slowly!\nopt = RMSprop(lr=0.00001, rho=0.7)\n\ndef get_fc_model():\n model = Sequential([\n MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),\n Flatten(),\n Dense(4096, activation='relu'),\n Dropout(0.),\n Dense(4096, activation='relu'),\n Dropout(0.),\n Dense(2, activation='softmax')\n ])\n\n for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))\n\n model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])\n return model\n\nfc_model = get_fc_model()", "And fit the model in the usual way:", "fc_model.fit(trn_features, trn_labels, epochs=8, \n batch_size=batch_size, validation_data=(val_features, val_labels))\n\nfc_model.save_weights(model_path+'no_dropout.h5')\n\nfc_model.load_weights(model_path+'no_dropout.h5')", "Reducing overfitting\nNow that we've gotten the model to overfit, we can take a number of steps to reduce this.\nApproaches to reducing overfitting\nWe do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):\n\nAdd more data\nUse data augmentation\nUse architectures that generalize well\nAdd regularization\nReduce architecture complexity.\n\nWe'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. 
This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.\nWhich types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)\nWe recommend always using at least some light data augmentation, unless you have so much data that your model will never see the same input twice.\nAbout data augmentation\nKeras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch randomly is changed according to these settings. Here's how to define a generator that includes data augmentation:", "# dim_ordering='tf' uses tensorflow dimension ordering,\n# which is the same order as matplotlib uses for display.\n# Therefore when just using for display purposes, this is more convenient\ngen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1, \n height_shift_range=0.1, shear_range=0.15, zoom_range=0.1, \n channel_shift_range=10., horizontal_flip=True, data_format='channels_last')", "Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).", "# Create a 'batch' of a single image\nimg = np.expand_dims(ndimage.imread(path+'cat.jpg'),0)\n# Request the generator to create batches from this image\naug_iter = gen.flow(img)\n\n# Get eight examples of these augmented images\naug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]\n\n# The original\nplt.imshow(img[0])", "As you can see below, there's no magic to data augmentation - it's a 
very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches.", "# Augmented data\nplots(aug_imgs, (20,7), 2)\n\n# If we changed it then ensure that we return to theano dimension ordering\n# K.set_image_dim_ordering('th')", "Adding data augmentation\nLet's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it:", "gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1, \n height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)\n\nbatches = get_batches(path+'train', gen, batch_size=batch_size)\n# NB: We don't want to augment or shuffle the validation set\nval_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)\n\nsteps_per_epoch = int(np.ceil(batches.samples/batch_size))\nvalidation_steps = int(np.ceil(val_batches.samples/batch_size))", "When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. 
That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.\nTherefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable:", "fc_model = get_fc_model()\n\nfor layer in conv_model.layers: layer.trainable = False\n# Look how easy it is to connect two models together!\nconv_model.add(fc_model)", "Now we can compile, train, and save our model as usual - note that we use fit_generator() since we want to pull random images from the directories on every batch.", "conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])\n\nconv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=8, \n validation_data=val_batches, validation_steps=validation_steps)\n\nconv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=3, \n validation_data=val_batches, validation_steps=validation_steps)\n\nconv_model.save_weights(model_path + 'aug1.h5')\n\nconv_model.load_weights(model_path + 'aug1.h5')", "Batch normalization\nAbout batch normalization\nBatch normalization (batchnorm) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called normalization. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. 
Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.\nPrior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights.\nBatchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that all modern networks should use batchnorm, or something equivalent. There are two reasons for this:\n1. Adding batchnorm to a model can result in 10x or more improvements in training speed\n2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to reduce overfitting.\nAs promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:\n1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean\n2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.\nThis ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). 
But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so.\nAdding batchnorm to the model\nWe can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers):", "conv_layers[-1].output_shape[1:]\n\ndef get_bn_layers(p):\n return [\n MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),\n Flatten(),\n Dense(4096, activation='relu'),\n Dropout(p),\n BatchNormalization(),\n Dense(4096, activation='relu'),\n Dropout(p),\n BatchNormalization(),\n Dense(1000, activation='softmax')\n ]\n\np=0.6\n\nbn_model = Sequential(get_bn_layers(0.6))\n\n# where is this file?\n# bn_model.load_weights('/data/jhoward/ILSVRC2012_img/bn_do3_1.h5')\n\n# SINCE KERAS MAKES USE OF INVERTED DROPOUT WE \"NEUTRALIZE\" proc_wgts(layer):\ndef proc_wgts(layer, prev_p, new_p):\n scal = 1\n return [o*scal for o in layer.get_weights()]\n\nfor l in bn_model.layers: \n if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6))\n\nbn_model.pop()\nfor layer in bn_model.layers: layer.trainable=False\n\nbn_model.add(Dense(2,activation='softmax'))\n\nbn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])\n\nbn_model.fit(trn_features, trn_labels, epochs=8, validation_data=(val_features, val_labels))\n\nbn_model.save_weights(model_path+'bn.h5')\n\nbn_model.load_weights(model_path+'bn.h5')\n\nbn_layers = get_bn_layers(0.6)\nbn_layers.pop()\nbn_layers.append(Dense(2,activation='softmax'))\n\nfinal_model = Sequential(conv_layers)\nfor layer in final_model.layers: layer.trainable = False\nfor layer in bn_layers: final_model.add(layer)\n\nfor l1,l2 in zip(bn_model.layers, bn_layers):\n l2.set_weights(l1.get_weights())\n\nfinal_model.compile(optimizer=Adam(), \n loss='categorical_crossentropy', metrics=['accuracy'])\n\nfinal_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1, \n validation_data=val_batches, 
validation_steps=validation_steps)\n\nfinal_model.save_weights(model_path + 'final1.h5')\n\nfinal_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4, \n validation_data=val_batches, validation_steps=validation_steps)\n\nfinal_model.save_weights(model_path + 'final2.h5')\n\nfinal_model.optimizer.lr=0.001\n\nfinal_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=4, \n validation_data=val_batches, validation_steps=validation_steps)\n\nbn_model.save_weights(model_path + 'final3.h5')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
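A subtle point in the notebook record above: `proc_wgts` copies the dense-layer weights unchanged because Keras implements *inverted* dropout, where activations are rescaled at training time, so removing dropout needs no weight correction. Under classic (non-inverted) dropout the copied weights would instead need rescaling by `(1 - prev_p) / (1 - new_p)` — a hedged sketch of that correction, not the notebook's method:

```python
import numpy as np

def rescale_for_dropout(weights, prev_p, new_p):
    """Rescale copied weights when changing classic (non-inverted) dropout
    from prev_p to new_p so expected activations stay the same.
    With inverted dropout (as in Keras), the scale is always 1."""
    scale = (1.0 - prev_p) / (1.0 - new_p)
    return [w * scale for w in weights]

# Removing p=0.5 dropout doubles expected activations, so halve the weights.
w = [np.ones((2, 2)), np.zeros(2)]  # toy kernel + bias
scaled = rescale_for_dropout(w, prev_p=0.5, new_p=0.0)
```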
robertoalotufo/ia898
src/ellipse.ipynb
mit
[ "Function ellipse\nSynopsis\nCreate a binary ellipse image.\n\n\ng = ellipse(s, r, c, theta)\n\n\nOutput:\n\ng: Image.\n\n\nInput:\ns: Image. [rows cols], output image dimensions.\nr: Double. [rRows rCols], radius for y and x directions.\nc: Image. [row0 col0], center of the ellipse.\ntheta: Double. Angle rotation in radians. (optional)\n\n\n\nDescription\nThe $ellipse$ function creates a binary image with dimensions given by $s$, radius given by $r (r[0] = rRows; r[1] = rCols)$ and center given by $c$. The pixels inside the ellipse are one and outside zero.\nFunction code", "import numpy as np\n \ndef ellipse(s, r, c, theta=0):\n rows, cols = s[0], s[1]\n rr0, cc0 = c[0], c[1]\n rr, cc = np.meshgrid(range(rows), range(cols), indexing='ij')\n rr = rr - rr0\n cc = cc - cc0\n cos = np.cos(theta)\n sen = np.sin(theta)\n i = cos/r[1]\n j = sen/r[0]\n m = -sen/r[1]\n n = cos/r[0]\n g = ((i*cc + m*rr)**2 + (j*cc + n*rr)**2) <= 1\n return g\n\ntesting = (__name__ == \"__main__\")\n\nif testing:\n ! 
jupyter nbconvert --to python ellipse.ipynb\n import numpy as np\n import sys,os\n import matplotlib.image as mpimg\n ia898path = os.path.abspath('../../')\n if ia898path not in sys.path:\n sys.path.append(ia898path)\n import ia898.src as ia\n", "Examples\nNumerical example:", "if testing:\n g = ia.ellipse([16,16], [2,4], [8,8], np.pi * 0.25)\n print('g:\\n', g.astype(int))", "Measuring time:", "if testing:\n from time import time\n t = time()\n g = ia.ellipse([300,300], [90,140], [150,150], np.pi * 0.25)\n tend = time()\n print('Computational time (300, 300) is {0:.2f} seconds.'.format(tend - t))\n ia.adshow(g, \"Ellipse\")\n\nif testing:\n print('Computational time (300, 300) is:')\n %timeit ia.ellipse([300,300], [90,140], [150,150], np.pi * 0.25)", "Equation\n$$\n \\begin{matrix} \n \\frac{((x-center_x)\\cos(\\theta) - (y-center_y)\\sin(\\theta))^2}{r_x^2}\n +\n \\frac{((x-center_x)\\sin(\\theta) + (y-center_y)\\cos(\\theta))^2}{r_y^2} \\leq 1\n \\end{matrix}\n$$\nContributions\n\nRafael Berri, 23sep2013: initial function." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
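The membership test inside `ellipse` above is easy to sanity-check in the axis-aligned case (theta=0), where the rotated form collapses to `(cc/rCols)**2 + (rr/rRows)**2 <= 1`. Here is a self-contained restatement of the same formula, with hand-picked test values chosen for illustration:

```python
import numpy as np

def ellipse(s, r, c, theta=0.0):
    """Binary image of shape s containing an ellipse with radii r
    (rows, cols), center c, rotated by theta radians - the same
    membership formula as the function in the record above."""
    rr, cc = np.meshgrid(range(s[0]), range(s[1]), indexing='ij')
    rr = rr - c[0]
    cc = cc - c[1]
    cos, sin = np.cos(theta), np.sin(theta)
    g = ((cos * cc - sin * rr) / r[1]) ** 2 + ((sin * cc + cos * rr) / r[0]) ** 2 <= 1
    return g

# Axis-aligned case: radius 4 along columns, 2 along rows, centered at (8, 8).
g = ellipse((16, 16), (2, 4), (8, 8))
```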
ToqueWillot/M2DAC
FDMS/TME4/TME4_FiltrageCollaboratif_V2-Copy4.ipynb
gpl-2.0
[ "TME4 FDMS Collaborative Filtering\nFlorian Toqué & Paul Willot", "%matplotlib inline\nfrom random import random\nimport math\nimport numpy as np\nimport copy\nfrom scipy import stats\nimport matplotlib.pyplot as plt\nimport pickle as pkl\nfrom scipy.spatial import distance\nimport seaborn as sns\nsns.set_style('darkgrid')", "Loading the data", "def loadMovieLens(path='./data/movielens'):\n #Get movie titles\n movies={}\n rev_movies={}\n for idx,line in enumerate(open(path+'/u.item')):\n idx,title=line.split('|')[0:2]\n movies[idx]=title\n rev_movies[title]=idx\n\n # Load data\n prefs={}\n for line in open(path+'/u.data'):\n (user,movieid,rating,ts)=line.split('\\t')\n prefs.setdefault(user,{})\n prefs[user][movies[movieid]]=float(rating)\n \n return prefs,rev_movies\n\ndata,movies = loadMovieLens(\"data/ml-100k\")", "Content example", "data['3']", "Splitting data between train/test\nWe avoid to let unseen data form the train set in the test set.\nWe also try to minimise the dataset reduction by splitting on each user.", "def getRawArray(data):\n d = []\n for u in data.keys():\n for i in data[u].keys():\n d.append([u,i,data[u][i]])\n return np.array(d)\n\n# splitting while avoiding to reduce the dataset too much\ndef split_train_test(data,percent_test):\n test={}\n train={}\n movie={}\n for u in data.keys():\n test.setdefault(u,{})\n train.setdefault(u,{})\n for movie in data[u]:\n #print(data[u][movie])\n if (random()<percent_test):\n test[u][movie]=data[u][movie]\n else:\n train[u][movie]=data[u][movie]\n return train, test\n\ndef split_train_test_by_movies(data,percent_test):\n test={}\n train={}\n movie={}\n for u in data.keys():\n for movie in data[u]:\n if (random()<percent_test):\n try:\n test[movie][u]=data[u][movie]\n except KeyError:\n test.setdefault(movie,{})\n test[movie][u]=data[u][movie]\n else:\n try:\n train[movie][u]=data[u][movie]\n except KeyError:\n train.setdefault(movie,{})\n train[movie][u]=data[u][movie]\n return train, 
test\n\npercent_test=0.2\ntrain,test=split_train_test(data,percent_test)", "split used for convenience on the average by movie baseline", "percent_test=0.2\nm_train,m_test=split_train_test_by_movies(data,percent_test)", "cleaning\n18 movies have no ratings at all", "def deleteUnseenInTest(train,test):\n for k in test.keys():\n try:\n train[k]\n except KeyError:\n test.pop(k,None)\n\ndef deleteUnknowData(triplet_test, trainUsers, trainItems) :\n to_Del = []\n for i,t in enumerate(triplet_test):\n if not t[0] in trainUsers:\n to_Del.append(i)\n elif not t[1] in trainItems:\n to_Del.append(i)\n return np.delete(triplet_test, to_Del, 0)\n\ndeleteUnseenInTest(train,test)\ndeleteUnseenInTest(m_train,m_test)\n\nlen(test)", "Matrix used for fast evaluation", "def getTriplet(data):\n triplet = []\n for u in data.keys():\n for i in data[u].keys():\n triplet.append([u,i,data[u][i]])\n return triplet\n\ndef getDataByUsers(triplet) :\n dataByUsers = {}\n for t in triplet:\n if not t[0] in dataByUsers.keys():\n dataByUsers[t[0]] = {}\n dataByUsers[t[0]][t[1]] = float(t[2])\n return dataByUsers\n\ndef getDataByItems(triplet) :\n dataByItems = {}\n for t in triplet:\n if not t[1] in dataByItems.keys():\n dataByItems[t[1]] = {}\n dataByItems[t[1]][t[0]] = float(t[2])\n return dataByItems\n\n# Split l'ensemble des triplets \ndef splitTrainTest(triplet, testProp) :\n perm = np.random.permutation(triplet)\n splitIndex = int(testProp * len(triplet))\n return perm[splitIndex:], perm[:splitIndex]\n\n# supprime des données de test les données inconnus en train\ndef deleteUnknowData(triplet_test, trainUsers, trainItems) :\n to_Del = []\n for i,t in enumerate(triplet_test):\n if not t[0] in trainUsers:\n to_Del.append(i)\n elif not t[1] in trainItems:\n to_Del.append(i)\n return np.delete(triplet_test, to_Del, 0)\n \n\n%%time\n\ntriplet = getTriplet(data)\n\n# split 80% train 20% test\narrayTrain, arrayTest = splitTrainTest(triplet , 0.2)\n\n# train\ntrainUsers = 
getDataByUsers(arrayTrain)\ntrainItems = getDataByItems(arrayTrain)\n\n#print len(triplet_test)\narrayTest = deleteUnknowData(arrayTest, trainUsers, trainItems)\n#print len(triplet_test)\n\n# test\ntestUsers = getDataByUsers(arrayTest)\ntestItems = getDataByItems(arrayTest)\n\narrayAll = getRawArray(data)\narrayTrain = getRawArray(train)\narrayTest = getRawArray(test)\narrayTest = deleteUnknowData(arrayTest,train,m_train)\n\narrayTest[:10,:10]", "Baseline: mean by user", "class baselineMeanUser:\n def __init__(self):\n self.users={}\n def fit(self,train):\n for user in train.keys():\n note=0.0\n for movie in train[user].keys():\n note+=train[user][movie]\n note=note/len(train[user])\n self.users[user]=note\n \n def predict(self,users):\n return [self.users[u] for u in users]\n\nbaseline_mu= baselineMeanUser()\nbaseline_mu.fit(train)\npred = baseline_mu.predict(arrayTest[:,0])\nprint(\"Mean Error %0.6f\" %(\n (np.array(pred) - np.array(arrayTest[:,2], float)) ** 2).mean())\n\nclass baselineMeanMovie:\n def __init__(self):\n self.movies={}\n def fit(self,train):\n for movie in train.keys():\n note=0.0\n for user in train[movie].keys():\n note+=train[movie][user]\n note=note/len(train[movie])\n self.movies[movie]=note\n \n def predict(self,movies):\n res=[]\n for m in movies:\n try:\n res.append(self.movies[m])\n except:\n res.append(3)\n return res\n\nbaseline_mm= baselineMeanMovie()\nbaseline_mm.fit(m_train)\npred = baseline_mm.predict(arrayTest[:,1])\nprint(\"Mean Error %0.6f\" %(\n (np.array(pred) - np.array(arrayTest[:,2], float)) ** 2).mean())", "Raw matrices are used for convenience and clarity.\nStructures like scipy sparse matrices or python dictionaries may be used for speedup.\nComplete dataset", "rawMatrix = np.zeros((len(data.keys()),1682))\nfor u in data:\n for m in data[u]:\n rawMatrix[int(u)-1][int(movies[m])-1] = data[u][m]\n\nprint(np.shape(rawMatrix))\nrawMatrix[:5,:5]", "Train and test dataset", "rawMatrixTrain = 
np.zeros((len(data.keys()),1682))\nfor u in train:\n for m in train[u]:\n rawMatrixTrain[int(u)-1][int(movies[m])-1] = train[u][m]\n \nrawMatrixTest = np.zeros((len(data.keys()),1682))\nfor u in test:\n for m in test[u]:\n rawMatrixTest[int(u)-1][int(movies[m])-1] = test[u][m]", "Non-negative Matrix Factorization\nFast implementation using numpy's matrix processing.", "#from scipy import linalg\n\ndef nmf(X, latent_features, max_iter=100, eps = 1e-5,printevery=100):\n\n print \"NMF with %d latent features, %d iterations.\"%(latent_features, max_iter)\n\n # mask used to ignore null element (coded by zero)\n mask = np.sign(X)\n\n # randomly initialized matrix\n rows, columns = X.shape\n A = np.random.rand(rows, latent_features)\n \n Y = np.random.rand(latent_features, columns)\n # Not used as I couldn't find significant improvements\n #Y = linalg.lstsq(A, X)[0] # initializing that way as recommended in a blog post\n #Y = np.maximum(Y, eps) # avoiding too low values\n\n masked_X = mask * X\n masktest = np.sign(rawMatrixTest) # used for prints\n masktrain = np.sign(rawMatrixTrain) # used for prints\n\n for i in range(1, max_iter + 1):\n\n top = np.dot(masked_X, Y.T)\n bottom = (np.dot((mask * np.dot(A, Y)), Y.T)) + eps\n A *= top / bottom\n \n top = np.dot(A.T, masked_X)\n bottom = np.dot(A.T, mask * np.dot(A, Y)) + eps\n Y *= top / bottom\n\n\n # evaluation\n if i % printevery == 0 or i == 1 or i == max_iter:\n X_est = np.dot(A, Y)\n q = masktest*X_est - rawMatrixTest\n q_train = masktrain*X_est - rawMatrixTrain\n print \"Iteration %d, Err %.05f, Err train %.05f\"%( i, (q*q).sum()/ masktest.sum(), (q_train*q_train).sum()/ masktest.sum() )\n \n return A, Y\n\n%%time\nA,Y = nmf(rawMatrixTrain,100,eps = 1e-5,max_iter=5,printevery=1)\nresMatrix = A.dot(Y)", "We see that it quickly gets better than the baseline.\nHowever, we see below that it overfits after that:", "%%time\nA,Y = nmf(rawMatrixTrain,50,eps = 1e-5,max_iter=500,printevery=100)\nresMatrix = A.dot(Y)", "This is 
due to the high sparsity of the matrix.\nWe can of course reduce the feature-matrix size to avoid overfitting, but that will limit further improvements.", "%%time\nA,Y = nmf(rawMatrixTrain,1,eps = 1e-5,max_iter=100,printevery=20)\nresMatrix = A.dot(Y)", "Despite good results in a few seconds on this dataset, this can only get us so far.\nWe then have to add regularization to the cost function.\nEvaluation", "## This class is used to make predictions\nclass evalMF:\n def __init__(self,resMatrix,dicU,dicI):\n self.resMatrix=resMatrix\n self.dicU = dicU\n self.dicI = dicI\n def fit(self):\n pass\n \n def predict(self,user,movie):\n return self.resMatrix[int(user)-1][int(self.dicI[movie])-1]\n\nmf = evalMF(resMatrix,data,movies)\n\n# np.array([ (float(ra[2]) - mf.predict(ra[0],ra[1]))**2 for ra in arrayTest]).mean()\n# faster evaluation\nmasqueTest=np.sign(rawMatrixTest)\nq = masqueTest*resMatrix - rawMatrixTest\n(q*q).sum()/ masqueTest.sum()", "Let's see some predictions", "print data[\"1\"][\"Akira (1988)\"]\nprint mf.predict(\"1\",\"Akira (1988)\")\nprint data[\"1\"][\"I.Q. (1994)\"]\nprint mf.predict(\"1\",\"I.Q. 
(1994)\")", "We usualy see an important difference between users, so we need to take the bias into account.", "summ=0\nfor i in data[\"1\"]:\n summ+=(float(data[\"1\"][i]) - mf.predict(\"1\",i))**2\nsumm/len(data[\"1\"])\n\nsumm=0\nfor i in data[\"3\"]:\n summ+=(float(data[\"3\"][i]) - mf.predict(\"3\",i))**2\nsumm/len(data[\"3\"])", "We have not been very successful with incorporating the bias and L1 into that implementation...\nWe build a simpler model below, and then add the regularization and bias.", "class FactoMatriceBiais():\n def __init__(self, k, epsilon=1e-3, nbIter=2000, lamb=0.5):\n self.k = k\n self.lamb = lamb\n self.epsilon = epsilon\n self.nbIter = nbIter\n\n def fit(self, trainUsers, trainItems, triplet):\n\n self.p = {}\n self.q = {}\n self.bu = {} #biais sur les utilisateurs\n self.bi = {} #biais sur les items\n self.mu = np.random.random() * 2 - 1\n \n for j in range(len(triplet)): # On initialise les cases vides en random\n u = triplet[j][0]\n i = triplet[j][1]\n if not u in self.p:\n self.p[u] = np.random.rand(1,self.k) # matrice ligne pour un users\n self.bu[u] = np.random.rand() * 2 - 1\n if not i in self.q:\n self.q[i] = np.random.rand(self.k,1) # matrice colonne pour un item\n self.bi[i] = np.random.rand() * 2 - 1\n loss = [] \n for it in range(self.nbIter):\n ind = np.random.randint(len(triplet))\n u = triplet[ind][0]\n i = triplet[ind][1]\n \n tmp = trainUsers[u][i] - (self.mu + self.bi[i] + self.bu[u] +self.p[u].dot(self.q[i])[0][0])\n self.p[u] = (1 - self.lamb * self.epsilon) * self.p[u] + self.epsilon * 2 * tmp * self.q[i].transpose()\n self.bu[u] = (1 - self.lamb * self.epsilon) * self.bu[u] + self.epsilon * 2 * tmp\n self.q[i] = (1 - self.lamb * self.epsilon) * self.q[i] + self.epsilon * 2 * tmp * self.p[u].transpose()\n self.bi[i] = (1 - self.lamb * self.epsilon) * self.bi[i] + self.epsilon * 2 * tmp\n self.mu = (1 - self.lamb * self.epsilon) * self.mu + self.epsilon * 2 * tmp\n \n loss.append(tmp*tmp) # erreur sans 
régularisation\n #loss.append(tmp**2 + self.lamb *(np.linalg.norm(self.p[u]).sum()**2 + np.linalg.norm(self.q[i]).sum()**2))\n \n if ((it)%(self.nbIter*0.2) == 0) :\n print \"itération : \" , it\n print \"loss : \", np.mean(loss)\n print \"-------\"\n loss = []\n # evaluation\n if i % printevery == 0 or i == 1 or i == max_iter:\n X_est = np.dot(A, Y)\n q = masktest*X_est - rawMatrixTest\n q_train = masktrain*X_est - rawMatrixTrain\n print \"Iteration %d, Err %.05f, Err train %.05f\"%( i, (q*q).sum()/ masktest.sum(), (q_train*q_train).sum()/ masktest.sum() )\n\n \n def predict(self, triplet_test):\n pred = np.zeros(len(triplet_test))\n for ind,t in enumerate(triplet_test):\n pred[ind] = self.mu + self.bu[t[0]] + self.bi[t[1]] + self.p[t[0]].dot(self.q[t[1]])[0][0]\n return pred\n \n def score(self, triplet_test) :\n return ((self.predict(triplet_test) - np.array(triplet_test[:,2], float)) ** 2).mean()\n\n%%time\nk = 10\nepsilon = 7e-3\nnbIter = 20*len(arrayTrain)\nlamb = 0.2\nmodel = FactoMatriceBiais(k, epsilon=epsilon, nbIter=nbIter,lamb=lamb)\nmodel.fit(trainUsers, trainItems, arrayTrain)\nprint \"erreur en test:\", model.score(arrayTest)", "", "class tSNE():\n def __init__(self,perp, nIter, lr, moment, dim=2):\n self.perp = perp # entre 5 et 50\n self.nIter = nIter\n self.lr = lr\n self.moment = moment\n self.dim = dim \n def fit(self,data):\n nEx = np.shape(data)[0]\n # Matrice des distances de ||xi - xj||² #\n normx = np.sum(data**2,1)\n normx = np.reshape(normx, (1, nEx))\n distancex = normx + normx.T - 2 * data.dot(data.T)\n # Calcul des sigma ---------------------------------------------------------------#\n lperp = np.log2(self.perp)\n # initialisation bornes pour la recherche dichotomique #\n sup = np.ones((nEx,1)) * np.max(distancex)\n inf = np.zeros((nEx,1))\n self.sigma = (sup + inf) / 2.\n # recherche dichotomique #\n stop = False\n while not stop:\n # Calculer la matrice des p(i|j)\n self.pcond = np.exp(-distancex / (2. 
* (self.sigma**2)))\n self.pcond = self.pcond / np.sum(self.pcond - np.eye(nEx),1).reshape(nEx,1)\n # Calculer l'entropie de p(i|j)\n entropy = - np.sum(self.pcond * np.log2(self.pcond), 0)\n # Mise a jour des bornes\n # Si il faut augmenter sigma\n up = entropy < lperp \n inf[up,0] = self.sigma[up,0]\n # Si il faut baisser sigma\n down = entropy > lperp \n sup[down,0] = self.sigma[down,0]\n # Mise a jour de sigma et condition d'arrêt\n old = self.sigma\n self.sigma = ((sup + inf) / 2.)\n if np.max(np.abs(old - self.sigma)) < 1e-5:\n stop = True\n #print np.exp(entropy)\n #print self.sigma.T \n #--------------------------------------------------------------------------#\n #initialiser y\n self.embeddings = np.zeros((self.nIter+2, nEx, self.dim))\n self.embeddings[1] = np.random.randn(nEx, self.dim) * 1e-4\n #--------------------------------------------------------------------------#\n # p(ij)\n self.pij = (self.pcond + self.pcond.T) / (2.*nEx)\n np.fill_diagonal(self.pij, 0)\n # Descente de Gradient\n #loss = []\n for t in xrange(1,self.nIter+1):\n # Matrice des distances \n normy = np.sum((self.embeddings[t]**2),1)\n normy = np.reshape(normy, (1, nEx))\n distancey = normy + normy.T - 2 * self.embeddings[t].dot(self.embeddings[t].T)\n # q(ij)\n # self.qij = (distancey.sum() + nEx*(nEx-1)) / (1 + distancey)\n # np.fill_diagonal(self.qij, 0)\n self.qij = 1 / (1 + distancey)\n np.fill_diagonal(self.qij, 0)\n self.qij = self.qij / self.qij.sum()\n # Descente de gradient\n yt = self.embeddings[t]\n tmpgrad = 4 * ((self.pij - self.qij) / (1 + distancey)).reshape(nEx, nEx,1)\n for i in range(nEx):\n dy = (tmpgrad[i] * (yt[i]-yt)).sum(0)\n self.embeddings[t+1][i] = yt[i] - self.lr * dy + self.moment * (yt[i] - self.embeddings[t-1,i])\n #l = stats.entropy(self.qij, self.pij, 2).mean()\n #loss.append(l)\n #if (t % 100 == 0):\n # print t,l\n #if (t % 100 == 0):\n # print t\n\nX_ini = np.vstack([data.data[data.target==i]\n for i in range(10)])\ncols = 
np.hstack([data.target[data.target==i]\n for i in range(10)])", "%%time\nfrom sklearn import datasets\nfrom scipy import stats\ndata = datasets.load_digits()\n\nmodel = tSNE(10,500,1000,0)\nmodel.fit(X_ini)\n\npalette = np.array(sns.color_palette(\"hls\", 10))\nt = np.shape(model.embeddings)[0] -1\n\n# We create a scatter plot.\nf = plt.figure(figsize=(8, 8))\nax = plt.subplot(aspect='equal')\nsc = ax.scatter(model.embeddings[t,:,0], model.embeddings[t,:,1], lw=0, s=40,\n c=palette[cols.astype(np.int)])\nplt.xlim(-25, 25)\nplt.ylim(-25, 25)\nax.axis('off')\nax.axis('tight')\n\n#plt.plot(mod.embedding_[12][0],mod.embedding_[12][1], 'bv')\n \nplt.show()", "For reference, let's compare it with sklearn's TSNE", "from sklearn.manifold import TSNE\n\nmod = TSNE(random_state=1337)\n\n%%time\nX = mod.fit_transform(X_ini)\n\npalette = np.array(sns.color_palette(\"hls\", 10))\n\n# We create a scatter plot.\nf = plt.figure(figsize=(8, 8))\nax = plt.subplot(aspect='equal')\nsc = ax.scatter(X[:,0], X[:,1], lw=0, s=40,\n c=palette[cols.astype(np.int)])\nplt.xlim(-25, 25)\nplt.ylim(-25, 25)\nax.axis('off')\nax.axis('tight')\n\n#plt.plot(mod.embedding_[12][0],mod.embedding_[12][1], 'bv')\n \nplt.show()", "It produces similar results, albeit faster, as expected." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pylablanche/MillionSong
Exploration_of_data_in_MillionMusicSubset.ipynb
mit
[ "Required imports", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport sqlite3\nimport h5py as h5\n%matplotlib inline\n\nplt.rcParams['figure.figsize'] = (8,6)\nsns.set_palette('Dark2')\nsns.set_style('whitegrid')\n\npath_to_data = '../MillionSongSubset/'", "Reading SQL tables\nAlternatively, there is a demo available at https://labrosa.ee.columbia.edu/millionsong/sites/default/files/tutorial1.py.txt that was made specifically for reading these files", "con_simi = sqlite3.connect(path_to_data+'AdditionalFiles/subset_artist_similarity.db')\ncon_term = sqlite3.connect(path_to_data+'AdditionalFiles/subset_artist_term.db')\ncon_meta = sqlite3.connect(path_to_data+'AdditionalFiles/subset_track_metadata.db')\n\ncur_simi = con_simi.cursor()\ncur_term = con_term.cursor()\ncur_meta = con_meta.cursor()", "First we need to find out the table names in each of our files:", "# subset_artist_similarity.db\nres = con_simi.execute(\"SELECT name FROM sqlite_master WHERE type='table';\")\nfor name in res:\n print(name[0])\n\n# subset_artist_term\nres = con_term.execute(\"SELECT name FROM sqlite_master WHERE type='table';\")\nfor name in res:\n print(name[0])\n\n# subset_track_metadata\nres = con_meta.execute(\"SELECT name FROM sqlite_master WHERE type='table';\")\nfor name in res:\n print(name[0])", "Exploring the tables", "songs = pd.read_sql_query('SELECT * FROM songs WHERE year!=0',con_meta)\n\nsongs.head(5)", "Histogram of artist_hotttnesss", "songs.artist_hotttnesss.hist(bins=np.linspace(0.0,1.0,41));\nplt.xlabel('Artist Hotness')", "Scatter plots of artist_hotttnesss vs year", "fig, ax = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True,\n figsize=(15,8))\n\nax[0].scatter(songs.year, songs.artist_hotttnesss, marker='.')\n\nax[1].hexbin(songs.year, songs.artist_hotttnesss, cmap='viridis', gridsize=41, mincnt=1.0)\n\nplt.subplots_adjust(wspace=0.02);\n\n", "Scatter plots of artist_familiarity vs year compared to 
artist_hotttnesss vs year", "fig, ax = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True,\n figsize=(15,12))\n\nax[0,0].scatter(songs.year, songs.artist_familiarity, marker='.')\nax[0,1].hexbin(songs.year, songs.artist_familiarity, cmap='viridis', gridsize=41, mincnt=1.0)\n\nax[1,0].scatter(songs.year, songs.artist_hotttnesss, marker='.')\nax[1,1].hexbin(songs.year, songs.artist_hotttnesss, cmap='viridis', gridsize=41, mincnt=1.0)\nax[-1,-1].set_xlim(1920,songs.year.max());\nplt.subplots_adjust(wspace=0.02, hspace=0.05)", "Artist_hotttnesss vs artist familiarity", "fig, ax = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True,\n figsize=(15,8))\n\nax[0].scatter(songs.artist_familiarity, songs.artist_hotttnesss, marker='.')\nax[1].hexbin(songs.artist_familiarity, songs.artist_hotttnesss, cmap='viridis', gridsize=51, mincnt=1.0)\n", "Artist_hotttnesss vs artist familiarity with a linear fit", "plt.subplots_adjust(wspace=0.02);\n### Artist_hotttnesss vs artist familiarity\nsns.lmplot(data=songs, x='artist_familiarity', y='artist_hotttnesss',\n markers='.', size=10);", "Artist_familiarity compared to artist_hotttnesss over time", "tmp = songs.groupby('year').mean()\ntmp[['artist_familiarity','artist_hotttnesss']].plot();", "Reading HDF5 files", "with pd.HDFStore(path_to_data+'AdditionalFiles/subset_msd_summary_file.h5') as store:\n print(store)\n analysis_summary = store.select('analysis/songs')\n metadata_summary = store.select('metadata/songs')\n musicbrainz_summary = store.select('musicbrainz/songs') \n\nanalysis_summary.head()\n\nmetadata_summary.head()\n\nmusicbrainz_summary.head()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ryan-leung/PHYS4650_Python_Tutorial
notebooks/Jan2018/python-syntax.ipynb
bsd-3-clause
[ "Python Syntax\nIntroduction\nIn this tutorial notebook, you will learn how to do programming in python and Jupyter (formerly named ipython). \nThe syntax of python is considered \"clean\" compared to other languages. \nOpen a new python notebook and try to copy and paste the following code to play with. The following cells show examples of working python code. \nIntroduction to ipython/jupyter\nThe Jupyter notebook has been getting more attention in the natural sciences in the past few years, and its development has made it more advanced and stable. Here is an article about it: \nhttp://www.nature.com/news/interactive-notebooks-sharing-the-code-1.16261\nBasic operations:\n\nClick on the cell to select it.\nPress SHIFT+ENTER on your keyboard or press the play button (<button class='btn btn-default fa-step-forward fa'><span class=\"toolbar-btn-label\">Run</span></button>) in the toolbar above.\nOpen new cells using the plus button (<button class='btn btn-default fa-plus fa' title=\"Run\"></button>)\n\nLet us print a Hello World! statement\n\nWrite\npython\nprint(\"Hello World!\") \ninto the cell below:\nPress SHIFT+ENTER on your keyboard!\n\nHello World! in other languages\nFortran\nfortran\n PROGRAM HELLO\n WRITE (*,100)\n STOP\n100 FORMAT (' Hello World! ' /)\n END\nLisp\nlisp\n(print \"Hello World!\")\nC++\n```cpp\n#include <iostream.h>\nmain()\n{\n cout << \"Hello World!\" << endl;\n return 0;\n}\n```\nJava\njava\nclass HelloWorld {\n static public void main( String args[] ) {\n System.out.println( \"Hello World!\" );\n }\n}\nMatlab\noctave\ndisp('Hello World!');\nJavaScript\njavascript\ndocument.write('Hello World!')\nScreen Output\nTo enter more than one line in a cell, press Enter. The print function will print all the objects separated by a comma (,) on the same line; the output will have a space between the items. 
For example, print \"Hello\", \"World!\" will have a space between Hello and World!.", "print \"Hello\", \"World!\"\n\nprint \"Tips 3: Use \\ to escape characters like \\\"\"\nprint \"Tips 4: Use \\\\n \\n to make a newline character\"\nprint '''Tips 5: Use three \\' to \nmake \nmultiple \nlines\n''' ", "The magic command (ipython specific)\nAny command that starts with % is a magic command in the ipython notebook. These % commands can only be used in an ipython instance. A full list of magic commands can be found here: \nhttp://ipython.readthedocs.io/en/stable/interactive/magics.html. \nThese commands are particularly useful in developing and debugging your program.\nImporting packages and libraries\nPython is rich in libraries. Use an expression like\npython\nimport xxxxxxx\nor\npython\nfrom xxxxxxx import yyyyyy\nto import a library, provided that you know the name of your library xxxxxxx and the objects yyyyyy in the library xxxxxxx.\nPackages are not loaded at startup; you need to import them before using them.\nObject-oriented programming\nExample: Time module", "time.sleep(0.5);\nprint \"Too bad\"\n\nimport time\ntime.sleep(0.5);\nprint \"Now it works\"\n\nprint \"We delete the time object to unload it from memory\"\ndel time\ntime.sleep(0.5);", "Markdown\nSometimes you may need to write down some notes for yourself. The Jupyter notebook provides convenient ways for you to describe the notes in the Markdown mark-up language. To learn this language, you can look at the following page: https://guides.github.com/features/mastering-markdown/ . GitHub utilizes Markdown extensively. 
\nTo change the purpose of the cell, you can look up a widget like this: \n<div style=\"max-width: 100px;\"><select id=\"cell_type\" class=\"form-control select-xs\"><option value=\"code\">Code</option><option value=\"markdown\">Markdown</option><option value=\"raw\">Raw NBConvert</option><option value=\"heading\">Heading</option><option value=\"multiselect\" disabled=\"disabled\" style=\"display: none;\">-</option></select></div>\n\nand select Markdown" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
h-mayorquin/camp_india_2016
tutorials/rate models/Ratemodel1.ipynb
mit
[ "%matplotlib inline\nimport math\nfrom scipy.integrate import odeint\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.decomposition import PCA", "Rate models\nWe first start with a simple ratemodel of a neuron.\n$ \\tau \\dot{w} = -w + i $ where $i$ is input", "def eqs(y,t):\n tau=10\n dydt=(-y + I(t))/tau\n return dydt\n\n# def I(t):\n# return 0.1\n\n# def I(t):\n# if t>500 and t<700:\n# return 0.1\n# return 0\n\ndef I(t):\n T=200\n return 0.1*np.sin(2*np.pi*t/T)+0.1\n\n\ny0=[0.3]\ntmax=0.5\nt = np.linspace(0, tmax*1000, tmax*1000+1)\nsol = odeint(eqs, y0, t , hmax=1)\nplt.plot(t, sol[:, 0], 'b',label=\"Output\")\nplt.plot(t, map(I,t), 'r',label=\"Input\")\nplt.legend(loc='best')\nplt.xlabel('t')\nplt.grid()\nplt.show()\n", "Lets make a neuron with an autoapse, a synapse with itself\n$ \\tau \\dot{y} = -y + Wy + i $\n\nPlay with the W value to find what the behaviour is for different W. What point does the behaviour change? \nHow does the neuron respond to different input values.", "def eqs(y,t):\n tau=5\n W=1.\n dydt=(-y + W * y + I(t))/tau\n return dydt\n\ndef I(t):\n return 0\n\n# def I(t):\n# if t>500 and t<700:\n# return 0.1\n# return 0\n\n# def I(t):\n# T=200\n# return 0.1*np.sin(2*np.pi*t/T) +0.1\n\n\ny0=[2]\ntmax=1\nt = np.linspace(0, tmax*1000, tmax*1000+1)\nsol = odeint(eqs, y0, t , hmax=1)\nplt.plot(t, sol[:, 0], 'b',label=\"Output\")\nplt.plot(t, map(I,t), 'r',label=\"Input\")\nplt.legend(loc='best')\nplt.xlabel('t')\nplt.grid()\nplt.show()\n", "Mutually Inhibitory Pair\n$ \\tau \\dot{x_1} = -\\bar{x_1} - \\beta x_2 +b_1 $\n$ \\tau \\dot{x_2} = -\\bar{x_2} - \\beta x_1 +b_2 $\nwhich can be written in the vector form as:\n$ \\tau \\dot{\\bar{y}} = -\\bar{y} + W\\bar{y} + \\bar{b} $\nSee the effect of changing beta and b", "def eqs(y,t):\n tau=np.array([2,4])\n b=np.array([0.2,0.1])\n beta=0.5\n W=np.array([[0, -beta],[-beta,0]])\n dydt=(-y + W.dot(y) + b)/tau \n return dydt\n\ny0=np.array([0.1,0.3])\n# t = np.linspace(0, 20, 
1001)\ntmax=0.1\nt = np.linspace(0, tmax*1000, tmax*1000+1)\n\nsol = odeint(eqs, y0, t, hmax=1)\nplt.plot(t, sol[:, 0], 'b') #x1\nplt.plot(t, sol[:, 1], 'g') #x2\nplt.show()", "Transforming \n$z_1=x_1+x_2$ \nand \n$z_2=x_1-x_2$ \nwe get \n$ \\tau \\dot{z_1} = -\\bar{z_1} - \\beta z_1 + (b_1+b_2) $\n$ \\tau \\dot{z_2} = -\\bar{z_2} + \\beta z_2 + (b_1-b_2) $", "plt.plot(t, sol[:, 0] + sol[:, 1], 'b') #x1+x2\nplt.plot(t, sol[:, 0] - sol[:, 1], 'g') #x1-x2\nplt.show() \n\n# The Wilson-Cowan Model\n# Set the fixed parameters\nglobal a,b,c,d,q,p\na=15\nb=18\nc=16\nd=15\n# the wilson-cowan equations\n\n\ndef eqs(y,t):\n E,I=y\n E_prime=-E+f(a*E-b*I+p(t))\n I_prime=-I+f(c*E-d*I+q(t))\n dydt=[E_prime,I_prime]\n return dydt\n\ndef f(x):\n if x>0:\n return x\n# return np.tanh(x)\n return 0\n \n\np = lambda t : 10\nq = lambda t : 5 \n\ny0=[0.2,0.9]\nt = np.linspace(0, 10, 1001)\nsol = odeint(eqs, y0, t)\n\nimport matplotlib.pyplot as plt\nplt.plot(t, sol[:, 0], 'b', label='E')\nplt.plot(t, sol[:, 1], 'g', label='I')\nplt.legend(loc='best')\nplt.xlabel('t')\nplt.grid()\nplt.show()", "Misha's paper\nParadoxical Effects of External Modulation of Inhibitory Interneurons\nMisha V. Tsodyks, William E. Skaggs, Terrence J. Sejnowski, and Bruce L. 
McNaughton \nTry to change the parameters: tau s and J to see what happens.\nLook for the effect : \"changes in external input to inhibitory interneurons can cause their activity to be modulated in the direction opposite to the change in the input if the intrinsic excitatory connections are sufficiently strong.\"", "def eqs(y,t):\n tau_e=20.0\n tau_i=10.0\n Jee=40.0\n Jei=25.0\n Jie=30.0\n Jii=15.0\n E,I = y\n dEdt = (-E+g_e(Jee *E - Jei * I + e(t)))/tau_e\n dIdt = (-I+g_i(Jie *E - Jii * I + i(t)))/tau_i\n dydt=[dEdt, dIdt]\n return dydt\n\nT=120\ndef e(t):\n return 0.1\ndef i(t):\n return 0.1*np.sin(2*np.pi*t/T)\ndef g_e(x):\n if x>0:\n return np.tanh(x)\n return 0\ndef g_i(x):\n if x>0:\n return np.tanh(x)\n return 0\n\n\ny0=np.array([0.,0.])\ntmax=0.5\nt = np.linspace(0, tmax*1000, tmax*1000+1)\nsol = odeint(eqs, y0, t)\n\nf, axarr = plt.subplots(2, sharex=True,figsize=(8,8))\naxarr[0].plot(t, sol[:, 0], 'b')\naxarr[0].plot(t, sol[:, 1], 'g')\naxarr[1].plot(t, i(t)*0.1, 'r')\nplt.show()", "Network\n$ \\tau \\dot{\\bar{r}} = -\\bar{r} + W\\bar{r} + \\bar{i} $\nIn the code below try to see:\n\nChange n_components in the PCA at 0.95 (find all the components which explain 95% of the variance)\nEffect of changing N\nEffect of changing of changing syn_stregth\nEffect of changing of changing input strength\nEffect of changing tau\nEffect of changing T\n\nBonus: Compare the eigenvectors of W with the Princpal companents returned by PCA\nBonus 2: effect off changing the random matrix from gaussian to something else", "N=200\nsyn_strength=0.1\ninput_strength=0.01\nW=np.random.randn(N,N)/N + 1.0/N*(syn_strength)\nb=np.random.rand(N)*input_strength\ntau=np.random.rand(N)*10\nT=100\ndef eqs(y,t):\n dydt=(-y + W.dot(y) + I(t))/tau \n return dydt\n\n\ndef I(t):\n return b*np.sin(2*np.pi*t/T)/N\n\nplt.figure(figsize=[15,15])\nv=np.max(np.abs(W).flatten())\nplt.imshow(W,interpolation='none',cmap='coolwarm', vmin=-v, 
vmax=v)\nplt.colorbar()\nplt.show()\n\ny0=np.random.rand(N)\ntmax=0.4\nt = np.linspace(0, tmax*1000, tmax*1000+1)\n\nsol = odeint(eqs, y0, t, hmax=1)\nplt.plot(t, sol,) #x1\nplt.show()\n\npca=PCA(n_components=N)\nsol_pca=pca.fit_transform(sol)\n\nplt.plot(t, sol_pca, ) #x1\nplt.show()\n\nplt.imshow(np.abs(sol_pca.T[0:10,:]),aspect=10,interpolation='none') #x1\nplt.colorbar()\n# plt.plot(t, sol[:, 1], 'g') #x2\nplt.show()", "Now lets make the connectivity matrix sparse to emulate more realistic networks.\nCheck the effect of changing sparsity parameter.", "N=200\nsyn_strength=-0.01\ninput_strength=0.01\nsparsity=0.1\nW=(np.random.randn(N,N)/N + 1.0/N*(syn_strength))*(np.random.random([N,N])<sparsity)\nb=np.random.rand(N)*input_strength\ntau=np.random.rand(N)*20\nT=100\ndef eqs(y,t):\n dydt=(-y + W.dot(y) + I(t))/tau \n return dydt\n\n\ndef I(t):\n return b*np.sin(2*np.pi*t/T)/N\n\n\ny0=np.random.rand(N)\ntmax=0.1\nt = np.linspace(0, tmax*1000, tmax*1000+1)\n\nsol = odeint(eqs, y0, t, hmax=1)\nplt.plot(t, sol,) #x1\nplt.show()\n\npca=PCA()\nsol_pca=pca.fit_transform(sol)\n\nplt.plot(t, sol_pca, ) #x1\nplt.show()\n\nplt.imshow(np.abs(sol_pca.T[0:10,:]),aspect=10,interpolation='none') #x1\nplt.colorbar()\n# plt.plot(t, sol[:, 1], 'g') #x2\nplt.show()\n\nplt.figure(figsize=[20,20])\nv=np.max(np.abs(W).flatten())\nplt.imshow(W,interpolation='none',cmap='coolwarm', vmin=-v, vmax=v)\nplt.colorbar()\nplt.show()", "Now lets make the spilt of the neurons into excitatory and inhibitory neurons explicit.", "N=200\nsyn_strength=0.1\ninput_strength=0.01\nsparsity=0.1\nW=(np.random.randn(N,N)/N + 1.0/N*(syn_strength))*(np.random.random([N,N])<sparsity)\ne_frac=0.8\nfor i,row in enumerate(W):\n W[i,:]=np.abs(row)*((np.random.rand()<e_frac)*2-1)\n \n# W=W.T\nb=np.random.rand(N)*input_strength\ntau=np.random.rand(N)*20\nT=100\ndef eqs(y,t):\n dydt=(-y + W.dot(y) + I(t))/tau \n return dydt\n\n\ndef I(t):\n return b*np.sin(2*np.pi*t/T)/N\n\n\ny0=np.random.rand(N)\ntmax=0.1\nt = 
np.linspace(0, tmax*1000, tmax*1000+1)\n\nsol = odeint(eqs, y0, t, hmax=1)\nplt.plot(t, sol,) #x1\nplt.show()\n\npca=PCA()\nsol_pca=pca.fit_transform(sol)\n\nplt.plot(t, sol_pca, ) #x1\nplt.show()\n\nplt.imshow(np.abs(sol_pca.T[0:10,:]),aspect=10,interpolation='none') #x1\nplt.colorbar()\n# plt.plot(t, sol[:, 1], 'g') #x2\nplt.show()\n\nplt.figure(figsize=[20,20])\nv=np.max(np.abs(W).flatten())\nplt.imshow(W,interpolation='none',cmap='coolwarm', vmin=-v, vmax=v)\nplt.colorbar()\nplt.show()", "Lets add a thresholding function to ensure the firing rate of neurons isn't negative", "N=200\nsyn_strength=0.1\ninput_strength=0.01\nsparsity=0.1\nW=(np.random.randn(N,N)/N + 1.0/N*(syn_strength))*(np.random.random([N,N])<sparsity)\ne_frac=0.8\nfor i,row in enumerate(W):\n W[i,:]=np.abs(row)*((np.random.rand()<e_frac)*2-1)\n\nb=np.random.rand(N)*input_strength\ntau=np.random.rand(N)*20\nT=100\ndef eqs(y,t):\n dydt=(-y + map(f,W.dot(y) + I(t)))/tau \n return dydt\n\ndef f(x):\n if x < 0:\n return 0\n return np.tanh(x)\n\ndef I(t):\n return b*np.sin(2*np.pi*t/T)/N\n\n\ny0=np.random.rand(N)\ntmax=0.1\nt = np.linspace(0, tmax*1000, tmax*1000+1)\n\nsol = odeint(eqs, y0, t, hmax=1)\nplt.plot(t, sol,) #x1\nplt.show()\n\npca=PCA()\nsol_pca=pca.fit_transform(sol)\n\nplt.plot(t, sol_pca, ) #x1\nplt.show()\n\nplt.imshow(np.abs(sol_pca.T[0:10,:]),aspect=10,interpolation='none') #x1\nplt.colorbar()\n# plt.plot(t, sol[:, 1], 'g') #x2\nplt.show()\n\nplt.figure(figsize=[20,20])\nv=np.max(np.abs(W).flatten())\nplt.imshow(W,interpolation='none',cmap='coolwarm', vmin=-v, vmax=v)\nplt.colorbar()\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rasbt/pattern_classification
parameter_estimation_techniques/max_likelihood_est_distributions.ipynb
gpl-3.0
[ "Sebastian Raschka \nlast updated: 05/07/2014 \n\nLink to this IPython Notebook on GitHub \nLink to the GitHub repository\n\n<hr>\nI am really looking forward to your comments and suggestions to improve and extend this tutorial! Just send me a quick note \nvia Twitter: &#64;rasbt\nor Email: bluewoodtree@gmail.com\n<hr>\n\nHow to compute Maximum Likelihood Estimates (MLE) for different distributions\n<a name ='sections'></a>\n<br>\n<br>\nSections\n\nIntroduction\nGeneral Concept\nMultivariate Gaussian Distribution\nUnivariate Rayleigh Distribution\nUnivariate Poisson Distribution\n\n<br>\n<br>\n<a name='introduction'></a>\nIntroduction\nThe Maximum Likelihood Estimation (MLE) is a technique that uses the training data to estimate parameter values for a particular distribution. A popular example would be to estimate the mean and variance of a Normal distribution by computing it from the training data.\nMLE can be used on pattern classification tasks under the condition that the model of the distributions (and the number of parameters that we want to estimate) is known.\nAn introduction about how to use the Maximum Likelihood Estimate for pattern classification task can be found in an earlier article\nTo summarize the problem: Using MLE, we want to estimate the values of the parameters for a given distribution. For example, in a pattern classification task with a Bayes classifier and normal distributed class-conditional densities, those parameters would be the mean and variance ( $p(\\pmb x \\; | \\; \\omega_i) \\sim N(\\pmb\\mu, \\pmb\\sigma^2)$ ). \n<a name='general_concept'></a>\n<br>\n<br>\nGeneral Concept\nFor the Maximum Likelihood Estimate (MLE), we assume that we have a data set of i.i.d. 
(independent and identically distributed) samples \n$D = \\left{ \\pmb x_1, \\pmb x_2,..., \\pmb x_n \\right} $.\n<br>\n<br>\nLikelihood\nThe probability of observing the data set $D = \\left{ \\pmb x_1, \\pmb x_2,..., \\pmb x_n \\right} $ can be pictured as probability to observe a particular sequence of patterns,\nwhere the probability of observing a particular patterns depends on $\\pmb \\theta$, the parameters the underlying (class-conditional) distribution. In order to apply MLE, we have to make the assumption that the samples are i.i.d. (independent and identically distributed).\n<br>\n<br>\n$p(D\\; | \\; \\pmb \\theta\\;) \\\n= p(\\pmb x_1 \\; | \\; \\pmb \\theta\\;)\\; \\cdot \\; p(\\pmb x_2 \\; | \\;\\pmb \\theta\\;) \\; \\cdot \\;... \\; p(\\pmb x_n \\; | \\; \\pmb \\theta\\;) \\ \n= \\prod_{k=1}^{n} \\; p(\\pmb x_k \\pmb \\; | \\; \\pmb \\theta \\;)$\n<br>\nWhere $\\pmb\\theta$ is the parameter vector, that contains the parameters for a particular distribution that we want to estimate.\nand $p(D\\; | \\; \\pmb \\theta\\;)$ is also called the likelihood of $\\pmb\\ \\theta$.\nFor convenience, we take the natural logarithm in order to compute the so-called log-likelihood: \n$p(D|\\theta) = \\prod_{k=1}^{n} p(x_k|\\theta) \\\n\\Rightarrow l(\\theta) = \\sum_{k=1}^{n} ln \\; p(x_k|\\theta)$ \nGoal:\nCompute $\\hat{\\pmb \\theta}$, which are the values that maximize $p(D\\; | \\; \\pmb \\theta\\;)$.\nIn pattern classification tasks we have multiple classes $\\omega_j$ with independent class-conditional densities $p(\\pmb x | \\omega_j)$, which are dependent on the parameters of the distribution $p(\\pmb x | \\omega_j, \\pmb \\theta_j)$\nApproach:\nIn order to maximize $p(D\\; | \\; \\pmb \\theta\\;)$, we can apply the rules of differential calculus for every parameters to the log-likelihoods:\n$\\nabla_{\\pmb \\theta} \\equiv \\begin{bmatrix}\n\\frac{\\partial \\; }{\\partial \\; \\theta_1} \\\n\\frac{\\partial \\; }{\\partial \\; \\theta_2} 
\\\n...\\\n\\frac{\\partial \\; }{\\partial \\; \\theta_p}\\end{bmatrix}$\nWhich as to be done for every class $\\omega_j$ separately, and for our convenience, let us drop the class labels j for now, so that for a class $\\omega_j$:\n$\\nabla_{\\pmb \\theta} l(\\pmb\\theta) \\equiv \\begin{bmatrix}\n\\frac{\\partial \\; L(\\pmb\\theta)}{\\partial \\; \\theta_1} \\\n\\frac{\\partial \\; L(\\pmb\\theta)}{\\partial \\; \\theta_2} \\\n...\\\n\\frac{\\partial \\; L(\\pmb\\theta)}{\\partial \\; \\theta_p}\\end{bmatrix}$\n$= \\begin{bmatrix}\n0 \\\n0 \\\n...\\\n0\\end{bmatrix}$\n<a name='multi_gauss'></a>\n<br>\n<br>\nMultivariate Gaussian Distribution\nProbability Density Function:\n$p(\\pmb x) \\sim N(\\pmb \\mu|\\Sigma)$\n$p(\\pmb x) \\sim \\frac{1}{(2\\pi)^{d/2} \\; |\\Sigma|^{1/2}} exp \\bigg[ -\\frac{1}{2}(\\pmb x - \\pmb \\mu)^t \\Sigma^{-1}(\\pmb x - \\pmb \\mu) \\bigg]$\n<hr>\n\nlikelihood of $\\pmb\\ \\theta$: \n$\\Rightarrow p(D\\; | \\; \\pmb \\theta\\;) = \\prod_{k=1}^{n} \\; p(\\pmb x_k \\pmb \\; | \\; \\pmb \\theta \\;)\\\n\\Rightarrow p(D\\; | \\; \\pmb \\theta\\;) = \\prod_{k=1}^{n} \\; \\frac{1}{(2\\pi)^{d/2} \\; |\\Sigma|^{1/2}} exp \\bigg[ -\\frac{1}{2}(\\pmb x - \\pmb \\mu)^t \\Sigma^{-1}(\\pmb x - \\pmb \\mu) \\bigg]$\nlog-likelihood of $\\pmb\\ \\theta$ (natural logarithm):\n$l(\\pmb\\theta) = \\sum\\limits_{k=1}^{n} - \\frac{1}{2}(\\pmb x - \\pmb \\mu)^t \\pmb \\Sigma^{-1} \\; (\\pmb x - \\pmb \\mu) - \\frac{d}{2} \\; ln \\; 2\\pi - \\frac{1}{2} \\;ln \\; |\\pmb\\Sigma|$\nThe 2 parameters that we want to estimate are $\\pmb \\mu_i$ and $\\pmb \\Sigma_i$, are \n$\\pmb \\theta_i = \\bigg[ \\begin{array}{c}\n\\ \\theta_{i1} \\\n\\ \\theta_{i2} \\\n\\end{array} \\bigg]=\n\\bigg[ \\begin{array}{c}\n\\pmb \\mu_i \\\n\\pmb \\Sigma_i \\\n\\end{array} \\bigg]$ \n<br>\n<br>\nMaximum Likelihood Estimate (MLE):\nIn order to obtain the MLE $\\boldsymbol{\\hat{\\theta}}$, we maximize $l (\\pmb \\theta)$, which can be done via differentiation:\nwith 
\n$\\nabla_{\\pmb \\theta} \\equiv \\begin{bmatrix}\n\\frac{\\partial \\; }{\\partial \\; \\theta_1} \\ \n\\frac{\\partial \\; }{\\partial \\; \\theta_2}\n\\end{bmatrix} = \\begin{bmatrix} \n\\frac{\\partial \\; }{\\partial \\; \\pmb \\mu} \\ \n\\frac{\\partial \\; }{\\partial \\; \\pmb \\sigma}\n\\end{bmatrix}$\n$\\Rightarrow \\nabla_{\\pmb \\theta} l(\\pmb\\theta) = \\sum\\limits_{k=1}^n \\nabla_{\\pmb \\theta} \\;ln\\; p(\\pmb x| \\pmb \\theta) = 0 $\n1st parameter $\\theta_1 = \\pmb \\mu$\n${\\hat{\\pmb\\mu}} = \\frac{1}{n} \\sum\\limits_{k=1}^{n} \\pmb x_k$\n2nd parameter $\\theta_2 = \\Sigma$\n${\\hat{\\pmb\\Sigma}} = \\frac{1}{n} \\sum\\limits_{k=1}^{n} (\\pmb x_k - \\hat{\\mu})(\\pmb x_k - \\hat{\\mu})^t$\nCode for multivariate Gaussian MLE", "# loading packages\n\n%pylab inline\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\n\n\ndef mle_gauss_mu(samples):\n \"\"\"\n Calculates the Maximum Likelihood Estimate for a mean vector\n from a multivariate Gaussian distribution.\n \n Keyword arguments:\n samples (numpy array): Training samples for the MLE.\n Every sample point represents a row; dimensions by column.\n \n Returns a column vector (d x 1 numpy.array) as the MLE mean estimate.\n \n \"\"\"\n dimensions = samples.shape[1]\n mu_est = np.zeros((dimensions,1))\n for dim in range(dimensions):\n col_mean = sum(samples[:,dim])/len(samples[:,dim])\n mu_est[dim] = col_mean\n return mu_est\n\ndef mle_gausscov(samples, mu_est):\n \"\"\"\n Calculates the Maximum Likelihood Estimate for the covariance matrix.\n \n Keyword Arguments:\n x_samples: np.array of the samples for 1 class, n x d dimensional \n mu_est: np.array of the mean MLE, d x 1 dimensional\n \n Returns the MLE for the covariance matrix as d x d numpy array.\n \n \"\"\"\n dimensions = samples.shape[1]\n assert (dimensions == mu_est.shape[0]), \"columns of sample set and rows of'\\\n 'mu vector (i.e., dimensions) must be 
equal.\"\n cov_est = np.zeros((dimensions,dimensions))\n for x_vec in samples:\n x_vec = x_vec.reshape(dimensions,1)\n cov_est += (x_vec - mu_est).dot((x_vec - mu_est).T)\n return cov_est / len(samples)", "Sample training data for MLE\n$\\pmb \\mu = \\Bigg[ \\begin{array}{c}\n\\ 0 \\\n\\ 0\n\\end{array} \\Bigg]\\;, \\quad \\quad \n\\pmb \\Sigma = \\Bigg[ \\begin{array}{ccc}\n\\ 1 & 0 & 0 \\\n\\ 0 & 1 & 0\n\\end{array} \\Bigg] \\quad$", "# true parameters and 100 3D training data points\n\nmu_vec = np.array([[0],[0]])\ncov_mat = np.eye(2)\n\nmulti_gauss = np.random.multivariate_normal(mu_vec.ravel(), cov_mat, 100)\nprint('Dimensions: {}x{}'.format(multi_gauss.shape[0], multi_gauss.shape[1]))", "Estimate parameters via MLE", "import prettytable\n\n# mean estimate\nmu_mle = mle_gauss_mu(multi_gauss)\nmu_mle_comp = prettytable.PrettyTable([\"mu\", \"true_param\", \"MLE_param\"])\nmu_mle_comp.add_row([\"\",mu_vec, mu_mle])\nprint(mu_mle_comp)\n\n# covariance estimate\ncov_mle = mle_gausscov(multi_gauss, mu_mle)\nmle_gausscov_comp = prettytable.PrettyTable([\"covariance\", \"true_param\", \"MLE_param\"])\nmle_gausscov_comp.add_row([\"\",cov_mat, cov_mle])\nprint(mle_gausscov_comp)\n\n### Implementing the Multivariate Gaussian Density Function\n\ndef pdf_multivariate_gauss(x, mu, cov):\n \"\"\"\n Caculate the multivariate normal density (pdf)\n\n Keyword arguments:\n x = numpy array of a \"d x 1\" sample vector\n mu = numpy array of a \"d x 1\" mean vector\n cov = \"numpy array of a d x d\" covariance matrix\n \n \"\"\"\n assert(mu.shape[0] > mu.shape[1]), 'mu must be a row vector'\n assert(x.shape[0] > x.shape[1]), 'x must be a row vector'\n assert(cov.shape[0] == cov.shape[1]), 'covariance matrix must be square'\n assert(mu.shape[0] == cov.shape[0]), 'cov_mat and mu_vec must have the same dimensions'\n assert(mu.shape[0] == x.shape[0]), 'mu and x must have the same dimensions'\n part1 = 1 / ( ((2* np.pi)**(len(mu)/2)) * (np.linalg.det(cov)**(1/2)) )\n part2 = (-1/2) * 
((x-mu).T.dot(np.linalg.inv(cov))).dot((x-mu))\n return float(part1 * np.exp(part2))\n\n# Plot Probability Density Function\nfrom matplotlib import pyplot as plt\n\nfig = plt.figure(figsize=(9, 9))\nax = fig.gca(projection='3d')\n\nX = np.linspace(-5, 5, 100)\nY = np.linspace(-5, 5, 100)\nX,Y = np.meshgrid(X,Y)\n\nZ_mle = []\nfor i,j in zip(X.ravel(),Y.ravel()):\n Z_mle.append(pdf_multivariate_gauss(np.array([[i],[j]]), mu_mle, cov_mle))\nZ_mle = np.asarray(Z_mle).reshape(int(len(Z_mle)**0.5), int(len(Z_mle)**0.5)) \nsurf = ax.plot_wireframe(X, Y, Z_mle, color='red', rstride=2, cstride=2, alpha=0.3, label='MLE')\n\nZ_true = []\nfor i,j in zip(X.ravel(),Y.ravel()):\n Z_true.append(pdf_multivariate_gauss(np.array([[i],[j]]), mu_vec, cov_mat))\nZ_true = np.asarray(Z_true).reshape(int(len(Z_true)**0.5), int(len(Z_true)**0.5))\nsurf = ax.plot_wireframe(X, Y, Z_true, color='green', rstride=2, cstride=2, alpha=0.3, label='true param.')\n\nax.set_zlim(0, 0.2)\nax.zaxis.set_major_locator(plt.LinearLocator(10))\nax.zaxis.set_major_formatter(plt.FormatStrFormatter('%.02f'))\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel('p(x)')\nax.legend()\n\nplt.title('True vs. Predicted Gaussian densities')\n\nplt.show()", "<a name='uni_rayleigh'></a>\n<br>\n<br>\nUnivariate Rayleigh Distribution\nProbability Density Function\n$p(x|\\theta) = \\Bigg\\{ \\begin{array}{c}\n 2\\theta x e^{- \\theta x^2},\\quad \\quad x \\geq 0, \\\n 0,\\quad otherwise. 
\\\n \\end{array}$\n<hr>\n\nDerive a formula for the maximum likelihood estimate of $\\theta$ , i.e., $\\hat{{\\theta}}_{mle}$.\n$p(D|\\theta) = \\prod_{k=1}^{n} p(x_k|\\theta) $\n$= \\prod_{k=1}^{n} 2 \\theta x_ke^{- \\theta x_{k}^{2}} $\nTaking the natural logarithm to get the log-likelihood:\n$\\Rightarrow L(\\theta) = \\sum_{k=1}^{n} ln \\; p(x_k|\\theta)$\n$= \\sum_{k=1}^{n} ln \\bigg( 2 \\theta x_ke^{- \\theta x_{k}^{2}} \\bigg) \\ \n= \\sum_{k=1}^{n} ln (2 \\theta x_k) - ( \\theta x_{k}^{2})$\n<br>\nDifferentiating the log-likelihood:\n$\\Rightarrow \\frac{\\partial L}{\\partial (\\theta)} = \\frac{\\partial}{\\partial (\\theta)} \\sum_{k=1}^{n} ln (2 \\theta x_k) - ( \\theta x_{k}^{2})$\n$= \\sum_{k=1}^{n} \\frac{\\partial}{\\partial (\\theta)}ln (2 \\theta x_k) - ( \\theta x_{k}^{2})\\\n= \\sum_{k=1}^{n} \\frac{2x_k}{2\\theta x_k} - x_{k}^{2} \\\n= \\sum_{k=1}^{n} \\frac{1}{\\theta} - x_{k}^{2}$\nGetting the maximum for $p(D|\\theta)$\n$\\Rightarrow \\sum_{k=1}^{n} \\frac{1}{\\theta} - x_{k}^{2} = 0 \\\n\\sum_{k=1}^{n} \\frac{1}{\\theta} = \\sum_{k=1}^{n} x_{k}^{2}$\n$\\frac{n}{\\theta} = \\sum_{k=1}^{n} x_{k}^{2} \\\n\\frac{\\theta}{n} = \\frac{1}{\\sum_{k=1}^{n} x_{k}^{2}} \\\n\\theta = \\frac{n}{\\sum_{k=1}^{n} x_{k}^{2}}$\n<br>\n<br>\nCode for univariate Rayleigh MLE", "# loading packages\n\nimport numpy as np\nfrom matplotlib import pyplot as plt\n%pylab inline\n\ndef comp_theta_mle(d):\n \"\"\"\n Computes the Maximum Likelihood Estimate for a given 1D training\n dataset for a Rayleigh distribution.\n \n \"\"\"\n theta = len(d) / sum([x**2 for x in d])\n return theta \n\ndef likelihood_ray(x, theta):\n \"\"\"\n Computes the class-conditional probability for an univariate\n Rayleigh distribution\n \n \"\"\"\n return 2*theta*x*np.exp(-theta*(x**2))\n\ntraining_data = [12, 17, 20, 24, 25, 30, 32, 50]\n\ntheta = comp_theta_mle(training_data)\n\nprint(\"Theta MLE:\", theta)\n\n# Plot Probability Density Function\nfrom matplotlib import pyplot as 
plt\n\nx_range = np.arange(0, 150, 0.1)\ny_range = [likelihood_ray(x, theta) for x in x_range]\n\nplt.figure(figsize=(10,8))\nplt.plot(x_range, y_range, lw=2)\nplt.title('Probability density function for the Rayleigh distribution')\nplt.ylabel('p(x|theta)')\n\nftext = 'theta = {:.5f}'.format(theta)\nplt.figtext(.15,.8, ftext, fontsize=11, ha='left')\n\n\nplt.ylim([0,0.04])\nplt.xlim([0,120])\nplt.xlabel('random variable x')\n\nplt.show()", "<a name='uni_poisson'></a>\n<br>\n<br>\nUnivariate Poisson Distribution\nProbability Density Function\n$p(x|\\theta) = \\frac{e^{-\\theta}\\theta^{x_k}}{x_k!}$\n<hr>\n\nDerive a formula for the maximum likelihood estimate of $\\theta$ , i.e., $\\hat{{\\theta}}_{mle}$.\n$p(D|\\theta) = \\prod_{k=1}^{n} p(x_k|\\theta)$\n$= \\prod_{k=1}^{n}\n\\frac{e^{-\\theta}\\theta^{x_k}}{x_k!}$\nTaking the natural logarithm to get the log-likelihood:\n$L(\\theta) = ln \\; p(D|\\theta) = \\sum_{k=1}^{n} ln \\; p(x_k|\\theta)$\n$= \\sum_{k=1}^{n} ln \\bigg( \\frac{e^{-\\theta}\\theta^{x_k}}{x_k!} \\bigg)$\n$= \\sum_{k=1}^{n} ln(e^{-\\theta}\\theta^{x_k}) - ln({x_k!})$ (the term $ln(x_k!)$ does not depend on $\\theta$, so it vanishes in the derivative and can be dropped)\n$= \\sum_{k=1}^{n} ln(e^{-\\theta}\\theta^{x_k})$\n$= \\sum_{k=1}^{n} ln(e^{-\\theta}) + ln(\\theta^{x_k})$\n$= \\sum_{k=1}^{n} -\\theta + x_k \\; ln(\\theta)$\nDifferentiating the log-likelihood:\n$\\frac{\\partial \\; L(\\theta)}{\\partial \\; \\theta} = \\frac{\\partial \\; }{\\partial \\; \\theta} \\bigg( \\sum_{k=1}^{n} -\\theta + x_k \\; ln(\\theta)\\bigg)$\n$= \\sum_{k=1}^{n} \\frac{\\partial \\; }{\\partial \\; \\theta} \\bigg( -\\theta + x_k \\; ln(\\theta)\\bigg)$\n$= \\sum_{k=1}^{n} \\bigg( -1 + \\frac{x_k}{\\theta} \\bigg)$\nGetting the maximum for $p(D|\\theta)$\n$\\Rightarrow -n + \\sum_{k=1}^{n} x_k \\; \\cdot \\frac{1}{\\theta} = 0$\n$\\theta = \\frac{\\sum_{k=1}^{n} x_k }{n}$\n<br>\n<br>\nCode for univariate Poisson MLE", "def poisson_theta_mle(d):\n \"\"\"\n Computes the Maximum Likelihood Estimate for a given 1D 
training\n dataset from a Poisson distribution.\n \n \"\"\"\n return sum(d) / len(d)\n\nimport math\n\ndef likelihood_poisson(x, lam):\n \"\"\"\n Computes the class-conditional probability for an univariate\n Poisson distribution\n \n \"\"\"\n if x // 1 != x:\n likelihood = 0\n else:\n likelihood = math.e**(-lam) * lam**(x) / math.factorial(x)\n return likelihood\n\n# Drawing training data\n\nimport numpy as np\n\ntrue_param = 1.0\npoisson_data = np.random.poisson(lam=true_param, size=100)\n\nmle_poiss = poisson_theta_mle(poisson_data)\n\nprint('MLE:', mle_poiss)\n\n# Plot Probability Density Function\nfrom matplotlib import pyplot as plt\n \nx_range = np.arange(0, 5, 0.1)\ny_true = [likelihood_poisson(x, true_param) for x in x_range]\ny_mle = [likelihood_poisson(x, mle_poiss) for x in x_range]\n\nplt.figure(figsize=(10,8))\nplt.plot(x_range, y_true, lw=2, alpha=0.5, linestyle='--', label='true parameter ($\\lambda={}$)'.format(true_param))\nplt.plot(x_range, y_mle, lw=2, alpha=0.5, label='MLE ($\\lambda={}$)'.format(mle_poiss))\nplt.title('Poisson probability density function for the true and estimated parameters')\nplt.ylabel('p(x|theta)')\nplt.xlim([-1,5])\nplt.xlabel('random variable x')\nplt.legend()\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
himanshuy/titanic_survivals
titanic_survival_exploration.ipynb
gpl-3.0
[ "Machine Learning Engineer Nanodegree\nIntroduction and Foundations\nProject: Titanic Survival Exploration\nIn 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.\n\nTip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. \n\nGetting Started\nTo begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.\nRun the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.\n\nTip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. 
Markdown allows you to write easy-to-read plain text that can be converted to HTML.", "# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nfrom IPython.display import display # Allows the use of display() for DataFrames\n\n# Import supplementary visualizations code visuals.py\nimport visuals as vs\n\n# Pretty display for notebooks\n%matplotlib inline\n\n# Load the dataset\nin_file = 'titanic_data.csv'\nfull_data = pd.read_csv(in_file)\n# Print the first few entries of the RMS Titanic data\ndisplay(full_data.head())", "From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:\n- Survived: Outcome of survival (0 = No; 1 = Yes)\n- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)\n- Name: Name of passenger\n- Sex: Sex of the passenger\n- Age: Age of the passenger (Some entries contain NaN)\n- SibSp: Number of siblings and spouses of the passenger aboard\n- Parch: Number of parents and children of the passenger aboard\n- Ticket: Ticket number of the passenger\n- Fare: Fare paid by the passenger\n- Cabin Cabin number of the passenger (Some entries contain NaN)\n- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)\nSince we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.\nRun the code cell below to remove Survived as a feature of the dataset and store it in outcomes.", "# Store the 'Survived' feature in a new variable and remove it from the dataset\noutcomes = full_data['Survived']\ndata = full_data.drop('Survived', axis = 1)\n\n# Show the new dataset with 'Survived' removed\ndisplay(data.head())", "The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. 
Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].\nTo measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers. \nThink: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?", "def accuracy_score(truth, pred):\n \"\"\" Returns accuracy score for input truth and predictions. \"\"\"\n \n # Ensure that the number of predictions matches number of outcomes\n if len(truth) == len(pred): \n \n # Calculate and return the accuracy as a percent\n return \"Predictions have an accuracy of {:.2f}%.\".format((truth == pred).mean()*100)\n \n else:\n return \"Number of predictions does not match number of outcomes!\"\n \n# Test the 'accuracy_score' function\npredictions = pd.Series(np.ones(5, dtype = int))\nprint accuracy_score(outcomes[:5], predictions)", "Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.\n\nMaking Predictions\nIf we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. 
This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.\nThe predictions_0 function below will always predict that a passenger did not survive.", "def predictions_0(data):\n \"\"\" Model with no features. Always predicts a passenger did not survive. \"\"\"\n\n predictions = []\n for _, passenger in data.iterrows():\n \n # Predict the survival of 'passenger'\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_0(data)", "Question 1\nUsing the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: 61.62%\n\nLet's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.\nRun the code cell below to plot the survival outcomes of passengers based on their sex.", "vs.survival_stats(data, outcomes, 'Sex')", "Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can access the values of each feature for a passenger like a dictionary. 
For example, passenger['Sex'] is the sex of the passenger.", "def predictions_1(data):\n \"\"\" Model with one feature: \n - Predict a passenger survived if they are female. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n if passenger['Sex'] == 'female':\n predictions.append(1)\n else:\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_1(data)", "Question 2\nHow accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: 78.68%\n\nUsing just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.\nRun the code cell below to plot the survival outcomes of male passengers based on their age.", "vs.survival_stats(data, outcomes, 'Age', [\"Sex == 'male'\"])", "Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. 
Otherwise, we will predict they do not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.", "def predictions_2(data):\n \"\"\" Model with two features: \n - Predict a passenger survived if they are female.\n - Predict a passenger survived if they are male and younger than 10. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n if passenger['Sex'] == 'female':\n predictions.append(1)\n elif passenger['Age'] < 10:\n predictions.append(1)\n else:\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_2(data)", "Question 3\nHow accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: 79.35%\n\nAdding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin over simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. \nPclass, Sex, Age, SibSp, and Parch are some suggested features to try.\nUse the survival_stats function below to examine various survival statistics.\nHint: To use multiple filter conditions, put each condition in the list passed as the last argument. 
Example: [\"Sex == 'male'\", \"Age &lt; 18\"]", "vs.survival_stats(data, outcomes, 'SibSp', [ \"Age < 16\"])", "After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.\nMake sure to keep track of the various features and conditions you tried before arriving at your final prediction model.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.", "def predictions_3(data):\n \"\"\" Model with multiple features. Makes a prediction with an accuracy of at least 80%. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n \n # Remove the 'pass' statement below \n # and write your prediction conditions here\n if passenger['Sex'] == 'female':\n predictions.append(1)\n elif passenger['Age'] < 16 and passenger['SibSp'] < 2:\n predictions.append(1)\n else:\n predictions.append(0)\n \n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_3(data)", "Question 4\nDescribe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?\nHint: Run the code cell below to see the accuracy of your predictions.", "print accuracy_score(outcomes, predictions)", "Answer: I took the hint from the project reviewer and explored the SibSp dimension. After trying multiple values I found that the condition Age < 16 together with SibSp < 2 increases the accuracy to 80%.\nConclusion\nAfter several iterations of exploring and conditioning on the data, you have built a useful algorithm for predicting the survival of each passenger aboard the RMS Titanic. The technique applied in this project is a manual implementation of a simple machine learning model, the decision tree. 
A decision tree splits a set of data into smaller and smaller groups (called nodes), by one feature at a time. Each time a subset of the data is split, our predictions become more accurate if each of the resulting subgroups are more homogeneous (contain similar labels) than before. The advantage of having a computer do things for us is that it will be more exhaustive and more precise than our manual exploration above. This link provides another introduction into machine learning using a decision tree.\nA decision tree is just one of many models that come from supervised learning. In supervised learning, we attempt to use features of the data to predict or model things with objective outcome labels. That is to say, each of our data points has a known outcome value, such as a categorical, discrete label like 'Survived', or a numerical, continuous value like predicting the price of a house.\nQuestion 5\nThink of a real-world scenario where supervised learning could be applied. What would be the outcome variable that you are trying to predict? Name two features about the data used in this scenario that might be helpful for making the predictions. \nAnswer: Fraud Detection from Credit Card transactions. Looking at a customer's credit card transactions, anamolies can be detected. Two features which can be used for making the predictions, are Transaction Amount and Location where transaction occured.\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
HrWangChengdu/CS231n
assignment1/features.ipynb
mit
[ "Image features exercise\nComplete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.\nWe have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.\nAll of your work for this exercise will be done in this notebook.", "import random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading extenrnal modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2", "Load data\nSimilar to previous exercises, we will load CIFAR-10 data from disk.", "from cs231n.features import color_histogram_hsv, hog_feature\n\ndef get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):\n # Load the raw CIFAR-10 data\n cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\n X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n \n # Subsample the data\n mask = range(num_training, num_training + num_validation)\n X_val = X_train[mask]\n y_val = y_train[mask]\n mask = range(num_training)\n X_train = X_train[mask]\n y_train = y_train[mask]\n mask = range(num_test)\n X_test = X_test[mask]\n y_test = y_test[mask]\n\n return X_train, y_train, X_val, y_val, X_test, y_test\n\nX_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()", "Extract Features\nFor each image we will compute a Histogram of Oriented\nGradients (HOG) as 
well as a color histogram using the hue channel in HSV\ncolor space. We form our final feature vector for each image by concatenating\nthe HOG and color histogram feature vectors.\nRoughly speaking, HOG should capture the texture of the image while ignoring\ncolor information, and the color histogram represents the color of the input\nimage while ignoring texture. As a result, we expect that using both together\nought to work better than using either alone. Verifying this assumption would\nbe a good thing to try for the bonus section.\nThe hog_feature and color_histogram_hsv functions both operate on a single\nimage and return a feature vector for that image. The extract_features\nfunction takes a set of images and a list of feature functions and evaluates\neach feature function on each image, storing the results in a matrix where\neach column is the concatenation of all feature vectors for a single image.", "from cs231n.features import *\n\nnum_color_bins = 10 # Number of bins in the color histogram\nfeature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]\nX_train_feats = extract_features(X_train, feature_fns, verbose=True)\nX_val_feats = extract_features(X_val, feature_fns)\nX_test_feats = extract_features(X_test, feature_fns)\n\n# Preprocessing: Subtract the mean feature\nmean_feat = np.mean(X_train_feats, axis=0, keepdims=True)\nX_train_feats -= mean_feat\nX_val_feats -= mean_feat\nX_test_feats -= mean_feat\n\n# Preprocessing: Divide by standard deviation. 
This ensures that each feature\n# has roughly the same scale.\nstd_feat = np.std(X_train_feats, axis=0, keepdims=True)\nX_train_feats /= std_feat\nX_val_feats /= std_feat\nX_test_feats /= std_feat\n\n# Preprocessing: Add a bias dimension\nX_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])\nX_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])\nX_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])", "Train SVM on features\nUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.", "# Use the validation set to tune the learning rate and regularization strength\n\nfrom cs231n.classifiers.linear_classifier import LinearSVM\n\nlearning_rates = [1e-10, 5e-9, 1e-9, 5e-8, 1e-8, 5e-7, 1e-7, 5e-6]\nregularization_strengths = [1e6, 5e6, 1e7, 5e7, 1e8, 5e8, 1e9, 5e9, 1e10]\n\nresults = {}\nbest_val = -1\nbest_svm = None\n\n################################################################################\n# TODO: #\n# Use the validation set to set the learning rate and regularization strength. #\n# This should be identical to the validation that you did for the SVM; save #\n# the best trained classifer in best_svm. You might also want to play #\n# with different numbers of bins in the color histogram. If you are careful #\n# you should be able to get accuracy of near 0.44 on the validation set. 
#\n################################################################################\nfor lr in learning_rates:\n for rs in regularization_strengths:\n svm = LinearSVM()\n svm.train(X_train_feats, y_train, learning_rate=lr, reg=rs,\n num_iters=1500, verbose=False)\n \n y_train_pred = svm.predict(X_train_feats)\n pred_train = np.mean(y_train == y_train_pred)\n y_val_pred = svm.predict(X_val_feats)\n pred_val = np.mean(y_val == y_val_pred)\n results[(lr, rs)] = (pred_train, pred_val)\n if pred_val > best_val:\n best_val = pred_val\n best_svm = svm\n print 'done'\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out results.\nfor lr, reg in sorted(results):\n train_accuracy, val_accuracy = results[(lr, reg)]\n print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (\n lr, reg, train_accuracy, val_accuracy)\n \nprint 'best validation accuracy achieved during cross-validation: %f' % best_val\n\n# Evaluate your trained SVM on the test set\ny_test_pred = best_svm.predict(X_test_feats)\ntest_accuracy = np.mean(y_test == y_test_pred)\nprint test_accuracy\n\n# An important way to gain intuition about how an algorithm works is to\n# visualize the mistakes that it makes. In this visualization, we show examples\n# of images that are misclassified by our current system. 
The first column\n# shows images that our system labeled as \"plane\" but whose true label is\n# something other than \"plane\".\n\nexamples_per_class = 8\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nfor cls, cls_name in enumerate(classes):\n idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]\n idxs = np.random.choice(idxs, examples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)\n plt.imshow(X_test[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls_name)\nplt.show()", "Inline question 1:\nDescribe the misclassification results that you see. Do they make sense?\nNeural Network on image features\nEarlier in this assigment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. \nFor completeness, we should also try training a neural network on image features. 
This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.", "print X_train_feats.shape\n\nfrom cs231n.classifiers.neural_net import TwoLayerNet\n\ninput_dim = X_train_feats.shape[1]\nhidden_dim = 500\nnum_classes = 10\n\nbest_acc = -1\n\nx = 10\ntmp_X_train_feats = X_train_feats[0:10, :]\ntmp_y_train = y_train[0:10]\n#tmp_X_val_feats = X_val_feats[0:x, :]\n#tmp_y_val = y_val[0:x, :]\n\nlearning_rates = [2e-1, 3e-1, 4e-1]\nregularization_strengths = [1e-7, 1e-6, 1e-5, 1e-4]\n\n\nfor lr in learning_rates:\n for rs in regularization_strengths:\n\n net = TwoLayerNet(input_dim, hidden_dim, num_classes)\n \n # Train the network\n stats = net.train(X_train_feats, y_train, X_val_feats, y_val,\n num_iters=1000, batch_size=200,\n learning_rate=lr, learning_rate_decay=0.95,\n reg=rs, verbose=False)\n\n # Predict on the validation set\n val_acc = (net.predict(X_val_feats) == y_val).mean()\n \n if (val_acc > best_acc):\n best_net = net \n best_acc = val_acc \n print 'lr %f, res %f, Validation accuracy:%f ' % (lr, rs, val_acc)\n\nprint 'done'\n\n################################################################################\n# TODO: Train a two-layer neural network on image features. You may want to #\n# cross-validate various parameters as in previous sections. Store your best #\n# model in the best_net variable. #\n################################################################################\n\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Run your neural net classifier on the test set. 
You should be able to\n# get more than 55% accuracy.\n\ntest_acc = (best_net.predict(X_test_feats) == y_test).mean()\nprint test_acc", "Bonus: Design your own features!\nYou have seen that simple image features can improve classification performance. So far we have tried HOG and color histograms, but other types of features may be able to achieve even better classification performance.\nFor bonus points, design and implement a new type of feature and use it for image classification on CIFAR-10. Explain how your feature works and why you expect it to be useful for image classification. Implement it in this notebook, cross-validate any hyperparameters, and compare its performance to the HOG + Color histogram baseline.\nBonus: Do something extra!\nUse the material and code we have presented in this assignment to do something interesting. Was there another question we should have asked? Did any cool ideas pop into your head as you were working on the assignment? This is your chance to show off!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
solowPy/graduate-teaching
notebooks/1 Getting started.ipynb
mit
[ "import numpy as np\nimport sympy as sym\nimport solowpy", "1 Creating an instance of the solowpy.Model class\nIn this notebook I will walk you through the creation of an instance of the solowpy.Model class. To create an instance of the solowpy.Model we must define two primitives: an aggregate production function and a dictionary of model parameter values.\n1.1 Defining the production function $F$:\nAt each point in time the economy in a Solow growth model has some amounts of capital, $K$, labor, $L$, and knowledge (or technology), $A$, that can be combined to produce output, $Y$, according to some function, $F$:\n$$ Y(t) = F(K(t), A(t)L(t)) \\tag{1.1.1} $$\nwhere $t$ denotes time. Note that $A$ and $L$ are assumed to enter multiplicatively. Typically $A(t)L(t)$ denotes \"effective labor\", and technology that enters in this fashion is known as labor-augmenting or \"Harrod neutral.\"\nA key assumption of the model is that the function $F$ exhibits constant returns to scale in capital and labor inputs. Specifically,\n$$ F(cK(t), cA(t)L(t)) = cF(K(t), A(t)L(t)) = cY(t) \\tag {1.1.2} $$\nfor any $c \\ge 0$. For reference, the above information is contained in the docstring of the solowpy.Model.output attribute.", "solowpy.Model.output?", "Examples:\nA common functional form for aggregate production in a Solow model that satisfies the above assumptions is the Cobb-Douglas production function\n\\begin{equation}\n Y(t) = K(t)^{\\alpha}(A(t)L(t))^{1-\\alpha}. \\tag{1.1.3}\n\\end{equation}\nThe Cobb-Douglas production function is actually a special case of a more general class of production functions called constant elasticity of substitution (CES) production functions.\n\\begin{equation}\n Y(t) = \\bigg[\\alpha K(t)^{\\rho} + (1-\\alpha) (A(t)L(t))^{\\rho}\\bigg]^{\\frac{1}{\\rho}} \\tag{1.1.4}\n\\end{equation}\nwhere $0 < \\alpha < 1$ and $-\\infty < \\rho < 1$. 
The parameter $\rho = \frac{\sigma - 1}{\sigma}$ where $\sigma$ is the elasticity of substitution between factors of production. Taking the limit of equation 1.1.4 as the elasticity of substitution goes to unity (i.e., $\sigma=1 \implies \rho=0$) recovers the Cobb-Douglas functional form.", "# define model variables\nA, K, L = sym.symbols('A, K, L')\n\n# define production parameters\nalpha, sigma = sym.symbols('alpha, sigma')\n\n# define a production function\ncobb_douglas_output = K**alpha * (A * L)**(1 - alpha)\n\nrho = (sigma - 1) / sigma\nces_output = (alpha * K**rho + (1 - alpha) * (A * L)**rho)**(1 / rho)", "1.2 Defining model parameters\nA generic Solow growth model has several parameters that need to be specified. To see which parameters are required, we can check the docstring of the solowpy.Model.params attribute.", "solowpy.Model.params?", "In addition to the standard parameters $g, n, s, \delta$, one will also need to specify any required parameters for the production function. 
In order to make sure that parameter values are consistent with the model's assumptions, some basic validation of the solowpy.Model.params attribute is done whenever the attribute is set.", "# these parameters look fishy...why?\ndefault_params = {'A0': 1.0, 'L0': 1.0, 'g': 0.0, 'n': -0.03, 's': 0.15,\n 'delta': 0.01, 'alpha': 0.33}\n\n# ...raises an AttributeError\nmodel = solowpy.Model(output=cobb_douglas_output, params=default_params)", "Examples:\nHere are some examples of how one successfully creates an instance of the solowpy.Model class...", "cobb_douglas_params = {'A0': 1.0, 'L0': 1.0, 'g': 0.02, 'n': 0.03, 's': 0.15,\n 'delta': 0.05, 'alpha': 0.33}\n\ncobb_douglas_model = solowpy.Model(output=cobb_douglas_output,\n params=cobb_douglas_params)\n\nces_params = {'A0': 1.0, 'L0': 1.0, 'g': 0.02, 'n': 0.03, 's': 0.15,\n 'delta': 0.05, 'alpha': 0.33, 'sigma': 0.95}\n\nces_model = solowpy.Model(output=ces_output, params=ces_params)", "1.3 Other attributes of the solowpy.Model class\nThe intensive form of the production function\nThe assumption of constant returns to scale allows us to work with the intensive form of the aggregate production function, $F$. Defining $c=1/AL$ one can write\n$$ F\bigg(\frac{K}{AL}, 1\bigg) = \frac{1}{AL}F(K, AL) \tag{1.3.1} $$\nDefining $k=K/AL$ and $y=Y/AL$ to be capital per unit effective labor and output per unit effective labor, respectively, the intensive form of the production function can be written as\n$$ y = f(k). \tag{1.3.2}$$\nAdditional assumptions are that $f$ satisfies $f(0)=0$, is concave (i.e., $f'(k) > 0, f''(k) < 0$), and satisfies the Inada conditions: $\lim_{k \rightarrow 0} f'(k) = \infty$ and $\lim_{k \rightarrow \infty} f'(k) = 0$. The <cite data-cite=\"inada1964\">(Inada, 1964)</cite> conditions are sufficient (but not necessary!) to ensure that the time path of capital per effective worker does not explode. 
Much of the above information is actually taken straight from the docstring for the solowpy.Model.intensive_output attribute.", "solowpy.Model.intensive_output?\n\nces_model.intensive_output", "One can numerically evaluate the intensive output for various values of capital stock (per unit effective labor) as follows...", "ces_model.evaluate_intensive_output(np.linspace(1.0, 10.0, 25))", "The marginal product of capital\nThe marginal product of capital is defined as follows:\n$$ \frac{\partial F(K, AL)}{\partial K} \equiv f'(k) \tag{1.3.3}$$\nwhere $k=K/AL$ is capital stock (per unit effective labor).", "solowpy.Model.marginal_product_capital?\n\nces_model.marginal_product_capital", "One can numerically evaluate the marginal product of capital for various values of capital stock (per unit effective labor) as follows...", "ces_model.evaluate_mpk(np.linspace(1.0, 10.0, 25))", "Equation of motion for capital (per unit effective labor)\nBecause the economy is growing over time due to technological progress, $g$, and population growth, $n$, it makes sense to focus on the capital stock per unit effective labor, $k$, rather than aggregate physical capital, $K$. Since, by definition, $k=K/AL$, we can apply the chain rule to the time derivative of $k$.\n\begin{align}\n\dot{k}(t) =& \frac{\dot{K}(t)}{A(t)L(t)} - \frac{K(t)}{[A(t)L(t)]^2}\bigg[\dot{A}(t)L(t) + \dot{L}(t)A(t)\bigg] \\n=& \frac{\dot{K}(t)}{A(t)L(t)} - \bigg(\frac{\dot{A}(t)}{A(t)} + \frac{\dot{L}(t)}{L(t)}\bigg)\frac{K(t)}{A(t)L(t)} \tag{1.3.4}\n\end{align}\nBy definition, $k=K/AL$, and by assumption $\dot{A}/A$ and $\dot{L}/L$ are $g$ and $n$ respectively. Aggregate capital stock evolves according to\n$$ \dot{K}(t) = sF(K(t), A(t)L(t)) - \delta K(t). 
\tag{1.3.5}$$\nSubstituting these facts into the above equation yields the equation of\nmotion for capital stock (per unit effective labor).\n\begin{align}\n\dot{k}(t) =& \frac{sF(K(t), A(t)L(t)) - \delta K(t)}{A(t)L(t)} - (g + n)k(t) \\n=& \frac{sY(t)}{A(t)L(t)} - (g + n + \delta)k(t) \\n=& sf(k(t)) - (g + n + \delta)k(t) \tag{1.3.6}\n\end{align}\nThe above information is available for reference in the docstring for the solowpy.Model.k_dot attribute.", "solowpy.Model.k_dot?\n\nces_model.k_dot", "One can numerically evaluate the equation of motion for capital (per unit effective labor) for various values of capital stock (per unit effective labor) as follows...", "ces_model.evaluate_k_dot(np.linspace(1.0, 10.0, 25))", "1.4 Sub-classing the solowpy.Model class\nSeveral commonly used functional forms for aggregate production, including both the Cobb-Douglas and Constant Elasticity of Substitution (CES) production functions, have been sub-classed from solowpy.Model. For these functional forms, one only needs to specify a valid dictionary of model parameters.", "solowpy.cobb_douglas?\n\ncobb_douglas_model = solowpy.CobbDouglasModel(params=cobb_douglas_params)\n\nsolowpy.ces?\n\nces_model = solowpy.CESModel(params=ces_params)", "Now that you understand the basics, we can move on to finding the steady state of the Solow growth model." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bashtage/statsmodels
examples/notebooks/markov_autoregression.ipynb
bsd-3-clause
[ "Markov switching autoregression models\nThis notebook provides an example of the use of Markov switching models in statsmodels to replicate a number of results presented in Kim and Nelson (1999). It applies the Hamilton (1989) filter and the Kim (1994) smoother.\nThis is tested against the Markov-switching models from E-views 8, which can be found at http://www.eviews.com/EViews8/ev8ecswitch_n.html#MarkovAR or the Markov-switching models of Stata 14 which can be found at http://www.stata.com/manuals14/tsmswitch.pdf.", "%matplotlib inline\n\nfrom datetime import datetime\nfrom io import BytesIO\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport requests\nimport statsmodels.api as sm\n\n# NBER recessions\nfrom pandas_datareader.data import DataReader\n\nusrec = DataReader(\n \"USREC\", \"fred\", start=datetime(1947, 1, 1), end=datetime(2013, 4, 1)\n)", "Hamilton (1989) switching model of GNP\nThis replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written:\n$$\ny_t = \mu_{S_t} + \phi_1 (y_{t-1} - \mu_{S_{t-1}}) + \phi_2 (y_{t-2} - \mu_{S_{t-2}}) + \phi_3 (y_{t-3} - \mu_{S_{t-3}}) + \phi_4 (y_{t-4} - \mu_{S_{t-4}}) + \varepsilon_t\n$$\nEach period, the regime transitions according to the following matrix of transition probabilities:\n$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =\n\begin{bmatrix}\np_{00} & p_{10} \\np_{01} & p_{11}\n\end{bmatrix}\n$$\nwhere $p_{ij}$ is the probability of transitioning from regime $i$, to regime $j$.\nThe model class is MarkovAutoregression in the time-series part of statsmodels. In order to create the model, we must specify the number of regimes with k_regimes=2, and the order of the autoregression with order=4. 
The default model also includes switching autoregressive coefficients, so here we also need to specify switching_ar=False to avoid that.\nAfter creation, the model is fit via maximum likelihood estimation. Under the hood, good starting parameters are found using a number of steps of the expectation maximization (EM) algorithm, and a quasi-Newton (BFGS) algorithm is applied to quickly find the maximum.", "# Get the RGNP data to replicate Hamilton\ndta = pd.read_stata(\"https://www.stata-press.com/data/r14/rgnp.dta\").iloc[1:]\ndta.index = pd.DatetimeIndex(dta.date, freq=\"QS\")\ndta_hamilton = dta.rgnp\n\n# Plot the data\ndta_hamilton.plot(title=\"Growth rate of Real GNP\", figsize=(12, 3))\n\n# Fit the model\nmod_hamilton = sm.tsa.MarkovAutoregression(\n dta_hamilton, k_regimes=2, order=4, switching_ar=False\n)\nres_hamilton = mod_hamilton.fit()\n\nres_hamilton.summary()", "We plot the filtered and smoothed probabilities of a recession. Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). 
Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample.\nFor reference, the shaded periods represent the NBER recessions.", "fig, axes = plt.subplots(2, figsize=(7, 7))\nax = axes[0]\nax.plot(res_hamilton.filtered_marginal_probabilities[0])\nax.fill_between(usrec.index, 0, 1, where=usrec[\"USREC\"].values, color=\"k\", alpha=0.1)\nax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])\nax.set(title=\"Filtered probability of recession\")\n\nax = axes[1]\nax.plot(res_hamilton.smoothed_marginal_probabilities[0])\nax.fill_between(usrec.index, 0, 1, where=usrec[\"USREC\"].values, color=\"k\", alpha=0.1)\nax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])\nax.set(title=\"Smoothed probability of recession\")\n\nfig.tight_layout()", "From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion.", "print(res_hamilton.expected_durations)", "In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years.\nKim, Nelson, and Startz (1998) Three-state Variance Switching\nThis model demonstrates estimation with regime heteroskedasticity (switching of variances) and no mean effect. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn.\nThe model in question is:\n$$\n\\begin{align}\ny_t & = \\varepsilon_t \\\n\\varepsilon_t & \\sim N(0, \\sigma_{S_t}^2)\n\\end{align}\n$$\nSince there is no autoregressive component, this model can be fit using the MarkovRegression class. Since there is no mean effect, we specify trend='n'. 
There are hypothesized to be three regimes for the switching variances, so we specify k_regimes=3 and switching_variance=True (by default, the variance is assumed to be the same across regimes).", "# Get the dataset\new_excs = requests.get(\"http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn\").content\nraw = pd.read_table(BytesIO(ew_excs), header=None, skipfooter=1, engine=\"python\")\nraw.index = pd.date_range(\"1926-01-01\", \"1995-12-01\", freq=\"MS\")\n\ndta_kns = raw.loc[:\"1986\"] - raw.loc[:\"1986\"].mean()\n\n# Plot the dataset\ndta_kns[0].plot(title=\"Excess returns\", figsize=(12, 3))\n\n# Fit the model\nmod_kns = sm.tsa.MarkovRegression(\n dta_kns, k_regimes=3, trend=\"n\", switching_variance=True\n)\nres_kns = mod_kns.fit()\n\nres_kns.summary()", "Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable.", "fig, axes = plt.subplots(3, figsize=(10, 7))\n\nax = axes[0]\nax.plot(res_kns.smoothed_marginal_probabilities[0])\nax.set(title=\"Smoothed probability of a low-variance regime for stock returns\")\n\nax = axes[1]\nax.plot(res_kns.smoothed_marginal_probabilities[1])\nax.set(title=\"Smoothed probability of a medium-variance regime for stock returns\")\n\nax = axes[2]\nax.plot(res_kns.smoothed_marginal_probabilities[2])\nax.set(title=\"Smoothed probability of a high-variance regime for stock returns\")\n\nfig.tight_layout()", "Filardo (1994) Time-Varying Transition Probabilities\nThis model demonstrates estimation with time-varying transition probabilities. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn.\nIn the above models we have assumed that the transition probabilities are constant across time. Here we allow the probabilities to change with the state of the economy. 
Otherwise, the model is the same Markov autoregression of Hamilton (1989).\nEach period, the regime now transitions according to the following matrix of time-varying transition probabilities:\n$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =\n\\begin{bmatrix}\np_{00,t} & p_{10,t} \\\np_{01,t} & p_{11,t}\n\\end{bmatrix}\n$$\nwhere $p_{ij,t}$ is the probability of transitioning from regime $i$, to regime $j$ in period $t$, and is defined to be:\n$$\np_{ij,t} = \\frac{\\exp{ x_{t-1}' \\beta_{ij} }}{1 + \\exp{ x_{t-1}' \\beta_{ij} }}\n$$\nInstead of estimating the transition probabilities as part of maximum likelihood, the regression coefficients $\\beta_{ij}$ are estimated. These coefficients relate the transition probabilities to a vector of pre-determined or exogenous regressors $x_{t-1}$.", "# Get the dataset\nfilardo = requests.get(\"http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn\").content\ndta_filardo = pd.read_table(\n BytesIO(filardo), sep=\" +\", header=None, skipfooter=1, engine=\"python\"\n)\ndta_filardo.columns = [\"month\", \"ip\", \"leading\"]\ndta_filardo.index = pd.date_range(\"1948-01-01\", \"1991-04-01\", freq=\"MS\")\n\ndta_filardo[\"dlip\"] = np.log(dta_filardo[\"ip\"]).diff() * 100\n# Deflated pre-1960 observations by ratio of std. devs.\n# See hmt_tvp.opt or Filardo (1994) p. 
302\nstd_ratio = (\n dta_filardo[\"dlip\"][\"1960-01-01\":].std() / dta_filardo[\"dlip\"][:\"1959-12-01\"].std()\n)\ndta_filardo[\"dlip\"][:\"1959-12-01\"] = dta_filardo[\"dlip\"][:\"1959-12-01\"] * std_ratio\n\ndta_filardo[\"dlleading\"] = np.log(dta_filardo[\"leading\"]).diff() * 100\ndta_filardo[\"dmdlleading\"] = dta_filardo[\"dlleading\"] - dta_filardo[\"dlleading\"].mean()\n\n# Plot the data\ndta_filardo[\"dlip\"].plot(\n title=\"Standardized growth rate of industrial production\", figsize=(13, 3)\n)\nplt.figure()\ndta_filardo[\"dmdlleading\"].plot(title=\"Leading indicator\", figsize=(13, 3))", "The time-varying transition probabilities are specified by the exog_tvtp parameter.\nHere we demonstrate another feature of model fitting - the use of a random search for MLE starting parameters. Because Markov switching models are often characterized by many local maxima of the likelihood function, performing an initial optimization step can be helpful to find the best parameters.\nBelow, we specify that 20 random perturbations from the starting parameter vector are examined and the best one used as the actual starting parameters. 
Because of the random nature of the search, we seed the random number generator beforehand to allow replication of the result.", "mod_filardo = sm.tsa.MarkovAutoregression(\n dta_filardo.iloc[2:][\"dlip\"],\n k_regimes=2,\n order=4,\n switching_ar=False,\n exog_tvtp=sm.add_constant(dta_filardo.iloc[1:-1][\"dmdlleading\"]),\n)\n\nnp.random.seed(12345)\nres_filardo = mod_filardo.fit(search_reps=20)\n\nres_filardo.summary()", "Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison.", "fig, ax = plt.subplots(figsize=(12, 3))\n\nax.plot(res_filardo.smoothed_marginal_probabilities[0])\nax.fill_between(usrec.index, 0, 1, where=usrec[\"USREC\"].values, color=\"gray\", alpha=0.2)\nax.set_xlim(dta_filardo.index[6], dta_filardo.index[-1])\nax.set(title=\"Smoothed probability of a low-production state\")", "Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time:", "res_filardo.expected_durations[0].plot(\n title=\"Expected duration of a low-production state\", figsize=(12, 3)\n)", "During recessions, the expected duration of a low-production state is much higher than in an expansion." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kimkipyo/dss_git_kkp
통계, 머신러닝 복습/160531화_10일차_Scikit-Learn & statsmodels 패키지 소개 Introduction to Scikit-Learn & statsmodels packages/4.Scikit-Learn 패키지의 샘플 데이터 - classification용.ipynb
mit
[ "Sample datasets in the Scikit-Learn package - for classification\nIris Dataset\nload_iris()\n\nhttps://en.wikipedia.org/wiki/Iris_flower_data_set\nR.A. Fisher's iris classification study\nObserved variables\nSepal Length\nSepal Width\nPetal Length\nPetal Width\nSpecies \nsetosa\nversicolor\nvirginica", "from sklearn.datasets import load_iris\niris = load_iris()\nprint(iris.DESCR)\n\ndf = pd.DataFrame(iris.data, columns=iris.feature_names)\nsy = pd.Series(iris.target, dtype=\"category\")\nsy = sy.cat.rename_categories(iris.target_names)\ndf['species'] = sy\ndf\n\nsns.pairplot(df, hue='species')\nplt.show()", "Newsgroup text\nfetch_20newsgroups(): 20 News Groups text", "from sklearn.datasets import fetch_20newsgroups\nnewsgroups = fetch_20newsgroups(subset=\"all\")\nprint(newsgroups.description)\nprint(newsgroups.keys())\n\nfrom pprint import pprint\npprint(list(newsgroups.target_names))\n\nprint(newsgroups.data[1])\nprint(\"=\"*80)\nprint(newsgroups.target_names[newsgroups.target[1]])", "Olivetti faces\nfetch_olivetti_faces()\n\nFace recognition images", "from sklearn.datasets import fetch_olivetti_faces\nolivetti = fetch_olivetti_faces()\nprint(olivetti.DESCR)\nprint(olivetti.keys())\n\nN=2; M=5;\nfig = plt.figure(figsize=(8,5))\nplt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05)\nklist = np.random.choice(range(len(olivetti.data)), N * M)\nfor i in range(N):\n for j in range(M):\n k = klist[i*M+j]\n ax = fig.add_subplot(N, M, i*M+j+1)\n ax.imshow(olivetti.images[k], cmap=plt.cm.bone);\n ax.grid(False)\n ax.xaxis.set_ticks([])\n ax.yaxis.set_ticks([])\n plt.title(olivetti.target[k])\nplt.tight_layout()\nplt.show()", "Labeled Faces in the Wild (LFW)\n#### fetch_lfw_people()\n\n\nCelebrity face images \n\n\nParameters\n\n\nfunneled : boolean, optional, default: True\n\nDownload and use the funneled variant of the dataset.\nresize : float, optional, default 0.5\nRatio used to resize each face picture.\nmin_faces_per_person : int, optional, default None\nThe extracted dataset will only retain pictures of people 
that have at least min_faces_per_person different pictures.\ncolor : boolean, optional, default False\nKeep the 3 RGB channels instead of averaging them to a single gray level channel. If color is True the shape of the data has one more dimension than the shape with color = False.", "from sklearn.datasets import fetch_lfw_people\nlfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)\nprint(lfw_people.DESCR)\nprint(lfw_people.keys())\n\nN=2; M=5;\nfig = plt.figure(figsize=(8,5))\nplt.subplots_adjust(top=1, bottom=0, hspace=0.1, wspace=0.05)\nklist = np.random.choice(range(len(lfw_people.data)), N * M)\nfor i in range(N):\n for j in range(M):\n k = klist[i*M+j]\n ax = fig.add_subplot(N, M, i*M+j+1)\n ax.imshow(lfw_people.images[k], cmap=plt.cm.bone);\n ax.grid(False)\n ax.xaxis.set_ticks([])\n ax.yaxis.set_ticks([])\n plt.title(lfw_people.target_names[lfw_people.target[k]])\nplt.tight_layout()\nplt.show() ", "#### fetch_lfw_pairs()\n\nPairs of face images \nEach pair may or may not show the same person", "from sklearn.datasets import fetch_lfw_pairs\nlfw_pairs = fetch_lfw_pairs(resize=0.4)\nprint(lfw_pairs.DESCR)\nprint(lfw_pairs.keys())\n\nN=2; M=5;\nfig = plt.figure(figsize=(8,5))\nplt.subplots_adjust(top=1, bottom=0, hspace=0.01, wspace=0.05)\nklist = np.random.choice(range(len(lfw_pairs.data)), M)\nfor j in range(M):\n k = klist[j]\n ax1 = fig.add_subplot(N, M, j+1)\n ax1.imshow(lfw_pairs.pairs[k][0], cmap=plt.cm.bone);\n ax1.grid(False)\n ax1.xaxis.set_ticks([])\n ax1.yaxis.set_ticks([])\n plt.title(lfw_pairs.target_names[lfw_pairs.target[k]])\n ax2 = fig.add_subplot(N, M, j+1 + M)\n ax2.imshow(lfw_pairs.pairs[k][1], cmap=plt.cm.bone);\n ax2.grid(False)\n ax2.xaxis.set_ticks([])\n ax2.yaxis.set_ticks([])\nplt.tight_layout()\nplt.show() ", "Digits Handwriting Image\nload_digits()\n\nHandwritten digit images", "from sklearn.datasets import load_digits\ndigits = load_digits()\nprint(digits.DESCR)\nprint(digits.keys())\n\nN=2; M=5;\nfig = plt.figure(figsize=(10,5))\nplt.subplots_adjust(top=1, 
bottom=0, hspace=0, wspace=0.05)\nfor i in range(N):\n for j in range(M):\n k = i*M+j\n ax = fig.add_subplot(N, M, k+1)\n ax.imshow(digits.images[k], cmap=plt.cm.bone, interpolation=\"none\");\n ax.grid(False)\n ax.xaxis.set_ticks([])\n ax.yaxis.set_ticks([])\n plt.title(digits.target_names[k])\nplt.tight_layout()\nplt.show()", "mldata.org repository\nfetch_mldata()\n\nhttp://mldata.org\npublic repository for machine learning data, supported by the PASCAL network \nSearch for the data name on the site and use it as the key\n\nMNIST handwritten digit recognition data\n\nhttps://en.wikipedia.org/wiki/MNIST_database\nMixed National Institute of Standards and Technology (MNIST) database\nImages of handwritten digits 0-9\n28x28 pixel bounding box\nanti-aliased, grayscale levels\n60,000 training images and 10,000 testing images", "from sklearn.datasets.mldata import fetch_mldata\nmnist = fetch_mldata('MNIST original')\nmnist.keys()\n\nN=2; M=5;\nfig = plt.figure(figsize=(8,5))\nplt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05)\nklist = np.random.choice(range(len(mnist.data)), N * M)\nfor i in range(N):\n for j in range(M):\n k = klist[i*M+j]\n ax = fig.add_subplot(N, M, i*M+j+1)\n ax.imshow(mnist.data[k].reshape(28, 28), cmap=plt.cm.bone, interpolation=\"nearest\");\n ax.grid(False)\n ax.xaxis.set_ticks([])\n ax.yaxis.set_ticks([])\n plt.title(mnist.target[k])\nplt.tight_layout()\nplt.show() " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rashikaranpuria/Machine-Learning-Specialization
Regression/Assignmet_five/week-5-lasso-assignment-2-blank.ipynb
mit
[ "Regression Week 5: LASSO (coordinate descent)\nIn this notebook, you will implement your very own LASSO solver via coordinate descent. You will:\n* Write a function to normalize features\n* Implement coordinate descent for LASSO\n* Explore effects of L1 penalty\nFire up graphlab create\nMake sure you have the latest version of graphlab (>= 1.7)", "import graphlab", "Load in house sales data\nDataset is from house sales in King County, the region where the city of Seattle, WA is located.", "sales = graphlab.SFrame('kc_house_data.gl/kc_house_data.gl')\n# In the dataset, 'floors' was defined with type string, \n# so we'll convert them to int, before using it below\nsales['floors'] = sales['floors'].astype(int) ", "If we want to do any \"feature engineering\" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.\nImport useful functions from previous notebook\nAs in Week 2, we convert the SFrame into a 2D Numpy array. 
Copy and paste get_numpy_data() from the second notebook of Week 2.", "import numpy as np # note this allows us to refer to numpy as np instead \n\ndef get_numpy_data(data_sframe, features, output):\n data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame\n # add the column 'constant' to the front of the features list so that we can extract it along with the others:\n features = ['constant'] + features # this is how you combine two lists\n # select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):\n features_sframe = data_sframe[features]\n # the following line will convert the features_SFrame into a numpy matrix:\n feature_matrix = features_sframe.to_numpy()\n # assign the column of data_sframe associated with the output to the SArray output_sarray\n output_sarray = data_sframe[output]\n # the following will convert the SArray into a numpy array by first converting it to a list\n output_array = output_sarray.to_numpy()\n return(feature_matrix, output_array)", "Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights:", "def predict_output(feature_matrix, weights):\n # assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array\n # create the predictions vector by using np.dot()\n predictions = np.dot(feature_matrix, weights)\n return(predictions)", "Normalize features\nIn the house dataset, features vary wildly in their relative magnitude: sqft_living is very large overall compared to bedrooms, for instance. As a result, weight for sqft_living would be much smaller than weight for bedrooms. This is problematic because \"small\" weights are dropped first as l1_penalty goes up. 
\nTo give equal considerations for all features, we need to normalize features as discussed in the lectures: we divide each feature by its 2-norm so that the transformed feature has norm 1.\nLet's see how we can do this normalization easily with Numpy: let us first consider a small matrix.", "X = np.array([[3.,5.,8.],[4.,12.,15.]])\nprint X", "Numpy provides a shorthand for computing 2-norms of each column:", "norms = np.linalg.norm(X, axis=0) # gives [norm(X[:,0]), norm(X[:,1]), norm(X[:,2])]\nprint norms", "To normalize, apply element-wise division:", "print X / norms # gives [X[:,0]/norm(X[:,0]), X[:,1]/norm(X[:,1]), X[:,2]/norm(X[:,2])]", "Using the shorthand we just covered, write a short function called normalize_features(feature_matrix), which normalizes columns of a given feature matrix. The function should return a pair (normalized_features, norms), where the second item contains the norms of original features. As discussed in the lectures, we will use these norms to normalize the test data in the same way as we normalized the training data.", "import numpy as np\ndef normalize_features(feature_matrix):\n norms = np.linalg.norm(feature_matrix, axis=0)\n features = feature_matrix / norms\n return features, norms", "To test the function, run the following:", "features, norms = normalize_features(np.array([[3.,6.,9.],[4.,8.,12.]]))\nprint features\n# should print\n# [[ 0.6 0.6 0.6]\n# [ 0.8 0.8 0.8]]\nprint norms\n# should print\n# [5. 10. 15.]", "Implementing Coordinate Descent with normalized features\nWe seek to obtain a sparse set of weights by minimizing the LASSO cost function\nSUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|).\n(By convention, we do not include w[0] in the L1 penalty term. We never want to push the intercept to zero.)\nThe absolute value sign makes the cost function non-differentiable, so simple gradient descent is not viable (you would need to implement a method called subgradient descent). 
Instead, we will use coordinate descent: at each iteration, we will fix all weights but weight i and find the value of weight i that minimizes the objective. That is, we look for\nargmin_{w[i]} [ SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|) ]\nwhere all weights other than w[i] are held to be constant. We will optimize one w[i] at a time, circling through the weights multiple times.\n 1. Pick a coordinate i\n 2. Compute w[i] that minimizes the cost function SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|)\n 3. Repeat Steps 1 and 2 for all coordinates, multiple times\nFor this notebook, we use cyclical coordinate descent with normalized features, where we cycle through coordinates 0 to (d-1) in order, and assume the features were normalized as discussed above. The formula for optimizing each coordinate is as follows:\n┌ (ro[i] + lambda/2) if ro[i] &lt; -lambda/2\nw[i] = ├ 0 if -lambda/2 &lt;= ro[i] &lt;= lambda/2\n └ (ro[i] - lambda/2) if ro[i] &gt; lambda/2\nwhere\nro[i] = SUM[ [feature_i]*(output - prediction + w[i]*[feature_i]) ].\nNote that we do not regularize the weight of the constant feature (intercept) w[0], so, for this weight, the update is simply:\nw[0] = ro[i]\nEffect of L1 penalty\nLet us consider a simple model with 2 features:", "simple_features = ['sqft_living', 'bedrooms']\nmy_output = 'price'\n(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)", "Don't forget to normalize features:", "simple_feature_matrix, norms = normalize_features(simple_feature_matrix)", "We assign some random set of initial weights and inspect the values of ro[i]:", "weights = np.array([1., 4., 1.])", "Use predict_output() to make predictions on this data.", "prediction = predict_output(simple_feature_matrix, weights)", "Compute the values of ro[i] for each feature in this simple model, using the formula given above, using the formula:\nro[i] = SUM[ [feature_i]*(output - prediction + w[i]*[feature_i]) 
]\nHint: You can get a Numpy vector for feature_i using:\nsimple_feature_matrix[:,i]", "ro = [0 for i in range((simple_feature_matrix.shape)[1])]\nfor j in range((simple_feature_matrix.shape)[1]): \n ro[j] = (simple_feature_matrix[:,j] * (output - prediction + (weights[j] * simple_feature_matrix[:,j]))).sum()\nprint ro", "QUIZ QUESTION\nRecall that, whenever ro[i] falls between -l1_penalty/2 and l1_penalty/2, the corresponding weight w[i] is sent to zero. Now suppose we were to take one step of coordinate descent on either feature 1 or feature 2. What range of values of l1_penalty would not set w[1] zero, but would set w[2] to zero, if we were to take a step in that coordinate?", "diff = abs((ro[1]*2) - (ro[2]*2))\nprint('λ = (%e, %e)' %((ro[2]-diff/2+1)*2, (ro[2]+diff/2-1)*2))", "QUIZ QUESTION\nWhat range of values of l1_penalty would set both w[1] and w[2] to zero, if we were to take a step in that coordinate?", "print ro[1]*2\nprint ro[2]*2", "So we can say that ro[i] quantifies the significance of the i-th feature: the larger ro[i] is, the more likely it is for the i-th feature to be retained.\nSingle Coordinate Descent Step\nUsing the formula above, implement coordinate descent that minimizes the cost function over a single feature i. Note that the intercept (weight 0) is not regularized. The function should accept feature matrix, output, current weights, l1 penalty, and index of feature to optimize over. 
The function should return the new weight for feature i.", "def lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty):\n # compute prediction\n prediction = predict_output(feature_matrix, weights) \n # compute ro[i] = SUM[ [feature_i]*(output - prediction + weight[i]*[feature_i]) ]\n ro_i = np.sum(feature_matrix[:,i]*(output - prediction + weights[i]*feature_matrix[:,i]))\n if i == 0: # intercept -- do not regularize\n new_weight_i = ro_i \n elif ro_i < -l1_penalty/2.:\n new_weight_i = ro_i + (l1_penalty/2) \n elif ro_i > l1_penalty/2.:\n new_weight_i = ro_i - (l1_penalty/2) \n else:\n new_weight_i = 0.\n return new_weight_i", "To test the function, run the following cell:", "# should print 0.425558846691\nimport math\nprint lasso_coordinate_descent_step(1, np.array([[3./math.sqrt(13),1./math.sqrt(10)],[2./math.sqrt(13),3./math.sqrt(10)]]), \n np.array([1., 1.]), np.array([1., 4.]), 0.1)", "Cyclical coordinate descent\nNow that we have a function that optimizes the cost function over a single coordinate, let us implement cyclical coordinate descent where we optimize coordinates 0, 1, ..., (d-1) in order and repeat.\nWhen do we know to stop? Each time we scan all the coordinates (features) once, we measure the change in weight for each coordinate. If no coordinate changes by more than a specified threshold, we stop.\nFor each iteration:\n1. As you loop over features in order and perform coordinate descent, measure how much each coordinate changes.\n2. After the loop, if the maximum change across all coordinates falls below the tolerance, stop. Otherwise, go back to step 1.\nReturn weights\nIMPORTANT: when computing a new weight for coordinate i, make sure to incorporate the new weights for coordinates 0, 1, ..., i-1. One good way is to update your weights variable in-place. 
See following pseudocode for illustration.\n```\nfor i in range(len(weights)):\n old_weights_i = weights[i] # remember old value of weight[i], as it will be overwritten\n # the following line uses new values for weight[0], weight[1], ..., weight[i-1]\n # and old values for weight[i], ..., weight[d-1]\n weights[i] = lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty)\n# use old_weights_i to compute change in coordinate\n...\n\n```", "def lasso_cyclical_coordinate_descent(feature_matrix, output, initial_weights, l1_penalty, tolerance):\n weights = initial_weights.copy() \n # converged condition variable \n converged = False \n while not converged: \n max_change = 0\n for i in range(len(weights)):\n old_weights_i = weights[i] \n weights[i] = lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty) \n change_i = np.abs(old_weights_i - weights[i]) \n if change_i > max_change: \n max_change = change_i \n if max_change < tolerance: \n converged = True \n return weights", "Using the following parameters, learn the weights on the sales dataset.", "simple_features = ['sqft_living', 'bedrooms']\nmy_output = 'price'\ninitial_weights = np.zeros(3)\nl1_penalty = 1e7\ntolerance = 1.0", "First create a normalized version of the feature matrix, normalized_simple_feature_matrix", "(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)\n(normalized_simple_feature_matrix, simple_norms) = normalize_features(simple_feature_matrix) # normalize features", "Then, run your implementation of LASSO coordinate descent:", "weights = lasso_cyclical_coordinate_descent(normalized_simple_feature_matrix, output,\n initial_weights, l1_penalty, tolerance)\nprint weights\n\n# predictions = predict_output(normalized_simple_feature_matrix, weights)\n# rss = 0\n# for i in range(0, len(predictions)):\n# error = predictions[i] - sales['price'][i]\n# rss += error * error\n# print rss", "QUIZ QUESTIONS\n1. 
What is the RSS of the learned model on the normalized dataset?\n2. Which features had weight zero at convergence?\nEvaluating LASSO fit with more features\nLet us split the sales dataset into training and test sets.", "train_data,test_data = sales.random_split(.8,seed=0)", "Let us consider the following set of features.", "all_features = ['bedrooms',\n 'bathrooms',\n 'sqft_living',\n 'sqft_lot',\n 'floors',\n 'waterfront', \n 'view', \n 'condition', \n 'grade',\n 'sqft_above',\n 'sqft_basement',\n 'yr_built', \n 'yr_renovated']", "First, create a normalized feature matrix from the TRAINING data with these features. (Make sure you store the norms for the normalization, since we'll use them later)", "my_output = 'price'\n(all_feature_matrix, output) = get_numpy_data(train_data, all_features, my_output)\n(normalized_all_feature_matrix, simple_norms) = normalize_features(all_feature_matrix) # normalize features\ninitial_weights = np.zeros(14)\nl1_penalty = 1e7\ntolerance = 1.0", "First, learn the weights with l1_penalty=1e7, on the training data. Initialize weights to all zeros, and set the tolerance=1. Call the resulting weights weights1e7; you will need them later.", "weights1e7 = lasso_cyclical_coordinate_descent(normalized_all_feature_matrix, output,\n initial_weights, l1_penalty=1e7, tolerance=1)\nprint weights1e7", "QUIZ QUESTION\nWhat features had non-zero weight in this case?\nNext, learn the weights with l1_penalty=1e8, on the training data. Initialize weights to all zeros, and set the tolerance=1. Call the resulting weights weights1e8; you will need them later.", "weights1e8 = lasso_cyclical_coordinate_descent(normalized_all_feature_matrix, output,\n initial_weights, l1_penalty=1e8, tolerance=1)\nprint weights1e8", "QUIZ QUESTION\nWhat features had non-zero weight in this case?\nFinally, learn the weights with l1_penalty=1e4, on the training data. Initialize weights to all zeros, and set the tolerance=5e5. Call the resulting weights weights1e4; you will need them later. 
(This case will take quite a bit longer to converge than the others above.)", "weights1e4 = lasso_cyclical_coordinate_descent(normalized_all_feature_matrix, output,\n initial_weights, l1_penalty=1e4, tolerance=5e5)\nprint weights1e4", "QUIZ QUESTION\nWhat features had non-zero weight in this case?\nRescaling learned weights\nRecall that we normalized our feature matrix, before learning the weights. To use these weights on a test set, we must normalize the test data in the same way.\nAlternatively, we can rescale the learned weights to include the normalization, so we never have to worry about normalizing the test data: \nIn this case, we must scale the resulting weights so that we can make predictions with original features:\n 1. Store the norms of the original features to a vector called norms:\nfeatures, norms = normalize_features(features)\n 2. Run Lasso on the normalized features and obtain a weights vector\n 3. Compute the weights for the original features by performing element-wise division, i.e.\nweights_normalized = weights / norms\nNow, we can apply weights_normalized to the test data, without normalizing it!\nCreate a normalized version of each of the weights learned above. 
(weights1e4, weights1e7, weights1e8).", "# (normalized_simple_feature_matrix, simple_norms) = normalize_features(all_features) # normalize features\nnormalized_weights1e7 = weights1e7 / simple_norms\nprint normalized_weights1e7[3]\nnormalized_weights1e4 = weights1e4 / simple_norms\nnormalized_weights1e8 = weights1e8 / simple_norms\n", "To check your results, if you call normalized_weights1e7 the normalized version of weights1e7, then:\nprint normalized_weights1e7[3]\nshould return 161.31745624837794.\nEvaluating each of the learned models on the test data\nLet's now evaluate the three models on the test data:", "(test_feature_matrix, test_output) = get_numpy_data(test_data, all_features, 'price')", "Compute the RSS of each of the three normalized weights on the (unnormalized) test_feature_matrix:", "prediction = predict_output(test_feature_matrix, normalized_weights1e4)\nrss = 0\nfor i in range(0, len(prediction)):\n error = prediction[i] - test_data['price'][i]\n rss += error * error\nprint rss\n\nprediction = predict_output(test_feature_matrix, normalized_weights1e7)\nrss = 0\nfor i in range(0, len(prediction)):\n error = prediction[i] - test_data['price'][i]\n rss += error * error\nprint rss\n\nprediction = predict_output(test_feature_matrix, normalized_weights1e8)\nrss = 0\nfor i in range(0, len(prediction)):\n error = prediction[i] - test_data['price'][i]\n rss += error * error\nprint rss", "QUIZ QUESTION\nWhich model performed best on the test data?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
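The per-coordinate update in the LASSO notebook above is just soft-thresholding of ro[i]. Below is a minimal self-contained sketch of a single step; the function names are illustrative (not the notebook's graded helpers), and it assumes column 0 is the unregularized intercept and the feature columns are normalized, as in the notebook.

```python
import numpy as np

def soft_threshold(ro_i, l1_penalty):
    # Shrink ro_i toward zero by l1_penalty/2; snap to exactly 0 in between.
    if ro_i < -l1_penalty / 2.0:
        return ro_i + l1_penalty / 2.0
    if ro_i > l1_penalty / 2.0:
        return ro_i - l1_penalty / 2.0
    return 0.0

def coordinate_step(i, X, y, weights, l1_penalty):
    # ro[i] = sum(feature_i * (residual with feature i's own contribution added back))
    prediction = X.dot(weights)
    ro_i = np.sum(X[:, i] * (y - prediction + weights[i] * X[:, i]))
    if i == 0:  # the intercept is not regularized
        return ro_i
    return soft_threshold(ro_i, l1_penalty)
```

On the notebook's own check case (the 2x2 normalized matrix with weights [1., 4.] and l1_penalty 0.1), this reproduces the expected value 0.425558846691.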
JackDi/phys202-2015-work
assignments/assignment12/FittingModelsEx01.ipynb
mit
[ "Fitting Models Exercise 1\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as opt\nfrom IPython.html.widgets import interact", "Fitting a quadratic curve\nFor this problem we are going to work with the following model:\n$$ y_{model}(x) = a x^2 + b x + c $$\nThe true values of the model parameters are as follows:", "a_true = 0.5\nb_true = 2.0\nc_true = -4.0", "First, generate a dataset using this model with these parameters and the following characteristics:\n\nFor your $x$ data use 30 uniformly spaced points between $[-5,5]$.\nAdd a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal).\n\nAfter you generate the data, make a plot of the raw data (use points).", "# YOUR CODE HERE\nxdata=np.linspace(-5,5,30)\nN=30\ndy=2.0\ndef ymodel(a,b,c):\n return a*xdata**2+b*xdata+c\nydata = a_true*xdata**2 + b_true * xdata + c_true + np.random.normal(0.0, dy, size=N)\n\nplt.errorbar(xdata, ydata, dy,\n fmt='.k', ecolor='lightgray')\nplt.xlabel('x')\nplt.ylabel('y');\n\nassert True # leave this cell for grading the raw data generation and plot", "Now fit the model to the dataset to recover estimates for the model's parameters:\n\nPrint out the estimates and uncertainties of each parameter.\nPlot the raw data and best fit of the model.", "# YOUR CODE HERE\ndef chi2(theta, x, y, dy):\n # theta = [a, b, c]\n return np.sum(((y - (theta[0] * x**2 + theta[1] * x + theta[2])) / dy) ** 2)\n\ndef manual_fit(a, b, c):\n modely = a*xdata**2 + b*xdata +c\n plt.plot(xdata, modely)\n plt.errorbar(xdata, ydata, dy,\n fmt='.k', ecolor='lightgray')\n plt.xlabel('x')\n plt.ylabel('y')\n plt.text(1, 15, 'a={0:.2f}'.format(a))\n plt.text(1, 12.5, 'b={0:.2f}'.format(b))\n plt.text(1, 10, 'c={0:.2f}'.format(c))\n plt.text(1, 8.0, '$\chi^2$={0:.2f}'.format(chi2([a,b,c],xdata,ydata, 
dy)))\n\n\n\ninteract(manual_fit, a=(-3.0,3.0,0.01), b=(0.0,4.0,0.01),c=(-5,5,0.1));\n\ndef deviations(theta, x, y, dy):\n return (y - (theta[0] * x**2 + theta[1] * x + theta[2])) / dy\n\ntheta_guess = np.array([1.0, 1.0, 1.0])\nresult = opt.leastsq(deviations, theta_guess, args=(xdata, ydata, dy), full_output=True)\n\ntheta_best = result[0]\ntheta_cov = result[1]\nprint('a = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))\nprint('b = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))\nprint('c = {0:.3f} +/- {1:.3f}'.format(theta_best[2], np.sqrt(theta_cov[2,2])))\n\n\n\n\nassert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
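The quadratic fit in the notebook above can also be done in closed form: np.polyfit solves the same least-squares problem directly, without an iterative optimizer. A sketch under the notebook's setup (30 points on [-5, 5], noise standard deviation 2); the seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true, c_true = 0.5, 2.0, -4.0

x = np.linspace(-5, 5, 30)
y = a_true * x**2 + b_true * x + c_true + rng.normal(0.0, 2.0, size=x.size)

# Degree-2 polyfit returns the coefficients in descending powers of x: [a, b, c].
a_fit, b_fit, c_fit = np.polyfit(x, y, 2)
```

With only 30 noisy points the estimates scatter around the true values; their spread shrinks as more points are added.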
probml/pyprobml
notebooks/book1/19/finetune_cnn_torch.ipynb
mit
[ "Please find the JAX implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/19/finetune_cnn_jax.ipynb\n<a href=\"https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/finetune_cnn_torch.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nFine-tuning a resnet image classifier to classify hotdog vs not-hotdog\nWe illustrate how to fine-tune a resnet classifier which has been pre-trained on ImageNet. \nBased on sec 13.2 of \nhttp://d2l.ai/chapter_computer-vision/fine-tuning.html.\nThe target dataset consists of 2 classes (hotdog vs no hotdog), and has 1400 images of each. (This example is inspired by Season 4, Episode 4 of the TV show Silicon Valley.)", "import numpy as np\nimport matplotlib.pyplot as plt\n\nnp.random.seed(seed=1)\nimport math\nimport os\n\ntry:\n import torch\nexcept ModuleNotFoundError:\n %pip install -qq torch\n import torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\ntry:\n import torchvision\nexcept ModuleNotFoundError:\n %pip install -qq torchvision\n import torchvision\n\n!mkdir figures # for saving plots\n\n!wget https://raw.githubusercontent.com/d2l-ai/d2l-en/master/d2l/torch.py -q -O d2l.py\nimport d2l", "Dataset", "d2l.DATA_HUB[\"hotdog\"] = (d2l.DATA_URL + \"hotdog.zip\", \"fba480ffa8aa7e0febbb511d181409f899b9baa5\")\n\ndata_dir = d2l.download_extract(\"hotdog\")\n\ntrain_imgs = torchvision.datasets.ImageFolder(os.path.join(data_dir, \"train\"))\ntest_imgs = torchvision.datasets.ImageFolder(os.path.join(data_dir, \"test\"))", "We show the first 8 positive and last 8 negative images. 
We see the aspect ratio is quite different.", "hotdogs = [train_imgs[i][0] for i in range(8)]\nnot_hotdogs = [train_imgs[-i - 1][0] for i in range(8)]\nd2l.show_images(hotdogs + not_hotdogs, 2, 8, scale=1.4);", "We use data augmentation at train and test time, as shown below.", "# We specify the mean and variance of the three RGB channels to normalize the\n# image channel\nnormalize = torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n\ntrain_augs = torchvision.transforms.Compose(\n [\n torchvision.transforms.RandomResizedCrop(224),\n torchvision.transforms.RandomHorizontalFlip(),\n torchvision.transforms.ToTensor(),\n normalize,\n ]\n)\n\ntest_augs = torchvision.transforms.Compose(\n [\n torchvision.transforms.Resize(256),\n torchvision.transforms.CenterCrop(224),\n torchvision.transforms.ToTensor(),\n normalize,\n ]\n)", "Model", "pretrained_net = torchvision.models.resnet18(pretrained=True)", "The final layer is called fc, for fully connected.", "finetune_net = torchvision.models.resnet18(pretrained=True)\nfinetune_net.fc = nn.Linear(finetune_net.fc.in_features, 2)\nnn.init.xavier_uniform_(finetune_net.fc.weight)", "Fine tuning\nIn D2L, they call their training routine train_ch13, since it is in their chapter 13. 
We modify their code so it uses a single GPU, by commenting out the DataParallel part.", "def train_batch(net, X, y, loss, trainer, devices):\n X = X.to(devices[0])\n y = y.to(devices[0])\n net.train()\n trainer.zero_grad()\n pred = net(X)\n l = loss(pred, y)\n l.sum().backward()\n trainer.step()\n train_loss_sum = l.sum()\n train_acc_sum = d2l.accuracy(pred, y)\n return train_loss_sum, train_acc_sum\n\n\ndef train(net, train_iter, test_iter, loss, trainer, num_epochs, devices=d2l.try_all_gpus()):\n timer, num_batches = d2l.Timer(), len(train_iter)\n animator = d2l.Animator(\n xlabel=\"epoch\", xlim=[1, num_epochs], ylim=[0, 1], legend=[\"train loss\", \"train acc\", \"test acc\"]\n )\n # net = nn.DataParallel(net, device_ids=devices).to(devices[0])\n net = net.to(devices[0])\n for epoch in range(num_epochs):\n # Store training_loss, training_accuracy, num_examples, num_features\n metric = d2l.Accumulator(4)\n for i, (features, labels) in enumerate(train_iter):\n timer.start()\n l, acc = train_batch(net, features, labels, loss, trainer, devices)\n metric.add(l, acc, labels.shape[0], labels.numel())\n timer.stop()\n if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:\n animator.add(epoch + (i + 1) / num_batches, (metric[0] / metric[2], metric[1] / metric[3], None))\n test_acc = d2l.evaluate_accuracy_gpu(net, test_iter)\n animator.add(epoch + 1, (None, None, test_acc))\n print(f\"loss {metric[0] / metric[2]:.3f}, train acc \" f\"{metric[1] / metric[3]:.3f}, test acc {test_acc:.3f}\")\n print(f\"{metric[2] * num_epochs / timer.sum():.1f} examples/sec on \" f\"{str(devices)}\")", "We update all the parameters, but use a 10x larger learning rate for the fc layer.", "def train_fine_tuning(net, learning_rate, batch_size=128, num_epochs=5, param_group=True):\n train_iter = torch.utils.data.DataLoader(\n torchvision.datasets.ImageFolder(os.path.join(data_dir, \"train\"), transform=train_augs),\n batch_size=batch_size,\n shuffle=True,\n )\n test_iter = 
torch.utils.data.DataLoader(\n torchvision.datasets.ImageFolder(os.path.join(data_dir, \"test\"), transform=test_augs), batch_size=batch_size\n )\n devices = d2l.try_all_gpus()\n loss = nn.CrossEntropyLoss(reduction=\"none\")\n if param_group:\n params_1x = [param for name, param in net.named_parameters() if name not in [\"fc.weight\", \"fc.bias\"]]\n trainer = torch.optim.SGD(\n [{\"params\": params_1x}, {\"params\": net.fc.parameters(), \"lr\": learning_rate * 10}],\n lr=learning_rate,\n weight_decay=0.001,\n )\n else:\n trainer = torch.optim.SGD(net.parameters(), lr=learning_rate, weight_decay=0.001)\n train(net, train_iter, test_iter, loss, trainer, num_epochs, devices)\n\ntrain_fine_tuning(finetune_net, 5e-5)", "Test the model", "net = finetune_net.to(\"cpu\")\nnet.eval(); # set to eval mode (not training)\n\nfname = os.path.join(data_dir, \"test\", \"hotdog\", \"1000.png\")\nfrom PIL import Image\n\nimg = Image.open(fname)\ndisplay(img)\n\nimg_t = test_augs(img) # convert to tensor\nbatch_t = torch.unsqueeze(img_t, 0)\nout = net(batch_t)\nprobs = F.softmax(out, dim=1)\nprint(probs)\n\nfname = os.path.join(data_dir, \"test\", \"not-hotdog\", \"1000.png\")\nfrom PIL import Image\n\nimg = Image.open(fname)\ndisplay(img)\n\nimg_t = test_augs(img) # convert to tensor\nbatch_t = torch.unsqueeze(img_t, 0)\nout = net(batch_t)\nprobs = F.softmax(out, dim=1)\nprint(probs)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
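The param_group branch of train_fine_tuning above gives the freshly initialized fc head a 10x larger learning rate than the pretrained backbone. The grouping itself is framework-agnostic; here is a plain-Python sketch of that logic (split_param_groups is a hypothetical helper, mirroring torch.optim's list-of-dicts convention).

```python
def split_param_groups(named_params, head_names, base_lr, head_lr_mult=10.0):
    # Mimics the notebook's param_group branch: every parameter whose name is
    # in head_names (e.g. {"fc.weight", "fc.bias"}) trains with a larger
    # learning rate than the pretrained backbone parameters.
    backbone, head = [], []
    for name, param in named_params:
        (head if name in head_names else backbone).append(param)
    return [
        {"params": backbone, "lr": base_lr},
        {"params": head, "lr": base_lr * head_lr_mult},
    ]
```

Passing a list like this to an optimizer constructor is exactly what the notebook does with torch.optim.SGD.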
grigorisg9gr/menpo-notebooks
menpo3d/Rasterization Basics.ipynb
bsd-3-clause
[ "Offscreen Rasterization Basics\nMenpo3D wraps a subproject called cyrasterize which allows for simple rasterization of 3D meshes. At the moment, only basic rendering is supported, with no lighting. However, in the near future many more features will be added.\nTo begin, we need to import a mesh.", "import numpy as np\nimport menpo3d.io as mio\n\nmesh = mio.import_builtin_asset('james.obj')", "As with all core Menpo objects, it is very simple to visualize what the textured mesh looks like. An external window will be created which shows the mesh that we just loaded (the lovely James Booth). This window is fully interactive and contains a number of features provided by the underlying window manager, Mayavi.\nLeave this window open so that we can try and replicate it using the rasterizer!\nNote: You must call %matplotlib qt before rendering any 3D meshes to prevent the notebook from crashing", "%matplotlib qt\nviewer = mesh.view()", "Fetching the viewer state\nOnce you've moved James around into an interesting pose, you might want to take a snapshot of this pose using the rasterizer! We allow you to easily access this state via a property on the viewer.\nNOTE: You must leave the visualisation window open in order to be able to access these settings", "viewer_settings = viewer.renderer_settings", "As you can see from the output below, the renderer_settings property provides all the necessary state to control the camera for rasterization.", "# Let's print the current state so that we can see it!\nnp.set_printoptions(linewidth=500, precision=1, suppress=True)\nfor k, v in viewer_settings.items():\n print(\"{}: \".format(k))\n print(v)", "Building a GLRasterizer\nNow that we have all the necessary state, we are able to initialize our rasterizer and produce output images. 
We begin by initialising a GLRasterizer with the necessary camera/rendering canvas state.", "from menpo3d.rasterize import GLRasterizer\n\n# Build a rasterizer configured from the current view\nr = GLRasterizer(**viewer_settings)", "We can then rasterize our mesh of James, given the camera parameters that we just initialised our rasterizer with. This will produce a single output image that should be identical (bar the background colour or any lighting settings) to the view shown in the visualisation window.", "# Rasterize to produce an RGB image\nrgb_img = r.rasterize_mesh(mesh)\n\n%matplotlib inline\nrgb_img.view()", "All rasterized images have their mask set to show what the rasterizer actually processed. Any black pixels were not processed by the shader.", "rgb_img.mask.view()", "Rasterisation of arbitrary floating point data\nGLRasterizer gives us the ability to rasterize arbitrary floating point information. For instance, we can render out an XYZ floating point shape image. This is particularly useful for simulating depth cameras such as the Microsoft Kinect. Note, however, that the depth (z) values returned are in world coordinates, and do not represent true distances from the 'camera'.", "# The first output is the RGB image as before, the second is the XYZ information\nrgb_img, shape_img = r.rasterize_mesh_with_shape_image(mesh)\n\n# The last channel is the z information in model space coordinates\n# Note that this is NOT camera depth\nshape_img.view(channels=2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
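The renderer_settings captured above boil down to a camera (a combined model-view-projection matrix) plus a viewport. The arithmetic a rasterizer applies to turn mesh vertices into pixel coordinates can be sketched in numpy; this is illustrative only, not cyrasterize's actual internals, and conventions (such as whether image rows grow downward) vary between renderers.

```python
import numpy as np

def project_points(points, mvp, width, height):
    # points: (N, 3) vertices; mvp: (4, 4) model-view-projection matrix.
    n = points.shape[0]
    homog = np.hstack([points, np.ones((n, 1))])  # homogeneous coordinates
    clip = homog @ mvp.T                          # clip space
    ndc = clip[:, :3] / clip[:, 3:4]              # perspective divide -> [-1, 1]
    cols = (ndc[:, 0] + 1.0) * 0.5 * width        # viewport transform
    rows = (1.0 - ndc[:, 1]) * 0.5 * height       # assume image rows grow downward
    return np.stack([rows, cols], axis=1)
```

With an identity matrix, a point at the origin lands in the middle of the image, which is a handy sanity check before plugging in a real camera.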
astarostin/MachineLearningSpecializationCoursera
course4/week1 - Биномиальный критерий для доли - demo.ipynb
apache-2.0
[ "Биномиальный критерий для доли", "import numpy as np\nfrom scipy import stats\n\n%pylab inline", "Shaken, not stirred\nДжеймс Бонд говорит, что предпочитает мартини смешанным, но не взболтанным. Проведём слепой тест (blind test): n раз предложим ему пару напитков и выясним, какой из двух он предпочитает:\n\nвыборка - бинарный вектор длины $n$, где 1 - Джеймс Бонд предпочел смешанный напиток, 0 - взболтанный;\nгипотеза $H_0$ - Джеймс Бонд не различает 2 вида напитков и выбирает наугад;\nстатистика $t$ - количество единиц в выборке.", "n = 16\nn_samples = 1000\nsamples = np.random.randint(2, size = (n_samples, n))\n\nt_stat = map(sum, samples)\n\npylab.hist(t_stat, bins = 16, color = 'b', range = (0, 16), label = 't_stat')\npylab.legend()", "Нулевое распределение статистики — биномиальное $Bin(n, 0.5)$\nДвусторонняя альтернатива\nгипотеза $H_1$ - Джеймс Бонд предпочитает какой-то определённый вид мартини.", "stats.binom_test(12, 16, 0.5, alternative = 'two-sided')\n\nstats.binom_test(13, 16, 0.5, alternative = 'two-sided')\n\nstats.binom_test(67, 100, 0.75, alternative='two-sided')\n\nstats.binom_test(22, 50, 0.75, alternative='two-sided')", "Односторонняя альтернатива\nгипотеза $H_1$ - Джеймс Бонд предпочитает смешанный напиток.", "stats.binom_test(12, 16, 0.5, alternative = 'greater')\n\nstats.binom_test(11, 16, 0.5, alternative = 'greater')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
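stats.binom_test in the notebook above computes exact p-values by summing the binomial pmf. A from-scratch sketch of the one-sided case, plus the two-sided case for the symmetric p0 = 0.5 null (scipy's two-sided rule for general p0 is more involved: it sums every outcome whose pmf does not exceed the observed one):

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_test_greater(k, n, p=0.5):
    # one-sided p-value: P(X >= k) under H0, with X ~ Bin(n, p)
    return sum(binom_pmf(i, n, p) for i in range(k, n + 1))

def binom_test_two_sided_fair(k, n):
    # With p0 = 0.5 the null distribution is symmetric, so the two-sided
    # p-value is twice the smaller tail, capped at 1.
    lower = 1.0 - binom_test_greater(k + 1, n)  # P(X <= k)
    return min(1.0, 2.0 * min(lower, binom_test_greater(k, n)))
```

For the notebook's example, binom_test_greater(12, 16) gives 2517/65536 (about 0.0384) and the two-sided version gives about 0.0768, matching the values scipy returns.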
cetoli/draft
src/supergame/ipnb/supergame.ipynb
gpl-2.0
[ "Activ Spyder - Captura de página do ActivUfrj\n* This file is part of program Activ Spyder\n* Copyright © 2022 Carlo Oliveira &#99;&#97;&#114;&#108;&#111;&#64;&#110;&#99;&#101;&#46;&#117;&#102;&#114;&#106;&#46;&#98;&#114;,\n* Labase labase.selfip.org; GPL is.gd/3Udt.\n* SPDX-License-Identifier: (GPLv3-or-later AND LGPL-2.0-only) WITH bison-exception\nCrawler for SuperGame -\nObtém versões dos relatórios dos games.\n\n\ncodeauthor:: Carlo Oliveira &#99;&#97;&#114;&#108;&#111;&#64;&#117;&#102;&#114;&#106;&#46;&#98;&#114;\n\nChangelog\n\nversionadded:: 22.05\n\n Criação do raspador de página.\n\n\nversionchanged:: 22.06\n\n Gráficos de aceleração.\n\n\n\nLeitura do Arquivo Capturado pelo Crawler", "import pandas as pd\ndf = pd.read_json(\"../author_data.json\")\ndf.info()\ndf", "Campos existentes dos dados originais\n| Nome | Descrição dos campos relevantes |\n|-------------:|---------------------------------|\n| author | Nome do participante |\n| version | Versão corrente do texto |\n| data_cri | Data de criação |\n| data_alt | Data de alteração |\n| alterado_por | Autor de alteração |\n| conteudo | Conteúdo da página |\nCampos Gerados a partir dos dados originais\n| Nome | Descrição dos campos relevantes |\n|-------------------:|-------------------------------------|\n| text_size | Tamanho em letras do conteúdo |\n| conta_imagem | Contagem de imagens da página |\n| velocidade_imagem | Aumento de imagens da página |\n| acelera_imagem | Seg. derivada de imagens da página |\n| conta_palavra | Contagem de palavras da página |\n| velocidade_palavra | Aumento de palavras da página |\n| acelera_palavra | Seg. 
derivada de palavras da página |", "import json\nimport csv\nimport bs4\n# load data using Python JSON module\n\nwith open('../stopwords.txt','r') as f:\n stopwords = f.read().split()\ndef image_count(html):\n soup = bs4.BeautifulSoup(html)\n image_tags = soup.find_all('img')\n return len([img for img in image_tags if \"/file/MATERIAIS.DESIGN.ARQUITETURA\" in img['src']])\ndef word_count(html):\n soup = bs4.BeautifulSoup(html)\n text = soup.get_text()\n words = [word for word in text.split() if word not in stopwords]\n return len(words)\n\nwith open('../author_data.json','r') as f:\n data = json.loads(f.read())# Flatten data\nheadings = [\"author\",\n\"version\",\n\"data_cri\",\n\"data_alt\",\n\"alterado_por\",\n\"owner\",\n\"text_size\",\n\"conta_imagem\",\n\"conta_palavra\",\n\"conteudo\"]\ndatan = {key: [] for key in headings}\n[datan[key].append(val) for aut in data for line in aut for key, val in line.items() if key in headings]\n[datan[\"text_size\"].append(len(line[\"conteudo\"])) for aut in data for line in aut]\n[datan[\"conta_imagem\"].append(image_count(line[\"conteudo\"])) for aut in data for line in aut]\n[datan[\"conta_palavra\"].append(word_count(line[\"conteudo\"])) for aut in data for line in aut]\n# datan\nwith open('../author_data.csv','w') as fw:\n w = csv.DictWriter(fw, datan.keys())\n w.writeheader()\n w.writerow(datan)\n\ndf = pd.DataFrame(datan)\npd.to_datetime(df.data_cri) #, errors = 'ignore')\ndf.data_cri = pd.to_datetime(df.data_cri)\ndf.data_alt = pd.to_datetime(df.data_alt)\ndf[\"velocidade_palavra\"] = df.groupby('author')['conta_palavra'].apply(lambda x: x.shift(1) - x)\ndf[\"acelera_palavra\"] = df.groupby('author')['velocidade_palavra'].apply(lambda x: x.shift(1) - x)\ndf[\"velocidade_imagem\"] = df.groupby('author')['conta_imagem'].apply(lambda x: x.shift(1) - x)\ndf[\"acelera_imagem\"] = df.groupby('author')['velocidade_imagem'].apply(lambda x: x.shift(1) - x)\ndf.info()\ndf\n\ndf.hist()", "Contagem da Produção de Palavras\nAs 
palavras no texto da página são contadas como forma de um aumento na habilidade de trabalhar com registros textuais.", "import seaborn as sns\nimport matplotlib.pyplot as plt\nsns.lineplot(x=\"version\", y=\"conta_palavra\", hue=\"author\", data=df).set(\n title='Contagem das palavras ao longo das versões', ylabel='Contagem das palavras')\nplt.gcf().set_size_inches(20, 10)", "Contagem da produção de imagens\nAs imagens da página são contadas como forma de um aumento na habilidade de trabalhar com registros visuais.", "import seaborn as sns\nimport matplotlib.pyplot as plt\nsns.lineplot(x=\"version\", y=\"conta_imagem\", hue=\"author\", data=df).set(\n title='Contagem das imagens ao longo das versões', ylabel='Contagem das imagens')\nplt.gcf().set_size_inches(20,10)", "Segunda derivada da contagem de palavras\nOs valores consecutivos de contagem de palavras são subtraídos para formar a velocidade. Velocidades consecutivas são subtraídas para obter a aceleração. Segundo a teoria metacognitiva do aprendizado, a aceleração na produção de resultados caracteriza a cognição como apta a entender o conteúdo estudado.", "import seaborn as sns\nimport matplotlib.pyplot as plt\nsns.lineplot(x=\"version\", y=\"acelera_palavra\", hue=\"author\", data=df).set(\n title='Aceleração da contagem das palavras ao longo das versões', ylabel='Variação do número de palavras')\nplt.gcf().set_size_inches(20, 10)", "Segunda derivada da contagem de imagens\nOs valores consecutivos de contagem de imagens são subtraídos para formar a velocidade.\nVelocidades consecutivas são subtraídas para obter a aceleração.\nComo já explicado anteriormente, a aceleração na produção de imagens caracteriza\num aumento na habilidade de trabalhar com registros visuais.", "import seaborn as sns\nimport matplotlib.pyplot as plt\nsns.lineplot(x=\"version\", y=\"acelera_imagem\", hue=\"author\", data=df).set(\n title='Aceleração das imagens ao longo das versões', ylabel='Variação do número de 
imagens')\nplt.gcf().set_size_inches(20, 10)", "Estatística da aceleração de palavras\nDistribuição estatística da aceleração na produção de texto pelos autores ao longo das versões.", "import seaborn as sns\nimport matplotlib.pyplot as plt\nsns.boxplot(x=\"version\", y=\"acelera_palavra\", data=df).set(\n title='Distribuição da aceleração ao longo das versões', ylabel='Variação do número de palavras')\nplt.gcf().set_size_inches(20, 10)", "Estatística da aceleração de palavras por autor\nDistribuição estatística da aceleração na produção de texto ao longo das versões para cada autor.", "import seaborn as sns\nimport matplotlib.pyplot as plt\nsns.boxplot(x=\"author\", y=\"acelera_palavra\", data=df).set(\n title='Distribuição da aceleração ao longo das versões por autor', ylabel='Variação do número de palavras')\nplt.gcf().set_size_inches(20, 10)", "Mapa da produção dos autores nas diversas versões\nO mapa mostra as dimensões de aceleração de palavras e imagens nos eixos y e x. Os autores são representados pelas cores e as versões pelo tamanho dos pontos. 
Este mapeamento permite observar a evolução cognitiva ao longo das versões tanto da produção textual como da visual.", "sns.relplot(x=\"acelera_palavra\", y=\"acelera_imagem\", hue=\"author\", size=\"version\",\n alpha=.5, palette=\"muted\", sizes=(40, 400),\n data=df)\nplt.gcf().set_size_inches(20, 10)\n", "Estatística da aceleração de imagens por autor\nDistribuição estatística da aceleração na produção visual ao longo das versões para cada autor.", "import seaborn as sns\nimport matplotlib.pyplot as plt\nsns.boxplot(x=\"author\", y=\"acelera_imagem\", data=df).set(\n title='Distribuição da aceleração ao longo das versões por autor', ylabel='Variação do número de imagens')\nplt.gcf().set_size_inches(20, 10)", "Regressão correlacionando produção textual e visual\nNo quadrante tendo como eixos as acelerações textuais e visuais, calcula-se uma regressão linear da produções dos autores ao longo das versões.\nOs coeficientes angulares das retas revelam tendências a evoluções que correlacionam as produções visuais com as textuais, com diversas abordagens cognitivas na aprendizagem.", "import seaborn as sns\nimport matplotlib.pyplot as plt\nsns.lmplot(x=\"acelera_imagem\", y=\"acelera_palavra\", hue=\"author\", data=df).set(\n title='Aceleração texto x imagem ao longo das versões por autor',\n ylabel='Aceleração do volume de texto', xlabel='Aceleração da contagem de imagens')\nplt.xlim(-15, 15)\nplt.ylim(-100, 100)\nplt.gcf().set_size_inches(20, 10)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
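The velocidade/acelera columns in the notebook above are first and second discrete differences of each author's counts. Note that the notebook computes x.shift(1) - x (previous minus current), so growth appears with a negative sign; the sketch below uses the more common current-minus-previous convention via np.diff.

```python
import numpy as np

def velocity_and_acceleration(counts):
    # counts: word (or image) counts per version, oldest first
    counts = np.asarray(counts, dtype=float)
    velocity = np.diff(counts)        # counts[i+1] - counts[i]: growth per version
    acceleration = np.diff(velocity)  # change in growth between versions
    return velocity, acceleration
```

Accelerating production, where each version adds more than the last, shows up as positive acceleration values under this convention.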
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/machine_learning_in_the_enterprise/labs/sdk-custom-image-classification-batch.ipynb
apache-2.0
[ "Vertex AI Custom Image Classification Model for Batch Prediction\nOverview\nIn this notebook, you learn how to use the Vertex SDK for Python to train and deploy a custom image classification model for batch prediction.\nLearning Objective\n\nCreate a Vertex AI custom job for training a model.\nTrain a TensorFlow model.\nMake a batch prediction.\nClean up resources.\n\nIntroduction\nIn this notebook, you will create a custom-trained model from a Python script in a Docker container using the Vertex SDK for Python, and then do a prediction on the deployed model by sending data. Alternatively, you can create custom-trained models using gcloud command-line tool, or online using the Cloud Console.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. \nMake sure to enable the Vertex AI API and Compute Engine API.\nInstallation\nInstall the latest (preview) version of Vertex SDK for Python.", "# Setup your dependencies\nimport os\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# Google Cloud Notebook requires dependencies to be installed with '--user'\nUSER_FLAG = \"\"\nif IS_GOOGLE_CLOUD_NOTEBOOK:\n USER_FLAG = \"--user\"\n\n# Upgrade the specified package to the newest available version\n! pip install {USER_FLAG} --upgrade google-cloud-aiplatform", "Install the latest GA version of google-cloud-storage library as well.", "# Upgrade the specified package to the newest available version\n! pip install {USER_FLAG} --upgrade google-cloud-storage", "Install the pillow library for loading images.", "# Upgrade the specified package to the newest available version\n! pip install {USER_FLAG} --upgrade pillow", "Install the numpy library for manipulation of image data.", "# Upgrade the specified package to the newest available version\n! 
pip install {USER_FLAG} --upgrade numpy", "Please ignore the incompatible errors.\nRestart the kernel\nOnce you've installed everything, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Set your project ID\nIf you don't know your project ID, you may be able to get your project ID using gcloud.", "import os\n\nPROJECT_ID = \"\"\n\nif not os.getenv(\"IS_TESTING\"):\n # Get your Google Cloud project ID from gcloud\n shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID: \", PROJECT_ID)", "Otherwise, set your project ID here.", "if PROJECT_ID == \"\" or PROJECT_ID is None:\n PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.", "# Import necessary libraries\nfrom datetime import datetime\n\n# Use a timestamp to ensure unique resources\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you submit a training job using the Cloud SDK, you upload a Python package\ncontaining your training code to a Cloud Storage bucket. Vertex AI runs\nthe code from this package. In this tutorial, Vertex AI also saves the\ntrained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model resources.\nSet the name of your Cloud Storage bucket below. 
It must be unique across all\nCloud Storage buckets.\nYou may also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Make sure to choose a region where Vertex AI services are\navailable. You may\nnot use a Multi-Regional Storage bucket for training with Vertex AI.", "# Fill in your bucket name and region\nBUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\nREGION = \"[your-region]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al $BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport Vertex SDK for Python\nImport the Vertex SDK for Python into your Python environment and initialize it.", "# Import necessary libraries\nimport os\nimport sys\n\nfrom google.cloud import aiplatform\nfrom google.cloud.aiplatform import gapic as aip\n\naiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME)", "Set hardware accelerators\nYou can set hardware accelerators for both training and prediction.\nSet the variables TRAIN_CPU/TRAIN_NCPU and DEPLOY_CPU/DEPLOY_NCPU to use a container image supporting a CPU and the number of CPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:\n(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)\n\nSee the locations where accelerators are available.\nOtherwise specify (None, None) to use a container image to run on a CPU.\nNote: TensorFlow releases earlier than 2.3 for GPU support fail to load the custom model in this tutorial. 
This issue is caused by static graph operations that are generated in the serving function. This is a known issue, which is fixed in TensorFlow 2.3. If you encounter this issue with your own custom models, use a container image for TensorFlow 2.3 or later with GPU support.\nFor this lab we will use a container image to run on a CPU.", "TRAIN_CPU, TRAIN_NCPU = (None, None)\n\nDEPLOY_CPU, DEPLOY_NCPU = (None, None)", "Set pre-built containers\nVertex AI provides pre-built containers to run training and prediction.\nFor the latest list, see Pre-built containers for training and Pre-built containers for prediction", "TRAIN_VERSION = \"tf-cpu.2-1\"\nDEPLOY_VERSION = \"tf2-cpu.2-1\"\n\nTRAIN_IMAGE = \"gcr.io/cloud-aiplatform/training/{}:latest\".format(TRAIN_VERSION)\nDEPLOY_IMAGE = \"gcr.io/cloud-aiplatform/prediction/{}:latest\".format(DEPLOY_VERSION)\n\nprint(\"Training:\", TRAIN_IMAGE, TRAIN_CPU, TRAIN_NCPU)\nprint(\"Deployment:\", DEPLOY_IMAGE, DEPLOY_CPU, DEPLOY_NCPU)", "Set machine types\nNext, set the machine types to use for training and prediction.\n\nSet the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction.\nmachine type\nn1-standard: 3.75GB of memory per vCPU\nn1-highmem: 6.5GB of memory per vCPU\nn1-highcpu: 0.9 GB of memory per vCPU\n\n\nvCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]\n\nNote: The following is not supported for training:\n\nstandard: 2 vCPUs\nhighcpu: 2, 4 and 8 vCPUs\n\nNote: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.", "# Set the machine type\nMACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nTRAIN_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Train machine type\", TRAIN_COMPUTE)\n\nMACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nDEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Deploy machine type\", DEPLOY_COMPUTE)", "Tutorial\nNow you are ready to start creating your own custom-trained model with CIFAR10.\nTrain a 
model\nThere are two ways you can train a custom model using a container image:\n\n\nUse a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.\n\n\nUse your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.\n\n\nDefine the command args for the training script\nPrepare the command-line arguments to pass to your training script.\n- args: The command line arguments to pass to the corresponding Python module. In this example, they will be:\n - \"--epochs=\" + EPOCHS: The number of epochs for training.\n - \"--steps=\" + STEPS: The number of steps (batches) per epoch.\n - \"--distribute=\" + TRAIN_STRATEGY: The training distribution strategy to use for single or distributed training.\n - \"single\": single device.\n - \"mirror\": all GPU devices on a single compute instance.\n - \"multi\": all GPU devices on all compute instances.", "# Define the command arguments for the training script\nJOB_NAME = \"custom_job_\" + TIMESTAMP\nMODEL_DIR = \"{}/{}\".format(BUCKET_NAME, JOB_NAME)\n\nif not TRAIN_NCPU or TRAIN_NCPU < 2:\n TRAIN_STRATEGY = \"single\"\nelse:\n TRAIN_STRATEGY = \"mirror\"\n\nEPOCHS = 20\nSTEPS = 100\n\nCMDARGS = [\n \"--epochs=\" + str(EPOCHS),\n \"--steps=\" + str(STEPS),\n \"--distribute=\" + TRAIN_STRATEGY,\n]", "Training script\nIn the next cell, you will write the contents of the training script, task.py. In summary:\n\nGets the directory in which to save the model artifacts from the environment variable AIP_MODEL_DIR. 
This variable is set by the training service.\nLoads CIFAR10 dataset from TF Datasets (tfds).\nBuilds a model using TF.Keras model API.\nCompiles the model (compile()).\nSets a training distribution strategy according to the argument args.distribute.\nTrains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps\nSaves the trained model (save(MODEL_DIR)) to the specified model directory.", "%%writefile task.py\n# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10\n\nimport tensorflow_datasets as tfds\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\nimport argparse\nimport os\nimport sys\ntfds.disable_progress_bar()\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--lr', dest='lr',\n default=0.01, type=float,\n help='Learning rate.')\nparser.add_argument('--epochs', dest='epochs',\n default=10, type=int,\n help='Number of epochs.')\nparser.add_argument('--steps', dest='steps',\n default=200, type=int,\n help='Number of steps per epoch.')\nparser.add_argument('--distribute', dest='distribute', type=str, default='single',\n help='distributed training strategy')\nargs = parser.parse_args()\n\nprint('Python Version = {}'.format(sys.version))\nprint('TensorFlow Version = {}'.format(tf.__version__))\nprint('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))\nprint('DEVICES', device_lib.list_local_devices())\n\n# Single Machine, single compute device\nif args.distribute == 'single':\n if tf.test.is_gpu_available():\n strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n else:\n strategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")\n# Single Machine, multiple compute device\nelif args.distribute == 'mirror':\n strategy = tf.distribute.MirroredStrategy()\n# Multiple Machine, multiple compute device\nelif args.distribute == 'multi':\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n\n# Multi-worker 
configuration\nprint('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))\n\n# Preparing dataset\nBUFFER_SIZE = 10000\nBATCH_SIZE = 64\n\ndef make_datasets_unbatched():\n # Scaling CIFAR10 data from (0, 255] to (0., 1.]\n def scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255.0\n return image, label\n\n datasets, info = tfds.load(name='cifar10',\n with_info=True,\n as_supervised=True)\n return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()\n\n\n# Build the Keras model\ndef build_and_compile_cnn_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Conv2D(32, 3, activation='relu'),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n model.compile(\n loss=tf.keras.losses.sparse_categorical_crossentropy,\n optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),\n metrics=['accuracy'])\n return model\n\n# Train the model\nNUM_WORKERS = strategy.num_replicas_in_sync\n# Here the batch size scales up by number of workers since\n# `tf.data.Dataset.batch` expects the global batch size.\nGLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS\nMODEL_DIR = os.getenv(\"AIP_MODEL_DIR\")\n\ntrain_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)\n\nwith strategy.scope():\n # Creation of dataset, and model building/compiling need to be within\n # `strategy.scope()`.\n model = build_and_compile_cnn_model()\n\nmodel.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)\nmodel.save(MODEL_DIR)", "Train the model\nDefine your custom training job on Vertex AI.\nUse the CustomTrainingJob class to define the job, which takes the following parameters:\n\ndisplay_name: The user-defined name of this training pipeline.\nscript_path: The local path to the training script.\ncontainer_uri: The URI of the training container 
image.\nrequirements: The list of Python package dependencies of the script.\nmodel_serving_container_image_uri: The URI of a container that can serve predictions for your model — either a prebuilt container or a custom container.\n\nUse the run function to start training, which takes the following parameters:\n\nargs: The command line arguments to be passed to the Python script.\nreplica_count: The number of worker replicas.\nmodel_display_name: The display name of the Model if the script produces a managed Model.\nmachine_type: The type of machine to use for training.\naccelerator_type: The hardware accelerator type.\naccelerator_count: The number of accelerators to attach to a worker replica.\n\nThe run function creates a training pipeline that trains and creates a Model object. After the training pipeline completes, the run function returns the Model object.", "# Define your custom training job and use the run function to start the training\njob = # TODO -- Your code goes here(\n display_name=JOB_NAME,\n script_path=\"task.py\",\n container_uri=TRAIN_IMAGE,\n requirements=[\"tensorflow_datasets==1.3.0\"],\n model_serving_container_image_uri=DEPLOY_IMAGE,\n)\n\nMODEL_DISPLAY_NAME = \"cifar10-\" + TIMESTAMP\n\n# Start the training\nif TRAIN_CPU:\n model = # TODO -- Your code goes here(\n model_display_name=MODEL_DISPLAY_NAME,\n args=CMDARGS,\n replica_count=1,\n machine_type=TRAIN_COMPUTE,\n accelerator_type=TRAIN_CPU.name,\n accelerator_count=TRAIN_NCPU,\n )\nelse:\n model = # TODO -- Your code goes here(\n model_display_name=MODEL_DISPLAY_NAME,\n args=CMDARGS,\n replica_count=1,\n machine_type=TRAIN_COMPUTE,\n accelerator_count=0,\n )", "Make a batch prediction request\nSend a batch prediction request to your deployed model.\nGet test data\nDownload images from the CIFAR dataset and preprocess them.\nDownload the test images\nDownload the provided set of images from the CIFAR dataset:", "# Download the images\n! 
gsutil -m cp -r gs://cloud-samples-data/ai-platform-unified/cifar_test_images .", "Preprocess the images\nBefore you can run the data through the endpoint, you need to preprocess it to match the format that your custom model defined in task.py expects.\nx_test:\nNormalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single byte integer pixel with a 32-bit floating point number between 0 and 1.\ny_test:\nYou can extract the labels from the image filenames. Each image's filename format is \"image_{LABEL}_{IMAGE_NUMBER}.jpg\"", "import numpy as np\nfrom PIL import Image\n\n# Load image data\nIMAGE_DIRECTORY = \"cifar_test_images\"\n\nimage_files = [file for file in os.listdir(IMAGE_DIRECTORY) if file.endswith(\".jpg\")]\n\n# Decode JPEG images into numpy arrays\nimage_data = [\n np.asarray(Image.open(os.path.join(IMAGE_DIRECTORY, file))) for file in image_files\n]\n\n# Scale and convert to expected format\nx_test = [(image / 255.0).astype(np.float32).tolist() for image in image_data]\n\n# Extract labels from image name\ny_test = [int(file.split(\"_\")[1]) for file in image_files]", "Prepare data for batch prediction\nBefore you can run the data through batch prediction, you need to save the data into one of a few possible formats.\nFor this tutorial, use JSONL as it's compatible with the 3-dimensional list that each image is currently represented in. 
To do this:\n\nIn a file, write each instance as JSON on its own line.\nUpload this file to Cloud Storage.\n\nFor more details on batch prediction input formats: https://cloud.google.com/vertex-ai/docs/predictions/batch-predictions#batch_request_input", "import json\n\nBATCH_PREDICTION_INSTANCES_FILE = \"batch_prediction_instances.jsonl\"\n\nBATCH_PREDICTION_GCS_SOURCE = (\n BUCKET_NAME + \"/batch_prediction_instances/\" + BATCH_PREDICTION_INSTANCES_FILE\n)\n\n# Write instances as JSONL\nwith open(BATCH_PREDICTION_INSTANCES_FILE, \"w\") as f:\n for x in x_test:\n f.write(json.dumps(x) + \"\\n\")\n\n# Upload to Cloud Storage bucket\n! gsutil cp $BATCH_PREDICTION_INSTANCES_FILE $BATCH_PREDICTION_GCS_SOURCE\n\nprint(\"Uploaded instances to: \", BATCH_PREDICTION_GCS_SOURCE)", "Send the prediction request\nTo make a batch prediction request, call the model object's batch_predict method with the following parameters: \n- instances_format: The format of the batch prediction request file: \"jsonl\", \"csv\", \"bigquery\", \"tf-record\", \"tf-record-gzip\" or \"file-list\"\n- predictions_format: The format of the batch prediction response file: \"jsonl\", \"csv\", \"bigquery\", \"tf-record\", \"tf-record-gzip\" or \"file-list\"\n- job_display_name: The human-readable name for the prediction job.\n- gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.\n- gcs_destination_prefix: The Cloud Storage path that the service will write the predictions to.\n- model_parameters: Additional filtering parameters for serving prediction results.\n- machine_type: The type of machine to use for prediction.\n- accelerator_type: The hardware accelerator type.\n- accelerator_count: The number of accelerators to attach to a worker replica.\n- starting_replica_count: The number of compute instances to initially provision.\n- max_replica_count: The maximum number of compute instances to scale to. 
In this tutorial, only one instance is provisioned.\nCompute instance scaling\nYou can specify a single instance (or node) to process your batch prediction request. This tutorial uses a single node, so the variables MIN_NODES and MAX_NODES are both set to 1.\nIf you want to use multiple nodes to process your batch prediction request, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the pricing page to understand the costs of autoscaling with multiple nodes.", "MIN_NODES = 1\nMAX_NODES = 1\n\n# The name of the job\nBATCH_PREDICTION_JOB_NAME = \"cifar10_batch-\" + TIMESTAMP\n\n# Folder in the bucket to write results to\nDESTINATION_FOLDER = \"batch_prediction_results\"\n\n# The Cloud Storage bucket to upload results to\nBATCH_PREDICTION_GCS_DEST_PREFIX = BUCKET_NAME + \"/\" + DESTINATION_FOLDER\n\n# Make SDK batch_predict method call\nbatch_prediction_job = # TODO -- Your code goes here(\n instances_format=\"jsonl\",\n predictions_format=\"jsonl\",\n job_display_name=BATCH_PREDICTION_JOB_NAME,\n gcs_source=BATCH_PREDICTION_GCS_SOURCE,\n gcs_destination_prefix=BATCH_PREDICTION_GCS_DEST_PREFIX,\n model_parameters=None,\n machine_type=DEPLOY_COMPUTE,\n accelerator_type=DEPLOY_CPU,\n accelerator_count=DEPLOY_NCPU,\n starting_replica_count=MIN_NODES,\n max_replica_count=MAX_NODES,\n sync=True,\n)", "Retrieve batch prediction results\nWhen the batch prediction is done processing, you can finally view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated when you created the batch prediction job. The predictions are located in a subdirectory starting with the name prediction. Within that directory, there is a file named prediction.results-xxxx-of-xxxx.\nLet's display the contents. You will get a row for each prediction. 
The row is the softmax probability distribution for the corresponding CIFAR10 classes.", "RESULTS_DIRECTORY = \"prediction_results\"\nRESULTS_DIRECTORY_FULL = RESULTS_DIRECTORY + \"/\" + DESTINATION_FOLDER\n\n# Create missing directories\nos.makedirs(RESULTS_DIRECTORY, exist_ok=True)\n\n# Get the Cloud Storage paths for each result\n! gsutil -m cp -r $BATCH_PREDICTION_GCS_DEST_PREFIX $RESULTS_DIRECTORY\n\n# Get most recently modified directory\nlatest_directory = max(\n [\n os.path.join(RESULTS_DIRECTORY_FULL, d)\n for d in os.listdir(RESULTS_DIRECTORY_FULL)\n ],\n key=os.path.getmtime,\n)\n\n# Get downloaded results in directory\nresults_files = []\nfor dirpath, subdirs, files in os.walk(latest_directory):\n for file in files:\n if file.startswith(\"prediction.results\"):\n results_files.append(os.path.join(dirpath, file))\n\n# Consolidate all the results into a list\nresults = []\nfor results_file in results_files:\n # Download each result\n with open(results_file, \"r\") as file:\n results.extend([json.loads(line) for line in file.readlines()])", "Evaluate results\nYou can then run a quick evaluation on the prediction results:\n\nnp.argmax: Convert each list of confidence levels to a label\nCompare the predicted labels to the actual labels\nCalculate accuracy as correct/total\n\nTo improve the accuracy, try training for a higher number of epochs.", "# Evaluate the results\ny_predicted = [np.argmax(result[\"prediction\"]) for result in results]\n\ncorrect = sum(y_predicted == np.array(y_test))\ntotal = len(y_predicted)\nprint(\n f\"Correct predictions = {correct}, Total predictions = {total}, Accuracy = {correct/total}\"\n)", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nTraining Job\nModel\nCloud Storage Bucket", "delete_training_job = True\ndelete_model = True\n\n# Warning: 
Setting this to true will delete everything in your bucket\ndelete_bucket = False\n\n# Delete the training job\n# TODO -- Your code goes here()\n\n# Delete the model\n# TODO -- Your code goes here()\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil -m rm -r $BUCKET_NAME" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yinlx/MLDN
titanic_survival_exploration/Titanic_Survival_Exploration.ipynb
gpl-3.0
[ "Machine Learning Engineer Nanodegree\nIntroduction and Foundations\nProject 0: Titanic Survival Exploration\nIn 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.\n\nTip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. \n\nGetting Started\nTo begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.\nRun the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.\n\nTip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. 
Markdown allows you to write easy-to-read plain text that can be converted to HTML.", "import numpy as np\nimport pandas as pd\n\n# RMS Titanic data visualization code \nfrom titanic_visualizations import survival_stats\nfrom IPython.display import display\n%matplotlib inline\n\n# Load the dataset\nin_file = 'titanic_data.csv'\nfull_data = pd.read_csv(in_file)\n\n# Print the first few entries of the RMS Titanic data\ndisplay(full_data.head())", "From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:\n- Survived: Outcome of survival (0 = No; 1 = Yes)\n- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)\n- Name: Name of passenger\n- Sex: Sex of the passenger\n- Age: Age of the passenger (Some entries contain NaN)\n- SibSp: Number of siblings and spouses of the passenger aboard\n- Parch: Number of parents and children of the passenger aboard\n- Ticket: Ticket number of the passenger\n- Fare: Fare paid by the passenger\n- Cabin: Cabin number of the passenger (Some entries contain NaN)\n- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)\nSince we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.\nRun the code cell below to remove Survived as a feature of the dataset and store it in outcomes.", "# Store the 'Survived' feature in a new variable and remove it from the dataset\noutcomes = full_data['Survived']\ndata = full_data.drop('Survived', axis = 1)\n\n# Show the new dataset with 'Survived' removed\ndisplay(data.head())", "The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. 
That means for any passenger data.loc[i], they have the survival outcome outcomes[i].\nTo measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers. \nThink: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?", "def accuracy_score(truth, pred):\n \"\"\" Returns accuracy score for input truth and predictions. \"\"\"\n \n # Ensure that the number of predictions matches number of outcomes\n if len(truth) == len(pred): \n \n # Calculate and return the accuracy as a percent\n return \"Predictions have an accuracy of {:.2f}%.\".format((truth == pred).mean()*100)\n \n else:\n return \"Number of predictions does not match number of outcomes!\"\n \n# Test the 'accuracy_score' function\npredictions = pd.Series(np.ones(5, dtype = int))\nprint accuracy_score(outcomes[:5], predictions)", "Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.\n\nMaking Predictions\nIf we were told to make a prediction about any passenger aboard the RMS Titanic whom we did not know anything about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers as a whole did not survive the ship sinking.\nThe function below will always predict that a passenger did not survive.", "def predictions_0(data):\n \"\"\" Model with no features. 
Always predicts a passenger did not survive. \"\"\"\n\n predictions = []\n for _, passenger in data.iterrows():\n \n # Predict the survival of 'passenger'\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_0(data)", "Question 1\nUsing the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: Predictions have an accuracy of 61.62%.\nLet's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.\nRun the code cell below to plot the survival outcomes of passengers based on their sex.", "survival_stats(data, outcomes, 'Sex')", "Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.", "def predictions_1(data):\n \"\"\" Model with one feature: \n - Predict a passenger survived if they are female. 
\"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n if passenger['Sex'] == 'female':\n predictions.append(1)\n else:\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_1(data)", "Question 2\nHow accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: Predictions have an accuracy of 78.68%.\nUsing just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. Consider, for example, all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.\nRun the code cell below to plot the survival outcomes of male passengers based on their age.", "survival_stats(data, outcomes, 'Age', [\"Sex == 'male'\"])", "Examining the survival statistics, the majority of males younger then 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. 
Otherwise, we will predict they do not survive.\nFill in the missing code below so that the function will make this prediction.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.", "def predictions_2(data):\n \"\"\" Model with two features: \n - Predict a passenger survived if they are female.\n - Predict a passenger survived if they are male and younger than 10. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n if passenger['Sex'] == 'female':\n predictions.append(1)\n elif passenger['Age'] < 10:\n predictions.append(1)\n else:\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_2(data)", "Question 3\nHow accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?\nHint: Run the code cell below to see the accuracy of this prediction.", "print accuracy_score(outcomes, predictions)", "Answer: Predictions have an accuracy of 79.35%.\nAdding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. \nPclass, Sex, Age, SibSp, and Parch are some suggested features to try.\nUse the survival_stats function below to examine various survival statistics.\nHint: To use multiple filter conditions, put each condition in the list passed as the last argument.
Example: [\"Sex == 'male'\", \"Age &lt; 18\"]", "survival_stats(data, outcomes, 'Age', [\"Sex == 'female'\", \"Pclass == 3\"])", "After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.\nMake sure to keep track of the various features and conditions you tried before arriving at your final prediction model.\nHint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.", "def predictions_3(data):\n \"\"\" Model with multiple features. Makes a prediction with an accuracy of at least 80%. \"\"\"\n \n predictions = []\n for _, passenger in data.iterrows():\n if passenger['Sex'] == 'female':\n if passenger['Pclass'] == 3 and passenger['Age'] >= 30:\n predictions.append(0)\n else:\n predictions.append(1)\n else:\n if passenger['Age'] <= 10:\n predictions.append(1)\n else:\n predictions.append(0)\n \n # Return our predictions\n return pd.Series(predictions)\n\n# Make the predictions\npredictions = predictions_3(data)", "Question 4\nDescribe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?\nHint: Run the code cell below to see the accuracy of your predictions.", "print accuracy_score(outcomes, predictions)", "Answer: \n- After independently investigating most of the features, I focused on the Pclass feature. It turns out a majority of the passengers in the lower class did survive. \n- Then, I investigated the Age feature for female passengers in the lower class (Pclass == 3) and found that the majority of the female passengers in the lower class older than 30 did not survive.\nPredictions have an accuracy of 80.36%.\nConclusion\nCongratulations on what you've accomplished here!
You should now have an algorithm for predicting whether or not a person survived the Titanic disaster, based on their features. In fact, what you have done here is a manual implementation of a simple machine learning model, the decision tree. In a decision tree, we split the data into smaller groups, one feature at a time. Each of these splits will result in groups that are more homogeneous than the original group, so that our predictions become more accurate. The advantage of having a computer do things for us is that it will be more exhaustive and more precise than our manual exploration above. This link provides another introduction to machine learning using a decision tree.\nA decision tree is just one of many algorithms that fall into the category of supervised learning. In this Nanodegree, you'll learn about supervised learning techniques first. In supervised learning, we concern ourselves with using features of data to predict or model things with objective outcome labels. That is, each of our datapoints has a true outcome value, whether that be a category label like survival in the Titanic dataset, or a continuous value like predicting the price of a house.\nQuestion 5\nCan you think of an example of where supervised learning can be applied?\nHint: Be sure to note the outcome variable to be predicted and at least two features that might be useful for making the predictions.\nAnswer: Recently, my company wanted to recruit new interns. Based on previous candidates' resumes and whether or not they ultimately joined, we can make predictions about the current candidates. In this case, the features are the items on the resume, such as specialty, GPA, gender, awards, English level and so on, and the outcome is the candidate's decision.
So we can put more effort on the people who are more likely to join us.\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission." ]
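The hand-built rules in predictions_3 are, as noted above, a manual decision tree. As a quick, self-contained sketch of that idea, the same split-then-predict logic and its accuracy can be written in plain Python — note that the passenger rows below are invented purely for illustration and are not drawn from the real Titanic data:

```python
# A toy re-creation of the manual decision tree above.
# NOTE: these passenger rows are made up for illustration;
# they are NOT the real Titanic data.
passengers = [
    {'Sex': 'female', 'Age': 30, 'Survived': 1},
    {'Sex': 'male',   'Age': 8,  'Survived': 1},
    {'Sex': 'male',   'Age': 40, 'Survived': 0},
    {'Sex': 'female', 'Age': 22, 'Survived': 1},
    {'Sex': 'male',   'Age': 35, 'Survived': 1},  # this row the rule gets wrong
]

def predict(passenger):
    # First split: predict that females survived
    if passenger['Sex'] == 'female':
        return 1
    # Second split: predict that young boys survived
    return 1 if passenger['Age'] < 10 else 0

correct = sum(predict(p) == p['Survived'] for p in passengers)
accuracy = correct / float(len(passengers))
print('accuracy: %.2f' % accuracy)  # 4 of 5 toy rows correct -> 0.80
```

Each extra split (for example on Pclass, as in predictions_3) refines one of these branches; automating the search for good splits is exactly what a decision tree learner does.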
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kellyrowland/openmc
docs/source/pythonapi/examples/tally-arithmetic.ipynb
mit
[ "This notebook shows how tallies can be combined (added, subtracted, multiplied, etc.) using the Python API in order to create derived tallies. Since no covariance information is obtained, it is assumed that tallies are completely independent of one another when propagating uncertainties. The target problem is a simple pin cell.\nNote that this Notebook was created using the latest Pandas v0.16.1. Everything in the Notebook will run with older versions of Pandas, but the multi-indexing option in >v0.15.0 makes the tables look prettier.", "%load_ext autoreload\n%autoreload 2\n\nimport glob\nfrom IPython.display import Image\nimport numpy as np\n\nimport openmc\nfrom openmc.statepoint import StatePoint\nfrom openmc.summary import Summary\nfrom openmc.source import Source\nfrom openmc.stats import Box\n\n%matplotlib inline", "Generate Input Files\nFirst we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.", "# Instantiate some Nuclides\nh1 = openmc.Nuclide('H-1')\nb10 = openmc.Nuclide('B-10')\no16 = openmc.Nuclide('O-16')\nu235 = openmc.Nuclide('U-235')\nu238 = openmc.Nuclide('U-238')\nzr90 = openmc.Nuclide('Zr-90')", "With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pin.", "# 1.6 enriched fuel\nfuel = openmc.Material(name='1.6% Fuel')\nfuel.set_density('g/cm3', 10.31341)\nfuel.add_nuclide(u235, 3.7503e-4)\nfuel.add_nuclide(u238, 2.2625e-2)\nfuel.add_nuclide(o16, 4.6007e-2)\n\n# borated water\nwater = openmc.Material(name='Borated Water')\nwater.set_density('g/cm3', 0.740582)\nwater.add_nuclide(h1, 4.9457e-2)\nwater.add_nuclide(o16, 2.4732e-2)\nwater.add_nuclide(b10, 8.0042e-6)\n\n# zircaloy\nzircaloy = openmc.Material(name='Zircaloy')\nzircaloy.set_density('g/cm3', 6.55)\nzircaloy.add_nuclide(zr90, 7.2758e-3)", "With our three materials, we can now create a materials file object that can be exported to an
actual XML file.", "# Instantiate a MaterialsFile, add Materials\nmaterials_file = openmc.MaterialsFile()\nmaterials_file.add_material(fuel)\nmaterials_file.add_material(water)\nmaterials_file.add_material(zircaloy)\nmaterials_file.default_xs = '71c'\n\n# Export to \"materials.xml\"\nmaterials_file.export_to_xml()", "Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.", "# Create cylinders for the fuel and clad\nfuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)\nclad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)\n\n# Create boundary planes to surround the geometry\n# All six planes use reflective boundary conditions\nmin_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')\nmax_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')\nmin_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')\nmax_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')\nmin_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')\nmax_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')", "With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.", "# Create a Universe to encapsulate a fuel pin\npin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')\n\n# Create fuel Cell\nfuel_cell = openmc.Cell(name='1.6% Fuel')\nfuel_cell.fill = fuel\nfuel_cell.region = -fuel_outer_radius\npin_cell_universe.add_cell(fuel_cell)\n\n# Create a clad Cell\nclad_cell = openmc.Cell(name='1.6% Clad')\nclad_cell.fill = zircaloy\nclad_cell.region = +fuel_outer_radius & -clad_outer_radius\npin_cell_universe.add_cell(clad_cell)\n\n# Create a moderator Cell\nmoderator_cell = openmc.Cell(name='1.6% Moderator')\nmoderator_cell.fill = water\nmoderator_cell.region =
+clad_outer_radius\npin_cell_universe.add_cell(moderator_cell)", "OpenMC requires that there is a \"root\" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.", "# Create root Cell\nroot_cell = openmc.Cell(name='root cell')\nroot_cell.fill = pin_cell_universe\n\n# Add boundary planes\nroot_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z\n\n# Create root Universe\nroot_universe = openmc.Universe(universe_id=0, name='root universe')\nroot_universe.add_cell(root_cell)", "We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.", "# Create Geometry and set root Universe\ngeometry = openmc.Geometry()\ngeometry.root_universe = root_universe\n\n# Instantiate a GeometryFile\ngeometry_file = openmc.GeometryFile()\ngeometry_file.geometry = geometry\n\n# Export to \"geometry.xml\"\ngeometry_file.export_to_xml()", "With the geometry and materials finished, we now just need to define simulation parameters. 
In this case, we will use 5 inactive batches and 15 active batches each with 2500 particles.", "# OpenMC simulation parameters\nbatches = 20\ninactive = 5\nparticles = 2500\n\n# Instantiate a SettingsFile\nsettings_file = openmc.SettingsFile()\nsettings_file.batches = batches\nsettings_file.inactive = inactive\nsettings_file.particles = particles\nsettings_file.output = {'tallies': True, 'summary': True}\nsource_bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]\nsettings_file.source = Source(space=Box(\n source_bounds[:3], source_bounds[3:]))\n\n# Export to \"settings.xml\"\nsettings_file.export_to_xml()", "Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.", "# Instantiate a Plot\nplot = openmc.Plot(plot_id=1)\nplot.filename = 'materials-xy'\nplot.origin = [0, 0, 0]\nplot.width = [1.26, 1.26]\nplot.pixels = [250, 250]\nplot.color = 'mat'\n\n# Instantiate a PlotsFile, add Plot, and export to \"plots.xml\"\nplot_file = openmc.PlotsFile()\nplot_file.add_plot(plot)\nplot_file.export_to_xml()", "With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.", "# Run openmc in plotting mode\nexecutor = openmc.Executor()\nexecutor.plot_geometry(output=False)\n\n# Convert OpenMC's funky ppm to png\n!convert materials-xy.ppm materials-xy.png\n\n# Display the materials plot inline\nImage(filename='materials-xy.png')", "As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. 
The following code shows how to create a variety of tallies.", "# Instantiate an empty TalliesFile\ntallies_file = openmc.TalliesFile()\n\n# Create Tallies to compute microscopic multi-group cross-sections\n\n# Instantiate energy filter for multi-group cross-section Tallies\nenergy_filter = openmc.Filter(type='energy', bins=[0., 0.625e-6, 20.])\n\n# Instantiate flux Tally in moderator and fuel\ntally = openmc.Tally(name='flux')\ntally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id, moderator_cell.id]))\ntally.add_filter(energy_filter)\ntally.add_score('flux')\ntallies_file.add_tally(tally)\n\n# Instantiate reaction rate Tally in fuel\ntally = openmc.Tally(name='fuel rxn rates')\ntally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id]))\ntally.add_filter(energy_filter)\ntally.add_score('nu-fission')\ntally.add_score('scatter')\ntally.add_nuclide(u238)\ntally.add_nuclide(u235)\ntallies_file.add_tally(tally)\n\n# Instantiate reaction rate Tally in moderator\ntally = openmc.Tally(name='moderator rxn rates')\ntally.add_filter(openmc.Filter(type='cell', bins=[moderator_cell.id]))\ntally.add_filter(energy_filter)\ntally.add_score('absorption')\ntally.add_score('total')\ntally.add_nuclide(o16)\ntally.add_nuclide(h1)\ntallies_file.add_tally(tally)\n\n# K-Eigenvalue (infinity) tallies\nfiss_rate = openmc.Tally(name='fiss. rate')\nabs_rate = openmc.Tally(name='abs. rate')\nfiss_rate.add_score('nu-fission')\nabs_rate.add_score('absorption')\ntallies_file.add_tally(fiss_rate)\ntallies_file.add_tally(abs_rate)\n\n# Resonance Escape Probability tallies\ntherm_abs_rate = openmc.Tally(name='therm. abs. rate')\ntherm_abs_rate.add_score('absorption')\ntherm_abs_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))\ntallies_file.add_tally(therm_abs_rate)\n\n# Thermal Flux Utilization tallies\nfuel_therm_abs_rate = openmc.Tally(name='fuel therm. abs. 
rate')\nfuel_therm_abs_rate.add_score('absorption')\nfuel_therm_abs_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))\nfuel_therm_abs_rate.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id]))\ntallies_file.add_tally(fuel_therm_abs_rate)\n\n# Fast Fission Factor tallies\ntherm_fiss_rate = openmc.Tally(name='therm. fiss. rate')\ntherm_fiss_rate.add_score('nu-fission')\ntherm_fiss_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))\ntallies_file.add_tally(therm_fiss_rate)\n\n# Instantiate energy filter to illustrate Tally slicing\nenergy_filter = openmc.Filter(type='energy', bins=np.logspace(np.log10(1e-8), np.log10(20), 10))\n\n# Instantiate flux Tally in moderator and fuel\ntally = openmc.Tally(name='need-to-slice')\ntally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id, moderator_cell.id]))\ntally.add_filter(energy_filter)\ntally.add_score('nu-fission')\ntally.add_score('scatter')\ntally.add_nuclide(h1)\ntally.add_nuclide(u238)\ntallies_file.add_tally(tally)\n\n# Export to \"tallies.xml\"\ntallies_file.export_to_xml()", "Now we a have a complete set of inputs, so we can go ahead and run our simulation.", "# Remove old HDF5 (summary, statepoint) files\n!rm statepoint.*\n\n# Run OpenMC with MPI!\nexecutor.run_simulation()", "Tally Data Processing\nOur simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here loading the statepoint file and 'reading' the results. By default, the tally results are not read into memory because they might be large, even large enough to exceed the available memory on a computer.", "# Load the statepoint file\nsp = StatePoint('statepoint.20.h5')", "You may have also noticed we instructed OpenMC to create a summary file with lots of geometry information in it. 
This can help to produce more sensible output from the Python API, so we will use the summary file to link against.", "# Load the summary file and link with statepoint\nsu = Summary('summary.h5')\nsp.link_with_summary(su)", "We have a tally of the total fission rate and the total absorption rate, so we can calculate k-infinity as:\n$$k_\\infty = \\frac{\\langle \\nu \\Sigma_f \\phi \\rangle}{\\langle \\Sigma_a \\phi \\rangle}$$\nIn this notation, $\\langle \\cdot \\rangle^a_b$ represents an OpenMC tally that is integrated over region $a$ and energy range $b$. If $a$ or $b$ is not reported, it means the value represents an integral over all space or all energy, respectively.", "# Compute k-infinity using tally arithmetic\nfiss_rate = sp.get_tally(name='fiss. rate')\nabs_rate = sp.get_tally(name='abs. rate')\nkeff = fiss_rate / abs_rate\nkeff.get_pandas_dataframe()", "Notice that even though the neutron production rate and absorption rate are separate tallies, we still get a first-order estimate of the uncertainty on the quotient of them automatically!\nOften in textbooks you'll see k-infinity represented using the four-factor formula $$k_\\infty = p \\epsilon f \\eta.$$ Let's analyze each of these factors, starting with the resonance escape probability which is defined as $$p=\\frac{\\langle\\Sigma_a\\phi\\rangle_T}{\\langle\\Sigma_a\\phi\\rangle}$$ where the subscript $T$ means thermal energies.", "# Compute resonance escape probability using tally arithmetic\ntherm_abs_rate = sp.get_tally(name='therm. abs. rate')\nres_esc = therm_abs_rate / abs_rate\nres_esc.get_pandas_dataframe()", "The fast fission factor can be calculated as\n$$\\epsilon=\\frac{\\langle\\nu\\Sigma_f\\phi\\rangle}{\\langle\\nu\\Sigma_f\\phi\\rangle_T}$$", "# Compute fast fission factor using tally arithmetic\ntherm_fiss_rate = sp.get_tally(name='therm. fiss.
rate')\nfast_fiss = fiss_rate / therm_fiss_rate\nfast_fiss.get_pandas_dataframe()", "The thermal flux utilization is calculated as\n$$f=\\frac{\\langle\\Sigma_a\\phi\\rangle^F_T}{\\langle\\Sigma_a\\phi\\rangle_T}$$\nwhere the superscript $F$ denotes fuel.", "# Compute thermal flux utilization factor using tally arithmetic\nfuel_therm_abs_rate = sp.get_tally(name='fuel therm. abs. rate')\ntherm_util = fuel_therm_abs_rate / therm_abs_rate\ntherm_util.get_pandas_dataframe()", "The final factor is the number of fission neutrons produced per absorption in fuel, calculated as $$\\eta = \\frac{\\langle \\nu\\Sigma_f\\phi \\rangle_T}{\\langle \\Sigma_a \\phi \\rangle^F_T}$$", "# Compute neutrons produced per absorption (eta) using tally arithmetic\neta = therm_fiss_rate / fuel_therm_abs_rate\neta.get_pandas_dataframe()", "Now we can calculate $k_\\infty$ using the product of the factors from the four-factor formula.", "keff = res_esc * fast_fiss * therm_util * eta\nkeff.get_pandas_dataframe()", "We see that the value we've obtained here has exactly the same mean as before. However, because of the way it was calculated, the standard deviation appears to be larger.\nLet's move on to a more complicated example now. Earlier, we set up tallies to get reaction rates in the fuel and moderator in two energy groups for two different nuclides. We can use tally arithmetic to divide each of these reaction rates by the flux to get microscopic multi-group cross sections.", "# Compute microscopic multi-group cross-sections\nflux = sp.get_tally(name='flux')\nflux = flux.get_slice(filters=['cell'], filter_bins=[(fuel_cell.id,)])\nfuel_rxn_rates = sp.get_tally(name='fuel rxn rates')\nmod_rxn_rates = sp.get_tally(name='moderator rxn rates')\n\nfuel_xs = fuel_rxn_rates / flux\nfuel_xs.get_pandas_dataframe()", "We see that when the two tallies with multiple bins were divided, the derived tally contains the outer product of the combinations.
If the filters/scores are the same, no outer product is needed. The get_values(...) method allows us to obtain a subset of tally scores. In the following example, we obtain just the neutron production microscopic cross sections.", "# Show how to use Tally.get_values(...) with a CrossScore\nnu_fiss_xs = fuel_xs.get_values(scores=['(nu-fission / flux)'])\nprint(nu_fiss_xs)", "The same idea can be used not only for scores but also for filters and nuclides.", "# Show how to use Tally.get_values(...) with a CrossScore and CrossNuclide\nu235_scatter_xs = fuel_xs.get_values(nuclides=['(U-235 / total)'], \n scores=['(scatter / flux)'])\nprint(u235_scatter_xs)\n\n# Show how to use Tally.get_values(...) with a CrossFilter and CrossScore\nfast_scatter_xs = fuel_xs.get_values(filters=['energy'], \n filter_bins=[((0.625e-6, 20.),)], \n scores=['(scatter / flux)'])\nprint(fast_scatter_xs)", "A more advanced method is to use get_slice(...) to create a new derived tally that is a subset of an existing tally. This has the benefit that we can use get_pandas_dataframe() to see the tallies in a more human-readable format.", "# \"Slice\" the nu-fission data into a new derived Tally\nnu_fission_rates = fuel_rxn_rates.get_slice(scores=['nu-fission'])\nnu_fission_rates.get_pandas_dataframe()\n\n# \"Slice\" the H-1 scatter data in the moderator Cell into a new derived Tally\nneed_to_slice = sp.get_tally(name='need-to-slice')\nslice_test = need_to_slice.get_slice(scores=['scatter'], nuclides=['H-1'],\n filters=['cell'], filter_bins=[(moderator_cell.id,)])\nslice_test.get_pandas_dataframe()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
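A note on the uncertainty propagation used throughout the notebook above: when two independent tallies are divided, the relative variances add to first order. A minimal sketch of that delta-method rule — the means and standard deviations below are invented stand-ins for the nu-fission and absorption tally results, not actual OpenMC output:

```python
from math import sqrt

# First-order (delta-method) propagation for a quotient r = x / y of two
# independent quantities -- the same approximation used when dividing
# tallies above. The numbers below are made up for illustration.
x_mean, x_sd = 1.32, 0.004   # stand-in for the nu-fission rate tally
y_mean, y_sd = 1.00, 0.003   # stand-in for the absorption rate tally

r_mean = x_mean / y_mean
# Relative variances add: (sd_r / r)^2 = (sd_x / x)^2 + (sd_y / y)^2
r_sd = abs(r_mean) * sqrt((x_sd / x_mean) ** 2 + (y_sd / y_mean) ** 2)

print('k-infinity ~ %.5f +/- %.5f' % (r_mean, r_sd))
```

Because covariance between the two tallies is ignored, this estimate is only exact when the tallies really are independent, which is the assumption stated at the top of the notebook.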
snowicecat/umich-eecs445-f16
handsOn_lecture00_python_tutorial/lecture00_python_tutorial.ipynb
mit
[ "EECS 445: Python Tutorial\nPresented by: Zhao Fu\nSeptember 12, 2016\nReferences:\n1. https://docs.python.org/3/tutorial/\n2. https://docs.python.org/3/library/\n3. http://cs231n.github.io/python-numpy-tutorial/\n4. https://github.com/donnemartin/data-science-ipython-notebooks\nWhy Python?\n\nEasy to learn\nHigh-level data structures\nElegant syntax\nLots of useful packages for machine learning and data science\n\nInstall\n\nhttps://www.continuum.io/downloads\n\n\nNow we have python3 installed\n\nnumpy\nscipy\nscikit-learn\nmatplotlib\n...\n\nTo install packages:\nbash\nconda install &lt;PACKAGE_NAME&gt;\nbash \npip install &lt;PACKAGE_NAME&gt;\nLet's run our slides first!\njupyter notebook\nWant more fancy stuff? Just install RISE!\nconda install -c damianavila82 rise\nPlay with your toys!\nHere is an option to play with if you can't set up jupyter on your own computer: https://tmpnb.org.", "print ('Hello Python!')", "Python Basics\n\nData Types\nContainers\nFunctions\nClasses\n\nBasic data types\nNumbers\nIntegers and floats work as you would expect from other languages:", "x = 3\nprint (x, type(x))\n\nprint (x + 3) # Addition;\nprint (x - x) # Subtraction;\nprint (x * 2) # Multiplication;\nprint (x ** 3) # Exponentiation;\n\nprint (x)\nx += 1\nprint (x)\nx = x + 1\nprint (x) # Prints \"5\"\nx *= 2\nprint (x) # Prints \"10\"\n\ny = 2.5\nprint (type(y)) # Prints \"<class 'float'>\"\nprint (y, y + 1, y * 2, y ** 2) # Prints \"2.5 3.5 5.0 6.25\"", "Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.\nPython also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.", "print (17 / 3) # return float\nprint (17 // 3) # return integer\nprint (17 % 3) # Modulo operation", "Booleans\nPython implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.):", "t, f = True, False # Note the Capitalization!\nprint
(type(t)) # Prints \"<class 'bool'>\"", "Now let's look at the operations:", "print (t and f) # Logical AND;\nprint (t or f) # Logical OR;\nprint (not t) # Logical NOT;\nprint (t != f) # Logical XOR;", "Strings", "hello = 'hello' # String literals can use single quotes\nworld = \"world\" # or double quotes; it does not matter.\nprint (hello, len(hello))\n\nhw = hello + ' ' + world # String concatenation\nprint (hw) # prints \"hello world\"\n\n# sprintf style string formatting \nhw12 = '%s %s %d' % (hello, world, 12)\n# Recommended formatting style for Py3.0+ (https://pyformat.info)\nnew_py3_hw12 = '{:>15} {:1.1f} {}'.format('hello' + ' ' + 'world', 1, 2) \nprint (hw12)\nprint (new_py3_hw12) \n\ns = \"hello\"\nprint (s.capitalize()) # Capitalize a string; prints \"Hello\"\nprint (s.upper()) # Convert a string to uppercase; prints \"HELLO\"\nprint (s.rjust(7)) # Right-justify a string, padding with spaces; prints \"  hello\"\nprint (s.center(7)) # Center a string, padding with spaces; prints \" hello \"\nprint (s.replace('ll', '(ell)')) # Replace all instances of one substring with another;\n # prints \"he(ell)(ell)o\"\nprint (' world '.strip()) # Strip leading and trailing whitespace; prints \"world\"\n\n\"You can type ' inside\"\n\n'You can type \\' inside'", "You can find a list of all string methods in the documentation.\nContainers\nPython includes several built-in container types: lists, dictionaries, sets, and tuples.\nLists\nA list is the Python equivalent of an array, but is resizeable and can contain elements of different types:", "x = [1, 2, 3, 'a', 'b', 'c'] + ['hello'] # list append with the + operator\nprint (x, x[2]) # access by index\nprint (x[-1]) # index can be negative\n\nx.append('element')\nprint (x)\nprint (x.pop(), x)", "Slicing\nIn addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing:", "x = [1, 2, 3, 4, 5]\nprint (x[2:])\nprint (x[:3])\nprint (x[2:5])\nx[0:3] = ['a', 'b',
'c'] # modify elements in list\nprint (x)\n\ny = x[:] # copy list\ny[2] = 100 # x won't change\nprint ('y:', y)\nprint ('x:', x)", "As usual, you can find all the gory details about lists in the documentation.\nLoops\nYou can loop over the elements of a list like this:", "animals = ['cat', 'dog', 'monkey']\nfor animal in animals:\n print (animal)", "If you want access to the index of each element within the body of a loop, use the built-in enumerate function:", "animals = ['cat', 'dog', 'monkey']\nprint (enumerate(animals))\nfor idx, animal in enumerate(animals):\n print ('#%d: %s' % (idx + 1, animal))", "List comprehensions:\nWhen programming, frequently we want to transform one type of data into another. As a simple example, consider the following code that computes square numbers:", "nums = [0, 1, 2, 3, 4]\nsquares = []\nfor x in nums:\n squares.append(x ** 2)\nprint (squares)", "You can make this code simpler using a list comprehension:", "nums = [0, 1, 2, 3, 4]\nsquares = [x ** 2 for x in nums]\nprint (squares)", "List comprehensions can also contain conditions:", "nums = [0, 1, 2, 3, 4]\neven_squares = [x ** 2 for x in nums if x % 2 == 0]\neven_squares_alt = [i ** 2 for i in filter(lambda k: k % 2 == 0 , nums)]\nprint (even_squares_alt)\n\nnums = [0, 1, 2, 3, 4]\neven_squares_or_one = [x ** 2 if x % 2 == 0 else 1 for x in nums]\nprint (even_squares_or_one)", "Dictionaries\nA dictionary stores (key, value) pairs, similar to a Map in C++. 
You can use it like this:", "d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data\nprint (d['cat']) # Get an entry from a dictionary; prints \"cute\"\nprint ('cat' in d) # Check if a dictionary has a given key; prints \"True\"\n\nd['fish'] = 'wet' # Set an entry in a dictionary\nprint (d['fish']) # Prints \"wet\"\n\nprint (d['monkey']) # KeyError: 'monkey' not a key of d\n\nprint (d.get('monkey', 'N/A')) # Get an element with a default; prints \"N/A\"\nprint (d.get('fish', 'N/A')) # Get an element with a default; prints \"wet\"\n\ndel d['fish'] # Remove an element from a dictionary\nprint (d.get('fish', 'N/A')) # \"fish\" is no longer a key; prints \"N/A\"", "You can find all you need to know about dictionaries in the documentation.\nIt is easy to iterate over the keys in a dictionary:", "d = {'person': 2, 'cat': 4, 'spider': 8}\nfor animal in d:\n legs = d[animal]\n print ('A %s has %d legs' % (animal, legs))", "If you want access to keys and their corresponding values, use the items method:", "d = {'person': 2, 'cat': 4, 'spider': 8}\nfor animal, legs in d.items():\n print ('A %s has %d legs' % (animal, legs))", "Dictionary comprehensions: These are similar to list comprehensions, but allow you to easily construct dictionaries. For example:", "nums = [0, 1, 2, 3, 4]\neven_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}\nprint (even_num_to_square)\n\n# Make a dictionary from two lists using zip\nl1 = ['EECS445', 'EECS545'] \nl2 = ['Undergraduate ML', 'Graduate ML']\nd = dict(zip(l1, l2))\nprint (d)\n# Unroll dictionary into two lists\nk, v = list(d.keys()), list(d.values())\nprint (d.items())\nprint (k, v)", "Sets\nA set is an unordered collection of distinct elements.
As a simple example, consider the following:", "animals = {'cat', 'dog'}\nprint ('cat' in animals) # Check if an element is in a set; prints \"True\"\nprint ('fish' in animals) # prints \"False\"\n\nanimals.add('fish') # Add an element to a set\nprint ('fish' in animals)\nprint (len(animals)) # Number of elements in a set;\n\nanimals.add('cat') # Adding an element that is already in the set does nothing\nprint (len(animals)) \nanimals.remove('cat') # Remove an element from a set\nprint (len(animals))", "Loops: Iterating over a set has the same syntax as iterating over a list; however since sets are unordered, you cannot make assumptions about the order in which you visit the elements of the set:", "animals = {'dog', 'fish', 'cat'}\nfor idx, animal in enumerate(animals):\n print ('#%d: %s' % (idx + 1, animal))\n# Prints \"#1: fish\", \"#2: dog\", \"#3: cat\"", "Set comprehensions: Like lists and dictionaries, we can easily construct sets using set comprehensions:", "from math import sqrt\nprint ({int(sqrt(x)) for x in range(30)})", "Tuples\nA tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example:", "d = {(x, x + 1): x for x in range(0, 10, 2)} # Create a dictionary with tuple keys, note that range can use step args.\nt = (0, 1) # Create a tuple\nprint (type(t))\nprint (d[t])\nprint (d[(2, 3)])\n\nt[0] = 1", "Functions\nPython functions are defined using the def keyword. 
For example:", "def get_GPA(x):\n if x >= 90:\n return \"A\"\n elif x >= 75:\n return \"B\"\n elif x >=60:\n return \"C\"\n else:\n return \"F\"\n\nfor x in [59, 70, 91]:\n print (get_GPA(x))", "We will often define functions to take optional keyword arguments, like this:", "def fib(n = 10):\n a = 0\n b = 1\n while b < n:\n print(b, end=',')\n a, b = b, a + b\n\nfib()", "Classes\nThe syntax for defining classes in Python is straightforward:", "class Greeter:\n\n # Constructor\n def __init__(self, name):\n self.name = name # Create an instance variable\n\n # Instance method\n def greet(self, loud=False):\n if loud:\n print ('HELLO, %s!' % self.name.upper())\n else:\n print ('Hello, %s' % self.name)\n\ng = Greeter('Fred') # Construct an instance of the Greeter class\ng.greet() # Call an instance method; prints \"Hello, Fred\"\ng.greet(loud=True) # Call an instance method; prints \"HELLO, FRED!\"", "Modules\n\nimport modules\nnumpy\nmatplotlib\nscikit-learn", "from modules import fibo\nfrom modules.fibo import fib2\nprint (fib2(10))\nprint (fibo.fib2(10))", "NumPy\n\nNumPy arrays, dtype, and shape\nReshape and Update In-Place\nCombine Arrays\nArray Math\nInner Product\nMatrixes\n\nTo use Numpy, we first need to import the numpy package:", "import numpy as np\n\na = np.array([1, 2, 3])\nprint(a)\nprint(a.shape)\nprint(a.dtype)\n\nb = np.array([[0, 2, 4], [1, 3, 5]], dtype = np.float64)\nprint(b)\nprint(b.shape)\nprint(b.dtype)", "Numpy also provides many functions to create arrays:", "np.zeros(5) # Create an array of all zeros\n\nnp.ones(shape=(3, 4), dtype = np.int32) # Create an array of all ones\n\nnp.full((2,2), 7, dtype = np.int32) # Create a constant array\n\nnp.eye(2) # Create a 2x2 identity matrix\n\nnp.random.random((2,2)) # Create an array filled with random values", "Array indexing\nNumpy offers several ways to index into arrays.\nSlicing: Similar to Python lists, numpy arrays can be sliced. 
Since arrays may be multidimensional, you must specify a slice for each dimension of the array:", "# Create the following rank 2 array with shape (3, 4)\n# [[ 1 2 3 4]\n# [ 5 6 7 8]\n# [ 9 10 11 12]]\na = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])\n\n# Use slicing to pull out the subarray consisting of the first 2 rows\n# and columns 1 and 2; b is the following array of shape (2, 2):\n# [[2 3]\n# [6 7]]\nb = a[:2, 1:3]\nprint (b)\n\nprint (a[0, 1]) \nb[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]\nprint (a[0, 1])\n\nrow_r1 = a[1, :] # Rank 1 view of the second row of a \nrow_r2 = a[1:2, :] # Rank 2 view of the second row of a\nrow_r3 = a[[1], :] # Rank 2 view of the second row of a\nprint (a)\nprint (row_r1, row_r1.shape) \nprint (row_r2, row_r2.shape)\nprint (row_r3, row_r3.shape)", "Reshape and Update In-Place", "e = np.arange(12)\nprint(e)\n\n# f is a view of contents of e\nf = e.reshape(3, 4)\nprint(f)\n\n# Set values of e from index 7 onwards to 0\ne[7:] = 0\nprint (e)\n# f is also updated\nprint (f)\n\n# We can get transpose of array by T attribute\nprint (f.T)", "Combine Arrays", "a = np.array([1, 2, 3])\nprint(np.concatenate([a, a, a]))\n\nb = np.array([[1, 2, 3], [4, 5, 6]])\nd = b / 2.0\n# Use broadcasting when needed to do this automatically\nprint (np.vstack([a, b, d]))\n\n# In machine learning, useful to enrich or \n# add new/concatenate features with hstack\nprint (np.hstack([b, d]))\nprint (np.concatenate([b, d], axis = 0))", "Array math\nBasic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:", "x = np.array([[1,2],[3,4]], dtype=np.float64)\ny = np.array([[5,6],[7,8]], dtype=np.float64)\n\n# Elementwise sum; both produce the array\nprint (x + y)\nprint (np.add(x, y))\n\n# Elementwise difference; both produce the array\nprint (x - y)\nprint (np.subtract(x, y))\n\n# Elementwise product; both produce the array\nprint (x * y)\nprint (np.multiply(x, y))\n\n# 
Elementwise division; both produce the array\n# [[ 0.2 0.33333333]\n# [ 0.42857143 0.5 ]]\nprint (x / y)\nprint (np.divide(x, y))\n\n# Elementwise square root; produces the array\n# [[ 1. 1.41421356]\n# [ 1.73205081 2. ]]\nprint (np.sqrt(x))", "Broadcasting\nThrough broadcasting, these elementwise operations can also be performed on arrays of different shapes.", "# Multiply single number\nprint (x * 0.5)\n\na = np.array([1, 2, 3])\nb = np.array([[1, 2, 3], [4, 5, 6]])\n\nc = a + b\nprint(a.reshape(1, 3).shape, b.shape, c.shape)\nprint(c)\n\na.reshape((1, 1, 3)) + c.reshape((2, 1, 3))", "We can also get statistical results directly using the sum, mean and std methods.", "print (d)\nprint (d.sum())\nprint (d.sum(axis = 0))\nprint (d.mean())\nprint (d.mean(axis = 1))\nprint (d.std())\nprint (d.std(axis = 0))", "Inner Product\n$$\n(a_1, a_2, a_3, ..., a_n) \\cdot (b_1, b_2, b_3, ..., b_n)^T = \\sum_{i = 1}^{n}{a_ib_i}\n$$\nWe use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. 
dot is available both as a function in the numpy module and as an instance method of array objects:", "x = np.array([[1,2],[3,4]])\ny = np.array([[5,6],[7,8]])\n\nv = np.array([9,10])\nw = np.array([11, 12])\n# Inner product of vectors; both produce 219\nprint (v.dot(w))\nprint (np.dot(v, w))\n\n# Matrix / vector product; both produce the rank 1 array [29 67]\nprint (x.dot(v))\nprint (np.dot(x, v))\n\n# Matrix / matrix product; both produce the rank 2 array\n# [[19 22]\n# [43 50]]\nprint (x.dot(y))\nprint (np.dot(x, y))", "Matrix\nInstead of arrays, we can also use matrix to simplify the code.", "x = np.matrix('1, 2, 3; 4, 5, 6')\ny = np.matrix(np.ones((3, 4)))\nprint(x.shape)\nprint(y.shape)\nprint(x * y)\nprint(y.T * x.T)", "You can find more in the document.\nMatplotlib\n\nPlotting Lines\nPlotting Multiple Lines\nScatter Plots\nLegend, Titles, etc.\nSubplots\nHistogram", "import pylab as plt", "To make pylab work inside ipython:", "%matplotlib inline\n\nplt.plot([1,2,3,4], 'o-')\nplt.ylabel('some numbers')\nplt.show()\n\nx = np.linspace(0,1,100);\ny1 = x ** 2;\ny2 = np.sin(x);\n\nplt.plot(x, y1, 'r-', label=\"parabola\");\nplt.plot(x, y2, 'g-', label=\"sine\");\nplt.legend();\nplt.xlabel(\"x axis\");\nplt.show()\n\n# Create sample data, add some noise\nx = np.random.uniform(1, 100, 1000)\ny = np.log(x) + np.random.normal(0, .3, 1000)\n\nplt.scatter(x, y)\nplt.show()", "Subplots\nYou can plot different things in the same figure using the subplot function. 
Here is an example:", "# Compute the x and y coordinates for points on sine and cosine curves\nx = np.arange(0, 3 * np.pi, 0.1)\ny_sin = np.sin(x)\ny_cos = np.cos(x)\n\n# First plot\nplt.subplot(2, 1, 1)\nplt.plot(x, y_sin)\nplt.title('Sine')\n\n# Second plot\nplt.subplot(2, 1, 2)\nplt.plot(x, y_cos)\nplt.title('Cosine')\n\n# Show the figure.\nplt.show()\n\nmu, sigma = 100, 15\nx = mu + sigma * np.random.randn(10000)\n\n# the histogram of the data\nn, bins, patches = plt.hist(x, 50, normed=1, facecolor='g', alpha=0.75)\n\nplt.xlabel('Smarts')\nplt.ylabel('Probability')\nplt.title('Histogram of IQ')\nplt.axis([40, 160, 0, 0.03])\nplt.grid(True)\nplt.show()", "Scikit-learn\nThis is a common machine learning package with lots of algorithms, you can find detailed usage here.\nHere is an example of KMeans cluster algorithm:", "from sklearn.cluster import KMeans\n\nmu1 = [5, 5]\nmu2 = [0, 0]\ncov1 = [[1, 0], [0, 1]]\ncov2 = [[2, 1], [1, 3]]\nx1 = np.random.multivariate_normal(mu1, cov1, 1000)\nx2 = np.random.multivariate_normal(mu2, cov2, 1000)\n\nprint (x1.shape)\nprint (x2.shape)\n\nplt.plot(x1[:, 0], x1[:, 1], 'r.')\nplt.plot(x2[:, 0], x2[:, 1], 'b.')\nplt.show()\n\nx = np.vstack([x1, x2])\nprint (x.shape)\nplt.plot(x[:, 0], x[:, 1], 'b.')\nplt.show()\n\ny_pred = KMeans(n_clusters=2).fit_predict(x)\nx_pred1 = x[y_pred == 0, :]\nx_pred2 = x[y_pred == 1, :]\nprint (x_pred1.shape)\nprint (x_pred2.shape)\nplt.plot(x_pred1[:, 0], x_pred1[:, 1], 'b.')\nplt.plot(x_pred2[:, 0], x_pred2[:, 1], 'r.')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
danielbultrini/FXFEL
Particle Distribution Visualization.ipynb
bsd-3-clause
[ "First, import the processing tools that contain classes and methods to read, plot and process standard unit particle distribution files.", "import processing_tools as pt\n", "The module consists of a class 'ParticleDistribution' that initializes to a dictionary containing the following entries given a filepath:\n|key | value |\n|----|-----------|\n|'x' | x position|\n|'y' | y position|\n|'z' | z position|\n|'px'| x momentum|\n|'py'| y momentum|\n|'pz'| z momentum|\n|'NE'| number of electrons per macroparticle|\nThe units are in line with the Standard Unit specifications, but can be converted to SI by calling the class method SU2SI\nValues can then be called by calling the 'dict':", "filepath = './example/example.h5'\n\ndata = pt.ParticleDistribution(filepath)\ndata.su2si\ndata.dict['x']", "Alternatively one can ask for a pandas dataframe where each column is one of the above properties of a macroparticle per row.", "panda_data = data.DistFrame()\npanda_data[0:5]", "This allows for quick plotting using the inbuilt pandas methods", "import matplotlib.pyplot as plt\nmatplotlib.style.use('ggplot') #optional\n\nx_axis = 'py'\ny_axis = 'px'\n\n\nplot = panda_data.plot(kind='scatter',x=x_axis,y=y_axis)\n#sets axis limits \nplot.set_xlim([panda_data[x_axis].min(),panda_data[x_axis].max()])\nplot.set_ylim([panda_data[y_axis].min(),panda_data[y_axis].max()])\nplt.show(plot)", "If further statistical analysis is required, the class 'Statistics' is provided. This contains methods to process standard properties of the electron bunch. 
This is called by giving a filepath to 'Statistics'. The following operations can be performed:\n| Function | Effect and dict keys |\n|----------------|-------------------------------------------------------------------------------------------------------|\n| calc_emittance | Calculates the emittance of all the slices, accessible by 'e_x' and 'e_y' |\n| calc_CoM | Calculates the weighted averages and standard deviations per slice of every parameter and the beta functions, see below for keys. |\n| calc_current | Calculates the current per slice, accessible in the dict as 'current'. |\n| slice | Slices the data into a given integer number of equal slices. |\nThis is a subclass of the ParticleDistribution and all the methods previously described work.\n| CoM Keys | Parameter (per slice) |\n|------------------------|------------------------------------------------------------|\n| CoM_x, CoM_y, CoM_z | Centre of mass of x, y, z positions |\n| std_x, std_y, std_z | Standard deviation of x, y, z positions |\n| CoM_px, CoM_py, CoM_pz | Centre of mass of x, y, z momenta |\n| std_px, std_py, std_pz | Standard deviation of x, y, z momenta |\n| beta_x, beta_y | Beta functions (assuming Gaussian distribution) in x and y |\nFurthermore, there is a 'Step_Z' which returns the size of a slice, as well as 'z_pos' which gives you the central position of a given slice.\nAnd from this class both the DistFrame (containing the same data as above) and StatsFrame can be called:", "stats = pt.Statistics(filepath)\n\n#preparing the statistics\nstats.slice(100)\nstats.calc_emittance()\nstats.calc_CoM()\nstats.calc_current()\n\n#display pandas example\npanda_stats = stats.StatsFrame()\npanda_stats[0:5]\n\nax = panda_stats.plot(x='z_pos',y='CoM_y')\npanda_stats.plot(ax=ax, x='z_pos',y='std_y',c='b') #first option allows shared axes\n\nplt.show()\n", "And finally there is the FEL_Approximations class, which calculates simple FEL properties per slice. 
This is a subclass of Statistics and as such every method described above is callable.\nThis class contains the 'undulator' function that calculates planar undulator parameters given a period and either a peak magnetic field or K value.\nThe data must be sliced and most statistics have to be run before the other calculations can take place.\nThese are 'pierce', which calculates the pierce parameter and 1D gain length for a given slice, and 'gain length', which calculates the Ming Xie gain and returns three entries in the dict 'MX_gain', '1D_gain', 'pierce', which hold an array for these values per slice. \n'FELFrame' returns a pandas dataframe with these and 'z_pos' for reference.\nTo make this easier, the class ProcessedData takes a filepath, number of slices, undulator period, magnetic field or K and performs all the necessary steps automatically. As this is a subclass of FEL_Approximations all the values written above are accessible from here.", "FEL = pt.ProcessedData(filepath,num_slices=100,undulator_period=0.00275,k_fact=2.7)\n\npanda_FEL = FEL.FELFrame()\npanda_stats = FEL.StatsFrame()\npanda_FEL[0:5]", "If it is important to plot the statistical data alongside the FEL data, that can be easily achieved by concatenating the two sets as shown below:", "import pandas as pd\n\ncat = pd.concat([panda_FEL,panda_stats], axis=1, join_axes=[panda_FEL.index]) #joins the two if you need to plot\n#FEL parameters as well as slice statistics on the same plot\ncat['1D_gain']=cat['1D_gain']*40000000000 #one can scale to allow for visual comparison if needed\naz = cat.plot(x='z_pos',y='1D_gain')\ncat.plot(ax=az, x='z_pos',y='MX_gain',c='b')\nplt.show()\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ResearchComputing/RMACC2015-Spark
pyspark-exercises/02_aggregation.ipynb
gpl-2.0
[ "Simple Aggregation\nThanks, Monte!", "import numpy as np\n\ndata = np.arange(1000).reshape(100,10)\nprint data.shape", "Pandas", "import pandas as pd\n\npand_tmp = pd.DataFrame(data, \n columns=['x{0}'.format(i) for i in range(data.shape[1])])\npand_tmp.head()", "What is the row sum?", "pand_tmp.sum(axis=1)", "Column sum?", "pand_tmp.sum(axis=0)\n\npand_tmp.to_csv('numbers.csv', index=False)", "Spark", "lines = sc.textFile('numbers.csv', 18)\nfor l in lines.take(3):\n print l\n\ntype(lines.take(1))", "How do we skip the header? How about using find()? What is Boolean value for true with find()?", "lines = lines.filter(lambda x: x.find('x') != 0)\nfor l in lines.take(2):\n print l\n\ndata = lines.map(lambda x: x.split(','))\ndata.take(3)", "Row Sum\nCast to integer and sum!", "def row_sum(x):\n int_x = map(lambda x: int(x), x)\n return sum(int_x)\n\ndata_row_sum = data.map(row_sum)\n\nprint data_row_sum.collect()\nprint data_row_sum.count()", "Column Sum\nThis one's a bit trickier, and portends ill for large, complex data sets (like example 4)...\nLet's enumerate the list comprising each RDD \"line\" such that each value is indexed by the corresponding column number.", "def col_key(x):\n for i, value in enumerate(x):\n yield (i, int(value))\n\ntmp = data.flatMap(col_key)\ntmp.take(12)\n\ntmp.take(3)\n\ntmp = tmp.groupByKey()\nfor i in tmp.take(2):\n print i, type(i)\n\ndata_col_sum = tmp.map(lambda x: sum(x[1]))\nfor i in data_col_sum.take(2):\n print i\n\nprint data_col_sum.collect()\nprint data_col_sum.count()", "Column sum with Spark.sql.dataframe", "from pyspark.sql.types import *\n\npyspark_df = sqlCtx.createDataFrame(pand_tmp)\n\npyspark_df.take(2)\n\nfor i in pyspark_df.columns:\n print pyspark_df.groupBy().sum(i).collect()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
halflings/python-data-workshop
data-workshop-notebook.ipynb
apache-2.0
[ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nplt.style.use('ggplot')\nplt.rcParams['figure.figsize'] = 16, 9", "Data analytics and machine learning with Python\nI - Acquiring data\nA simple HTTP request", "import requests\n\nprint(requests.get(\"http://example.com\").text)", "Communicating with APIs", "response = requests.get(\"https://www.googleapis.com/books/v1/volumes\", params={\"q\":\"machine learning\"})\nraw_data = response.json()\ntitles = [item['volumeInfo']['title'] for item in raw_data['items']]\ntitles", "Parsing websites", "import lxml.html\n\npage = lxml.html.parse(\"http://www.blocket.se/stockholm?q=apple\")\n# ^ This is probably illegal. Blocket, please don't sue me!\nitems_data = []\nfor el in page.getroot().find_class(\"item_row\"):\n links = el.find_class(\"item_link\")\n images = el.find_class(\"item_image\")\n prices = el.find_class(\"list_price\")\n if links and images and prices and prices[0].text:\n items_data.append({\"name\": links[0].text,\n \"image\": images[0].attrib['src'],\n \"price\": int(prices[0].text.split(\":\")[0].replace(\" \", \"\"))})\nitems_data", "Reading local files (CSV/JSON)", "import pandas\n\ndf = pandas.read_csv('sample.csv')\n\n# Display the DataFrame\ndf\n\n# DataFrame's columns\ndf.columns\n\n# Values of a given column\ndf.Model", "Analyzing the dataframe", "# Any missing values?\ndf['Price']\n\ndf['Description']\n\n# Fill missing prices by a linear interpolation\ndf['Description'] = df['Description'].fillna(\"No description is available.\")\ndf['Price'] = df['Price'].interpolate()\n\ndf", "II - Exploring data", "import matplotlib.pyplot as plt\n\ndf = pandas.read_csv('sample2.csv')\n\ndf\n\n# This table has 3 columns: Office, Year, Sales\ndf.columns\n\n# It's really easy to query data with Pandas:\ndf[(df['Office'] == 'Stockholm') & (df['Sales'] > 260)]\n\n# It's also easy to do aggregations...\naggregated_stockholm_sales = df[df.Office == 
'Stockholm'].groupby('Year').sum()\naggregated_stockholm_sales\n\naggregated_ny_sales = df[df.Office == 'New York'].groupby('Year').sum()\n# ... and generate plots\naggregated_stockholm_sales.plot(kind='bar')\naggregated_ny_sales.plot(kind='bar', color='g')", "Machine learning\nFeature extraction", "from sklearn import feature_extraction", "Extracting features from text", "corpus = ['Cats? I love cats!',\n 'I love dogs.',\n 'I hate cats :(',\n 'I love trains',\n ]\n\ntfidf = feature_extraction.text.TfidfVectorizer()\n\nprint(tfidf.fit_transform(corpus).toarray())\nprint(tfidf.get_feature_names())", "Dict vectorizer", "import json\n\n\ndata = [json.loads(\"\"\"{\"weight\": 194.0, \"sex\": \"female\", \"student\": true}\"\"\"),\n {\"weight\": 60., \"sex\": 'female', \"student\": True},\n {\"weight\": 80.1, \"sex\": 'male', \"student\": False},\n {\"weight\": 65.3, \"sex\": 'male', \"student\": True},\n {\"weight\": 58.5, \"sex\": 'female', \"student\": False}]\n\nvectorizer = feature_extraction.DictVectorizer(sparse=False)\n\nvectors = vectorizer.fit_transform(data)\nprint(vectors)\nprint(vectorizer.get_feature_names())", "Pre-processing\nScaling", "from sklearn import preprocessing\n\ndata = [[10., 2345., 0., 2.],\n [3., -3490., 0.1, 1.99],\n [13., 3903., -0.2, 2.11]]\n\npreprocessing.normalize(data)", "Dimensionality reduction", "from sklearn import decomposition\n\ndata = [[0.3, 0.2, 0.4, 0.32],\n [0.3, 0.5, 1.0, 0.19],\n [0.3, -0.4, -0.8, 0.22]]\n\npca = decomposition.PCA()\nprint(pca.fit_transform(data))\nprint(pca.explained_variance_ratio_)", "Machine learning models\nClassification (SVM)", "from sklearn import datasets\nfrom sklearn import svm\n\niris = datasets.load_iris()\n\nX = iris.data[:, :2]\ny = iris.target\n\nplt.scatter(X[:, 0], X[:, 1], color=['rgb'[v] for v in y])\n\nto_predict = np.array([[4.35, 3.1], [5.61, 2.42]])\nplt.scatter(to_predict[:, 0], to_predict[:, 1], color='purple')\n\n# Training the model\nclf = svm.SVC(kernel='rbf')\nclf.fit(X, 
y)\n\n# Doing predictions\nprint(clf.predict(to_predict))", "Regression (linear regression)", "import numpy as np\nfrom sklearn import linear_model\nimport matplotlib.pyplot as plt\n\ndef f(x):\n return x + np.random.random() * 3.\n\nX = np.arange(0, 5, 0.5)\nX = X.reshape((len(X), 1))\ny = list(map(f, X))\n\nclf = linear_model.LinearRegression()\nclf.fit(X, y)\n\nnew_X = np.arange(0.2, 5.2, 0.3)\nnew_X = new_X.reshape((len(new_X), 1))\nnew_y = clf.predict(new_X)\n\nplt.scatter(X, y, color='g', label='Training data')\n\nplt.plot(new_X, new_y, '.-', label='Predicted')\nplt.legend()", "Clustering (DBScan)", "from sklearn.cluster import DBSCAN\nfrom sklearn.datasets.samples_generator import make_blobs\nfrom sklearn.preprocessing import StandardScaler\n\n# Generate sample data\ncenters = [[1, 1], [-1, -1], [1, -1]]\nX, labels_true = make_blobs(n_samples=200, centers=centers, cluster_std=0.3,\n random_state=0)\nplt.scatter(X[:, 0], X[:, 1])\n\n# Compute DBSCAN\ndb = DBSCAN(eps=0.3, min_samples=10).fit(X)\ndb.labels_\n\nimport matplotlib.pyplot as plt\nplt.scatter(X[:, 0], X[:, 1], c=['rgbw'[v] for v in db.labels_])", "Cross-validation", "from sklearn import svm, cross_validation, datasets\n\niris = datasets.load_iris()\nX, y = iris.data, iris.target\n\nmodel = svm.SVC()\nprint(cross_validation.cross_val_score(model, X, y, scoring='precision_weighted'))\nprint(cross_validation.cross_val_score(model, X, y, scoring='mean_squared_error'))", "A more complex Machine Learning pipeline: \"what's cooking?\"\nThis is a basic solution I wrote for the Kaggle competition \"What's cooking?\" where the goal is to predict to which type of cuisine a meal belongs to based on a list of ingredients.\nYou'll need more advanced features and methods to win a Kaggle competition, but this already gets you 90% there.", "from collections import Counter\nimport json\n\nimport pandas as pd\nimport scipy.sparse\nimport sklearn.pipeline\nimport sklearn.cross_validation\nimport 
sklearn.feature_extraction\nimport sklearn.naive_bayes\n\ndef open_dataset(path):\n with open(path) as file:\n data = json.load(file)\n df = pd.DataFrame(data).set_index('id')\n return df\n\ndf = open_dataset('train.json')\n\npipeline = sklearn.pipeline.make_pipeline(sklearn.feature_extraction.DictVectorizer(), sklearn.feature_extraction.text.TfidfTransformer(sublinear_tf=True))\npipeline_bis = sklearn.pipeline.make_pipeline(sklearn.feature_extraction.DictVectorizer(), sklearn.feature_extraction.text.TfidfTransformer(sublinear_tf=True))\n\ndef map_term_count(ingredients):\n return Counter(sum((i.split(' ') for i in ingredients), []))\nX = pipeline.fit_transform(df.ingredients.apply(Counter))\nX = scipy.sparse.hstack([X, pipeline_bis.fit_transform(df.ingredients.apply(map_term_count))])\ny = df.cuisine.values\n\nmodel = sklearn.naive_bayes.MultinomialNB(alpha=0.1)\n\n# Cross-validation\nscore = sklearn.cross_validation.cross_val_score(model, X, y, cv=2)\nprint(score)\n\n# Running on the test dataset\nt_df = open_dataset('test.json')\nX_test = pipeline.transform(t_df.ingredients.apply(Counter))\nX_test = scipy.sparse.hstack([X_test, pipeline_bis.transform(t_df.ingredients.apply(map_term_count))])\n\nmodel.fit(X, y)\n\npredictions = model.predict(X_test)\nresult_df = pd.DataFrame(index=t_df.index)\nresult_df['cuisine'] = pd.Series(predictions, index=result_df.index)\n\nresult_df['ingredients'] = t_df['ingredients']\nresult_df", "Thanks for following! I hope you learned a thing or two :-)\nFeel free to ask any question, or contact me on kachkach.com / @halflings" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Aniruddha-Tapas/Applied-Machine-Learning
Clustering/Customer Segmentation for Market Analysis.ipynb
mit
[ "Creating Customer Segments\n\nIn this project we will analyze a dataset containing annual spending amounts for internal structure, to understand the variation in the different types of customers that a wholesale distributor interacts with.\nThe dataset can be downloaded from : https://archive.ics.uci.edu/ml/datasets/Wholesale+customers\nIt contains the folliwing attributes:\n1. FRESH: annual spending (m.u.) on fresh products (Continuous)\n2. MILK: annual spending (m.u.) on milk products (Continuous)\n3. GROCERY: annual spending (m.u.)on grocery products (Continuous)\n4. FROZEN: annual spending (m.u.)on frozen products (Continuous) \n5. DETERGENTS_PAPER: annual spending (m.u.) on detergents and paper products (Continuous) \n6. DELICATESSEN: annual spending (m.u.)on and delicatessen products (Continuous)\n7. CHANNEL: customer™ Channel - Horeca (Hotel/Restaurant/Cafe) or Retail channel (Nominal) \n8. REGION: customers™ Region - Lisnon, Oporto or Other (Nominal)\nWe would not be using the 2 columns 'Channel' and 'Region' as they represent classes. Instead we would use the other 6 attributes for customer clustering.", "# Import libraries: NumPy, pandas, matplotlib\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n# Tell iPython to include plots inline in the notebook\n%matplotlib inline\n\n# read .csv from provided dataset\ncsv_filename=\"Wholesale customers data.csv\"\n\n# df=pd.read_csv(csv_filename,index_col=0)\ndf=pd.read_csv(csv_filename)\n\ndf.head()\n\nfeatures = df.columns[2:]\nfeatures\n\ndata = df[features]\nprint(data.head(5))", "Feature Transformation\nThe first PCA dimension is the dimension in the data with highest variance. 
Intuitively, it corresponds to the 'longest' vector one can find in the 6-dimensional feature space that captures the data, that is, the eigenvector with the largest eigenvalue.\nThe first component will carry a high load of the 'Fresh' feature, as this feature seems to vary more than any of the other features (according to the README.md-file, 'Fresh' has the highest variance). Moreover, this feature seems to vary independently of the others, that is, a high or low value of 'Fresh' is not very informative for the values of the other features. A practical interpretation of these observations could be that some of the supplied customers focus on fresh items, whereas others focus on non-fresh items.\nICA, as opposed to PCA, finds the subcomponents that are statistically independent. ICA also finds 'Fresh' as one of the first components. The other components, however, may differ, as they need not be orthogonal in the feature space (in contrast to PCA). \nPCA", "# Apply PCA with the same number of dimensions as variables in the dataset\nfrom sklearn.decomposition import PCA\npca = PCA(n_components=6) # 6 components for 6 variables\npca.fit(data)\n\n# Print the components and the amount of variance in the data contained in each dimension\nprint(pca.components_)\nprint(pca.explained_variance_ratio_)", "The explained variance is high for the first two dimensions (45.96 % and 40.52 %, respectively), but drops significantly beginning with the third dimension (7.00 % for the third, 4.40 % for the fourth dimension). Thus, the first two components already explain 86.5 % of the variation in the data.\nHow many dimensions to choose for the analysis really depends on the goal of the analysis. 
Even though PCA reduces the feature space (with all the advantages that brings, such as faster computations) and makes interpreting the data easier for us by projecting them down to a lower dimension, it necessarily comes with a loss of information that may or may not be desired.\nIn the case at hand, assuming interpretation is the goal (creating customer segments) and given the sharp drop of the explained variance after the second component, we would choose the first two dimensions for analysis.", "plt.plot(list(pca.explained_variance_ratio_),'-o')\nplt.title('Explained variance ratio as function of PCA components')\nplt.ylabel('Explained variance ratio')\nplt.xlabel('Component')\nplt.show()", "The first dimension seems to basically represent only the 'fresh'-feature, as this feature has a strong negative projection on the first dimension. The other features have rather weak (mostly negative) projections on the first dimension. That is, the first dimension basically tells us whether the 'fresh'-feature value is high or low, mixed with a little bit of information from the other features.\nThe second dimension is mainly represented by the features 'Grocery', 'Milk' and 'Detergents', in the order of decreasing importance, and has rather low correlation with the other features.\nThere are two main uses of this information. The first use is feature interpretation and hypothesis formation. We could form initial conjectures about the customer segments contained in the data. One conjecture could be that the bulk of customers can be split into customers ordering mainly 'fresh' items and customers mainly ordering 'Grocery', 'Milk' and 'Detergents' from the wholesale distributor. The second use is that, given knowledge of the PCA components, new features can be engineered for further analysis of the problem. 
These features could be generated by applying an exact PCA-transformation or by using some heuristic based on the feature combinations recovered in PCA.\nICA", "# Fit an ICA model to the data\n# Note: Adjust the data to have center at the origin first!\ndef center_data(data, rescale = 0):\n    centeredData = data.copy()\n    for col in centeredData.columns:\n        centeredData[col] = (centeredData[col] - np.mean(centeredData[col])) / (1 + rescale * np.std(centeredData[col]))\n    return centeredData\n\nfrom sklearn.decomposition import FastICA\n#data_centered = center_data(data)\n\nica = FastICA(n_components=6, whiten=True)\nica.fit(center_data(data,0))\n\n# Print the independent components\nprint(ica.components_)\n\n# Print the independent components (rescaled again)\nprint('Independent components scaled with mean')\nprint(np.multiply(ica.components_,list(np.mean(data))))", "The first vector [-0.04771087 0.00496636 0.00492989 0.00208307 -0.0059068 0.00159593] again represents mainly the 'fresh'-feature, with a coefficient of -0.0477. The other features have a rather weak projection on the first dimension.\nThe second vector [ 0.00182027 0.0571306 -0.04596392 -0.00113553 0.00928388 -0.00925863] corresponds mainly to the features 'Milk' and 'Grocery', but in different directions. This indicates that, other things equal, high 'Milk'-spending is associated with low 'Grocery'-spending and vice versa.\nThe third vector [ 0.00360762 -0.01311074 -0.09638513 0.00448148 0.08132511 0.00872532] has a strong association with the 'Grocery'- and 'Detergents_Paper'-features, again in opposite directions. This indicates a negative association between these features across the wholesaler's customers. \nThe main characteristic of the fourth vector [ 0.00463807 0.00127625 0.00476776 0.00160491 -0.00146026 -0.02758939] is that this vector has a relatively strong negative association with 'delicatessen' (and only rather weak associations with the other features). 
Even though the coefficients are very low, the vector permits the interpretation that 'delicatessen' is negatively related to the 'fresh'- and 'grocery'-features.\nClustering\nIn this section we will choose either K Means clustering or Gaussian Mixture Model clustering, which implements expectation-maximization. Then we will sample elements from the clusters to understand their significance.\nChoosing a Cluster Type\nK Means Clustering or Gaussian Mixture Models?\nBefore discussing the advantages of K Means vs Gaussian Mixture models, it is helpful to observe that both methods are actually very similar. The main difference is that Gaussian Mixture models make a probabilistic assignment of points to classes depending on some distance metric, whereas K Means makes a deterministic assignment depending on some metric. Now, when the variance of the Gaussian mixtures is very small, this method becomes very similar to K Means, since the assignment probabilities to a specific cluster converge to 0 or 1 for any point in the domain. Because of the probabilistic assignment, Gaussian Mixtures (in contrast to K Means) are often characterized as soft clustering algorithms.\nAn advantage of Gaussian Mixture models is that, if there is some a priori uncertainty about the assignment of a point to a cluster, this uncertainty is inherently reflected in the probabilistic model (soft assignment) and assignment probabilities can be computed for any data point after the model is trained. On the other hand, if a priori the cluster assignments are expected to be deterministic, K Means has advantages. An example would be a data generating process that actually is a mixture of Gaussians. Applying a Gaussian mixture model is more natural given this data generating process. 
When it comes to processing speed, the EM algorithm with Gaussian mixtures is generally slightly slower than Lloyd's algorithm for K Means, since computing the normal probability (EM) is generally slower than computing the L2-norm (K Means). A disadvantage of both methods is that they can get stuck in local minima (this can be considered as the cost of solving NP-hard problems (global min for k-means) approximately).\nSince there is no strong indication that the data are generated from a mixture of normals (this assessment may be different given more information about the nature of the spending data) and the goal is to \"hard\"-cluster them (and not assign probabilities), I decided to use the general-purpose k-means algorithm.\nA decision on the number of clusters will be made by visualizing the final clustering and deciding whether k equals the number of data centers found by visual inspection. Note that many other approaches for this task could be utilized, such as silhouette analysis (see for example http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html).\nBelow is some starter code to help you visualize some cluster data. 
The visualization is based on this demo from the sklearn documentation.", "# Import clustering modules\nfrom sklearn.cluster import KMeans\nfrom sklearn.mixture import GMM\n\n# First we reduce the data to two dimensions using PCA to capture variation\npca = PCA(n_components=2)\nreduced_data = pca.fit_transform(data)\nprint(reduced_data[:10]) # print up to 10 elements\n\n# Implement your clustering algorithm here, and fit it to the reduced data for visualization\n# The visualizer below assumes your clustering object is named 'clusters'\n\n# TRIED OUT 2,3,4,5,6 CLUSTERS AND CONCLUDED THAT 3 CLUSTERS ARE A SENSIBLE CHOICE BASED ON VISUAL INSPECTION, SINCE \n# WE OBTAIN ONE CENTRAL CLUSTER AND TWO CLUSTERS THAT SPREAD FAR OUT IN TWO DIRECTIONS.\nkmeans = KMeans(n_clusters=3)\nclusters = kmeans.fit(reduced_data)\nprint(clusters)\n\n# Plot the decision boundary by building a mesh grid to populate a graph.\nx_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1\ny_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1\nhx = (x_max-x_min)/1000.\nhy = (y_max-y_min)/1000.\nxx, yy = np.meshgrid(np.arange(x_min, x_max, hx), np.arange(y_min, y_max, hy))\n\n# Obtain labels for each point in mesh.
Use last trained model.\nZ = clusters.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# Find the centroids for KMeans or the cluster means for GMM \n\ncentroids = kmeans.cluster_centers_\nprint('*** K MEANS CENTROIDS ***')\nprint(centroids)\n\n# TRANSFORM DATA BACK TO ORIGINAL SPACE FOR ANSWERING 7\nprint('*** CENTROIDS TRANSFERRED TO ORIGINAL SPACE ***')\nprint(pca.inverse_transform(centroids))\n\n# Put the result into a color plot\nZ = Z.reshape(xx.shape)\nplt.figure(1)\nplt.clf()\nplt.imshow(Z, interpolation='nearest',\n extent=(xx.min(), xx.max(), yy.min(), yy.max()),\n cmap=plt.cm.Paired,\n aspect='auto', origin='lower')\n\nplt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)\nplt.scatter(centroids[:, 0], centroids[:, 1],\n marker='x', s=169, linewidths=3,\n color='w', zorder=10)\nplt.title('Clustering on the wholesale grocery dataset (PCA-reduced data)\\n'\n 'Centroids are marked with a white cross')\nplt.xlim(x_min, x_max)\nplt.ylim(y_min, y_max)\nplt.xticks(())\nplt.yticks(())\nplt.show()", "The first cluster contains customers that have vastly (around 3 times) higher spending in the 'Fresh'-category compared to the average, indicating that those customers specialize in selling fresh products. Also, customers in this cluster tend to place many orders in the 'Frozen'- and 'Delicatessen'-category, but relatively few in the 'Detergents and Paper'-category.\nCustomers in the second cluster tend to spend the most overall, with particularly high spending in the categories 'Milk', 'Grocery' and 'Detergents and Paper' and relatively low spending in the 'Fresh' and 'Frozen' categories. Overall, this indicates that customers in this segment sell products that are more durable (i.e. not fresh).\nThe last cluster reflects small customers that have below-average annual spending for each of the items.
Apart from the low total spending, it is apparent that the spending distribution across categories is not pathological, that is, there is no category for which spending is particularly low or high (given that spending is low overall).\nRegarding the question targeted at distinguishing the clusters visually: I generally had no problems distinguishing the clusters. Besides that, one observation is that the PCA reduction does not result in clusters that are well separated from each other. Reducing the data to three or four dimensions only (instead of two) may result in clusters that have more separation, but adds the complexity of having to visually represent the data using a (hyper-)cube instead of a plane. Of course, one could try to improve cluster representation using a 3-component PCA and a cube.\n CENTROIDS TRANSFERRED TO ORIGINAL SPACE \n[\n [ 35908.28; 6409.09; 6027.84; 6808.70; 1088.15; 2904.19] (first cluster)\n[ 7896.20; 18663.60; 27183.75; 2394.58; 12120.22; 2875.42] (second cluster)\n[ 8276.38; 3689.87; 5320.73; 2495.45; 1776.40; 1063.97]] (third cluster)\n<hr>\nElbow Method\nUsing the elbow method to find the optimal number of clusters\nOne of the main challenges in unsupervised learning is that we do not know the definitive answer. We don't have the ground truth class labels in our dataset that would allow us to apply the techniques we use to evaluate the performance of a supervised model. Thus, in order to quantify the quality of clustering, we need to use intrinsic metrics, such as the within-cluster SSE (distortion), to compare the performance of different k-means clusterings. Conveniently, we don't need to compute the within-cluster SSE explicitly as it is already accessible via the inertia_ attribute after fitting a KMeans model.\nBased on the within-cluster SSE, we can use a graphical tool, the so-called elbow method, to estimate the optimal number of clusters k for a given task.
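The inertia_ shortcut is easy to sanity-check: after fitting, the attribute should equal the within-cluster SSE computed by hand (a small sketch on synthetic data; the data and variable names are ours, not the wholesale dataset):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(1)
X = rng.rand(60, 2)  # synthetic stand-in for the spending data

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Within-cluster SSE by hand: squared distance of every sample to the
# centroid of the cluster it was assigned to.
sse = sum(np.sum((X[km.labels_ == k] - center) ** 2)
          for k, center in enumerate(km.cluster_centers_))

print(np.isclose(sse, km.inertia_))
```
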
Intuitively,\nwe can say that, if k increases, the distortion will decrease. This is because the\nsamples will be closer to the centroids they are assigned to. The idea behind the elbow method is to identify the value of k where the distortion begins to increase most rapidly, which will become clearer if we plot the distortion for different\nvalues of k:", "X = df[features]\ny = df['Region']\n\ndistortions = []\nfor i in range(1, 11):\n km = KMeans(n_clusters=i, \n init='k-means++', \n n_init=10, \n max_iter=300, \n random_state=0)\n km.fit(X)\n distortions.append(km.inertia_)\nplt.plot(range(1,11), distortions, marker='o')\nplt.xlabel('Number of clusters')\nplt.ylabel('Distortion')\nplt.tight_layout()\n#plt.savefig('./figures/elbow.png', dpi=300)\nplt.show()", "Quantifying the quality of clustering via silhouette plots\nAnother intrinsic metric to evaluate the quality of a clustering is silhouette analysis, which can also be applied to clustering algorithms other than k-means. Silhouette analysis can be used as a graphical tool to plot a measure of how tightly grouped the samples in the clusters are. To calculate the silhouette coefficient of a single sample in our dataset, we can apply the following three steps:\n1. Calculate the cluster cohesion a(i) as the average distance between a sample x(i) and all other points in the same cluster.\n2. Calculate the cluster separation b(i) from the next closest cluster as the average distance between the sample x(i) and all samples in the nearest cluster.\n3. Calculate the silhouette s(i) as the difference between cluster cohesion and separation divided by the greater of the two, as shown:\ns(i) = (b(i) - a(i)) / max(b(i), a(i))\nThe silhouette coefficient is bounded in the range -1 to 1. Based on the preceding formula, we can see that the silhouette coefficient is 0 if the cluster separation and cohesion are equal (b(i)=a(i)).
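The three-step recipe above can be checked by hand on a toy dataset and compared against scikit-learn's silhouette_samples (a sketch; the data is made up for illustration):

```python
import numpy as np
from sklearn.metrics import silhouette_samples

# Tiny 1-d dataset with two obvious clusters
X = np.array([[0.0], [0.2], [0.4], [10.0], [10.2]])
labels = np.array([0, 0, 0, 1, 1])

i = 0  # compute s(i) for the first sample
# Step 1: cohesion a(i) -- mean distance to the other points in its own cluster
a_i = np.mean([abs(X[i, 0] - X[j, 0])
               for j in range(len(X)) if labels[j] == labels[i] and j != i])
# Step 2: separation b(i) -- mean distance to the nearest other cluster
b_i = np.mean([abs(X[i, 0] - X[j, 0])
               for j in range(len(X)) if labels[j] != labels[i]])
# Step 3: s(i) = (b(i) - a(i)) / max(a(i), b(i))
s_i = (b_i - a_i) / max(a_i, b_i)

print(round(s_i, 6))
print(round(float(silhouette_samples(X, labels)[i]), 6))  # matches the manual value
```

For a well-separated sample like this one, s(i) comes out close to 1.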
Furthermore, we get close to an ideal silhouette coefficient of 1 if (b(i)>>a(i)), since b(i) quantifies how dissimilar a sample is to other clusters, and a(i) tells us how similar it is to the other samples in its own cluster.\nThe silhouette coefficient is available as silhouette_samples from scikit-learn's metrics module, and optionally silhouette_score can be imported, which calculates the average silhouette coefficient across all samples, equivalent to numpy.mean(silhouette_samples(…)). By executing the following code, we will now create a plot of the silhouette coefficients for a k-means clustering with k=3:", "import numpy as np\nfrom matplotlib import cm\nfrom sklearn.metrics import silhouette_samples\n\nkm = KMeans(n_clusters=3, \n init='k-means++', \n n_init=10, \n max_iter=300,\n tol=1e-04,\n random_state=0)\ny_km = km.fit_predict(X)\n\ncluster_labels = np.unique(y_km)\nn_clusters = cluster_labels.shape[0]\nsilhouette_vals = silhouette_samples(X, y_km, metric='euclidean')\ny_ax_lower, y_ax_upper = 0, 0\nyticks = []\nfor i, c in enumerate(cluster_labels):\n c_silhouette_vals = silhouette_vals[y_km == c]\n c_silhouette_vals.sort()\n y_ax_upper += len(c_silhouette_vals)\n color = cm.jet(i / n_clusters)\n plt.barh(range(y_ax_lower, y_ax_upper), c_silhouette_vals, height=1.0, \n edgecolor='none', color=color)\n\n yticks.append((y_ax_lower + y_ax_upper) / 2)\n y_ax_lower += len(c_silhouette_vals)\n \nsilhouette_avg = np.mean(silhouette_vals)\nplt.axvline(silhouette_avg, color=\"red\", linestyle=\"--\") \n\nplt.yticks(yticks, cluster_labels + 1)\nplt.ylabel('Cluster')\nplt.xlabel('Silhouette coefficient')\n\nplt.tight_layout()\n# plt.savefig('./figures/silhouette.png', dpi=300)\nplt.show()", "Thus our clustering with 3 centroids is good.", "y.unique()", "Also, there are 3 regions, which validates our assumption.\nApply different unsupervised clustering techniques using Scikit-Learn\nApplying agglomerative clustering via scikit-learn", "from 
sklearn.cluster import AgglomerativeClustering\n\nac = AgglomerativeClustering(n_clusters=3, affinity='euclidean', linkage='complete')\nlabels = ac.fit_predict(X)\nprint('Cluster labels: %s' % labels)\n\nfrom sklearn.cross_validation import train_test_split\nX = df[features]\ny = df['Region']\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)", "K Means", "from sklearn import cluster\nclf = cluster.KMeans(init='k-means++', n_clusters=3, random_state=5)\nclf.fit(X_train)\nprint (clf.labels_.shape)\nprint (clf.labels_)\n\n# Predict clusters on testing data\ny_pred = clf.predict(X_test)\n\nfrom sklearn import metrics\nprint (\"Adjusted rand score:{:.2}\".format(metrics.adjusted_rand_score(y_test, y_pred)))\nprint (\"Homogeneity score:{:.2} \".format(metrics.homogeneity_score(y_test, y_pred)) )\nprint (\"Completeness score: {:.2} \".format(metrics.completeness_score(y_test, y_pred)))\nprint (\"Confusion matrix\")\nprint (metrics.confusion_matrix(y_test, y_pred))", "Affinity Propagation", "# Affinity propagation\naff = cluster.AffinityPropagation()\naff.fit(X_train)\nprint (aff.cluster_centers_indices_.shape)\n\ny_pred = aff.predict(X_test)\n\nfrom sklearn import metrics\nprint (\"Adjusted rand score:{:.2}\".format(metrics.adjusted_rand_score(y_test, y_pred)))\nprint (\"Homogeneity score:{:.2} \".format(metrics.homogeneity_score(y_test, y_pred)) )\nprint (\"Completeness score: {:.2} \".format(metrics.completeness_score(y_test, y_pred)))\nprint (\"Confusion matrix\")\nprint (metrics.confusion_matrix(y_test, y_pred))", "MeanShift", "ms = cluster.MeanShift()\nms.fit(X_train)\nprint (ms.cluster_centers_)\n\ny_pred = ms.predict(X_test)\n\nfrom sklearn import metrics\nprint (\"Adjusted rand score:{:.2}\".format(metrics.adjusted_rand_score(y_test, y_pred)))\nprint (\"Homogeneity score:{:.2} \".format(metrics.homogeneity_score(y_test, y_pred)) )\nprint (\"Completeness score: {:.2} \".format(metrics.completeness_score(y_test, 
y_pred)))\nprint (\"Confusion matrix\")\nprint (metrics.confusion_matrix(y_test, y_pred))", "Mixture of Gaussian Models", "from sklearn import mixture\n\n# Define a heldout dataset to estimate covariance type\nX_train_heldout, X_test_heldout, y_train_heldout, y_test_heldout = train_test_split(\n X_train, y_train, test_size=0.25, random_state=42)\nfor covariance_type in ['spherical','tied','diag','full']:\n gm=mixture.GMM(n_components=3, covariance_type=covariance_type, random_state=42, n_init=5)\n gm.fit(X_train_heldout)\n y_pred=gm.predict(X_test_heldout)\n print (\"Adjusted rand score for covariance={}:{:.2}\".format(covariance_type, \n metrics.adjusted_rand_score(y_test_heldout, y_pred)))\n\n\ngm = mixture.GMM(n_components=3, covariance_type='tied', random_state=42)\ngm.fit(X_train)\n\n# Print train clustering and confusion matrix\ny_pred = gm.predict(X_test)\nprint (\"Adjusted rand score:{:.2}\".format(metrics.adjusted_rand_score(y_test, y_pred)))\nprint (\"Homogeneity score:{:.2} \".format(metrics.homogeneity_score(y_test, y_pred)) )\nprint (\"Completeness score: {:.2} \".format(metrics.completeness_score(y_test, y_pred)))\n\nprint (\"Confusion matrix\")\nprint (metrics.confusion_matrix(y_test, y_pred))", "", "pl=plt\nfrom sklearn import decomposition\n# Reduce the training data to two dimensions with PCA for visualization.\n\npca = decomposition.PCA(n_components=2).fit(X_train)\nreduced_X_train = pca.transform(X_train)\n\n# Step size of the mesh. Decrease to increase the quality of the VQ.\nh = .01 # point in the mesh [x_min, x_max]x[y_min, y_max].\n\n# Plot the decision boundary. 
For that, we will assign a color to each point in the mesh.\nx_min, x_max = reduced_X_train[:, 0].min() - 1, reduced_X_train[:, 0].max() + 1\ny_min, y_max = reduced_X_train[:, 1].min() - 1, reduced_X_train[:, 1].max() + 1\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))\n\ngm.fit(reduced_X_train)\n#print np.c_[xx.ravel(),yy.ravel()]\nZ = gm.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# Put the result into a color plot\nZ = Z.reshape(xx.shape)\npl.figure(1)\npl.clf()\npl.imshow(Z, interpolation='nearest',\n extent=(xx.min(), xx.max(), yy.min(), yy.max()),\n cmap=pl.cm.Paired,\n aspect='auto', origin='lower')\n#print reduced_X_train.shape\n\npl.plot(reduced_X_train[:, 0], reduced_X_train[:, 1], 'k.', markersize=2)\n# Plot the cluster means as white dots\ncentroids = gm.means_\n\npl.scatter(centroids[:, 0], centroids[:, 1],\n marker='.', s=169, linewidths=3,\n color='w', zorder=10)\n\npl.title('Mixture of Gaussian models on the wholesale customers dataset (PCA-reduced data)\\n'\n 'Means are marked with white dots')\npl.xlim(x_min, x_max)\npl.ylim(y_min, y_max)\npl.xticks(())\npl.yticks(())\npl.show()", "The plot of the data fitted by Gaussian models looks like:\n<img src=\"images/gmplot.png\">\nConclusions\nPCA in combination with K-means clustering was the most insightful solution technique. Even though only six features are present in the data, PCA identified the two features with the highest variance and enabled me to cluster the data in a lower dimensional space. We used PCA over ICA for dimensionality reduction, since the PCA eigenvalues provide a convenient ordering of the most important variance directions (eigenvectors) of the data that can later be used in clustering. ICA does not provide a similar rank order for the most important components. After preprocessing the data with PCA, K-means uncovered (clustered) interpretable customer groups that are helpful to the client.
We 'hard-clustered' the data with K-means (instead of GMM) because there is no clear indication about the underlying data generating process, and hence I preferred a method that does not assume Gaussian distributions. \nDue to the clustering, we can now tell the client that, among the large customers, one group focuses on the 'fresh' category and the other on more durable products. The small customers, however, have a more balanced assortment (and we also concluded that small customers actually exist). \nHow would this help the company?\nIf the client plans to change, retire or introduce new (delivery) strategies in the future, she could use the segmentation k-means found to try the strategies on subsets of the segments. That is, the client could choose a small number of customers in each segment and introduce the change for those customers only. The client could then, after some time, see how the customer segments respond to the change (for example by asking them for feedback). Based on this, the client may decide not to proceed with the new strategy, proceed only for one or two customer segments, or implement it for the whole customer base. The ability to receive feedback on a new strategy from a small subset of customers before making a final decision is advantageous for the client.\nHow would we use that data to help predict future customer needs?\nIf the client implemented a strategy for a subset of each of the customer segments and asked them to provide feedback on a, say, 1 (very poor) to 5 (great) scale, the client could then run a linear regression of the feedback provided on a set of cluster membership features (dummy variables) to identify the needs of each of the customer segments (note that this linear regression basically results in segment averages for the response). This will help the client to determine whether the new strategy suits the customers' needs (by segment) or not." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/workshops
extras/tensorflow_lattice/03_calibrator_basics.ipynb
apache-2.0
[ "Basics of 1d calibrators\nIn this notebook, we'll explain one dimensional calibrators.\nFirst we need to import the libraries we're going to use.", "!pip install tensorflow_lattice\nimport tensorflow as tf\nimport tensorflow_lattice as tfl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport math", "Next, let's prepare a synthetic dataset.", "# Example function we will try to learn with calibration.\n%matplotlib inline\ndef f(x):\n return np.power(x, 3) + 0.1*np.sin(x * math.pi * 8)\n\ndef gen_examples(n):\n x = np.random.uniform(-1, 1.0, size=n)\n x.sort()\n y = f(x)\n return (x,y)\n\n# pwl_x_data and pwl_y_data contain a synthetic dataset.\nn = 100\npwl_x_data, pwl_y_data = gen_examples(n)\n\nplt.plot(pwl_x_data, pwl_y_data)\nplt.ylabel(\"y\")\nplt.xlabel(\"x\")", "Fitting a Piecewise Linear calibrator\nIn a piecewise linear (PWL) calibrator, input keypoints are given and output keypoints are TensorFlow variables.\nSo we fit the output keypoints to minimize a loss function.", "%matplotlib inline\n\n# Let's reset the default graph to clean up the state.\ntf.reset_default_graph()\n\n# x is a placeholder for feeding 1d input data.\n# We will feed the full set of datapoints (100), i.e., batch_size == 100.\nx = tf.placeholder(dtype=tf.float32, shape=(n))\n# y is a placeholder for feeding ground truth output.\ny_ = tf.placeholder(dtype=tf.float32, shape=(n))\n\n\n# To use a calibrator, we need to initialize input and output keypoints.\n# Here we'll use 50 keypoints in the PWL calibrator.\n# The 50 input keypoints will be uniformly spaced over [-1, 1], and the 50 output\n# keypoints will be uniformly spaced over [-0.5, 0.5].\n#\n# During training input keypoints will not be changed, but output keypoints will\n# be changed to fit our data.\n#\n# The calibrator will clip inputs outside of the input range [-1, 1], which means\n# an input value less than -1 will be clipped to -1, and an input value\n# greater than +1 will be clipped to +1. 
Feel free to change input_min and\n# input_max to see this behavior.\nnum_keypoints = 50\nkp_inits = tfl.uniform_keypoints_for_signal(\n num_keypoints=num_keypoints,\n input_min=-1.0,\n input_max=1.0,\n output_min=-0.5,\n output_max=0.5)\n\n# Now we define PWL linear calibrator with 50 keypoints that calibrate the input\n# tensor x (with shape [batch_size]), to the output tensor y (with shape\n# [batch_size]). y[0] is the calibrated x[0], y[1] is the calibrated x[1], ....\n# calibration_layer returns three elements:\n# 1. output tensor\n# 2. Projection operator\n# 3. Regularization loss (scalar tensor)\n# We'll cover the second and the third part in this notebook as well, so let's\n# focus on the first.\n(y, _, _) = tfl.calibration_layer(\n uncalibrated_tensor=x,\n num_keypoints=num_keypoints,\n keypoints_initializers=kp_inits)\n\n# To train a calibrator, we define L2 loss.\n# Here y_ is the ground truth.\nloss = tf.reduce_mean(tf.square(y - y_))\n\n# Now we define TensorFlow training operator.\n# Here we'll use GradientDescentOptimizer with the initial learning rate 0.1.\n# This train_op computes the gradient of L2 \"loss\" we just defined w.r.t. 
the \n# output keypoints in the calibrator, and updates the output keypoint values to\n# minimize the L2 \"loss\".\ntrain_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)\n\nsess = tf.Session()\n# Before starting to train, we need to initialize variables in our computational\n# graph.\nsess.run(tf.global_variables_initializer())\n\n# Apply gradient descent operator 1000 times.\nfor _ in range(1000):\n # Update output keypoints by feeding the full data into our computational\n # graph.\n sess.run(train_op, feed_dict={x: pwl_x_data, y_: pwl_y_data})\n\n# Now training is done, let us fetch the prediction from our calibrator.\n# predicted will contain a numpy n-d array of predictions over pwl_x_data.\npredicted = sess.run(y, feed_dict={x: pwl_x_data})\n\n# Plot the response.\nplt.plot(pwl_x_data, predicted)\nplt.plot(pwl_x_data, pwl_y_data)\nplt.ylabel(\"predicted\")\nplt.xlabel(\"x\")\nplt.legend(['PWL calibrator', 'True Data'])", "Bounded PWL calibrator\nBy default, output keypoints in the calibrator are not bounded.\nIn some cases, this is not desirable, especially when the output of the calibrator is fed into an upper layer.\nFor example, a 2 x 2 lattice expects an input in [0, 1] x [0, 1], so if the calibrator output is fed into such a lattice layer,\nit would be better for the output keypoints to be in the range [0, 1].", "%matplotlib inline\n\n# Same as before.\ntf.reset_default_graph()\n\nx = tf.placeholder(dtype=tf.float32, shape=(n))\ny_ = tf.placeholder(dtype=tf.float32, shape=(n))\n\nkp_inits = tfl.uniform_keypoints_for_signal(\n num_keypoints=50,\n input_min=-1.0,\n input_max=1.0,\n output_min=-0.5,\n output_max=0.5)\n\n# Now we define a calibrator with \"bound\".\n# By setting bound == True, we are making sure the output keypoints are inside\n# the initial output range from kp_inits, [-0.5, 0.5].\n# This is achieved by projection_op. 
projection_op is a collection of TensorFlow\n# operators that find the output keypoints not in the range [-0.5, 0.5] and\n# assign 0.5 to output keypoints larger than 0.5, and -0.5 to output keypoints\n# smaller than -0.5.\n(y, projection_op, _) = tfl.calibration_layer(\n uncalibrated_tensor=x,\n num_keypoints=50,\n bound=True,\n keypoints_initializers=kp_inits)\n\n# Squared loss\nloss = tf.reduce_mean(tf.square(y - y_))\n\n# Minimize!\ntrain_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)\n\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\n# Iterate 1000 times\nfor _ in range(1000):\n # Apply gradient.\n sess.run(train_op, feed_dict={x: pwl_x_data, y_: pwl_y_data})\n # Then apply projection. This is projected SGD.\n sess.run(projection_op)\n\npredicted = sess.run(y, feed_dict={x: pwl_x_data})\n\n# In the plot, we should see that the predictions are in the range [-0.5, 0.5].\nplt.plot(pwl_x_data, predicted)\nplt.plot(pwl_x_data, pwl_y_data)\nplt.ylabel(\"predicted\")\nplt.xlabel(\"x\")\nplt.legend(['PWL calibrator', 'True Data'])", "Bounded monotonic PWL calibrator\nYou can also set monotonicity in each calibrator.", "%matplotlib inline\n\ntf.reset_default_graph()\n\nx = tf.placeholder(dtype=tf.float32, shape=(n))\ny_ = tf.placeholder(dtype=tf.float32, shape=(n))\n\nkp_inits = tfl.uniform_keypoints_for_signal(\n num_keypoints=50,\n input_min=-1.0,\n input_max=1.0,\n output_min=-0.5,\n output_max=0.5)\n\n# Monotonically increasing 1d calibrator.\n# In addition to the bound, now let's make the calibrator monotonic.\n# Since we set monotonic to +1, projection_op now contains not only the bounding\n# projection, but also the monotonicity projection.\n(y, projection_op, _) = tfl.calibration_layer(\n uncalibrated_tensor=x,\n num_keypoints=50,\n bound=True,\n monotonic=+1,\n keypoints_initializers=kp_inits)\n\n# Squared loss\nloss = tf.reduce_mean(tf.square(y - y_))\n\n# Minimize!\ntrain_op = 
tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)\n\nsess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())\n# Iterate 1000 times\nfor _ in range(1000):\n # Apply gradient.\n sess.run(train_op, feed_dict={x: pwl_x_data, y_: pwl_y_data})\n # Apply projection.\n sess.run(projection_op)\n\npredicted = sess.run(y, feed_dict={x: pwl_x_data})\n\n# Now in the plot, we should see monotonically increasing predictions.\nplt.plot(pwl_x_data, predicted)\nplt.plot(pwl_x_data, pwl_y_data)\nplt.ylabel(\"predicted\")\nplt.xlabel(\"x\")\nplt.legend(['PWL calibrator', 'True Data'])", "Laplacian regularizer\nNow let's add the Laplacian regularizer.\nThe Laplacian regularizer penalizes the change in consecutive output keypoint values.\nTherefore we can get a much smoother 1d calibration result.", "%matplotlib inline\n\ntf.reset_default_graph()\n\nx = tf.placeholder(dtype=tf.float32, shape=(n))\ny_ = tf.placeholder(dtype=tf.float32, shape=(n))\n\nkp_inits = tfl.uniform_keypoints_for_signal(\n num_keypoints=50,\n input_min=-1.0,\n input_max=1.0,\n output_min=-0.5,\n output_max=0.5)\n\n# Piecewise linear calibration.\n# Here we set L2 Laplacian regularization.\n# L2 Laplacian regularization ==\n# ||output_keypoints[1:end] - output_keypoints[0:-2]||_2^2\n# which penalizes changes in consecutive output keypoints (the slope) in the\n# calibrator.\n# regularization is a scalar tensor, and we expect this to be added to the loss.\n(y, projection_op, regularization) = tfl.calibration_layer(\n uncalibrated_tensor=x,\n num_keypoints=50,\n bound=True,\n monotonic=+1,\n l2_laplacian_reg=0.1,\n keypoints_initializers=kp_inits)\n\n# loss == squared loss + regularization.\nloss = tf.reduce_mean(tf.square(y - y_))\nloss += regularization\n\n# Minimize!\ntrain_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)\n\nsess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())\n# Iterate 1000 times\nfor _ in range(1000):\n # Apply gradient.\n 
sess.run(train_op, feed_dict={x: pwl_x_data, y_: pwl_y_data})\n sess.run(projection_op)\n\npredicted = sess.run(y, feed_dict={x: pwl_x_data})\n# Now we expect a smoother calibrator.\nplt.plot(pwl_x_data, predicted)\nplt.plot(pwl_x_data, pwl_y_data)\nplt.ylabel(\"predicted\")\nplt.xlabel(\"x\")\nplt.legend(['PWL calibrator', 'True Data'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rudyryk/LearnAI
notebooks/2_fullyconnected.ipynb
unlicense
[ "Deep Learning\nAssignment 2\nPreviously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.\nThe goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.", "# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport os\nimport numpy as np\nimport tensorflow as tf\nfrom six.moves import cPickle as pickle\nfrom six.moves import range", "First reload the data we generated in 1_notmnist.ipynb.", "data_root = '../data' # Change me to store data elsewhere\npickle_file = os.path.join(data_root, 'notMNIST.pickle')\n\nwith open(pickle_file, 'rb') as f:\n save = pickle.load(f)\n train_dataset = save['train_dataset']\n train_labels = save['train_labels']\n valid_dataset = save['valid_dataset']\n valid_labels = save['valid_labels']\n test_dataset = save['test_dataset']\n test_labels = save['test_labels']\n del save # hint to help gc free up memory\n print('Training set', train_dataset.shape, train_labels.shape)\n print('Validation set', valid_dataset.shape, valid_labels.shape)\n print('Test set', test_dataset.shape, test_labels.shape)", "Reformat into a shape that's more adapted to the models we're going to train:\n- data as a flat matrix,\n- labels as float 1-hot encodings.", "image_size = 28\nnum_labels = 10\n\ndef reformat(dataset, labels):\n dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]\n labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n return dataset, labels\ntrain_dataset, train_labels = reformat(train_dataset, train_labels)\nvalid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\ntest_dataset, test_labels = reformat(test_dataset, test_labels)\nprint('Training set', train_dataset.shape, train_labels.shape)\nprint('Validation set', 
valid_dataset.shape, valid_labels.shape)\nprint('Test set', test_dataset.shape, test_labels.shape)", "We're first going to train a multinomial logistic regression using simple gradient descent.\nTensorFlow works like this:\n* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:\n with graph.as_default():\n ...\n\n\n\nThen you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:\nwith tf.Session(graph=graph) as session:\n ...\n\n\nLet's load all the data into TensorFlow and build the computation graph corresponding to our training:", "# With gradient descent training, even this much data is prohibitive.\n# Subset the training data for faster turnaround.\ntrain_subset = 10000\n\ngraph = tf.Graph()\nwith graph.as_default():\n\n # Input data.\n # Load the training, validation and test data into constants that are\n # attached to the graph.\n tf_train_dataset = tf.constant(train_dataset[:train_subset])\n tf_train_labels = tf.constant(train_labels[:train_subset])\n tf_valid_dataset = tf.constant(valid_dataset)\n tf_test_dataset = tf.constant(test_dataset)\n \n # Variables.\n # These are the parameters that we are going to be training. The weight\n # matrix will be initialized using random values following a (truncated)\n # normal distribution. The biases get initialized to zero.\n weights = tf.Variable(\n tf.truncated_normal([image_size * image_size, num_labels]))\n biases = tf.Variable(tf.zeros([num_labels]))\n \n # Training computation.\n # We multiply the inputs with the weight matrix, and add biases. 
We compute\n # the softmax and cross-entropy (it's one operation in TensorFlow, because\n # it's very common, and it can be optimized). We take the average of this\n # cross-entropy across all training examples: that's our loss.\n logits = tf.matmul(tf_train_dataset, weights) + biases\n loss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))\n \n # Optimizer.\n # We are going to find the minimum of this loss using gradient descent.\n optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n \n # Predictions for the training, validation, and test data.\n # These are not part of training, but merely here so that we can report\n # accuracy figures as we train.\n train_prediction = tf.nn.softmax(logits)\n valid_prediction = tf.nn.softmax(\n tf.matmul(tf_valid_dataset, weights) + biases)\n test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)", "Let's run this computation and iterate:", "num_steps = 801\n\ndef accuracy(predictions, labels):\n return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n / predictions.shape[0])\n\nwith tf.Session(graph=graph) as session:\n summary_writer = tf.summary.FileWriter('../logs', graph=graph)\n # This is a one-time operation which ensures the parameters get initialized as\n # we described in the graph: random weights for the matrix, zeros for the\n # biases. \n tf.global_variables_initializer().run()\n print('Initialized')\n for step in range(num_steps):\n # Run the computations. 
We tell .run() that we want to run the optimizer,\n # and get the loss value and the training predictions returned as numpy\n # arrays.\n _, l, predictions = session.run([optimizer, loss, train_prediction])\n if (step % 100 == 0):\n print('Loss at step %d: %f' % (step, l))\n print('Training accuracy: %.1f%%' % accuracy(\n predictions, train_labels[:train_subset, :]))\n # Calling .eval() on valid_prediction is basically like calling run(), but\n # just to get that one numpy array. Note that it recomputes all its graph\n # dependencies.\n print('Validation accuracy: %.1f%%' % accuracy(\n valid_prediction.eval(), valid_labels))\n merged_summary = tf.summary.merge_all()\n\n print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))", "Let's now switch to stochastic gradient descent training instead, which is much faster.\nThe graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().", "batch_size = 128\n\ngraph = tf.Graph()\nwith graph.as_default():\n\n # Input data. 
For the training data, we use a placeholder that will be fed\n # at run time with a training minibatch.\n tf_train_dataset = tf.placeholder(tf.float32,\n shape=(batch_size, image_size * image_size))\n tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n tf_valid_dataset = tf.constant(valid_dataset)\n tf_test_dataset = tf.constant(test_dataset)\n \n # Variables.\n weights = tf.Variable(\n tf.truncated_normal([image_size * image_size, num_labels]))\n biases = tf.Variable(tf.zeros([num_labels]))\n \n # Training computation.\n logits = tf.matmul(tf_train_dataset, weights) + biases\n loss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))\n \n # Optimizer.\n optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n \n # Predictions for the training, validation, and test data.\n train_prediction = tf.nn.softmax(logits)\n valid_prediction = tf.nn.softmax(\n tf.matmul(tf_valid_dataset, weights) + biases)\n test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)", "Let's run it:", "num_steps = 3001\n\nwith tf.Session(graph=graph) as session:\n tf.global_variables_initializer().run()\n print(\"Initialized\")\n for step in range(num_steps):\n # Pick an offset within the training data, which has been randomized.\n # Note: we could use better randomization across epochs.\n offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n # Generate a minibatch.\n batch_data = train_dataset[offset:(offset + batch_size), :]\n batch_labels = train_labels[offset:(offset + batch_size), :]\n # Prepare a dictionary telling the session where to feed the minibatch.\n # The key of the dictionary is the placeholder node of the graph to be fed,\n # and the value is the numpy array to feed to it.\n feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n _, l, predictions = session.run(\n [optimizer, loss, train_prediction], feed_dict=feed_dict)\n if (step % 
500 == 0):\n print(\"Minibatch loss at step %d: %f\" % (step, l))\n print(\"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels))\n print(\"Validation accuracy: %.1f%%\" % accuracy(\n valid_prediction.eval(), valid_labels))\n print(\"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels))", "Problem\nTurn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes. This model should improve your validation / test accuracy.", "relu_size = 1024\ninput_size = image_size * image_size\nnum_steps = 3001\nbatch_size = 128\n\ngraph = tf.Graph()\nwith graph.as_default():\n tf_train_dataset = tf.placeholder(tf.float32,\n shape=(batch_size, input_size))\n tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n tf_valid_dataset = tf.constant(valid_dataset)\n tf_test_dataset = tf.constant(test_dataset)\n\n def create_relu_model(x, weights, biases):\n # Layer 1: W1 * X + B1 -> ReLu\n layer_1 = tf.matmul(x, weights['n1']) + biases['n1']\n layer_1 = tf.nn.relu(layer_1)\n\n # Output layer: W2 * X + B2 -> Output\n out_layer = tf.matmul(layer_1, weights['out']) + biases['out']\n return out_layer\n\n # Simple ReLu model\n weights = {\n 'n1': tf.Variable(tf.truncated_normal([input_size, relu_size])),\n 'out': tf.Variable(tf.truncated_normal([relu_size, num_labels])),\n }\n biases = {\n 'n1': tf.Variable(tf.zeros([relu_size])),\n 'out': tf.Variable(tf.zeros([num_labels])),\n }\n relu_model = create_relu_model(tf_train_dataset, weights, biases)\n\n # Loss function\n loss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(\n labels=tf_train_labels, logits=relu_model))\n\n # Optimizer.\n optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n\n # Predictions for the training, validation, and test data.\n train_prediction = tf.nn.softmax(relu_model)\n valid_prediction = tf.nn.softmax(\n create_relu_model(tf_valid_dataset, weights, 
biases))\n test_prediction = tf.nn.softmax(\n create_relu_model(tf_test_dataset, weights, biases))\n \nwith tf.Session(graph=graph) as session:\n tf.global_variables_initializer().run()\n print(\"Initialized\")\n for step in range(num_steps):\n # Pick an offset within the training data, which has been randomized.\n # Note: we could use better randomization across epochs.\n offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n # Generate a minibatch.\n batch_data = train_dataset[offset:(offset + batch_size), :]\n batch_labels = train_labels[offset:(offset + batch_size), :]\n # Prepare a dictionary telling the session where to feed the minibatch.\n # The key of the dictionary is the placeholder node of the graph to be fed,\n # and the value is the numpy array to feed to it.\n feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n _, l, predictions = session.run(\n [optimizer, loss, train_prediction], feed_dict=feed_dict)\n if (step % 500 == 0):\n print(\"Minibatch loss at step %d: %f\" % (step, l))\n print(\"Minibatch accuracy: %.1f%%\" % accuracy(\n predictions, batch_labels))\n print(\"Validation accuracy: %.1f%%\" % accuracy(\n valid_prediction.eval(), valid_labels))\n print(\"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels))\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ML4DS/ML4all
TM1.IntrodNLP/NLP_py2_wikitools/databricks/TM1_NLP_db_student.ipynb
mit
[ "Text Analysis with NLTK\nAuthor: Jesús Cid-Sueiro\nDate: 2016/04/03\nLast review: 2017/04/21", "# %matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n# import pylab\n\n# Required imports\nfrom wikitools import wiki\nfrom wikitools import category\n\nimport nltk\nfrom nltk.tokenize import word_tokenize\nfrom nltk.corpus import stopwords\nfrom nltk.stem import WordNetLemmatizer\n\nfrom time import time\nfrom sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer\n\nfrom test_helper import Test\n\nimport gensim", "1. Corpus acquisition.\nIn these notebooks we will explore some tools for text analysis and two topic modeling algorithms available from Python toolboxes.\nTo do so, we will explore and analyze collections of Wikipedia articles from a given category, using wikitools, which makes the capture of content from wikimedia sites very easy.\n(As a side note, there are many other available text collections to work with. In particular, the NLTK library has many examples that you can explore using the nltk.download() tool.\nimport nltk\nnltk.download()\n\nfor instance, you can take the Gutenberg dataset\nMycorpus = nltk.corpus.gutenberg\ntext_name = Mycorpus.fileids()[0]\nraw = Mycorpus.raw(text_name)\nWords = Mycorpus.words(text_name)\n\nAlso, tools like Gensim or scikit-learn include text databases to work with).\nIn order to use Wikipedia data, we will select a single category of articles:", "site = wiki.Wiki(\"https://en.wikipedia.org/w/api.php\")\n# Select a category with a reasonable number of articles (>100)\n# cat = \"Economics\"\ncat = \"Pseudoscience\"\nprint cat", "You can try with any other categories, but take into account that some categories may contain very few articles. Select a category with at least 100 articles. 
You can browse the wikipedia category tree here, https://en.wikipedia.org/wiki/Category:Contents, for instance, and select the appropriate one.\nWe start downloading the text collection.", "# Loading category data. This may take a while\nprint \"Loading category data. This may take a while...\"\ncat_data = category.Category(site, cat)\n\ncorpus_titles = []\ncorpus_text = []\n\nfor n, page in enumerate(cat_data.getAllMembersGen()):\n    print \"\\r Loading article {0}\".format(n + 1),\n    corpus_titles.append(page.title)\n    corpus_text.append(page.getWikiText())\n\nn_art = len(corpus_titles)\nprint \"\\nLoaded \" + str(n_art) + \" articles from category \" + cat", "Now, we have stored the whole text collection in two lists:\n\ncorpus_titles, which contains the titles of the selected articles\ncorpus_text, with the text content of the selected wikipedia articles\n\nYou can browse the content of the wikipedia articles to get some intuition about the kind of documents that will be processed.", "# n = 5\n# print corpus_titles[n]\n# print corpus_text[n]", "2. Corpus Processing\nTopic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation. To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection.\nThus, we will proceed with the following steps:\n\nTokenization\nHomogenization\nCleaning\nVectorization\n\n2.1. Tokenization\nFor the first steps, we will use some of the powerful methods available from the Natural Language Toolkit. In order to use the word_tokenize method from nltk, you might need to get the appropriate libraries using nltk.download(). 
You must select option \"d) Download\", and identifier \"punkt\"", "# You can comment this if the package is already available.\n# Select option \"d) Download\", and identifier \"punkt\"\n# nltk.download(\"punkt\")", "Task: Insert the appropriate call to word_tokenize in the code below, in order to get the tokens list corresponding to each Wikipedia article:", "corpus_tokens = []\n\nfor n, art in enumerate(corpus_text): \n    print \"\\rTokenizing article {0} out of {1}\".format(n + 1, n_art),\n    # This is to make sure that all characters have the appropriate encoding.\n    art = art.decode('utf-8') \n    \n    # Tokenize each text entry. \n    # scode: tokens = <FILL IN>\n    \n    # Add the new token list as a new element to corpus_tokens (that will be a list of lists)\n    # scode: <FILL IN>\n\nprint \"\\n The corpus has been tokenized. Let's check some portion of the first article:\"\nprint corpus_tokens[0][0:30]\n\nTest.assertEquals(len(corpus_tokens), n_art, \"The number of articles has changed unexpectedly\")\nTest.assertTrue(len(corpus_tokens) >= 100, \n                \"Your corpus_tokens has less than 100 articles. Consider using a larger dataset\")", "2.2. Homogenization\nBy looking at the tokenized corpus you may verify that there are many tokens that correspond to punctuation signs and other symbols that are not relevant to analyze the semantic content. They can be removed using the stemming tool from nltk.\nThe homogenization process will consist of:\n\nRemoving capitalization: capital alphabetic characters will be transformed to their corresponding lowercase characters.\nRemoving non-alphanumeric tokens (e.g. punctuation signs)\nStemming/Lemmatization: removing word terminations to preserve the root of the words and ignore grammatical information.\n\n2.2.1. 
Filtering\nLet us proceed with the filtering steps 1 and 2 (removing capitalization and non-alphanumeric tokens).\nTask: Convert all tokens in corpus_tokens to lowercase (using the .lower() method) and remove non-alphanumeric tokens (which you can detect with the .isalnum() method). You can do it in a single line of code...", "corpus_filtered = []\n\nfor n, token_list in enumerate(corpus_tokens):\n    print \"\\rFiltering article {0} out of {1}\".format(n + 1, n_art),\n    \n    # Convert all tokens in token_list to lowercase, remove non-alphanumeric tokens and stem.\n    # Store the result in a new token list, clean_tokens.\n    # scode: filtered_tokens = <FILL IN>\n    \n    # Add art to corpus_filtered\n    # scode: <FILL IN>\n\nprint \"\\nLet's check the first tokens from document 0 after filtering:\"\nprint corpus_filtered[0][0:30]\n\nTest.assertTrue(all([c==c.lower() for c in corpus_filtered[23]]), 'Capital letters have not been removed')\nTest.assertTrue(all([c.isalnum() for c in corpus_filtered[13]]), 'Non alphanumeric characters have not been removed')", "2.2.2. Stemming vs Lemmatization\nAt this point, we can choose between applying simple stemming or using lemmatization. 
We will try both to test their differences.\nTask: Apply the .stem() method, from the stemmer object created in the first line, to corpus_filtered.", "# Select stemmer.\nstemmer = nltk.stem.SnowballStemmer('english')\ncorpus_stemmed = []\n\nfor n, token_list in enumerate(corpus_filtered):\n    print \"\\rStemming article {0} out of {1}\".format(n + 1, n_art),\n    \n    # Apply stemming to all tokens in token_list and save them in stemmed_tokens\n    # scode: stemmed_tokens = <FILL IN>\n    \n    # Add stemmed_tokens to the stemmed corpus\n    # scode: <FILL IN>\n\nprint \"\\nLet's check the first tokens from document 0 after stemming:\"\nprint corpus_stemmed[0][0:30]\n\nTest.assertTrue((len([c for c in corpus_stemmed[0] if c!=stemmer.stem(c)]) < 0.1*len(corpus_stemmed[0])), \n                'It seems that stemming has not been applied properly')", "Alternatively, we can apply lemmatization. For English texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used WordNet before, you will likely need to download it from nltk", "# You can comment this if the package is already available.\n# Select option \"d) Download\", and identifier \"wordnet\"\n# nltk.download()", "Task: Apply the .lemmatize() method, from the WordNetLemmatizer object created in the first line, to corpus_filtered.", "wnl = WordNetLemmatizer()\n\n# Select stemmer.\ncorpus_lemmat = []\n\nfor n, token_list in enumerate(corpus_filtered):\n    print \"\\rLemmatizing article {0} out of {1}\".format(n + 1, n_art),\n    \n    # scode: lemmat_tokens = <FILL IN>\n\n    # Add art to the stemmed corpus\n    # scode: <FILL IN>\n\nprint \"\\nLet's check the first tokens from document 0 after lemmatization:\"\nprint corpus_lemmat[0][0:30]", "One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is more advisable for the presentation of text processing results.\nHowever, without using contextual information, lemmatize() does not remove grammatical 
differences. This is the reason why \"is\" or \"are\" are preserved and not replaced by the infinitive \"be\".\nAs an alternative, we can apply .lemmatize(word, pos), where 'pos' is a string code specifying the part-of-speech (pos), i.e. the grammatical role of the word in its sentence. For instance, you can check the difference between wnl.lemmatize('is') and wnl.lemmatize('is', pos='v').\n2.3. Cleaning\nThe third step consists of removing those words that are very common in language and do not carry useful semantic content (articles, pronouns, etc.).\nOnce again, we might need to load the stopword files using the download tools from nltk", "# You can comment this if the package is already available.\n# Select option \"d) Download\", and identifier \"stopwords\"\n# nltk.download()", "Task: In the second line below we read a list of common English stopwords. Clean corpus_stemmed by removing all tokens in the stopword list.", "corpus_clean = []\nstopwords_en = stopwords.words('english')\nn = 0\nfor token_list in corpus_stemmed:\n    n += 1\n    print \"\\rRemoving stopwords from article {0} out of {1}\".format(n, n_art),\n\n    # Remove all tokens in the stopwords list and append the result to corpus_clean\n    # scode: clean_tokens = <FILL IN>\n\n    # scode: <FILL IN>\n    \nprint \"\\n Let's check tokens after cleaning:\"\nprint corpus_clean[0][0:30]\n\nTest.assertTrue(len(corpus_clean) == n_art, 'List corpus_clean does not contain the expected number of articles')\nTest.assertTrue(len([c for c in corpus_clean[0] if c in stopwords_en])==0, 'Stopwords have not been removed')", "2.4. Vectorization\nUp to this point, we have transformed the raw text collection of articles into a list of articles, where each article is a collection of the word roots that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). 
To do so, we will start using the tools provided by the gensim library. \nAs a first step, we create a dictionary containing all tokens in our text corpus, assigning an integer identifier to each one of them.", "# Create dictionary of tokens\nD = gensim.corpora.Dictionary(corpus_clean)\nn_tokens = len(D)\n\nprint \"The dictionary contains {0} tokens\".format(n_tokens)\nprint \"First tokens in the dictionary: \"\nfor n in range(10):\n    print str(n) + \": \" + D[n]", "In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transforms any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.\n Task: Apply the doc2bow method from gensim dictionary D, to all tokens in every article in corpus_clean. The result must be a new list named corpus_bow where each element is a list of tuples (token_id, number_of_occurrences).", "# Transform token lists into sparse vectors on the D-space\n# scode: corpus_bow = <FILL IN>\n\nTest.assertTrue(len(corpus_bow)==n_art, 'corpus_bow has not the appropriate size') ", "At this point, it is good to make sure you understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assigns an integer identifier to each token in the corpus.\nAfter that, we have transformed each article (in corpus_clean) into a list of tuples (id, n).", "print \"Original article (after cleaning): \"\nprint corpus_clean[0][0:30]\nprint \"Sparse vector representation (first 30 components):\"\nprint corpus_bow[0][0:30]\nprint \"The first component, {0} from document 0, states that token 0 ({1}) appears {2} times\".format(\n    corpus_bow[0][0], D[0], corpus_bow[0][0][1])", "Note that we can interpret each element of corpus_bow as a sparse_vector. 
For example, a list of tuples \n[(0, 1), (3, 3), (5,2)]\n\nfor a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of positions must be zero.\n[1, 0, 0, 3, 0, 2, 0, 0, 0, 0]\n\nThese sparse vectors will be the inputs to the topic modeling algorithms.\nNote that, at this point, we have built a Dictionary containing", "print \"{0} tokens\".format(len(D))", "and a bow representation of a corpus with", "print \"{0} Wikipedia articles\".format(len(corpus_bow))", "Before starting with the semantic analysis, it is interesting to observe the token distribution for the given corpus.", "# SORTED TOKEN FREQUENCIES (I):\n# Create a \"flat\" corpus with all tuples in a single list\ncorpus_bow_flat = [item for sublist in corpus_bow for item in sublist]\n\n# Initialize a numpy array that we will use to count tokens.\n# token_count[n] should store the number of occurrences of the n-th token, D[n]\ntoken_count = np.zeros(n_tokens)\n\n# Count the number of occurrences of each token.\nfor x in corpus_bow_flat:\n    # Update the proper element in token_count\n    # scode: <FILL IN>\n\n# Sort by decreasing number of occurrences\nids_sorted = np.argsort(- token_count)\ntf_sorted = token_count[ids_sorted]", "ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. 
For instance, the most frequent term is", "print D[ids_sorted[0]]", "which appears", "print \"{0} times in the whole corpus\".format(tf_sorted[0])", "In the following we plot the most frequent terms in the corpus.", "# SORTED TOKEN FREQUENCIES (II):\nplt.rcdefaults()\n\n# Example data\nn_bins = 25\nhot_tokens = [D[i] for i in ids_sorted[n_bins-1::-1]]\ny_pos = np.arange(len(hot_tokens))\nz = tf_sorted[n_bins-1::-1]/n_art\n\nplt.barh(y_pos, z, align='center', alpha=0.4)\nplt.yticks(y_pos, hot_tokens)\nplt.xlabel('Average number of occurrences per article')\nplt.title('Token distribution')\nplt.show()\ndisplay()\n\n# SORTED TOKEN FREQUENCIES:\n\n# Example data\nplt.semilogy(tf_sorted)\nplt.xlabel('Average number of occurrences per article')\nplt.title('Token distribution')\nplt.show()\ndisplay()", "Exercise: There are usually many tokens that appear with very low frequency in the corpus. Count the number of tokens appearing only once, and what is the proportion of them in the token list.", "# scode: cold_tokens = <FILL IN>\n\nprint \"There are {0} cold tokens, which represent {1}% of the total number of tokens in the dictionary\".format(\n len(cold_tokens), float(len(cold_tokens))/n_tokens*100)", "Exercise: Represent graphically those 20 tokens that appear in the highest number of articles. Note that you can use the code above (headed by # SORTED TOKEN FREQUENCIES) with a very minor modification.", "# scode: <WRITE YOUR CODE HERE>", "Exercise: Count the number of tokens appearing only in a single article.", "# scode: <WRITE YOUR CODE HERE>", "Exercise (All in one): Note that, for pedagogical reasons, we have used a different for loop for each text processing step creating a new corpus_xxx variable after each step. For very large corpus, this could cause memory problems. 
\nAs a summary exercise, repeat the whole text processing, starting from corpus_text up to computing the bow, with the following modifications:\n\nUse a single for loop, avoiding the creation of any intermediate corpus variables.\nUse lemmatization instead of stemming.\nRemove all tokens appearing in only one document and less than 2 times.\nSave the result in a new variable corpus_bow1.", "# scode: <WRITE YOUR CODE HERE>", "Exercise (Visualizing categories): Repeat the previous exercise with a second wikipedia category. For instance, you can take \"communication\". \n\nSave the result in variable corpus_bow2.\nDetermine the most frequent terms in corpus_bow1 (term1) and corpus_bow2 (term2).\nTransform each article in corpus_bow1 and corpus_bow2 into a 2-dimensional vector, where the first component is the frequency of term1 and the second component is the frequency of term2.\nDraw a scatter plot of all 2-dimensional points, using a different marker for each corpus. Could you differentiate both corpora using the selected terms only? What if the 2nd most frequent term is used?", "# scode: <WRITE YOUR CODE HERE>", "Exercise (bigrams): nltk provides a utility to compute n-grams from a list of tokens, in nltk.util.ngrams. Join all tokens in corpus_clean in a single list and compute the bigrams. Plot the 20 most frequent bigrams in the corpus.", "# scode: <WRITE YOUR CODE HERE>\n# Check the code below to see how ngrams works, and adapt it to solve the exercise.\n# from nltk.util import ngrams\n# sentence = 'this is a foo bar sentences and i want to ngramize it'\n# sixgrams = ngrams(sentence.split(), 2)\n# for grams in sixgrams:\n#  print grams", "2.4. Saving results\nThe dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms analyzed in the following notebook. 
Save them to be ready to use them during the next session.", "import pickle\ndata = {}\ndata['D'] = D\ndata['corpus_bow'] = corpus_bow\npickle.dump(data, open(\"wikiresults.p\", \"wb\"))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mjbommar/cscs-530-w2016
notebooks/basic-random/001-basic_distributions.ipynb
bsd-2-clause
[ "CSCS530 Winter 2015\nComplex Systems 530 - Computer Modeling of Complex Systems (Winter 2015)\n\nCourse ID: CMPLXSYS 530\nCourse Title: Computer Modeling of Complex Systems\nTerm: Winter 2015\nSchedule: Wednesdays and Friday, 1:00-2:30PM ET\nLocation: 120 West Hall (http://www.lsa.umich.edu/cscs/research/computerlab)\nTeachers: Mike Bommarito and Sarah Cherng\n\nView this repository on NBViewer\nBasic Distributions\nFrom page 5 of Thinking Complexity:\n\nDeterministic → stochastic: Classical models are usually deterministic, which may reflect\nunderlying philosophical determinism, discussed in Chapter 6; complex models often\nfeature randomness.\n\nIn order to incorporate randomness into our models, we need to understand basic distributions and learn how to work with them in Python. The notebook below covers the basic shape, parameters, and sampling of the following distributions:\n\nuniform discrete\nuniform continuous\nnormal/Gaussian (\"bell curve\")\nPoisson", "# Imports\nimport numpy\nimport scipy.stats\nimport matplotlib.pyplot as plt\n\n# Setup seaborn for plotting\nimport seaborn; seaborn.set()\n\n# Import widget methods\nfrom IPython.html.widgets import *", "Continuous Uniform distribution\nThe continuous uniform distribution is one of the most commonly utilized distributions. As its name implies, it is characterized by a uniform or equal probability of any point being drawn from the distribution. This is clear from the probability density function (PDF) below:\n\nWe can sample a continuous uniform distribution using the numpy.random.uniform method below.\nDraw a continuous uniform sample\nIn the example below, we draw size=3 samples from a continuous uniform distribution with range from -1 to +1.", "numpy.random.uniform(-1, 1, size=3)", "Visualize a continuous uniform sample\nIn the example below, we will visualize the distribution of size=100 continuous uniform samples. 
This particular type of visualization is called a histogram.", "%matplotlib inline\n\n# Sample random data\nr = numpy.random.uniform(0, 1, size=100)\np = plt.hist(r)", "Interactive exploration of continuous uniform distribution\nIn the interactive tool below, we will explore how a random sample drawn from the continuous uniform distribution varies with:\n\nminimum and maximum of range (range_min, range_max)\nnumber of samples drawn (samples)\n\nTry varying the number of samples in the single digits, then slowly increase the number to 1000. How does the \"smoothness\" of the average sample vary? Compare to the probability density function figure above.", "def plot_continuous_uniform(range_min=0, range_max=1, samples=100):\n \"\"\"\n A continuous uniform plotter that takes min/max range and sample count.\n \"\"\"\n \n # Check assertions\n assert (range_min < range_max)\n assert (samples > 1)\n \n # Sample random data\n r = numpy.random.uniform(range_min, range_max, samples)\n p = plt.hist(r)\n\n# Call the ipython interact() method to allow us to explore the parameters and sampling\ninteract(plot_continuous_uniform, range_min=(0, 10),\n range_max = (1, 20),\n samples = (2, 1000))", "Discrete Uniform distribution\nThe discrete uniform distribution is another commonly utilized distributions. As its name implies, it is characterized by a uniform or equal probability of any point being drawn from the distribution. 
This is clear from the probability density function (PDF) below:\n\nWe can sample a discrete uniform distribution using the numpy.random.randint method below.\nDraw a discrete uniform sample\nIn the example below, we draw size=3 samples from a discrete uniform distribution with range from 0 to 10.", "numpy.random.randint(0, 10, size=3)", "Visualize a discrete uniform sample\nIn the example below, we will visualize the distribution of size=100 discrete uniform samples.", "# Sample random data\nr = numpy.random.randint(0, 10, size=100)\np = plt.hist(r)", "Interactive exploration of discrete uniform distribution\nIn the interactive tool below, we will explore how a random sample drawn from the discrete uniform distribution varies with:\n\nminimum and maximum of range (range_min, range_max)\nnumber of samples drawn (samples)\n\nTry varying the number of samples in the single digits, then slowly increase the number to 1000. How does the \"smoothness\" of the average sample vary? Compare to the probability density function figure above.", "def plot_discrete_uniform(range_min=0, range_max=10, samples=100):\n    \"\"\"\n    A discrete uniform plotter that takes min/max range and sample count.\n    \"\"\"\n    \n    # Check assertions\n    assert (range_min < range_max)\n    assert (samples > 1)\n\n    # Sample random data\n    r = numpy.random.randint(range_min, range_max, samples)\n    p = plt.hist(r)\n\n# Call the ipython interact() method to allow us to explore the parameters and sampling\ninteract(plot_discrete_uniform, range_min=(-10, 10),\n         range_max = (-9, 20),\n         samples = (2, 1000))", "Normal distribution\nThe normal distribution, commonly referred to as the \"bell curve\", is one of the most commonly occurring continuous distributions in nature. It is characterized by its symmetry and its dispersion parameter, referred to as standard deviation. 
68% of the distribution's probability mass falls within +/-1 standard deviation, and 95% of the probability mass falls within +/-2 standard deviations.\nThe normal distribution's probability density function (PDF) is below:\n \nWe can sample a normal distribution using the numpy.random.normal method below.\nDraw a normal sample\nIn the example below, we draw size=3 samples from a normal distribution with mean=10 and standard deviation sigma=3.", "numpy.random.normal(10, 3, size=3)", "Visualize a normal sample\nIn the example below, we will visualize the distribution of size=100 normal samples.", "# Sample random data\nr = numpy.random.normal(10, 3, size=100)\np = plt.hist(r)", "Interactive exploration of normal distribution\nIn the interactive tool below, we will explore how a random sample drawn from the normal distribution varies with:\n\nmean\nstandard deviation\nnumber of samples drawn (samples)\n\nIn addition to a histogram, this tool also shows a kernel density estimate (KDE). We can use KDEs to provide us with estimates of probability density functions, either for analysis and comparison or to use in further generative contexts to sample new values.", "def plot_normal(mean=0, standard_deviation=10, samples=100, window_range=100):\n # Check assertions\n assert (standard_deviation > 0)\n assert (samples > 1)\n \n # Sample random data and visualization\n r = numpy.random.normal(mean, standard_deviation, \n size=samples)\n p = plt.hist(r, normed=True)\n \n # Calculate the kernel density estimate and overplot it on the histogram\n kernel = scipy.stats.gaussian_kde(r)\n r_range = numpy.linspace(min(r), max(r))\n plt.plot(r_range, kernel(r_range))\n \n # Set the x limits\n plt.xlim(min(-window_range, min(r)), max(window_range, max(r)))\n\n# Create the widget\ninteract(plot_normal, mean=(-25, 25),\n standard_deviation = (1, 100),\n samples = (2, 1000),\n window_range = (1, 100))", "Poisson distribution\nThe Poisson distribution is, in Wikipedia's words:\n\na 
discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of\ntime and/or space if these events occur with a known average rate and independently of the time since the last event. The\nPoisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume.\n\nThe Poisson distribution's probability density function (PDF) is below:\n \nWe can sample a normal distribution using the numpy.random.poisson method below.\nDraw a Poisson sample\nIn the example below, we draw size=3 samples from a Poisson distribution with rate=5.", "numpy.random.poisson(5, size=3)", "Visualize a Poisson sample\nIn the example below, we will visualize the distribution of size=100 Poisson samples.", "# Sample random data\nr = numpy.random.poisson(5, size=100)\np = plt.hist(r)", "Interactive exploration of Poisson distribution\nIn the interactive tool below, we will explore how a random sample drawn from the Poisson distribution varies with:\n\nrate\nnumber of samples drawn (samples)\n\nIn addition to a histogram, this tool again shows a kernel density estimate (KDE). Compare the KDE to the probability density function above.", "def plot_poisson(rate=5, samples=100, window_range=20):\n # Check assertions\n assert (rate > 0)\n assert (samples > 1)\n \n # Sample random data\n r = numpy.random.poisson(rate, size=samples)\n f = plt.figure()\n p = plt.hist(r, normed=True)\n \n # Calculate the KDE and overplot\n kernel = scipy.stats.gaussian_kde(r)\n r_range = numpy.linspace(min(r), max(r))\n plt.plot(r_range, kernel(r_range))\n \n # Set the x limits\n plt.xlim(-1, max(max(r), window_range))\n\n# Create the ipython widget\ninteract(plot_poisson, rate=(1, 100),\n samples = (2, 10000),\n window_range = (1, 100))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
stuser/temp
AI_Academy/trend_micro_basic_data_intro.ipynb
mit
[ "Dataset Overview\n<p>Column descriptions:</p>\n<p>FileID: file identifier</p>\n<p>CustomerID: user device identifier</p>\n<p>QueryTs: timestamp at which the record occurred</p>\n<p>ProductID: product code of the user's device</p>", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.rcParams['font.family']='SimHei' # font that can display Chinese characters\n\n%matplotlib inline\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\n# Load in the train datasets\ntrain = pd.read_csv('input/training-set.csv', encoding = \"utf-8\", header=None)\ntest = pd.read_csv('input/testing-set.csv', encoding = \"utf-8\", header=None)\n\n# FileIDs officially excluded from the query logs\ntrain_exc = pd.read_csv('input/exception/exception_train.txt', encoding = \"utf-8\", header=None)\ntest_exc = pd.read_csv('input/exception/exception_testing.txt', encoding = \"utf-8\", header=None)\n\ntest_exc.head(2)\n\n# from sklearn.preprocessing import Imputer\n# imputer = Imputer(missing_values='NaN', axis=0, strategy='mean') \n# imputer.fit_transform(X[:,[1,3]])\n\n# training set - label: 0: benign, 1: malicious\ntrain.columns=['FileID','label']\ntrain.head(2)\n\n# testing set - AUC: Area Under ROC Curve\ntest.columns=['FileID','AUC']\ntest.head(2)\n\n# confirm that the excluded FileIDs do not appear in the training set\ntrain[train['FileID'].isin(train_exc[0])]\n\n# load the query log for 03/01 for inspection\nquery_0301 = pd.read_csv('input/query_log/0301.csv', encoding = \"utf-8\", header=None)\nquery_0301.columns=['FileID','CustomerID','QueryTs','ProductID']\nquery_0301['times'] = 1\n\nquery_0301.head(2)\n\nquery_0301.describe()\n\nquery_0301.info(memory_usage='deep')", "Pivot table analysis", "query_0301.pivot_table(values='times',index=['FileID'],columns='ProductID',aggfunc='sum')", "Aggregate functions\ncount, sum, mean, median, std (standard deviation), var (variance), first (first non-NA value), last (last non-NA value)", "query_0301.groupby(['FileID','CustomerID','ProductID'])[['times']].sum()", "Other useful tools", "# AUC calculation example\nimport numpy as np\nfrom sklearn import metrics\ny = np.array([1, 1, 2, 2])\npred = np.array([0.5, 1, 0.9, 1])\nfpr, tpr, thresholds = metrics.roc_curve(y, pred, pos_label=2)\nmetrics.auc(fpr, tpr)\n\n\n# timestamp conversion\nimport datetime\nprint(\n    datetime.datetime.fromtimestamp(\n        int(\"1488326402\")\n    ).strftime('%Y-%m-%d %H:%M:%S'))\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
xdnian/pyml
code/bonus/softmax-regression.ipynb
mit
[ "Sebastian Raschka, 2016\nhttps://github.com/1iyiwei/pyml\nNote that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).", "%load_ext watermark\n%watermark -a '' -u -d -v -p matplotlib,numpy,scipy\n\n# to install watermark just uncomment the following line:\n#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py\n\n%matplotlib inline", "Bonus Material - Softmax Regression\nSoftmax Regression (synonyms: Multinomial Logistic Regression, Maximum Entropy Classifier, or just Multi-class Logistic Regression) is a generalization of logistic regression that we can use for multi-class classification (under the assumption that the classes are mutually exclusive). In contrast, we use the (standard) Logistic Regression model in binary classification tasks.\nBelow is a schematic of a Logistic Regression model that we discussed in Chapter 3.\n\nIn Softmax Regression (SMR), we replace the sigmoid logistic function by the so-called softmax function $\\phi_{softmax}(\\cdot)$.\n$$P(y=j \\mid z^{(i)}) = \\phi_{softmax}(z^{(i)}) = \\frac{e^{z_j^{(i)}}}{\\sum_{c=1}^{k} e^{z_{c}^{(i)}}},$$\nwhere we define the net input z as \n$$z = w_1x_1 + ... + w_mx_m + b = \\sum_{l=0}^{m} w_l x_l + b = \\mathbf{w}^T\\mathbf{x} + b.$$ \n(w is the weight vector, $\\mathbf{x}$ is the feature vector of 1 training sample, and $b$ is the bias unit.) \nNow, this softmax function computes the probability that this training sample $\\mathbf{x}^{(i)}$ belongs to class $j$ given the weight and net input $z^{(i)}$. So, we compute the probability $p(y = j \\mid \\mathbf{x^{(i)}; w}_j)$ for each class label in $j = 1, \\ldots, k$. Note the normalization term in the denominator which causes these class probabilities to sum up to one.\n\nTo illustrate the concept of softmax, let us walk through a concrete example. 
Let's assume we have a training set consisting of 4 samples from 3 different classes (0, 1, and 2)\n\n$x_0 \\rightarrow \\text{class }0$\n$x_1 \\rightarrow \\text{class }1$\n$x_2 \\rightarrow \\text{class }2$\n$x_3 \\rightarrow \\text{class }2$", "import numpy as np\ny = np.array([0, 1, 2, 2])", "First, we want to encode the class labels into a format that we can more easily work with; we apply one-hot encoding:", "y_enc = (np.arange(np.max(y) + 1) == y[:, None]).astype(float)\nprint('one-hot encoding:\\n', y_enc)", "A sample that belongs to class 0 (the first row) has a 1 in the first cell, a sample that belongs to class 2 has a 1 in the third cell of its row, and so forth.\nNext, let us define the feature matrix of our 4 training samples. Here, we assume that our dataset consists of 2 features; thus, we create a 4x2 dimensional matrix of our samples and features.\nSimilarly, we create a 2x3 dimensional weight matrix (one row per feature and one column for each class).", "X = np.array([[0.1, 0.5],\n              [1.1, 2.3],\n              [-1.1, -2.3],\n              [-1.5, -2.5]])\n\nW = np.array([[0.1, 0.2, 0.3],\n              [0.1, 0.2, 0.3]])\n\nbias = np.array([0.01, 0.1, 0.1])\n\nprint('Inputs X:\\n', X)\nprint('\\nWeights W:\\n', W)\nprint('\\nbias:\\n', bias)", "To compute the net input, we multiply the 4x2 feature matrix X with the 2x3 (n_features x n_classes) weight matrix W, which yields a 4x3 output matrix (n_samples x n_classes) to which we then add the bias unit: \n$$\\mathbf{Z} = \\mathbf{X}\\mathbf{W} + \\mathbf{b}.$$", "X = np.array([[0.1, 0.5],\n              [1.1, 2.3],\n              [-1.1, -2.3],\n              [-1.5, -2.5]])\n\nW = np.array([[0.1, 0.2, 0.3],\n              [0.1, 0.2, 0.3]])\n\nbias = np.array([0.01, 0.1, 0.1])\n\nprint('Inputs X:\\n', X)\nprint('\\nWeights W:\\n', W)\nprint('\\nbias:\\n', bias)\n\ndef net_input(X, W, b):\n    return (X.dot(W) + b)\n\nnet_in = net_input(X, W, bias)\nprint('net input:\\n', net_in)", "Now, it's time to compute the softmax activation that we discussed earlier:\n$$P(y=j \\mid z^{(i)}) = 
\\phi_{softmax}(z^{(i)}) = \\frac{e^{z_j^{(i)}}}{\\sum_{c=1}^{k} e^{z_{c}^{(i)}}}.$$", "def softmax(z):\n    return (np.exp(z.T) / np.sum(np.exp(z), axis=1)).T\n\nsmax = softmax(net_in)\nprint('softmax:\\n', smax)", "As we can see, the values for each sample (row) nicely sum up to 1 now. E.g., we can say that the first sample \n[ 0.29450637  0.34216758  0.36332605] has a 29.45% probability to belong to class 0.\nNow, in order to turn these probabilities back into class labels, we could simply take the argmax-index position of each row:\n[[ 0.29450637  0.34216758  0.36332605] -> 2 \n[ 0.21290077  0.32728332  0.45981591] -> 2\n[ 0.42860913  0.33380113  0.23758974] -> 0\n[ 0.44941979  0.32962558  0.22095463]] -> 0", "def to_classlabel(z):\n    return z.argmax(axis=1)\n\nprint('predicted class labels: ', to_classlabel(smax))", "As we can see, our predictions are terribly wrong, since the correct class labels are [0, 1, 2, 2]. Now, in order to train our logistic model (e.g., via an optimization algorithm such as gradient descent), we need to define a cost function $J(\\cdot)$ that we want to minimize:\n$$J(\\mathbf{W}; \\mathbf{b}) = \\frac{1}{n} \\sum_{i=1}^{n} H(T_i, O_i),$$\nwhich is the average of all cross-entropies over our $n$ training samples. 
The cross-entropy function is defined as\n$$H(T_i, O_i) = -\\sum_m T_i \\cdot \\log(O_i).$$\nHere the $T$ stands for \"target\" (i.e., the true class labels) and the $O$ stands for output -- the computed probability via softmax; not the predicted class label.", "def cross_entropy(output, y_target):\n    return - np.sum(np.log(output) * (y_target), axis=1)\n\nxent = cross_entropy(smax, y_enc)\nprint('Cross Entropy:', xent)\n\ndef cost(output, y_target):\n    return np.mean(cross_entropy(output, y_target))\n\nJ_cost = cost(smax, y_enc)\nprint('Cost: ', J_cost)", "In order to learn our softmax model -- determining the weight coefficients -- via gradient descent, we then need to compute the derivative \n$$\\nabla \\mathbf{w}_j \\, J(\\mathbf{W}; \\mathbf{b}).$$\nI don't want to walk through the tedious details here, but this cost derivative turns out to be simply:\n$$\\nabla \\mathbf{w}_j \\, J(\\mathbf{W}; \\mathbf{b}) = \\frac{1}{n} \\sum^{n}_{i=0} \\big[\\mathbf{x}^{(i)}\\ \\big(O_i - T_i \\big) \\big]$$\nWe can then use the cost derivative to update the weights in opposite direction of the cost gradient with learning rate $\\eta$:\n$$\\mathbf{w}_j := \\mathbf{w}_j - \\eta \\nabla \\mathbf{w}_j \\, J(\\mathbf{W}; \\mathbf{b})$$ \nfor each class $j \\in \\{0, 1, ..., k\\}$\n(note that $\\mathbf{w}_j$ is the weight vector for the class $y=j$), and we update the bias units\n$$\\mathbf{b}_j := \\mathbf{b}_j - \\eta \\bigg[ \\frac{1}{n} \\sum^{n}_{i=0} \\big(O_i - T_i \\big) \\bigg].$$ \nAs a penalty against complexity, an approach to reduce the variance of our model and decrease the degree of overfitting by adding additional bias, we can further add a regularization term such as the L2 term with the regularization parameter $\\lambda$:\nL2: $\\frac{\\lambda}{2} ||\\mathbf{w}||_{2}^{2}$, \nwhere \n$$||\\mathbf{w}||_{2}^{2} = \\sum^{m}_{l=0} \\sum^{k}_{j=0} w_{l, j}^2$$\nso that our cost function becomes\n$$J(\\mathbf{W}; \\mathbf{b}) = \\frac{1}{n} \\sum_{i=1}^{n} H(T_i, O_i) + 
\\frac{\\lambda}{2} ||\\mathbf{w}||_{2}^{2}$$\nand we define the \"regularized\" weight update as\n$$\\mathbf{w}_j := \\mathbf{w}_j - \\eta \\big[\\nabla \\mathbf{w}_j \\, J(\\mathbf{W}) + \\lambda \\mathbf{w}_j \\big].$$\n(Please note that we don't regularize the bias term.)\nSoftmaxRegression Code\nBringing the concepts together, we could come up with an implementation as follows:", "# Sebastian Raschka 2016\n# Implementation of the mulitnomial logistic regression algorithm for\n# classification.\n\n# Author: Sebastian Raschka <sebastianraschka.com>\n#\n# License: BSD 3 clause\n\nimport numpy as np\nfrom time import time\n#from .._base import _BaseClassifier\n#from .._base import _BaseMultiClass\n\n\nclass SoftmaxRegression(object):\n\n \"\"\"Softmax regression classifier.\n\n Parameters\n ------------\n eta : float (default: 0.01)\n Learning rate (between 0.0 and 1.0)\n epochs : int (default: 50)\n Passes over the training dataset.\n Prior to each epoch, the dataset is shuffled\n if `minibatches > 1` to prevent cycles in stochastic gradient descent.\n l2 : float\n Regularization parameter for L2 regularization.\n No regularization if l2=0.0.\n minibatches : int (default: 1)\n The number of minibatches for gradient-based optimization.\n If 1: Gradient Descent learning\n If len(y): Stochastic Gradient Descent (SGD) online learning\n If 1 < minibatches < len(y): SGD Minibatch learning\n n_classes : int (default: None)\n A positive integer to declare the number of class labels\n if not all class labels are present in a partial training set.\n Gets the number of class labels automatically if None.\n random_seed : int (default: None)\n Set random state for shuffling and initializing the weights.\n\n Attributes\n -----------\n w_ : 2d-array, shape={n_features, 1}\n Model weights after fitting.\n b_ : 1d-array, shape={1,}\n Bias unit after fitting.\n cost_ : list\n List of floats, the average cross_entropy for each epoch.\n\n \"\"\"\n def __init__(self, eta=0.01, 
epochs=50,\n l2=0.0,\n minibatches=1,\n n_classes=None,\n random_seed=None):\n\n self.eta = eta\n self.epochs = epochs\n self.l2 = l2\n self.minibatches = minibatches\n self.n_classes = n_classes\n self.random_seed = random_seed\n\n def _fit(self, X, y, init_params=True):\n if init_params:\n if self.n_classes is None:\n self.n_classes = np.max(y) + 1\n self._n_features = X.shape[1]\n\n self.b_, self.w_ = self._init_params(\n weights_shape=(self._n_features, self.n_classes),\n bias_shape=(self.n_classes,),\n random_seed=self.random_seed)\n self.cost_ = []\n\n y_enc = self._one_hot(y=y, n_labels=self.n_classes, dtype=np.float)\n\n for i in range(self.epochs):\n for idx in self._yield_minibatches_idx(\n n_batches=self.minibatches,\n data_ary=y,\n shuffle=True):\n # givens:\n # w_ -> n_feat x n_classes\n # b_ -> n_classes\n\n # net_input, softmax and diff -> n_samples x n_classes:\n net = self._net_input(X[idx], self.w_, self.b_)\n softm = self._softmax(net)\n diff = softm - y_enc[idx]\n mse = np.mean(diff, axis=0)\n\n # gradient -> n_features x n_classes\n grad = np.dot(X[idx].T, diff)\n \n # update in opp. 
direction of the cost gradient\n self.w_ -= (self.eta * grad +\n self.eta * self.l2 * self.w_)\n self.b_ -= (self.eta * np.sum(diff, axis=0))\n\n # compute cost of the whole epoch\n net = self._net_input(X, self.w_, self.b_)\n softm = self._softmax(net)\n cross_ent = self._cross_entropy(output=softm, y_target=y_enc)\n cost = self._cost(cross_ent)\n self.cost_.append(cost)\n return self\n\n def fit(self, X, y, init_params=True):\n \"\"\"Learn model from training data.\n\n Parameters\n ----------\n X : {array-like, sparse matrix}, shape = [n_samples, n_features]\n Training vectors, where n_samples is the number of samples and\n n_features is the number of features.\n y : array-like, shape = [n_samples]\n Target values.\n init_params : bool (default: True)\n Re-initializes model parametersprior to fitting.\n Set False to continue training with weights from\n a previous model fitting.\n\n Returns\n -------\n self : object\n\n \"\"\"\n if self.random_seed is not None:\n np.random.seed(self.random_seed)\n self._fit(X=X, y=y, init_params=init_params)\n self._is_fitted = True\n return self\n \n def _predict(self, X):\n probas = self.predict_proba(X)\n return self._to_classlabels(probas)\n \n def predict(self, X):\n \"\"\"Predict targets from X.\n\n Parameters\n ----------\n X : {array-like, sparse matrix}, shape = [n_samples, n_features]\n Training vectors, where n_samples is the number of samples and\n n_features is the number of features.\n\n Returns\n ----------\n target_values : array-like, shape = [n_samples]\n Predicted target values.\n\n \"\"\"\n if not self._is_fitted:\n raise AttributeError('Model is not fitted, yet.')\n return self._predict(X)\n\n def predict_proba(self, X):\n \"\"\"Predict class probabilities of X from the net input.\n\n Parameters\n ----------\n X : {array-like, sparse matrix}, shape = [n_samples, n_features]\n Training vectors, where n_samples is the number of samples and\n n_features is the number of features.\n\n Returns\n ----------\n Class 
probabilties : array-like, shape= [n_samples, n_classes]\n\n \"\"\"\n net = self._net_input(X, self.w_, self.b_)\n softm = self._softmax(net)\n return softm\n\n def _net_input(self, X, W, b):\n return (X.dot(W) + b)\n\n def _softmax(self, z):\n return (np.exp(z.T) / np.sum(np.exp(z), axis=1)).T\n\n def _cross_entropy(self, output, y_target):\n return - np.sum(np.log(output) * (y_target), axis=1)\n\n def _cost(self, cross_entropy):\n L2_term = self.l2 * np.sum(self.w_ ** 2)\n cross_entropy = cross_entropy + L2_term\n return 0.5 * np.mean(cross_entropy)\n\n def _to_classlabels(self, z):\n return z.argmax(axis=1)\n \n def _init_params(self, weights_shape, bias_shape=(1,), dtype='float64',\n scale=0.01, random_seed=None):\n \"\"\"Initialize weight coefficients.\"\"\"\n if random_seed:\n np.random.seed(random_seed)\n w = np.random.normal(loc=0.0, scale=scale, size=weights_shape)\n b = np.zeros(shape=bias_shape)\n return b.astype(dtype), w.astype(dtype)\n \n def _one_hot(self, y, n_labels, dtype):\n \"\"\"Returns a matrix where each sample in y is represented\n as a row, and each column represents the class label in\n the one-hot encoding scheme.\n\n Example:\n\n y = np.array([0, 1, 2, 3, 4, 2])\n mc = _BaseMultiClass()\n mc._one_hot(y=y, n_labels=5, dtype='float')\n\n np.array([[1., 0., 0., 0., 0.],\n [0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 0.],\n [0., 0., 0., 1., 0.],\n [0., 0., 0., 0., 1.],\n [0., 0., 1., 0., 0.]])\n\n \"\"\"\n mat = np.zeros((len(y), n_labels))\n for i, val in enumerate(y):\n mat[i, val] = 1\n return mat.astype(dtype) \n \n def _yield_minibatches_idx(self, n_batches, data_ary, shuffle=True):\n indices = np.arange(data_ary.shape[0])\n\n if shuffle:\n indices = np.random.permutation(indices)\n if n_batches > 1:\n remainder = data_ary.shape[0] % n_batches\n\n if remainder:\n minis = np.array_split(indices[:-remainder], n_batches)\n minis[-1] = np.concatenate((minis[-1],\n indices[-remainder:]),\n axis=0)\n else:\n minis = np.array_split(indices, 
n_batches)\n\n else:\n minis = (indices,)\n\n for idx_batch in minis:\n yield idx_batch\n \n def _shuffle_arrays(self, arrays):\n \"\"\"Shuffle arrays in unison.\"\"\"\n r = np.random.permutation(len(arrays[0]))\n return [ary[r] for ary in arrays]", "Example 1 - Gradient Descent", "from mlxtend.data import iris_data\nfrom mlxtend.evaluate import plot_decision_regions\nimport matplotlib.pyplot as plt\n\n# Loading Data\n\nX, y = iris_data()\nX = X[:, [0, 3]] # sepal length and petal width\n\n# standardize\nX[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std()\nX[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std()\n\nlr = SoftmaxRegression(eta=0.01, epochs=10, minibatches=1, random_seed=0)\nlr.fit(X, y)\n\nplot_decision_regions(X, y, clf=lr)\nplt.title('Softmax Regression - Gradient Descent')\nplt.show()\n\nplt.plot(range(len(lr.cost_)), lr.cost_)\nplt.xlabel('Iterations')\nplt.ylabel('Cost')\nplt.show()", "Continue training for another 800 epochs by calling the fit method with init_params=False.", "lr.epochs = 800\n\nlr.fit(X, y, init_params=False)\n\nplot_decision_regions(X, y, clf=lr)\nplt.title('Softmax Regression - Stochastic Gradient Descent')\nplt.show()\n\nplt.plot(range(len(lr.cost_)), lr.cost_)\nplt.xlabel('Iterations')\nplt.ylabel('Cost')\nplt.show()", "Predicting Class Labels", "y_pred = lr.predict(X)\nprint('Last 3 Class Labels: %s' % y_pred[-3:])", "Predicting Class Probabilities", "y_pred = lr.predict_proba(X)\nprint('Last 3 Class Labels:\\n %s' % y_pred[-3:])", "Example 2 - Stochastic Gradient Descent", "from mlxtend.data import iris_data\nfrom mlxtend.evaluate import plot_decision_regions\nfrom mlxtend.classifier import SoftmaxRegression\nimport matplotlib.pyplot as plt\n\n# Loading Data\n\nX, y = iris_data()\nX = X[:, [0, 3]] # sepal length and petal width\n\n# standardize\nX[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std()\nX[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std()\n\nlr = SoftmaxRegression(eta=0.05, epochs=200, minibatches=len(y), 
random_seed=0)\nlr.fit(X, y)\n\nplot_decision_regions(X, y, clf=lr)\nplt.title('Softmax Regression - Stochastic Gradient Descent')\nplt.show()\n\nplt.plot(range(len(lr.cost_)), lr.cost_)\nplt.xlabel('Iterations')\nplt.ylabel('Cost')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ajhenrikson/phys202-2015-work
assignments/assignment07/AlgorithmsEx02.ipynb
mit
[ "Algorithms Exercise 2\nImports", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nimport numpy as np", "Peak finding\nWrite a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:\n\nProperly handle local maxima at the endpoints of the input array.\nReturn a Numpy array of integer indices.\nHandle any Python iterable as input.", "def find_peaks(a):\n    \"\"\"Find the indices of the local maxima in a sequence.\"\"\"\n    a = np.array(list(a))\n    peaks = []\n    for i in range(len(a)):\n        # Treat out-of-bounds neighbors as smaller than the current element,\n        # so that maxima at the endpoints are detected as well\n        left = a[i - 1] if i > 0 else a[i] - 1\n        right = a[i + 1] if i < len(a) - 1 else a[i] - 1\n        if a[i] > left and a[i] > right:\n            peaks.append(i)\n    return np.array(peaks, dtype=int)\n\np1 = find_peaks([2,0,1,0,2,0,1])\nassert np.allclose(p1, np.array([0,2,4,6]))\np2 = find_peaks(np.array([0,1,2,3]))\nassert np.allclose(p2, np.array([3]))\np3 = find_peaks([3,2,1,0])\nassert np.allclose(p3, np.array([0]))", "Here is a string with the first 10000 digits of $\\pi$ (after the decimal). Write code to perform the following:\n\nConvert that string to a Numpy array of integers.\nFind the indices of the local maxima in the digits of $\\pi$.\nUse np.diff to find the distances between consecutive local maxima.\nVisualize that distribution using an appropriately customized histogram.", "from sympy import pi, N\npi_digits_str = str(N(pi, 10001))[2:]\n\na = np.array([int(digit) for digit in pi_digits_str])\nm = find_peaks(a)\ns = np.diff(m)\nplt.hist(s, bins=50)\nplt.xlabel('Distance between consecutive local maxima')\nplt.ylabel('Count')\n\nassert True # use this for grading the pi digits histogram" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
josiahdavis/python_data_analysis
python_data_analysis.ipynb
mit
[ "Python for Data Analysis\nThis is a hands-on workshop aimed at getting you comfortable with the syntax of core data analysis concepts in Python. Some background in base Python is useful, but not required to learn from this workshop.\n* Keystrokes for the IPython notebook\n* Reading and Summarizing Data\n* Filtering and Sorting Data\n* Modifying Columns\n* Handling Missing Values\n* EXERCISE: Working with drinks data\n* Indexing and Slicing Data\n* Analyzing across time\n* Split-Apply-Combine\n* Merging Data\n* Writing Data\n* Other Useful Features\nKeystrokes for the IPython Notebook\nThere are two modes: Command (enabled by esc) and Edit (enabled by enter). The table below has a quick reference of the main keystrokes that I will be using in the workshop. To get the full list go to Help -> Keyboard Shortcuts.\n| | Mac | PC |\n|:----------|:--------|:-------|\n| Command Mode | esc | |\n| Delete | d, d | |\n| Markdown | m | |\n| Run Cell | control, return | |\n| Run Cell and Insert Below | option, return | |\n| Insert Above | a | |\n| Insert Below | b | |\n| Edit Mode | return | Enter |\n| Run Cell | control, return | |\n| Run Cell and Insert Below | option, return | |\nReading and Summarizing Data\nReading Data", "# Import the pandas and numpy libraries\nimport pandas as pd\nimport numpy as np\n\n# Read a file with an absolute path\nufo = pd.read_csv('/Users/josiahdavis/Documents/GitHub/python_data_analysis/ufo_sightings.csv')\n\n# Alternatively, read the file using a relative path\nufo = pd.read_csv('ufo_sightings.csv')\n\n# Alternatively, read in the file from the internet\nufo = pd.read_csv('https://raw.githubusercontent.com/josiahdavis/python_data_analysis/master/ufo_sightings.csv')\n\n# Get help on a function\nhelp(pd.read_csv)", "Summarize the data that was just read in", "ufo.head(10)    # Look at the top 10 observations\n\nufo.tail()      # Bottom x observations (defaults to 5)\n\nufo.describe()  # get summary statistics for columns\n\nufo.index       # \"the 
index\" (aka \"the labels\")\n\nufo.columns     # column names (which is \"an index\")\n\nufo.dtypes      # data types of each column\n\nufo.values      # underlying numpy array\n\nufo.info()      # concise summary", "Filtering and Sorting Data", "# Select a single column\nufo['State']\n\nufo.State # This is equivalent\n\n# Select multiple columns\nufo[['State', 'City','Shape Reported']]\n\nmy_cols = ['State', 'City', 'Shape Reported']\nufo[my_cols] # This is equivalent\n\n# Logical filtering\nufo[ufo.State == 'TX'] # Select only rows where State == 'TX'\n\nufo[~(ufo.State == 'TX')] # Select everything where the test fails\n\nufo[ufo.State != 'TX'] # Same thing as before\n\nufo.City[ufo.State == 'TX'] # Select only the City column where State == 'TX'\n\nufo[ufo.State == 'TX'].City # Same thing as before\n\nufo[(ufo.State == 'CA') | (ufo.State =='TX')] # Select only records where State is 'CA' or State is 'TX'\n\nufo_dallas = ufo[(ufo.City == 'Dallas') & (ufo.State =='TX')] # Select only Dallas, TX records\n\nufo[ufo.City.isin(['Austin','Dallas', 'Houston'])] # Select only Austin, Dallas, or Houston records", "Sorting", "ufo.State.order() # only works for a Series\n\nufo.sort_index(inplace=True) # sort rows by label\n\nufo.sort_index(ascending=False, inplace=False)\n\nufo.sort_index(by='State') # sort rows by specific column\n\nufo.sort_index(by=['State', 'Shape Reported']) # sort by multiple columns\n\nufo.sort_index(by=['State', 'Shape Reported'], ascending=[False, True], inplace=True) # specify sort order", "Modifying Columns", "# Add a new column as a function of existing columns\nufo['Location'] = ufo['City'] + ', ' + ufo['State']\nufo.head()\n\n# Rename columns\nufo.rename(columns={'Colors Reported':'Colors', 'Shape Reported':'Shape'}, inplace=True)\nufo.head()\n\n# Hide a column (temporarily)\nufo.drop(['Location'], axis=1)\n\n# Delete a column (permanently)\ndel ufo['Location']", "Handling Missing Values", "# Missing values are often just excluded\nufo.describe() # Excludes 
missing values\n\nufo.Shape.value_counts() # Excludes missing values\n\nufo.Shape.value_counts(dropna=False) # Includes missing values\n\n# Find missing values in a Series\nufo.Shape.isnull() # True if NaN, False otherwise\n\nufo.Shape.notnull() # False if NaN, True otherwise\n\nufo.Shape.isnull().sum() # Count the missing values\n\n# Find missing values in a DataFrame\nufo.isnull()\n\n# Count the missing values in a DataFrame\nufo.isnull().sum()\n\n# Exclude rows with missing values in a dataframe\nufo[(ufo.Shape.notnull()) & (ufo.Colors.notnull())]\n\n# Drop missing values\nufo.dropna() # Drop a row if ANY values are missing\n\nufo.dropna(how='all') # Drop a row only if ALL values are missing\n\n# Fill in missing values for a series\nufo.Colors.fillna(value='Unknown', inplace=True)\n\n# Fill in missing values for the DataFrame\nufo.fillna(value='Unknown', inplace=True)", "Exercise: Working with the Drinks Data\n(Be on the lookout for a curveball question)", "# Read drinks.csv (in the 'drinks_data' folder) into a DataFrame called 'drinks'\n\n# Print the first 10 rows\n\n# Examine the data types of all columns\n\n# Print the 'beer_servings' Series\n\n# Calculate the average 'beer_servings' for the entire dataset\n\n# Print all columns, but only show rows where the country is in Europe\n\n# Calculate the average 'beer_servings' for all of Europe\n\n# Only show European countries with 'wine_servings' greater than 300\n\n# Determine which 10 countries have the highest 'total_litres_of_pure_alcohol'\n\n# Determine which country has the highest value for 'beer_servings'\n\n# Count the number of occurrences of each 'continent' value and see if it looks correct\n\n# Determine which countries do not have continent designations\n\n# Determine the number of countries per continent. 
Does it look right?", "Solutions", "# Read drinks.csv (in the drinks_data folder) into a DataFrame called 'drinks'\ndrinks = pd.read_csv('drinks_data/drinks.csv')\n\n# Print the first 10 rows\ndrinks.head(10)\n\n# Examine the data types of all columns\ndrinks.dtypes\ndrinks.info()\n\n# Print the 'beer_servings' Series\ndrinks.beer_servings\ndrinks['beer_servings']\n\n# Calculate the average 'beer_servings' for the entire dataset\ndrinks.describe() # Mean is provided in the summary from describe()\ndrinks.beer_servings.mean() # Alternatively, calculate the mean directly\n\n# Print all columns, but only show rows where the country is in Europe\ndrinks[drinks.continent=='EU']\n\n# Calculate the average 'beer_servings' for all of Europe (hint: use the .mean() function)\ndrinks[drinks.continent=='EU'].beer_servings.mean()\n\n# Only show European countries with 'wine_servings' greater than 300\ndrinks[(drinks.continent=='EU') & (drinks.wine_servings > 300)]\n\n# Determine which 10 countries have the highest 'total_litres_of_pure_alcohol'\ndrinks.sort_index(by='total_litres_of_pure_alcohol').tail(10)\n\n# Determine which country has the highest value for 'beer_servings' (hint: use the .max() function)\ndrinks[drinks.beer_servings==drinks.beer_servings.max()].country\n\ndrinks[['country', 'beer_servings']].sort_index(by='beer_servings', ascending=False).head(1) # This is equivalent\n\n# Count the number of occurrences of each 'continent' value and see if it looks correct\ndrinks.continent.value_counts()\n\n# Determine which countries do not have continent designations\ndrinks[drinks.continent.isnull()].country\n\n# The missing continents are due to the \"na_filter = True\" default within pd.read_csv():\n# the continent code 'NA' (North America) is parsed as a missing value\nhelp(pd.read_csv)", "Indexing and Slicing Data\nCreate a new index", "ufo.set_index('State', inplace=True)\nufo.index\n\nufo.index.is_unique\n\nufo.sort_index(inplace=True)\nufo.head(25)", "loc: filter rows by LABEL, and select columns by LABEL", "ufo.loc['FL',:] # row with label FL\n\nufo.loc[:'FL',:] # 
rows with labels through 'FL'\n\nufo.loc['FL':'HI', 'City':'Shape'] # rows FL, columns 'City' through 'Shape Reported'\n\nufo.loc[:, 'City':'Shape'] # all rows, columns 'City' through 'Shape Reported'\n\nufo.loc[['FL', 'TX'], ['City','Shape']] # rows FL and TX, columns 'City' and 'Shape Reported'", "iloc: filter rows by POSITION, and select columns by POSITION", "ufo.iloc[0,:] # row with 0th position (first row)\n\nufo.iloc[0:3,:] # rows with positions 0 through 2 (not 3)\n\nufo.iloc[0:3, 0:3] # rows and columns with positions 0 through 2\n\nufo.iloc[:, 0:3] # all rows, columns with positions 0 through 2\n\nufo.iloc[[0,2], [0,1]] # 1st and 3rd row, 1st and 2nd column", "Add another level to the index", "ufo.set_index('City', inplace=True, append=True) # Adds to existing index\nufo.sort_index(inplace=True)\nufo.head(25)\n\nufo.loc[['ND', 'WY'],:] # Select all records from ND AND WY\n\nufo.loc['ND':'WY',:] # Select all records from ND THROUGH WY\n\nufo.loc[('ND', 'Bismarck'),:] # Select all records from Bismarck, ND\n\nufo.loc[('ND', 'Bismarck'):('ND','Casselton'),:] # Select all records from Bismarck, ND through Casselton, ND\n\nufo.reset_index(level='City', inplace=True) # Remove the City from the index\nufo.head()\n\nufo.reset_index(inplace=True) # Remove all columns from the index\n\nufo.head()", "Analyzing Across Time", "# Examine the data types\nufo.dtypes\n\n# Convert Time column to date-time format (defined in Pandas)\n# Reference: https://docs.python.org/2/library/time.html#time.strftime\nufo['Time'] = pd.to_datetime(ufo['Time'], format=\"%m/%d/%Y %H:%M\")\nufo.dtypes\n\n# Compute date range\nufo.Time.min()\n\nufo.Time.max()\n\n# Slice using time\nufo[ufo.Time > pd.datetime(1995, 1, 1)] # Slice using the time\n\nufo[(ufo.Time > pd.datetime(1995, 1, 1)) & (ufo.State =='TX')] # Works with other logical conditions, as expected\n\n# Set the index to time\nufo.set_index('Time', inplace=True)\nufo.sort_index(inplace=True)\nufo.head()\n\n# Access particular 
times/ranges\nufo.loc['1995',:]\nufo.loc['1995-01',:]\nufo.loc['1995-01-01',:]\n\n# Access range of times/ranges\nufo.loc['1995':,:]\nufo.loc['1995':'1996',:]\nufo.loc['1995-12-01':'1996-01',:]\n\n# Access elements of the timestamp\n# Reference: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-date-components\nufo.index.year\nufo.index.month\nufo.index.weekday\nufo.index.day\nufo.index.time\nufo.index.hour\n\n# Create a new variable with time element\nufo['Year'] = ufo.index.year\nufo['Month'] = ufo.index.month\nufo['Day'] = ufo.index.day\nufo['Weekday'] = ufo.index.weekday\nufo['Hour'] = ufo.index.hour", "Split-Apply-Combine\n\nDrawing by Hadley Wickham", "# For each year, calculate the count of sightings\nufo.groupby('Year').City.count()\n\n# For each Shape, calculate the first sighting, last sighting, and range of sightings. \nufo.groupby('Shape').Year.min()\nufo.groupby('Shape').Year.max()\n\n# Specify the variable outside of the apply statement\nufo.groupby('Shape').Year.apply(lambda x: x.max())\n\n# Specify the variable within the apply statement\nufo.groupby('Shape').apply(lambda x: x.Year.max() - x.Year.min())\n\n# Specify a custom function to use in the apply statement\ndef get_max_year(df):\n    try:\n        return df.Year.max()\n    except:\n        return ''\nufo.groupby('Shape').apply(lambda x: get_max_year(x))\n\n# Split/combine can occur on multiple columns at the same time\nufo.groupby(['Weekday','Hour']).City.count()", "Merging Data", "# Read in population data\npop = pd.read_csv('population.csv')\npop.head()\n\nufo.head()\n\n# Merge the data together\nufo = pd.merge(ufo, pop, on='State', how = 'left')\n\n# Specify keys if columns have different names\nufo = pd.merge(ufo, pop, left_on='State', right_on='State', how = 'left')\n\n# Observe the new Population column\nufo.head()\n\n# Check for values that didn't make it (length)\nufo.Population.isnull().sum()\n\n# Check for values that didn't make it (values)\nufo[ufo.Population.isnull()]\n\n# Change the 
records that didn't match up using np.where command\nufo['State'] = np.where(ufo['State'] == 'Fl', 'FL', ufo['State'])\n\n# Alternatively, change the state using native python string functionality\nufo['State'] = ufo['State'].str.upper()\n\n# Merge again, this time get all of the records\nufo = pd.merge(ufo, pop, on='State', how = 'left')", "Writing Data", "ufo.to_csv('ufo_new.csv') \n\nufo.to_csv('ufo_new.csv', index=False) # Index is not included in the csv", "Other Useful Features\nDetect duplicate rows", "ufo.duplicated() # Series of logicals\n\nufo.duplicated().sum() # count of duplicates\n\nufo[ufo.duplicated(['State','Time'])] # only show duplicates\n\nufo[ufo.duplicated()==False] # only show unique rows\n\nufo_unique = ufo[~ufo.duplicated()] # only show unique rows\n\nufo.duplicated(['State','Time']).sum() # columns for identifying duplicates", "Map existing values to other values", "ufo['Weekday'] = ufo.Weekday.map({ 0:'Mon', 1:'Tue', 2:'Wed', \n 3:'Thu', 4:'Fri', 5:'Sat', \n 6:'Sun'})", "Pivot rows to columns", "ufo.groupby(['Weekday','Hour']).City.count()\n\nufo.groupby(['Weekday','Hour']).City.count().unstack(0) # Make first row level a column\n\nufo.groupby(['Weekday','Hour']).City.count().unstack(1) # Make second row level a column\n# Note: .stack() transforms columns to rows", "Randomly sample a DataFrame", "idxs = np.random.rand(len(ufo)) < 0.66 # create a Series of booleans\ntrain = ufo[idxs] # will contain about 66% of the rows\ntest = ufo[~idxs] # will contain the remaining rows", "Replace all instances of a value", "ufo.Shape.replace('DELTA', 'TRIANGLE') # replace values in a Series\n\nufo.replace('PYRAMID', 'TRIANGLE') # replace values throughout a DataFrame", "One more thing...", "%matplotlib inline\n\n# Plot the number of sightings over time\nufo.groupby('Year').City.count().plot( kind='line', \n color='r', \n linewidth=2, \n title='UFO Sightings by year')\n\n# Plot the number of sightings over the day of week and time of 
day\nufo.groupby(['Weekday','Hour']).City.count().unstack(0).plot( kind='line', \n linewidth=2,\n title='UFO Sightings by Time of Day')\n\n# Plot multiple plots on the same plot (plots need to be in column format) \nufo_fourth = ufo[(ufo.Year.isin([2011, 2012, 2013, 2014])) & (ufo.Month == 7)]\nufo_fourth.groupby(['Year', 'Day']).City.count().unstack(0).plot( kind = 'bar',\n subplots=True,\n figsize=(7,9))", "References\nUFO data\n* Scraped from: http://www.nuforc.org/webreports.html\n* Write up about this data: http://josiahjdavis.com/2015/01/01/identifying-with-ufos/ \nDrinks data\n* Downloaded from: https://github.com/fivethirtyeight/data/tree/master/alcohol-consumption\n* Write up about this data: http://fivethirtyeight.com/datalab/dear-mona-followup-where-do-people-drink-the-most-beer-wine-and-spirits/" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jaidevd/inmantec_fdp
notebooks/day3/04_clustering.ipynb
mit
[ "Clustering: Unsupervised Grouping of Data", "import numpy as np\nfrom sklearn.datasets import load_iris, load_digits\nfrom sklearn.metrics import f1_score\nfrom sklearn.cluster import KMeans\nfrom sklearn.decomposition import PCA\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\n%matplotlib inline\n\niris = load_iris()\nX = iris.data\ny = iris.target\n\nprint(X.shape)\n\npca = PCA(n_components=2)\nX = pca.fit_transform(X)", "Fit a simple KMeans cluster model in iris dataset", "km = KMeans()\nkm.fit(X)\nclusters = km.predict(X)\nplt.scatter(X[:, 0], X[:, 1], c=clusters, alpha=0.5)\nplt.scatter(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1],\n c=np.arange(km.n_clusters), marker='x', s=150, linewidth=3)", "Q: What went wrong?", "km = KMeans(n_clusters=3)\nkm.fit(X)\nclusters = km.predict(X)\nplt.scatter(X[:, 0], X[:, 1], c=clusters, alpha=0.5)\nplt.scatter(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1],\n c=np.arange(km.n_clusters), marker='x', s=150, linewidth=3)\n\nprint(\"Clustering F1 Score: %f\" % f1_score(y, clusters))", "Q: What went wrong?", "print(y)\n\nprint(clusters)\n\nc_mapped = clusters.copy()\nc_mapped[clusters == 1] = 0\nc_mapped[clusters == 2] = 1\nc_mapped[clusters == 0] = 2\n\nprint(\"Clustering F1 Score: %f\" % f1_score(y, c_mapped))", "Always interpret results with caution!\nClustering as Data Compression: Vector Quantization", "from scipy.misc import face\nracoon = face(gray=True)\nfig, ax = plt.subplots(nrows=1, ncols=2)\nax[0].imshow(racoon, cmap=plt.cm.gray)\nax[0].set_xticks([])\nax[0].set_yticks([])\n_ = ax[1].hist(racoon.reshape(-1, 1), bins=256,\n normed=True, color='.5', edgecolor='.5')\nplt.tight_layout()\n\nX = racoon.reshape(-1, 1)\nkm = KMeans(n_clusters=5)\nkm.fit(X)\nvalues = km.cluster_centers_.ravel()\nlabels = km.labels_\nrac_compressed = np.choose(labels, values)\nrac_compressed.shape = racoon.shape\nfig, ax = plt.subplots(nrows=1, ncols=2)\nax[0].imshow(rac_compressed, 
cmap=plt.cm.gray)\nax[0].set_xticks([])\nax[0].set_yticks([])\n_ = ax[1].hist(rac_compressed.reshape(-1, 1), bins=256,\n normed=True, color='.5', edgecolor='.5')\nplt.tight_layout()", "Overview of clustering methods in sklearn\n\nExercise: Apply KMeans clustering on MNIST digits dataset and figure out which cluster belongs to which digit\nHint: Try to visualize the average of all images that belong to one cluster", "digits = load_digits()\nX = digits.data\ny = digits.target\n\n# enter code here" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
metpy/MetPy
v0.9/_downloads/56e68110d2faf6be8284d896c8f4cd23/Natural_Neighbor_Verification.ipynb
bsd-3-clause
[ "%matplotlib inline", "Natural Neighbor Verification\nWalks through the steps of Natural Neighbor interpolation to validate that the algorithmic\napproach taken in MetPy is correct.\nFind natural neighbors visual test\nA triangle is a natural neighbor for a point if the\ncircumscribed circle &lt;https://en.wikipedia.org/wiki/Circumscribed_circle&gt;_ of the\ntriangle contains that point. It is important that we correctly grab the correct triangles\nfor each point before proceeding with the interpolation.\nAlgorithmically:\n\n\nWe place all of the grid points in a KDTree. These provide worst-case O(n) time\n complexity for spatial searches.\n\n\nWe generate a Delaunay Triangulation &lt;https://docs.scipy.org/doc/scipy/\n reference/tutorial/spatial.html#delaunay-triangulations&gt;_\n using the locations of the provided observations.\n\n\nFor each triangle, we calculate its circumcenter and circumradius. Using\n KDTree, we then assign each grid a triangle that has a circumcenter within a\n circumradius of the grid's location.\n\n\nThe resulting dictionary uses the grid index as a key and a set of natural\n neighbor triangles in the form of triangle codes from the Delaunay triangulation.\n This dictionary is then iterated through to calculate interpolation values.\n\n\nWe then traverse the ordered natural neighbor edge vertices for a particular\n grid cell in groups of 3 (n - 1, n, n + 1), and perform calculations to generate\n proportional polygon areas.\n\n\nCircumcenter of (n - 1), n, grid_location\n Circumcenter of (n + 1), n, grid_location\nDetermine what existing circumcenters (ie, Delaunay circumcenters) are associated\n with vertex n, and add those as polygon vertices. 
Calculate the area of this polygon.\n\n\nIncrement the current edges to be checked, i.e.:\n n - 1 = n, n = n + 1, n + 1 = n + 2\n\n\nRepeat steps 5 & 6 until all of the edge combinations of 3 have been visited.\n\n\nRepeat steps 4 through 7 for each grid cell.", "import matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.spatial import ConvexHull, Delaunay, delaunay_plot_2d, Voronoi, voronoi_plot_2d\nfrom scipy.spatial.distance import euclidean\n\nfrom metpy.interpolate import geometry\nfrom metpy.interpolate.points import natural_neighbor_point", "For a test case, we generate 10 random points and observations, where the\nobservation values are just the x coordinate value times the y coordinate\nvalue divided by 1000.\nWe then create two test points (grid 0 & grid 1) at which we want to\nestimate a value using natural neighbor interpolation.\nThe locations of these observations are then used to generate a Delaunay triangulation.", "np.random.seed(100)\n\npts = np.random.randint(0, 100, (10, 2))\nxp = pts[:, 0]\nyp = pts[:, 1]\nzp = (pts[:, 0] * pts[:, 0]) / 1000\n\ntri = Delaunay(pts)\n\nfig, ax = plt.subplots(1, 1, figsize=(15, 10))\nax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility\ndelaunay_plot_2d(tri, ax=ax)\n\nfor i, zval in enumerate(zp):\n ax.annotate('{} F'.format(zval), xy=(pts[i, 0] + 2, pts[i, 1]))\n\nsim_gridx = [30., 60.]\nsim_gridy = [30., 60.]\n\nax.plot(sim_gridx, sim_gridy, '+', markersize=10)\nax.set_aspect('equal', 'datalim')\nax.set_title('Triangulation of observations and test grid cell '\n 'natural neighbor interpolation values')\n\nmembers, tri_info = geometry.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))\n\nval = natural_neighbor_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0],\n tri_info)\nax.annotate('grid 0: {:.3f}'.format(val), xy=(sim_gridx[0] + 2, sim_gridy[0]))\n\nval = natural_neighbor_point(xp, yp, zp, (sim_gridx[1], sim_gridy[1]), tri, members[1],\n 
tri_info)\nax.annotate('grid 1: {:.3f}'.format(val), xy=(sim_gridx[1] + 2, sim_gridy[1]))", "Using the circumcenter and circumcircle radius information from\n:func:metpy.interpolate.geometry.find_natural_neighbors, we can visually\nexamine the results to see if they are correct.", "def draw_circle(ax, x, y, r, m, label):\n th = np.linspace(0, 2 * np.pi, 100)\n nx = x + r * np.cos(th)\n ny = y + r * np.sin(th)\n ax.plot(nx, ny, m, label=label)\n\n\nmembers, tri_info = geometry.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))\n\nfig, ax = plt.subplots(1, 1, figsize=(15, 10))\nax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility\ndelaunay_plot_2d(tri, ax=ax)\nax.plot(sim_gridx, sim_gridy, 'ks', markersize=10)\n\nfor i, info in tri_info.items():\n x_t = info['cc'][0]\n y_t = info['cc'][1]\n\n if i in members[1] and i in members[0]:\n draw_circle(ax, x_t, y_t, info['r'], 'm-', str(i) + ': grid 1 & 2')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n elif i in members[0]:\n draw_circle(ax, x_t, y_t, info['r'], 'r-', str(i) + ': grid 0')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n elif i in members[1]:\n draw_circle(ax, x_t, y_t, info['r'], 'b-', str(i) + ': grid 1')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n else:\n draw_circle(ax, x_t, y_t, info['r'], 'k:', str(i) + ': no match')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=9)\n\nax.set_aspect('equal', 'datalim')\nax.legend()", "What?....the circle from triangle 8 looks pretty darn close. 
Why isn't\ngrid 0 included in that circle?", "x_t, y_t = tri_info[8]['cc']\nr = tri_info[8]['r']\n\nprint('Distance between grid0 and Triangle 8 circumcenter:',\n euclidean([x_t, y_t], [sim_gridx[0], sim_gridy[0]]))\nprint('Triangle 8 circumradius:', r)", "Lets do a manual check of the above interpolation value for grid 0 (southernmost grid)\nGrab the circumcenters and radii for natural neighbors", "cc = np.array([tri_info[m]['cc'] for m in members[0]])\nr = np.array([tri_info[m]['r'] for m in members[0]])\n\nprint('circumcenters:\\n', cc)\nprint('radii\\n', r)", "Draw the natural neighbor triangles and their circumcenters. Also plot a Voronoi diagram\n&lt;https://docs.scipy.org/doc/scipy/reference/tutorial/spatial.html#voronoi-diagrams&gt;_\nwhich serves as a complementary (but not necessary)\nspatial data structure that we use here simply to show areal ratios.\nNotice that the two natural neighbor triangle circumcenters are also vertices\nin the Voronoi plot (green dots), and the observations are in the polygons (blue dots).", "vor = Voronoi(list(zip(xp, yp)))\n\nfig, ax = plt.subplots(1, 1, figsize=(15, 10))\nax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility\nvoronoi_plot_2d(vor, ax=ax)\n\nnn_ind = np.array([0, 5, 7, 8])\nz_0 = zp[nn_ind]\nx_0 = xp[nn_ind]\ny_0 = yp[nn_ind]\n\nfor x, y, z in zip(x_0, y_0, z_0):\n ax.annotate('{}, {}: {:.3f} F'.format(x, y, z), xy=(x, y))\n\nax.plot(sim_gridx[0], sim_gridy[0], 'k+', markersize=10)\nax.annotate('{}, {}'.format(sim_gridx[0], sim_gridy[0]), xy=(sim_gridx[0] + 2, sim_gridy[0]))\nax.plot(cc[:, 0], cc[:, 1], 'ks', markersize=15, fillstyle='none',\n label='natural neighbor\\ncircumcenters')\n\nfor center in cc:\n ax.annotate('{:.3f}, {:.3f}'.format(center[0], center[1]),\n xy=(center[0] + 1, center[1] + 1))\n\ntris = tri.points[tri.simplices[members[0]]]\nfor triangle in tris:\n x = [triangle[0, 0], triangle[1, 0], triangle[2, 0], triangle[0, 0]]\n y = [triangle[0, 1], triangle[1, 1], 
triangle[2, 1], triangle[0, 1]]\n ax.plot(x, y, ':', linewidth=2)\n\nax.legend()\nax.set_aspect('equal', 'datalim')\n\n\ndef draw_polygon_with_info(ax, polygon, off_x=0, off_y=0):\n \"\"\"Draw one of the natural neighbor polygons with some information.\"\"\"\n pts = np.array(polygon)[ConvexHull(polygon).vertices]\n for i, pt in enumerate(pts):\n ax.plot([pt[0], pts[(i + 1) % len(pts)][0]],\n [pt[1], pts[(i + 1) % len(pts)][1]], 'k-')\n\n avex, avey = np.mean(pts, axis=0)\n ax.annotate('area: {:.3f}'.format(geometry.area(pts)), xy=(avex + off_x, avey + off_y),\n fontsize=12)\n\n\ncc1 = geometry.circumcenter((53, 66), (15, 60), (30, 30))\ncc2 = geometry.circumcenter((34, 24), (53, 66), (30, 30))\ndraw_polygon_with_info(ax, [cc[0], cc1, cc2])\n\ncc1 = geometry.circumcenter((53, 66), (15, 60), (30, 30))\ncc2 = geometry.circumcenter((15, 60), (8, 24), (30, 30))\ndraw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2], off_x=-9, off_y=3)\n\ncc1 = geometry.circumcenter((8, 24), (34, 24), (30, 30))\ncc2 = geometry.circumcenter((15, 60), (8, 24), (30, 30))\ndraw_polygon_with_info(ax, [cc[1], cc1, cc2], off_x=-15)\n\ncc1 = geometry.circumcenter((8, 24), (34, 24), (30, 30))\ncc2 = geometry.circumcenter((34, 24), (53, 66), (30, 30))\ndraw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2])", "Put all of the generated polygon areas and their affiliated values in arrays.\nCalculate the total area of all of the generated polygons.", "areas = np.array([60.434, 448.296, 25.916, 70.647])\nvalues = np.array([0.064, 1.156, 2.809, 0.225])\ntotal_area = np.sum(areas)\nprint(total_area)", "For each polygon area, calculate its percent of total area.", "proportions = areas / total_area\nprint(proportions)", "Multiply the percent of total area by the respective values.", "contributions = proportions * values\nprint(contributions)", "The sum of this array is the interpolation value!", "interpolation_value = np.sum(contributions)\nfunction_output = natural_neighbor_point(xp, yp, zp, (sim_gridx[0], 
sim_gridy[0]), tri,\n members[0], tri_info)\n\nprint(interpolation_value, function_output)", "The values are slightly different due to truncating the area values in\nthe above visual example to the 3rd decimal place.", "plt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/production_ml/labs/post_training_quant.ipynb
apache-2.0
[ "Post-training dynamic range quantization\nLearning Objectives\n 1. We will learn how to train a TensorFlow model.\n 2. We will learn how to load the model into an interpreter.\n 3. We will learn how to evaluate the models.\nIntroduction\nTensorFlow Lite now supports\nconverting weights to 8 bit precision as part of model conversion from\ntensorflow graphdefs to TensorFlow Lite's flat buffer format. Dynamic range quantization achieves a 4x reduction in the model size. In addition, TFLite supports on the fly quantization and dequantization of activations to allow for:\n\nUsing quantized kernels for faster implementation when available.\nMixing of floating-point kernels with quantized kernels for different parts\n of the graph.\n\nThe activations are always stored in floating point. For ops that\nsupport quantized kernels, the activations are quantized to 8 bits of precision\ndynamically prior to processing and are de-quantized to float precision after\nprocessing. Depending on the model being converted, this can give a speedup over\npure floating point computation.\nIn contrast to\nquantization aware training\n, the weights are quantized post training and the activations are quantized dynamically \nat inference in this method.\nTherefore, the model weights are not retrained to compensate for quantization\ninduced errors. It is important to check the accuracy of the quantized model to\nensure that the degradation is acceptable.\nThis tutorial trains an MNIST model from scratch, checks its accuracy in\nTensorFlow, and then converts the model into a Tensorflow Lite flatbuffer\nwith dynamic range quantization. Finally, it checks the\naccuracy of the converted model and compare it to the original float model.\nEach learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. 
Refer to the solution for reference.\nBuild an MNIST model\nSetup", "# Importing necessary modules\nimport logging\nlogging.getLogger(\"tensorflow\").setLevel(logging.DEBUG)\n\nimport tensorflow as tf\nfrom tensorflow import keras\nimport numpy as np\nimport pathlib", "Train a TensorFlow model", "# Load MNIST dataset\nmnist = keras.datasets.mnist\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n\n# Normalize the input image so that each pixel value is between 0 and 1.\ntrain_images = train_images / 255.0\ntest_images = test_images / 255.0\n\n# Define the model architecture\nmodel = keras.Sequential([\n keras.layers.InputLayer(input_shape=(28, 28)),\n keras.layers.Reshape(target_shape=(28, 28, 1)),\n keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),\n keras.layers.MaxPooling2D(pool_size=(2, 2)),\n keras.layers.Flatten(),\n keras.layers.Dense(10)\n])\n\n# Train the digit classification model\nmodel.compile(optimizer='adam',\n loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\n# TODO 1 - Your code goes here.\n", "For the example, since you trained the model for just a single epoch, it only trains to ~96% accuracy.\nConvert to a TensorFlow Lite model\nUsing the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model.\nNow load the model using the TFLiteConverter:", "converter = tf.lite.TFLiteConverter.from_keras_model(model)\ntflite_model = converter.convert()", "Write it out to a tflite file:", "tflite_models_dir = pathlib.Path(\"/tmp/mnist_tflite_models/\")\ntflite_models_dir.mkdir(exist_ok=True, parents=True)\n\ntflite_model_file = tflite_models_dir/\"mnist_model.tflite\"\ntflite_model_file.write_bytes(tflite_model)", "To quantize the model on export, set the optimizations flag to optimize for size:", "converter.optimizations = [tf.lite.Optimize.DEFAULT]\ntflite_quant_model = converter.convert()\ntflite_model_quant_file = 
tflite_models_dir/\"mnist_model_quant.tflite\"\ntflite_model_quant_file.write_bytes(tflite_quant_model)", "Note how the resulting file, is approximately 1/4 the size.", "!ls -lh {tflite_models_dir}", "Run the TFLite models\nRun the TensorFlow Lite model using the Python TensorFlow Lite\nInterpreter.\nLoad the model into an interpreter", "# TODO 2 - Your code goes here.\n\n\ninterpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))\ninterpreter_quant.allocate_tensors()", "Test the model on one image", "# Here, expanding the shape of an array\ntest_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)\n\ninput_index = interpreter.get_input_details()[0][\"index\"]\noutput_index = interpreter.get_output_details()[0][\"index\"]\n\ninterpreter.set_tensor(input_index, test_image)\ninterpreter.invoke()\npredictions = interpreter.get_tensor(output_index)\n\nimport matplotlib.pylab as plt\n\n# Displaying data as an image\nplt.imshow(test_images[0])\ntemplate = \"True:{true}, predicted:{predict}\"\n_ = plt.title(template.format(true= str(test_labels[0]),\n predict=str(np.argmax(predictions[0]))))\nplt.grid(False)", "Evaluate the models", "# A helper function to evaluate the TF Lite model using \"test\" dataset.\ndef evaluate_model(interpreter):\n input_index = interpreter.get_input_details()[0][\"index\"]\n output_index = interpreter.get_output_details()[0][\"index\"]\n\n # Run predictions on every image in the \"test\" dataset.\n prediction_digits = []\n for test_image in test_images:\n # Pre-processing: add batch dimension and convert to float32 to match with\n # the model's input data format.\n # TODO 3 - Your code goes here.\n\n # Run inference.\n interpreter.invoke()\n\n # Post-processing: remove batch dimension and find the digit with highest\n # probability.\n output = interpreter.tensor(output_index)\n digit = np.argmax(output()[0])\n prediction_digits.append(digit)\n\n # Compare prediction results with ground truth labels to 
calculate accuracy.\n accurate_count = 0\n for index in range(len(prediction_digits)):\n if prediction_digits[index] == test_labels[index]:\n accurate_count += 1\n accuracy = accurate_count * 1.0 / len(prediction_digits)\n\n return accuracy\n\nprint(evaluate_model(interpreter))", "Repeat the evaluation on the dynamic range quantized model to obtain:", "print(evaluate_model(interpreter_quant))", "In this example, the compressed model has no difference in the accuracy.\nOptimizing an existing model\nResnets with pre-activation layers (Resnet-v2) are widely used for vision applications.\n Pre-trained frozen graph for resnet-v2-101 is available on\n Tensorflow Hub.\nYou can convert the frozen graph to a TensorFLow Lite flatbuffer with quantization by:", "import tensorflow_hub as hub\n\nresnet_v2_101 = tf.keras.Sequential([\n keras.layers.InputLayer(input_shape=(224, 224, 3)),\n hub.KerasLayer(\"https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4\")\n])\n\nconverter = tf.lite.TFLiteConverter.from_keras_model(resnet_v2_101)\n\n# Convert to TF Lite without quantization\nresnet_tflite_file = tflite_models_dir/\"resnet_v2_101.tflite\"\nresnet_tflite_file.write_bytes(converter.convert())\n\n# Convert to TF Lite with quantization\nconverter.optimizations = [tf.lite.Optimize.DEFAULT]\nresnet_quantized_tflite_file = tflite_models_dir/\"resnet_v2_101_quantized.tflite\"\nresnet_quantized_tflite_file.write_bytes(converter.convert())\n\n!ls -lh {tflite_models_dir}/*.tflite", "The model size reduces from 171 MB to 43 MB.\nThe accuracy of this model on imagenet can be evaluated using the scripts provided for TFLite accuracy measurement.\nThe optimized model top-1 accuracy is 76.8, the same as the floating point model." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mitchshack/data_analysis_with_python_and_pandas
4 - pandas Basics/4-0 General pandas Concepts.ipynb
apache-2.0
[ "General pandas Concepts", "import sys\nprint(sys.version)\nimport numpy as np\nprint(np.__version__)\nimport pandas as pd\nprint(pd.__version__)", "Now we’ve covered numpy the basis for pandas. We’ve covered some of the more advanced python concepts like list comprehensions and lambda functions. Let’s jump back to our roadmap.\nWe’ve covered the general ecosystem. We’ve covered a lot of numpy, now let’s get our hands dirty with some real data and actually using pandas. I hope you’ve watched the numpy videos that we covered earlier, they may seem academic but they’re really going to provide a fantastic foundation for what we’re going to learn now.\nNow I'm going to breeze through a couple of subjects right now. Don’t feel the need to take notes or even try this code yourself. You can if you like, but it’s mainly to introduce you to the power of pandas, not for you to copy.\nPandas is made up of a couple of core types.\nWe’ve got an index. The index is a way of querying the data in an array or Series or querying the data in a Series or DataFrame.", "pd.Index", "We’ve got the Series. The Series is like a 1 dimensional array in numpy. It has some helper functions and an index that allows for querying of the data in simple ways.\nWe can make a simple Series from a numpy array.", "pd.Series\n\nseries_ex = pd.Series(np.arange(26))\nseries_ex", "Now that we’ve created it. We can see it has an index, that we just talked about, as well as values. When we print these out, they should look similar - just like numpy arrays. Now here is where the series gets powerful.", "series_ex.index", "we can replace the index with our own index. 
In this example I’ll use the lower case values of ascii characters.", "import string\nlcase = string.ascii_lowercase\nucase = string.ascii_uppercase\nprint(lcase, ucase)\n\nlcase = list(lcase)\nucase = list(ucase)\nprint(lcase)\nprint(ucase)\n\nseries_ex.index = lcase\n\nseries_ex.index\n\nseries_ex", "Now we can query just like we would in an array. You can think of the Series like an extremely powerful array.\nWe can query either sections or specific values.", "series_ex.ix['d':'k']\n\nseries_ex.ix['f']", "Now don’t worry about the functions that I’m using. We’re going to go over those in detail - I just wanted to introduce the concept.\nWe’ve got the DataFrame which is like a matrix or series of series’. It also has an index (or multiple indexes).", "pd.DataFrame", "Let’s go ahead and create one. We’ll make it from the lowercase, uppercase, and a number range.", "letters = pd.DataFrame([lcase, ucase, list(range(26))])\nletters", "Just like a numpy array we can transpose it.", "letters = letters.transpose()\nletters.head()\n\nletters.columns\n\nletters.index", "But now that we have columns as well as an index, we can rename the columns to better describe and query the data.", "letters.columns = ['lowercase','uppercase','number']\n\nletters.lowercase\n\nletters['lowercase']", "We can even set up a date range to associate each letter with a date. Now obviously this isn’t too helpful for the alphabet, but this allows you to do some amazing things once you are analyzing real data.", "letters.index = pd.date_range('9/1/2012',periods=26)\n\nletters\n\nletters['9-10-2012':'9-15-2012']", "Now if you don’t have any experience with pandas this is going to seem like a lot! Don’t worry we’re going to cover everything in the coming videos, I just wanted to give you an introduction to the amazingly expressive power of pandas and python. We’ve seen the building blocks with the Index, the Series, and the DataFrame.\nNow let’s dive deeper into each one." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/data-engineering/demos/composer_gcf_trigger/composertriggered.ipynb
apache-2.0
[ "Triggering a Cloud Composer Pipeline with a Google Cloud Function\nIn this advanced lab you will learn how to create and run an Apache Airflow workflow in Cloud Composer that completes the following tasks:\n- Watches for new CSV data to be uploaded to a Cloud Storage bucket\n- A Cloud Function call triggers the Cloud Composer Airflow DAG to run when a new file is detected \n- The workflow finds the input file that triggered the workflow and executes a Cloud Dataflow job to transform and output the data to BigQuery\n- Moves the original input file to a different Cloud Storage bucket for storing processed files\nPart One: Create Cloud Composer environment and workflow\nFirst, create a Cloud Composer environment if you don't have one already by doing the following:\n1. In the Navigation menu under Big Data, select Composer\n2. Select Create\n3. Set the following parameters:\n - Name: mlcomposer\n - Location: us-central1\n - Other values at defaults\n4. Select Create\nThe environment creation process is completed when the green checkmark displays to the left of the environment name on the Environments page in the GCP Console.\nIt can take up to 20 minutes for the environment to complete the setup process. Move on to the next section - Create Cloud Storage buckets and BigQuery dataset.\nSet environment variables", "import os\nPROJECT = 'your-project-id' # REPLACE WITH YOUR PROJECT ID\nREGION = 'us-central1' # REPLACE WITH YOUR REGION e.g. us-central1\n\n# do not change these\nos.environ['PROJECT'] = PROJECT\nos.environ['REGION'] = REGION", "Create Cloud Storage buckets\nCreate two Cloud Storage Multi-Regional buckets in your project. 
\n- project-id_input\n- project-id_output\nRun the below to automatically create the buckets and load some sample data:", "%%bash\n## create GCS buckets\nexists=$(gsutil ls -d | grep -w gs://${PROJECT}_input/)\nif [ -n \"$exists\" ]; then\n echo \"Skipping the creation of input bucket.\"\nelse\n echo \"Creating input bucket.\"\n gsutil mb -l ${REGION} gs://${PROJECT}_input\n echo \"Loading sample data for later\"\n gsutil cp resources/usa_names.csv gs://${PROJECT}_input\nfi\n\nexists=$(gsutil ls -d | grep -w gs://${PROJECT}_output/)\nif [ -n \"$exists\" ]; then\n echo \"Skipping the creation of output bucket.\"\nelse\n echo \"Creating output bucket.\"\n gsutil mb -l ${REGION} gs://${PROJECT}_output\nfi", "Create BigQuery Destination Dataset and Table\nNext, we'll create a data sink to store the ingested data from GCS<br><br>\nCreate a new Dataset\n\nIn the Navigation menu, select BigQuery\nThen click on your qwiklabs project ID\nClick Create Dataset\nName your dataset ml_pipeline and leave other values at defaults\nClick Create Dataset\n\nCreate a new empty table\n\nClick on the newly created dataset\nClick Create Table\nFor Destination Table name specify ingest_table\n\nFor schema click Edit as Text and paste in the below schema\nstate: STRING,<br>\ngender: STRING,<br>\nyear: STRING,<br>\nname: STRING,<br>\nnumber: STRING,<br>\ncreated_date: STRING,<br>\nfilename: STRING,<br>\nload_dt: DATE<br><br>\n\n\nClick Create Table\n\n\nReview of Airflow concepts\nWhile your Cloud Composer environment is building, let’s discuss the sample file you’ll be using in this lab.\n<br><br>\nAirflow is a platform to programmatically author, schedule and monitor workflows\n<br><br>\nUse airflow to author workflows as directed acyclic graphs (DAGs) of tasks. 
The airflow scheduler executes your tasks on an array of workers while following the specified dependencies.\n<br><br>\nCore concepts\n\nDAG - A Directed Acyclic Graph is a collection of tasks, organised to reflect their relationships and dependencies.\nOperator - The description of a single task, it is usually atomic. For example, the BashOperator is used to execute bash command.\nTask - A parameterised instance of an Operator; a node in the DAG.\nTask Instance - A specific run of a task; characterised as: a DAG, a Task, and a point in time. It has an indicative state: running, success, failed, skipped, …<br><br>\nThe rest of the Airflow concepts can be found here.\n\nComplete the DAG file\nCloud Composer workflows are comprised of DAGs (Directed Acyclic Graphs). The code shown in simple_load_dag.py is the workflow code, also referred to as the DAG. \n<br><br>\nOpen the file now to see how it is built. Next will be a detailed look at some of the key components of the file.\n<br><br>\nTo orchestrate all the workflow tasks, the DAG imports the following operators:\n- DataFlowPythonOperator\n- PythonOperator\n<br><br>\nAction: <span style=\"color:blue\">Complete the # TODOs in the simple_load_dag.py DAG file below</span> file while you wait for your Composer environment to be setup.", "%%writefile simple_load_dag.py\n# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"A simple Airflow DAG that is triggered externally by a Cloud 
Function when a\nfile lands in a GCS bucket.\nOnce triggered the DAG performs the following steps:\n1. Triggers a Google Cloud Dataflow job with the input file information received\n from the Cloud Function trigger.\n2. Upon completion of the Dataflow job, the input file is moved to a\n gs://<target-bucket>/<success|failure>/YYYY-MM-DD/ location based on the\n status of the previous step.\n\"\"\"\n\nimport datetime\nimport logging\nimport os\n\nfrom airflow import configuration\nfrom airflow import models\nfrom airflow.contrib.hooks import gcs_hook\nfrom airflow.contrib.operators import dataflow_operator\nfrom airflow.operators import python_operator\nfrom airflow.utils.trigger_rule import TriggerRule\n\n# We set the start_date of the DAG to the previous date. This will\n# make the DAG immediately available for scheduling.\nYESTERDAY = datetime.datetime.combine(\n datetime.datetime.today() - datetime.timedelta(1),\n datetime.datetime.min.time())\n\n# We define some variables that we will use in the DAG tasks.\nSUCCESS_TAG = 'success'\nFAILURE_TAG = 'failure'\n\n# An Airflow variable called gcp_completion_bucket is required.\n# This variable will contain the name of the bucket to move the processed\n# file to.\n\n# '_names' must appear in CSV filename to be ingested (adjust as needed)\n# we are only looking for files with the exact name usa_names.csv (you can specify wildcards if you like)\nINPUT_BUCKET_CSV = 'gs://'+models.Variable.get('gcp_input_location')+'/usa_names.csv' \n\n# TODO: Populate the models.Variable.get() with the actual variable name for your output bucket\nCOMPLETION_BUCKET = 'gs://'+models.Variable.get('gcp_completion_bucket')\nDS_TAG = '{{ ds }}'\nDATAFLOW_FILE = os.path.join(\n configuration.get('core', 'dags_folder'), 'dataflow', 'process_delimited.py')\n\n# The following additional Airflow variables should be set:\n# gcp_project: Google Cloud Platform project id.\n# gcp_temp_location: Google Cloud Storage location to use for Dataflow temp 
location.\nDEFAULT_DAG_ARGS = {\n 'start_date': YESTERDAY,\n 'retries': 2,\n\n # TODO: Populate the models.Variable.get() with the variable name for your GCP Project\n 'project_id': models.Variable.get('gcp_project'),\n 'dataflow_default_options': {\n 'project': models.Variable.get('gcp_project'),\n\n # TODO: Populate the models.Variable.get() with the variable name for temp location\n 'temp_location': 'gs://'+models.Variable.get('gcp_temp_location'),\n 'runner': 'DataflowRunner'\n }\n}\n\n\ndef move_to_completion_bucket(target_bucket, target_infix, **kwargs):\n \"\"\"A utility method to move an object to a target location in GCS.\"\"\"\n # Here we establish a connection hook to GoogleCloudStorage.\n # Google Cloud Composer automatically provides a google_cloud_storage_default\n # connection id that is used by this hook.\n conn = gcs_hook.GoogleCloudStorageHook()\n\n # The external trigger (Google Cloud Function) that initiates this DAG\n # provides a dag_run.conf dictionary with event attributes that specify\n # the information about the GCS object that triggered this DAG.\n # Here we instead read the bucket name from an Airflow variable and use the\n # fixed object name usa_names.csv.\n source_bucket = models.Variable.get('gcp_input_location')\n source_object = 'usa_names.csv'\n completion_ds = kwargs['ds']\n\n target_object = os.path.join(target_infix, completion_ds, source_object)\n\n logging.info('Copying %s to %s',\n os.path.join(source_bucket, source_object),\n os.path.join(target_bucket, target_object))\n conn.copy(source_bucket, source_object, target_bucket, target_object)\n\n logging.info('Deleting %s',\n os.path.join(source_bucket, source_object))\n conn.delete(source_bucket, source_object)\n\n\n# Setting schedule_interval to None as this DAG is externally triggered by a Cloud Function.\n# The following Airflow variables should be set for this DAG to function:\n# bq_output_table: BigQuery table that should be used as the target for\n# Dataflow in 
<dataset>.<tablename> format.\n# e.g. lake.usa_names\n# input_field_names: Comma separated field names for the delimited input file.\n# e.g. state,gender,year,name,number,created_date\n\n# TODO: Name the DAG id GcsToBigQueryTriggered\nwith models.DAG(dag_id='GcsToBigQueryTriggered',\n description='A DAG triggered by an external Cloud Function',\n schedule_interval=None, default_args=DEFAULT_DAG_ARGS) as dag:\n # Args required for the Dataflow job.\n job_args = {\n 'input': INPUT_BUCKET_CSV,\n\n # TODO: Populate the models.Variable.get() with the variable name for BQ table\n 'output': models.Variable.get('bq_output_table'),\n\n # TODO: Populate the models.Variable.get() with the variable name for input field names\n 'fields': models.Variable.get('input_field_names'),\n 'load_dt': DS_TAG\n }\n\n # Main Dataflow task that will process and load the input delimited file.\n # TODO: Specify the type of operator we need to call to invoke DataFlow\n dataflow_task = dataflow_operator.DataFlowPythonOperator(\n task_id=\"process-delimited-and-push\",\n py_file=DATAFLOW_FILE,\n options=job_args)\n\n # Here we create two conditional tasks, one of which will be executed\n # based on whether the dataflow_task was a success or a failure.\n success_move_task = python_operator.PythonOperator(task_id='success-move-to-completion',\n python_callable=move_to_completion_bucket,\n # A success_tag is used to move\n # the input file to a success\n # prefixed folder.\n op_args=[models.Variable.get('gcp_completion_bucket'), SUCCESS_TAG],\n provide_context=True,\n trigger_rule=TriggerRule.ALL_SUCCESS)\n\n failure_move_task = python_operator.PythonOperator(task_id='failure-move-to-completion',\n python_callable=move_to_completion_bucket,\n # A failure_tag is used to move\n # the input file to a failure\n # prefixed folder.\n op_args=[models.Variable.get('gcp_completion_bucket'), FAILURE_TAG],\n provide_context=True,\n trigger_rule=TriggerRule.ALL_FAILED)\n\n # The success_move_task and 
failure_move_task are both downstream from the\n # dataflow_task.\n dataflow_task >> success_move_task\n dataflow_task >> failure_move_task\n", "Viewing environment information\nNow that you have a completed DAG, it's time to copy it to your Cloud Composer environment and finish the setup of your workflow.<br><br>\n1. Go back to Composer to check on the status of your environment.\n2. Once your environment has been created, click the name of the environment to see its details.\n<br><br>\nThe Environment details page provides information, such as the Airflow web UI URL, Google Kubernetes Engine cluster ID, name of the Cloud Storage bucket connected to the DAGs folder.\n<br><br>\nCloud Composer uses Cloud Storage to store Apache Airflow DAGs, also known as workflows. Each environment has an associated Cloud Storage bucket. Cloud Composer schedules only the DAGs in the Cloud Storage bucket.\nSetting Airflow variables\nOur DAG relies on variables to pass in values like the GCP Project. We can set these in the Admin UI.\nAirflow variables are an Airflow-specific concept that is distinct from environment variables. In this step, you'll set the following six Airflow variables used by the DAG we will deploy.", "## Run this to display which key value pairs to input\nimport pandas as pd\npd.DataFrame([\n ('gcp_project', PROJECT),\n ('gcp_input_location', PROJECT + '_input'),\n ('gcp_temp_location', PROJECT + '_output/tmp'),\n ('gcp_completion_bucket', PROJECT + '_output'),\n ('input_field_names', 'state,gender,year,name,number,created_date'),\n ('bq_output_table', 'ml_pipeline.ingest_table')\n], columns = ['Key', 'Value'])", "Option 1: Set the variables using the Airflow webserver UI\n\nIn your Airflow environment, select Admin > Variables\nPopulate each key value in the table with the required variables from the above table\n\nOption 2: Set the variables using the Airflow CLI\nThe next gcloud composer command executes the Airflow CLI sub-command variables. 
The sub-command passes the arguments to the gcloud command line tool.<br><br>\nTo set the six variables, run the gcloud composer command once for each row from the above table. Just as an example, to set the variable gcp_project you could do this:", "%%bash\ngcloud composer environments run ENVIRONMENT_NAME \\\n --location ${REGION} variables -- \\\n --set gcp_project ${PROJECT}", "Copy your Airflow bucket name\n\nNavigate to your Cloud Composer instance<br/><br/>\nSelect DAGs Folder<br/><br/>\nYou will be taken to the Google Cloud Storage bucket that Cloud Composer has created automatically for your Airflow instance<br/><br/>\nCopy the bucket name into the variable below (example: us-central1-composer-08f6edeb-bucket)", "AIRFLOW_BUCKET = 'us-central1-composer-21587538-bucket' # REPLACE WITH AIRFLOW BUCKET NAME\nos.environ['AIRFLOW_BUCKET'] = AIRFLOW_BUCKET", "Copy your Airflow files to your Airflow bucket", "%%bash\ngsutil cp simple_load_dag.py gs://${AIRFLOW_BUCKET}/dags # overwrite DAG file if it exists\ngsutil cp -r dataflow/process_delimited.py gs://${AIRFLOW_BUCKET}/dags/dataflow/ # copy Dataflow job to be run", "Navigating Using the Airflow UI\nTo access the Airflow web interface using the GCP Console:\n1. Go back to the Composer Environments page.\n2. In the Airflow webserver column for the environment, click the new window icon. \n3. The Airflow web UI opens in a new browser window. \nTrigger DAG run manually\nRunning your DAG manually ensures that it operates successfully even in the absence of triggered events. \n1. To trigger the DAG manually, click the play button under Links\n\nPart Two: Trigger DAG run automatically from a file upload to GCS\nNow that your manual workflow runs successfully, you will trigger it based on an external event. \nCreate a Cloud Function to trigger your workflow\nWe will be following this reference guide to set up our Cloud Function\n1. 
In the code block below, populate the project_id, location, and composer_environment variables with your own values\n2. Run the below code to get your CLIENT_ID (needed later)", "import google.auth\nimport google.auth.transport.requests\nimport requests\nimport six.moves.urllib.parse\n\n# Authenticate with Google Cloud.\n# See: https://cloud.google.com/docs/authentication/getting-started\ncredentials, _ = google.auth.default(\n scopes=['https://www.googleapis.com/auth/cloud-platform'])\nauthed_session = google.auth.transport.requests.AuthorizedSession(\n credentials)\n\nproject_id = 'your-project-id'\nlocation = 'us-central1'\ncomposer_environment = 'composer'\n\nenvironment_url = (\n 'https://composer.googleapis.com/v1beta1/projects/{}/locations/{}'\n '/environments/{}').format(project_id, location, composer_environment)\ncomposer_response = authed_session.request('GET', environment_url)\nenvironment_data = composer_response.json()\nairflow_uri = environment_data['config']['airflowUri']\n\n# The Composer environment response does not include the IAP client ID.\n# Make a second, unauthenticated HTTP request to the web server to get the\n# redirect URI.\nredirect_response = requests.get(airflow_uri, allow_redirects=False)\nredirect_location = redirect_response.headers['location']\n\n# Extract the client_id query parameter from the redirect.\nparsed = six.moves.urllib.parse.urlparse(redirect_location)\nquery_string = six.moves.urllib.parse.parse_qs(parsed.query)\nprint(query_string['client_id'][0])\n
Be sure to replace 'your-project-id'", "#Execute the following in Cloud Shell, it will not work here\ngcloud iam service-accounts add-iam-policy-binding \\\nyour-project-id@appspot.gserviceaccount.com \\\n--member=serviceAccount:your-project-id@appspot.gserviceaccount.com \\\n--role=roles/iam.serviceAccountTokenCreator", "Create the Cloud Function\n\nNavigate to Compute > Cloud Functions\nSelect Create function\nFor name specify 'gcs-dag-trigger-function'\nFor trigger type select 'Cloud Storage'\nFor event type select 'Finalize/Create'\nFor bucket, specify the input bucket you created earlier \n\n(Important: be sure to select the input bucket and not the output bucket to avoid an endless triggering loop)\npopulate index.js\nComplete the four required constants defined below in the index.js code and paste it into the Cloud Function editor (the js code will not run in this notebook). The constants are: \n- PROJECT_ID\n- CLIENT_ID (from earlier)\n- WEBSERVER_ID (part of Airflow webserver URL) \n- DAG_NAME (GcsToBigQueryTriggered)", "'use strict';\n\nconst fetch = require('node-fetch');\nconst FormData = require('form-data');\n\n/**\n * Triggered from a message on a Cloud Storage bucket.\n *\n * IAP authorization based on:\n * https://stackoverflow.com/questions/45787676/how-to-authenticate-google-cloud-functions-for-access-to-secure-app-engine-endpo\n * and\n * https://cloud.google.com/iap/docs/authentication-howto\n *\n * @param {!Object} data The Cloud Functions event data.\n * @returns {Promise}\n */\nexports.triggerDag = async data => {\n // Fill in your Composer environment information here.\n\n // The project that holds your function\n const PROJECT_ID = 'your-project-id';\n // Navigate to your webserver's login page and get this from the URL\n const CLIENT_ID = 'your-iap-client-id';\n // This should be part of your webserver's URL:\n // {tenant-project-id}.appspot.com\n const WEBSERVER_ID = 'your-tenant-project-id';\n // The name of the DAG you wish to trigger\n 
const DAG_NAME = 'GcsToBigQueryTriggered';\n\n // Other constants\n const WEBSERVER_URL = `https://${WEBSERVER_ID}.appspot.com/api/experimental/dags/${DAG_NAME}/dag_runs`;\n const USER_AGENT = 'gcf-event-trigger';\n const BODY = {conf: JSON.stringify(data)};\n\n // Make the request\n try {\n const iap = await authorizeIap(CLIENT_ID, PROJECT_ID, USER_AGENT);\n\n return makeIapPostRequest(\n WEBSERVER_URL,\n BODY,\n iap.idToken,\n USER_AGENT,\n iap.jwt\n );\n } catch (err) {\n throw new Error(err);\n }\n};\n\n/**\n * @param {string} clientId The client id associated with the Composer webserver application.\n * @param {string} projectId The id for the project containing the Cloud Function.\n * @param {string} userAgent The user agent string which will be provided with the webserver request.\n */\nconst authorizeIap = async (clientId, projectId, userAgent) => {\n const SERVICE_ACCOUNT = `${projectId}@appspot.gserviceaccount.com`;\n const JWT_HEADER = Buffer.from(\n JSON.stringify({alg: 'RS256', typ: 'JWT'})\n ).toString('base64');\n\n let jwt = '';\n let jwtClaimset = '';\n\n // Obtain an Oauth2 access token for the appspot service account\n const res = await fetch(\n `http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/${SERVICE_ACCOUNT}/token`,\n {\n headers: {'User-Agent': userAgent, 'Metadata-Flavor': 'Google'},\n }\n );\n const tokenResponse = await res.json();\n if (tokenResponse.error) {\n return Promise.reject(tokenResponse.error);\n }\n\n const accessToken = tokenResponse.access_token;\n const iat = Math.floor(new Date().getTime() / 1000);\n const claims = {\n iss: SERVICE_ACCOUNT,\n aud: 'https://www.googleapis.com/oauth2/v4/token',\n iat: iat,\n exp: iat + 60,\n target_audience: clientId,\n };\n jwtClaimset = Buffer.from(JSON.stringify(claims)).toString('base64');\n const toSign = [JWT_HEADER, jwtClaimset].join('.');\n\n const blob = await fetch(\n 
`https://iam.googleapis.com/v1/projects/${projectId}/serviceAccounts/${SERVICE_ACCOUNT}:signBlob`,\n {\n method: 'POST',\n body: JSON.stringify({\n bytesToSign: Buffer.from(toSign).toString('base64'),\n }),\n headers: {\n 'User-Agent': userAgent,\n Authorization: `Bearer ${accessToken}`,\n },\n }\n );\n const blobJson = await blob.json();\n if (blobJson.error) {\n return Promise.reject(blobJson.error);\n }\n\n // Request service account signature on header and claimset\n const jwtSignature = blobJson.signature;\n jwt = [JWT_HEADER, jwtClaimset, jwtSignature].join('.');\n const form = new FormData();\n form.append('grant_type', 'urn:ietf:params:oauth:grant-type:jwt-bearer');\n form.append('assertion', jwt);\n\n const token = await fetch('https://www.googleapis.com/oauth2/v4/token', {\n method: 'POST',\n body: form,\n });\n const tokenJson = await token.json();\n if (tokenJson.error) {\n return Promise.reject(tokenJson.error);\n }\n\n return {\n jwt: jwt,\n idToken: tokenJson.id_token,\n };\n};\n\n/**\n * @param {string} url The url that the post request targets.\n * @param {string} body The body of the post request.\n * @param {string} idToken Bearer token used to authorize the iap request.\n * @param {string} userAgent The user agent to identify the requester.\n */\nconst makeIapPostRequest = async (url, body, idToken, userAgent) => {\n const res = await fetch(url, {\n method: 'POST',\n headers: {\n 'User-Agent': userAgent,\n Authorization: `Bearer ${idToken}`,\n },\n body: JSON.stringify(body),\n });\n\n if (!res.ok) {\n const err = await res.text();\n throw new Error(err);\n }\n};", "populate package.json\nCopy and paste the below into package.json", "{\n \"name\": \"nodejs-docs-samples-functions-composer-storage-trigger\",\n \"version\": \"0.0.1\",\n \"dependencies\": {\n \"form-data\": \"^2.3.2\",\n \"node-fetch\": \"^2.2.0\"\n },\n \"engines\": {\n \"node\": \">=8.0.0\"\n },\n \"private\": true,\n \"license\": \"Apache-2.0\",\n \"author\": \"Google Inc.\",\n 
\"repository\": {\n \"type\": \"git\",\n \"url\": \"https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git\"\n },\n \"devDependencies\": {\n \"@google-cloud/nodejs-repo-tools\": \"^3.3.0\",\n \"mocha\": \"^6.0.0\",\n \"proxyquire\": \"^2.1.0\",\n \"sinon\": \"^7.2.7\"\n },\n \"scripts\": {\n \"test\": \"mocha test/*.test.js --timeout=20000\"\n }\n}", "For Function to execute, specify triggerDag (note: case sensitive)\nSelect Create\n\nUpload CSVs and Monitor\n\nPractice uploading and editing CSVs named usa_names.csv into your input bucket (note: the DAG filters to only ingest CSVs with 'usa_names.csv' as the filepath. Adjust this as needed in the DAG code.)\nTroubleshoot Cloud Function call errors by monitoring the logs. In the below screenshot we filter in Logging for our most recent Dataflow job and are scrolling through to ensure the job is processing and outputting records to BigQuery\n\n\n\nTroubleshoot Airflow workflow errors by monitoring the Browse > DAG Runs \n\nCongratulations!\nYou’ve have completed this advanced lab on triggering a workflow with a Cloud Function." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gpagliuca/pyfas
docs/notebooks/.ipynb_checkpoints/Tab_files-checkpoint.ipynb
gpl-3.0
[ "import pandas as pd\nimport pyfas as fa", "Tab files\nA tab file contains thermodynamic properties pre-calculated by a thermodynamic simulator like PVTsim. It is good practice to analyze these text files before using them. Unfortunately there are several file layouts (key, fixed, with just a fluid, etc.). The Tab class handles some (most?) of the possible cases but not necessarily all the combinations.\nThe only public method is export_all and it returns a pandas dataframe with the thermodynamic properties.\nAt this moment in time the dataframe obtained is not unique, it depends on the tab format and on the number of fluids in the original tab file. Room to improve here.\nTab file loading", "tab_path = '../../pyfas/test/test_files/'\nfname = '3P_single-fluid_key.tab'\ntab = fa.Tab(tab_path+fname)", "Extraction", "tab.export_all()\n\ntab.data", "Some key info about the tab file is provided as tab.metadata", "tab.metadata", "Plotting\nBelow is an example of a 3D plot of the liquid hydrocarbon viscosity", "import matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nimport itertools as it\n\ndef plot_property_keyword(pressure, temperature, thermo_property):\n fig = plt.figure(figsize=(16, 12))\n ax = fig.add_subplot(111, projection='3d')\n X = []\n Y = []\n for x, y in it.product(pressure, temperature):\n X.append(x/1e5)\n Y.append(y) \n ax.scatter(X, Y, thermo_property)\n ax.set_ylabel('Temperature [C]')\n ax.set_xlabel('Pressure [bar]')\n ax.set_xlim(0, )\n ax.set_title('ROHL')\n return fig\n\nplot_property_keyword(tab.metadata['p_array'],\n tab.metadata['t_array'],\n tab.data.T['ROHL'].values[0]) " ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
joashxu/NYCSubwayDataAnalysis
NYCSubway.ipynb
mit
[ "Analyzing NYC subway data\nThis project looks at the NYC Subway data and figures out if more people ride the subway when it is raining versus when it is not raining.\nThe data can be found at:\n\nOriginal data set at: https://www.dropbox.com/s/meyki2wl9xfa7yk/turnstile_data_master_with_weather.csv\nImproved data set at: https://www.dropbox.com/s/1lpoeh2w6px4diu/improved-dataset.zip?dl=0\n\n\nReferences\n\nFrost, Jim. Choosing Between a Nonparametric Test and a Parametric Test. http://blog.minitab.com/blog/adventures-in-statistics/choosing-between-a-nonparametric-test-and-a-parametric-test\nSPSS tutorial. Mann-Whitney U Test using SPSS. https://statistics.laerd.com/spss-tutorials/mann-whitney-u-test-using-spss-statistics.php\nFrost, Jim. Regression Analysis: How Do I Interpret R-squared and Assess the Goodness-of-Fit? http://blog.minitab.com/blog/adventures-in-statistics/regression-analysis-how-do-i-interpret-r-squared-and-assess-the-goodness-of-fit\nNIST. Are the model residuals well-behaved? http://www.itl.nist.gov/div898/handbook/pri/section2/pri24.htm\nOriginLab. Graphic Residual Analysis. http://www.originlab.com/doc/Origin-Help/Residual-Plot-Analysis\nGraphpad. One-tail vs. two-tail P values. http://graphpad.com/guides/prism/6/statistics/index.htm?one-tail_vs__two-tail_p_values.htm\nSkymark.com. Normal Test Plot. http://www.skymark.com/resources/tools/normal_test_plot.asp\nNau, Robert. Regression diagnostics: testing the assumptions of linear regression. 
http://people.duke.edu/~rnau/testing.htm\n\n\nStatistical Analysis", "%matplotlib inline\n\nimport numpy as np\nimport pandas\nimport matplotlib.pyplot as plt\n\ntry:\n import seaborn as sns\n \n sns.set_palette(\"deep\", desat=.6)\n sns.set_context(rc={\"figure.figsize\": (10, 6)})\nexcept ImportError:\n sns = None\n\ndf = pandas.read_csv('turnstile_data_master_with_weather.csv')", "I split up the data into two categories, with rain and without rain.\nI then summarize and plot the distributions of these two samples.", "gb = df.groupby('rain')\n\ngb.get_group(1).ENTRIESn_hourly\n\nwith_rain_df = df['ENTRIESn_hourly'][df['rain'] == 1]\nwithout_rain_df = df['ENTRIESn_hourly'][df['rain'] == 0]\n\n# Let's take a look at the summary of these two samples\nsummary = pandas.concat([with_rain_df.describe(), without_rain_df.describe()], axis=1)\nsummary.columns = ['With rain summary', 'Without rain summary']\n\nprint summary\n\nplot_description = ('This is the histogram of the ENTRIESn_hourly when it is raining vs when it is not raining.\\n'\n 'In both instances, they are not normally distributed and are positively skewed.')\n\nfigure, plot_list = plt.subplots(1, 2, sharey=True, figsize=(16, 8))\nfigure.subplots_adjust(wspace=0.08)\nfigure.text(0.08, 0.5, 'Frequency', va='center', rotation='vertical')\nfigure.text(0.5, 0.95, 'Histogram of ENTRIESn_hourly', ha='center')\nfigure.text(0.5, 0.01, plot_description, ha='center')\n\ndf_list = [(with_rain_df, 'Rain'), (without_rain_df, 'No rain')]\n\nfor idx, plot in enumerate(plot_list):\n hist, bins = np.histogram(df_list[idx][0], bins=60)\n width = 0.8 * (bins[1] - bins[0])\n center = (bins[:-1] + bins[1:]) / 2\n plot.bar(center, hist, align='center', width=width)\n plot.set_title(df_list[idx][1])\n plot.set_xlabel('ENTRIESn_hourly')", "Statistical test, tail and null hypothesis\nI perform the Mann-Whitney U test on the two samples. 
We will be performing a two-tail test with a .05 p-critical value, with the following hypotheses:\nThe null hypothesis is:\nThe distributions of both groups are identical, so that there is a 50% probability that an observation from a value randomly selected from one population exceeds an observation randomly selected from the other population.\nP(x > y) = 0.5\nAlternative hypothesis:\nThe distributions of both groups are not identical; the probability that an observation from a value randomly selected from one population exceeds an observation randomly selected from the other population is not 50%.\nP(x > y) != 0.5 \nStatistical test reasoning\nI am using this test because the samples are non-normal.\nGenerally, we use the Mann-Whitney U test if the following assumptions hold:\n\nThe dependent variable is ordinal or continuous\nThe independent variable (with rain, without rain) consists of two categorical groups\nThere is no relationship between the observations in each group or between the groups themselves\nThe two variables are not normally distributed\n\nResult\nBelow is the computed result:", "import scipy.stats\n\nU, p = scipy.stats.mannwhitneyu(with_rain_df, without_rain_df)\nprint \"U :\", U\nprint \"two-tail p :\", p * 2 # the module returns one-tail value, we need to multiply by 2 to get two-tail value", "We can see that the p < .05.\nInterpretation\nOur p-value of 0.049999825587 is less than our critical value of 0.05.\nWe conclude that there is a statistically significant difference between the distributions of the two samples. These results are significant at the .05 level.\n\nLinear Regression\nNow let's build a model so we can make predictions.\nApproach\nTo compute the coefficients theta and produce predictions for ENTRIESn_hourly in the regression model I chose to use OLS from Statsmodels.\nI added 3 helper functions here:\n\nA function that does linear regression,\nA function that calculates R^2. 
We need this to check our result, and\nA function that generates a list of prediction values based on the data and a list of features", "import statsmodels.api as sm\n\ndef linear_regression(features, values):\n \"\"\"\n Perform linear regression given a data set with an arbitrary number of features.\n \"\"\" \n features = sm.add_constant(features)\n model = sm.OLS(values, features)\n results = model.fit()\n intercept, params = results.params[0], results.params[1:]\n return intercept, params\n\ndef compute_r_squared(data, predictions):\n '''\n Compute r_squared given a data set and the predictions.\n '''\n \n data_mean = np.mean(data)\n ss_res = ((data - predictions)**2).sum()\n ss_tot = ((data - data_mean)**2).sum()\n r_squared = 1 - (ss_res/ss_tot)\n \n return r_squared\n\ndef calculate_predictions(dataframe, features):\n '''\n Generate predictions given a dataframe and features.\n '''\n \n # Add UNIT to features using dummy variables\n dummy_units = pandas.get_dummies(dataframe['UNIT'], prefix='unit')\n features = features.join(dummy_units)\n\n # Values\n values = dataframe['ENTRIESn_hourly']\n\n # Perform linear regression\n intercept, params = linear_regression(features, values)\n\n predictions = intercept + np.dot(features, params)\n\n return predictions, intercept, params", "Features\nI will try to compute predictions for several possible features.\nPlease check the code below for the features.\nI aim to have an R^2 score of 0.40 or better.\nThe features of interest are listed below along with the reason:\n\nrain : I think people will decide to ride the subway if it is raining\nprecipi : same as the above, it includes snow, drizzle, hail, etc.\nmeantempi : I think people will ride the subway more if it is cold\nfog : I think people will ride the subway more if it is foggy, it is probably not fun to drive\nmeanwindspdi: If it is windy, people who bike or walk might opt 
to take the subway\nHour : People tend to ride the subway at certain hours, for example to get to or back from work. \n\nI will create combinations out of these features, compute the predictions and get the R^2 value.\nIn addition to this, I have added dummy variables for 'UNIT' to the features.", "from itertools import combinations\n\n# Build our features list combination\n# feature_of_interest = ['rain', 'meantempi', 'fog', 'meanwindspdi', 'precipi', 'Hour']\n# features_list = []\n# for count in range(1, len(feature_of_interest) + 1):\n# for item in combinations(feature_of_interest, count):\n# features_list.append(list(item))\n\nfeatures_list = [['rain', 'meantempi', 'fog', 'meanwindspdi', 'precipi', 'Hour']]\n\nbest_score = {'score': 0}\nfor features in features_list:\n predictions, intercept, params = calculate_predictions(df, df[features])\n r2_score = compute_r_squared(df['ENTRIESn_hourly'], predictions)\n\n # Save the best score\n if r2_score > best_score['score']:\n best_score['score'] = r2_score\n best_score['feature_variables'] = features\n best_score['intercept'] = intercept\n best_score['param'] = params\n best_score['predictions_values'] = predictions\n\nprint \"\\n\\n\"\nprint \"The highest score :\", best_score['score']\nprint \"Feature variables for the highest score:\", ', '.join(best_score['feature_variables'])\nprint \"Coefficient for the highest score\"\nprint \" Intercept:\", best_score['intercept']\nprint \" Non-dummy Parameters:\", best_score['param'][:len(best_score['feature_variables'])]", "Results\nThe feature variables we used:\nrain, meantempi, fog, meanwindspdi, precipi, Hour\nThe parameters of the non-dummy features are:\n-32.26725174, -5.91215877, 120.27580878, 26.27992382, -22.61805652, 67.39739472\nThe R2 value is 0.458621326664\nInterpretation\nIn order to see the goodness of fit for the regression model, let's try to plot the residuals\n(that is, the difference between the original hourly entry data and the predicted values).", "figure = 
plt.figure()\nplt.title('Histogram of residual value')\nplt.xlabel('Residual value')\nplt.ylabel('Relative frequency')\n\n(df['ENTRIESn_hourly'] - best_score['predictions_values']).hist(bins=100)\n\nplot_description = 'Residual value histogram of the model. The plot shows long tails.'\nfigure.text(0.5, 0, plot_description, ha='center')\n", "The histogram has long tails, which suggests that there are some very large residuals. To make sure, I plot a normal probability plot.", "import scipy.stats\n\nfigure = plt.figure()\nprob_plot = scipy.stats.probplot(df['ENTRIESn_hourly'] - best_score['predictions_values'], plot=plt)\n\nplot_description = ('Normal probability plot of the residual values.\\n'\n 'The curve which starts below the normal line,\\n'\n 'bends to follow it, and ends above it indicates long tails.')\nfigure.text(0.5, -0.02, plot_description, ha='center')", "The plot confirms that we have a long-tails problem, which means we are seeing more variance than we would expect in a normal distribution.\nIn fact, if we simply plot the residual values we get:", "figure = plt.figure()\nplt.title('Residual value plot')\nplt.xlabel('Number of data')\nplt.ylabel('Residual value (data - prediction)')\n\nplt.plot(df['ENTRIESn_hourly'] - best_score['predictions_values']) \n\nplot_description = 'The plot shows that the residuals follow a cyclical pattern.'\nfigure.text(0.5, 0, plot_description, ha='center')", "It shows that there is some non-linearity in the data.\nThis should be addressed by designing a non-linear model.\nIn conclusion, we achieved the R^2 value that we set (> 0.40), but on further inspection we find that the linear model is not appropriate to predict ridership.\nVisualization\nLet's use some visualization to get some more answers from the data.\nFirst, let's see ridership by day of the week.", "import datetime\nfrom ggplot import *\n\ndf_with_weekday = df.copy()\ndf_with_weekday['weekday'] = df_with_weekday['DATEn'].apply(lambda x: 
datetime.datetime.strptime(x, '%Y-%m-%d').weekday())\n\ndays = [\"Monday\", \"Tuesday\", \"Wednesday\", \"Thursday\", \"Friday\", \"Saturday\", \"Sunday\"]\n\ndf_with_weekday = df_with_weekday[['weekday', 'ENTRIESn_hourly', 'EXITSn_hourly']].groupby('weekday').sum().reset_index()\nplot = ggplot(aes(x='weekday'), data=df_with_weekday) + \\\n geom_bar(aes(x='weekday', y='ENTRIESn_hourly'), stat='bar') + \\\n xlab('Day of the week') + \\\n ylab('Total entries') + \\\n scale_x_continuous(breaks=range(7), labels=days) + \\\n ggtitle('Ridership by day of the week')\n \nprint plot", "Now, let's try to see the top 20 stations with the highest total entries.", "unit_entries_df = df[['UNIT', 'ENTRIESn_hourly']].groupby(['UNIT']).sum().sort(['ENTRIESn_hourly'], ascending=[0]).reset_index()[:20]\nplot = ggplot(aes(x='UNIT', y='ENTRIESn_hourly'), data=unit_entries_df) + \\\n geom_bar(aes(x='UNIT', weight='ENTRIESn_hourly'), stat='bar') + \\\n xlab('Station') + \\\n ylab('Total entries') + \\\n ggtitle('Top 20 ridership per station')\n \nprint plot", "Another visualization. 
This time using matplotlib,\nshowing total ridership by weekday and hour.", "df_with_weekday = df.copy()\ndf_with_weekday['weekday'] = df_with_weekday['DATEn'].apply(lambda x: datetime.datetime.strptime(x, '%Y-%m-%d').weekday())\n\nweekday = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']\n\nfigure, plot_list = plt.subplots(1, 7, sharey=True, figsize=(16, 8))\nfigure.text(0.04, 0.5, 'Total entries', va='center', rotation='vertical')\nfigure.text(0.5, 0.95, 'Ridership throughout the day.', ha='center')\nfigure.subplots_adjust(wspace=0.08)\nfor day, item in enumerate(plot_list):\n total_per_hour = []\n # range(24) so that hour 23 is included as well\n for i in range(24):\n total_per_hour.append(df_with_weekday['ENTRIESn_hourly'][(df_with_weekday['weekday'] == day) & (df_with_weekday['Hour'] == i)].sum())\n item.plot(range(24), total_per_hour)\n item.set_xlim(0, 23)\n item.set_xlabel('Time of day')\n item.set_title(weekday[day])\n \nplot_description = ('This plot shows ridership throughout the day. We see more ridership on weekdays vs on weekends.\\n'\n 'The peak ridership is around 11AM and 8PM on weekdays and around 3PM and 8PM on weekends.')\nfigure.text(0.5, 0.01, plot_description, ha='center')", "It looks like there are more entries on weekdays than on weekends, and the routine is pretty similar across weekdays:\na peak at around 10AM and another one around 8PM.\n\nConclusions\nOur statistical analysis shows that there is a significant difference in ridership when it is raining vs when it is not raining.\nWe developed a model to predict the ridership. We used a linear model (OLS) and achieved the R^2 value that we set as the goal. However, after analyzing the residuals, we found that the model is not appropriate to predict the ridership. \n\nReflection\nDataset\nThe dataset, although big (131951 records), only covers 30 days of data. The data was collected from 2011-05-01 to 2011-05-30. This is probably not enough. It will be interesting to see data from multiple months. 
If, for instance, it rains a lot in April, can we see that the ridership in April is significantly different from the ridership in May?\nAnalysis\nWe used a linear regression method to make our models, but the resulting model does not seem to be appropriate to predict the ridership of the NYC subway. We need to address the non-linearity in the data.\nInitially I thought that maybe if I removed the Hour feature from the model features list (looking at figure 3.2, I conclude that this is what causes the cycle) the model would work, but I got the same result." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
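The helper `compute_r_squared` called in the notebook above is not shown in this excerpt. A minimal sketch of what such a helper might look like, inferred only from how it is called (observed values first, predictions second) — the notebook's actual implementation may differ:

```python
import numpy as np

def compute_r_squared(data, predictions):
    """R^2 = 1 - SS_res / SS_tot, with SS_tot taken around the data mean."""
    data = np.asarray(data, dtype=float)
    predictions = np.asarray(predictions, dtype=float)
    ss_res = np.sum((data - predictions) ** 2)
    ss_tot = np.sum((data - data.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Sanity checks: perfect predictions give 1, predicting the mean gives 0.
y = np.array([1.0, 2.0, 3.0, 4.0])
print(compute_r_squared(y, y))                     # 1.0
print(compute_r_squared(y, np.full(4, y.mean())))  # 0.0
```

A value of 0.4586, as reported above, therefore means the model explains roughly 46% of the variance in hourly entries around their mean.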
mediagestalt/Counting-Word-Frequencies
Counting Word Frequencies.ipynb
mit
[ "Counting Word Frequencies with Python\nThe debates that occur in the Canadian Parliament are transcribed and published in a document known as <i>Hansard</i>. Since 2006, <i>Hansard</i> has been available for public download in <code>.xml</code> file format. To date, this archive contains over 57 million words. By converting these transcripts from <code>.xml</code> to <code>.txt</code>, the files can be processed in numerous ways with the <i>Python</i> coding language.</p>\n<p>The purpose of this document is to illustrate the counting of specific words in the collection of text files that will be referred to from now on as the <b><span style=\"cursor:help;\" title=\"a large and structured set of texts\">corpus</span></b>. Terms that may be unfamiliar to the reader are highlighted in <b><span style=\"cursor:help;\" title=\"hover the mouse over the word to read the definition\">bold</span></b>, and contain a definition that can be accessed by hovering the mouse pointer over the word. While it is not necessary to read every line of code to understand the process, explanatory sections within the code are marked with a # and coloured light blue.\n\n### Part 1: Determining the word frequency for a single file\n\nBefore we can begin working with a piece of text, it must be loaded into and read by <i>Python</i>. Rather than altering (and potentially irreversibly changing) our original text file, we will work with the contents of the file as a <b><span style=\"cursor:help;\" title=\"any finite sequence of characters (i.e., letters, numerals, symbols and punctuation marks)\">string</span></b> of text contained within a <b><span style=\"cursor:help;\" title=\"a value that can change, depending on conditions or on information passed to the program\">variable</span></b>. 
Now we can close our original text file, keeping it intact.\n\nIn the next piece of code we will open the file that contains the textual transcripts of all of the Parliamentary debates that occurred in the House of Commons during the 39th sitting of Parliament.\n\nWhile working with one long string may be useful for other applications, our purposes require that we split the text into pieces. In this case, each word will become its own unique string.", "# 1. open the text file\ninfile = open('data/39.txt')\n# 2. read the file and assign it to the variable 'text'\ntext = infile.read()\n# 3. close the text file\ninfile.close()\n# 4. split the variable 'text' into distinct word strings\nwords = text.split()", "Now that we have loaded our file, we can begin to work on it. <i>Python</i> offers us a lot of pre-built tools to make the task of coding easier. Some of the most commonly used tools are known as <b><span style=\"cursor:help;\" title=\"a set of instructions that performs a specific task\">functions</span></b>. Functions are useful for automating tasks that would otherwise require a repetitive amount of coding. While <i>Python</i> has many built-in functions, the language's true power comes from the ability to define unique functions based on programming needs. In the code above, we've already used four <i>Python</i> functions: <code>open</code>, <code>read</code>, <code>close</code>, and <code>split</code>.</p>\n<p>In the next piece of code we will define our own function, called <code>count_in_list</code>. This function will allow us to count the occurrences of any word in the corpus.", "# 5. define the 'count_in_list' function\ndef count_in_list(item_to_count, list_to_search):\n \"Counts the number of occurrences of a specified word within a list of words\"\n number_of_hits = 0\n for item in list_to_search:\n if item == item_to_count:\n number_of_hits += 1\n return number_of_hits", "Now we can call the function for any word we choose. 
The next example shows that there are <u>392</u> occurrences of the word <code>privacy</code> contained in the transcripts for the 39th sitting of Parliament.", "# 6. here the function counts the instances of the word 'privacy'\nprint \"Instances of the word \\'privacy\\':\", (count_in_list(\"privacy\", words))", "Unfortunately, there are two distinct problems here, centred around the fact that our function is only counting the string <code>privacy</code> exactly as it appears.</p>\n<p>The first problem is that text strings are case-sensitive. If the word contains UPPERCASE and lowercase letters, the word that is searched for will only be counted if the cases match exactly. The following example counts the number of instances of <code>Privacy</code> with the first letter capitalized.", "print \"Instances of the word \\'Privacy\\':\", (count_in_list(\"Privacy\", words))", "Here is a more extreme example to illustrate the point.", "print \"Instances of the word \\'pRiVaCy\\':\",(count_in_list(\"pRiVaCy\", words))", "The second problem is that of punctuation. Much like words are case-sensitive, they are also punctuation-sensitive. If a piece of punctuation has been included in the string, it will be included in the search. Here we count the occurrences of <code>privacy,</code> shown here with a comma after the word.", "print \"Instances of the word \\'privacy,\\':\", (count_in_list(\"privacy,\", words))", "And here we count <code>privacy.</code>, with the word followed by a period.", "print \"Instances of the word \\'privacy.\\':\",(count_in_list(\"privacy.\", words))", "We could comb through the text to find all of the different instantiations of <code>privacy</code>, and then run the code for each one and add together all of the numbers, but that would be time-consuming and potentially inaccurate. Instead, we must process the text further to make the text uniform. 
In this case we want to make all of the characters lowercase, and filter out any tokens that contain punctuation.</p>\n<p><i>Python</i> has tools that will do this for us.\n\nWe will reload the text file, split the text into distinct words or <b><span style=\"cursor:help;\" title=\"an individual occurrence of a symbol or string\">tokens</span></b> and then use the text cleaning function.", "infile = open('data/39.txt')\ntext = infile.read()\ninfile.close()\ntokens = text.split()\n\n#here we call the text cleaning function\nwords = [w.lower() for w in tokens if w.isalpha()]", "Now, when we count the instances of <code>privacy</code>, we are presented with a total of <u>846</u> instances.", "print \"Instances of the word \\'privacy\\':\", (count_in_list(\"privacy\", words))", "Part 2: Determining Word Frequencies for the Entire Corpus\nNow let's see how this compares to the rest of the corpus. To accomplish this, we must write another function that will read all of the text files in our file folder.\n<p>First we need to introduce a feature of <i>Python</i> that we've yet to see: modules. Modules are packages of functions and code that serve specific purposes. These are much like functions, but more complex.\n\n<p>The next piece of code imports a module called <code>os</code>, specifically the function <code>listdir</code>. We will use <code>listdir</code> to print a list of all the files in a specific directory. Each of the listed files corresponds to a textual transcript of Hansard. 
The first nine files refer to the complete transcript for each year from 2006 to 2014, while the last three files are the transcripts corresponding to each sitting of Parliament, in this case the 39th through to the end of the second sitting of the 41st Parliament.", "# imports the os module\nfrom os import listdir\n# calls the listdir function to list the files in a specific directory\nlistdir(\"data\")", "Although we can display the contents of a directory by using the <code>listdir</code> function, <i>Python</i> needs those names stored in a list in order to iterate over it. We also want to specify that only files with the extension <code>.txt</code> are included. Here we create another function called <code>list_textfiles</code>.", "def list_textfiles(directory):\n \"Return a list of filenames ending in '.txt'\"\n textfiles = []\n for filename in listdir(directory):\n if filename.endswith(\".txt\"):\n textfiles.append(directory + \"/\" + filename)\n return textfiles", "Rather than writing code to open each file individually, we can create another custom function to open the file we pass to it. 
We'll call this one read_file.", "def read_file(filename):\n \"Read the contents of FILENAME and return as a string.\"\n infile = open(filename)\n contents = infile.read()\n infile.close()\n return contents", "Now we can open all of the files in our directory, strip each file of uppercase letters and punctuation, split the whole of each text into tokens, and store all the data as separate lists in our variable <code>corpus</code>.", "corpus = []\nfor filename in list_textfiles(\"data\"):\n # reads the file\n text = read_file(filename)\n # splits the text into tokens\n tokens = text.split()\n # removes the punctuation and changes Uppercase to lower\n words = [w.lower() for w in tokens if w.isalpha()]\n # creates a set of word lists for each file\n corpus.append(words)", "Let's check to make sure the code worked by using the <code>len</code> function to count the number of items in our <code>corpus</code> list.", "print\"There are\", len(corpus), \"files in the list, named: \", ', '.join(list_textfiles('data'))", "Let's create a function to make the names of the files more readable. 
First we'll have to strip the file extension <code>.txt</code>.", "from os.path import splitext\n\ndef remove_ext(filename):\n \"Removes the file extension, such as .txt\"\n name, extension = splitext(filename)\n return name\n\nfor files in list_textfiles('data'):\n remove_ext(files)", "Now let's make a function to remove the <code>data/</code>.", "from os.path import basename\n\ndef remove_dir(filepath):\n \"Removes the path from the file name\"\n name = basename(filepath)\n return name\nfor files in list_textfiles('data'):\n remove_dir(files)", "And finally, we'll write a function to tie the two functions together.", "def get_filename(filepath):\n \"Removes the path and file extension from the file name\"\n filename = remove_ext(filepath)\n name = remove_dir(filename)\n return name\n\n\nfilenames = []\nfor files in list_textfiles('data'):\n files = get_filename(files)\n filenames.append(files)", "Now we can display a readable list of the files within our directory.", "print\"There are\", len(corpus), \"files in the list, named:\", ', '.join(filenames),\".\"", "The next step involves iterating through both lists: <code>corpus</code> and <code>filenames</code>, in order to generate a word frequency for each file in the corpus. For this we will use <i>Python's</i> <code>zip</code> function.", "for words, names in zip(corpus, filenames):\n print\"Instances of the word \\'privacy\\' in\",names, \":\", count_in_list(\"privacy\", words)", "What's exciting about this code is that we can now search the entire corpus for any word we choose. 
Let's search for <code>information</code>.", "for words, names in zip(corpus, filenames):\n print\"Instances of the word \\'information\\' in\",names, \":\", count_in_list(\"information\", words)", "How about <code>ethics</code>?", "for words, names in zip(corpus, filenames):\n print\"Instances of the word \\'ethics\\' in\",names, \":\", count_in_list(\"ethics\", words)", "While word frequencies, by themselves, do not give us a tremendous amount of contextual information, they are a valuable first step in conducting large scale text analyses. For instance, returning to our frequency list for <code>privacy</code>, we can observe a general trend suggesting that the use of <code>privacy</code> has been increasing between 2006 and now. It is important to note that our calculations are a raw number. For a more contextual analysis we could calculate how many times the Parliament was in session during each period, or perhaps we could compare the word <code>privacy</code> to the total amount of words in each file.</p>\n<p>Stay tuned for the next section: <i>Adding Context to Word Frequency Counts</i>\n\n--------" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
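Counting one word at a time with `count_in_list` means rescanning the corpus for every query; the standard-library `collections.Counter` tallies every token in a single pass. A small sketch using the same cleaning rule as the notebook (note that `isalpha()` drops punctuated tokens such as `matters.` entirely rather than stripping their punctuation — `w.strip(string.punctuation)` would keep them):

```python
from collections import Counter

def count_words(text):
    # Same cleaning rule as the notebook: lowercase, keep purely alphabetic tokens.
    words = [w.lower() for w in text.split() if w.isalpha()]
    return Counter(words)

counts = count_words("Privacy matters. The Privacy Act protects privacy")
print(counts["privacy"])  # 3 -- "Privacy", "Privacy" and "privacy" merge after lowercasing
print(counts["matters"])  # 0 -- "matters." contains a period, so isalpha() discarded it
```

With a `Counter` in hand, `counts.most_common(10)` gives the ten most frequent words without any further passes over the text.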
julienchastang/unidata-python-workshop
notebooks/Pandas/Pandas Introduction.ipynb
mit
[ "<div style=\"width:1000 px\">\n\n<div style=\"float:right; width:98 px; height:98px;\">\n<img src=\"https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png\" alt=\"Unidata Logo\" style=\"height: 98px;\">\n</div>\n\n<h1>Introduction to Pandas</h1>\n<h3>Unidata Python Workshop</h3>\n\n<div style=\"clear:both\"></div>\n</div>\n\n<hr style=\"height:2px;\">\n\nOverview:\n\nTeaching: 35 minutes\nExercises: 40 minutes\n\nQuestions\n\nWhat is Pandas?\nWhat are the basic Pandas data structures?\nHow can I read data into Pandas?\nWhat are some of the data operations available in Pandas?\n\nObjectives\n\n<a href=\"#series\">Data Series</a>\n<a href=\"#frames\">Data Frames</a>\n<a href=\"#loading\">Loading Data in Pandas</a>\n<a href=\"#missing\">Missing Data</a>\n<a href=\"#manipulating\">Manipulating Data</a>\n\n<a name=\"series\"></a>\nData Series\nData series are one of the fundamental data structures in Pandas. You can think of them like a dictionary; they have a key (index) and value (data/values) like a dictionary, but also have some handy functionality attached to them.\nTo start out, let's create a series from scratch. We'll imagine these are temperature observations.", "import pandas as pd\ntemperatures = pd.Series([23, 20, 25, 18])\ntemperatures", "The values on the left are the index (zero based integers by default) and on the right are the values. Notice that the data type is an integer. Any NumPy datatype is acceptable in a series.\nThat's great, but it'd be more useful if the station were associated with those values. 
In fact you could say we want the values indexed by station name.", "temperatures = pd.Series([23, 20, 25, 18], index=['TOP', 'OUN', 'DAL', 'DEN'])\ntemperatures", "Now, very similar to a dictionary, we can use the index to access and modify elements.", "temperatures['DAL']\n\ntemperatures[['DAL', 'OUN']]", "We can also do basic filtering, math, etc.", "temperatures[temperatures > 20]\n\ntemperatures + 2", "Remember how I said that series are like dictionaries? We can create a series straight from a dictionary.", "dps = {'TOP': 14,\n 'OUN': 18,\n 'DEN': 9,\n 'PHX': 11,\n 'DAL': 23}\n\ndewpoints = pd.Series(dps)\ndewpoints", "It's also easy to check and see if an index exists in a given series:", "'PHX' in dewpoints\n\n'PHX' in temperatures", "Series have a name attribute and their index has a name attribute.", "temperatures.name = 'temperature'\ntemperatures.index.name = 'station'\n\ntemperatures", "Exercise\n\nCreate a series of pressures for stations TOP, OUN, DEN, and DAL (assign any values you like).\nSet the series name and series index name.\nPrint the pressures for all stations which have a dewpoint below 15.", "# Your code goes here\n", "Solution", "# %load solutions/make_series.py\n", "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"frames\"></a>\nData Frames\nSeries are great, but what about a bunch of related series? Something like a table or a spreadsheet? Enter the data frame. A data frame can be thought of as a dictionary of data series. They have indexes for their rows and their columns. 
Each data series can be of a different type, but they will all share a common index.\nThe easiest way to create a data frame by hand is to use a dictionary.", "data = {'station': ['TOP', 'OUN', 'DEN', 'DAL'],\n 'temperature': [23, 20, 25, 18],\n 'dewpoint': [14, 18, 9, 23]}\n\ndf = pd.DataFrame(data)\ndf", "You can access columns (data series) using dictionary type notation or attribute type notation.", "df['temperature']\n\ndf.dewpoint", "Notice the index is shared and that the name of the column is attached as the series name.\nYou can also create a new column and assign values. If I only pass a scalar it is duplicated.", "df['wspeed'] = 0.\ndf", "Let's set the index to be the station.", "df.index = df.station\ndf", "Well, that's close, but we now have a redundant column, so let's get rid of it.", "df = df.drop('station', axis='columns')\ndf", "We can also add data and order it by providing index values. Note that the next cell contains data that's \"out of order\" compared to the dataframe shown above. However, by providing the index that corresponds to each value, the data is organized correctly into the dataframe.", "df['pressure'] = pd.Series([1010,1000,998,1018], index=['DEN','TOP','DAL','OUN'])\ndf", "Now let's get a row from the dataframe instead of a column.", "df.loc['DEN']", "We can even transpose the data easily if we need that to make things easier to merge/munge later.", "df.T", "Look at the values attribute to access the data as a 1D or 2D array for series and data frames respectively.", "df.values\n\ndf.temperature.values", "Exercise\n\nAdd a series of rain observations to the existing data frame.\nApply an instrument correction of -2 to the dewpoint observations.", "# Your code goes here\n", "Solution", "# %load solutions/rain_obs.py\n", "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"loading\"></a>\nLoading Data in Pandas\nThe real power of pandas is in manipulating and summarizing large sets of tabular data. 
To do that, we'll need a large set of tabular data. We've included a file in this directory called Jan17_CO_ASOS.txt that has all of the ASOS observations for several stations in Colorado for January of 2017. It's a few hundred thousand rows of data in a tab delimited format. Let's load it into Pandas.", "import pandas as pd\n\ndf = pd.read_csv('Jan17_CO_ASOS.txt', sep='\\t')\n\ndf.head()\n\ndf = pd.read_csv('Jan17_CO_ASOS.txt', sep='\\t', parse_dates=['valid'])\n\ndf.head()\n\ndf = pd.read_csv('Jan17_CO_ASOS.txt', sep='\\t', parse_dates=['valid'], na_values='M')\n\ndf.head()", "Let's look in detail at those column names. Turns out we need to do some cleaning of this file. Welcome to real world data analysis.", "df.columns\n\ndf.columns = ['station', 'time', 'temperature', 'dewpoint', 'pressure']\n\ndf.head()", "For other formats of data (CSV, fixed width, etc.) there are tools to read them as well. You can even read Excel files straight into Pandas.\n<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"missing\"></a>\nMissing Data\nWe've already dealt with some missing data by turning the 'M' string into actual NaN's while reading the file in. We can do one better though and delete any rows that have all values missing. There are similar operations that could be performed for columns. You can even drop if any values are missing, all are missing, or just those you specify are missing.", "len(df)\n\ndf = df.dropna(axis='rows', how='all', subset=['temperature', 'dewpoint', 'pressure'])\n\nlen(df)\n\ndf.head()", "Exercise\nOur dataframe df has data in which we dropped any entries that were missing all of the temperature, dewpoint and pressure observations. Let's modify our command some and create a new dataframe df2 that only keeps observations that have all three variables (i.e. if a pressure is missing, the whole entry is dropped). 
This is useful if you were doing some computation that requires a complete observation to work.", "# Your code goes here\n# df2 = ", "Solution", "# %load solutions/drop_obs.py\n", "Lastly, we still have the original index values. Let's reindex to a new zero-based index for only the rows that have valid data in them.", "df = df.reset_index(drop=True)\n\ndf.head()", "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"manipulating\"></a>\nManipulating Data\nWe can now take our data and do some interesting things with it. Let's start with a simple min/max.", "print(f'Min: {df.temperature.min()}\\nMax: {df.temperature.max()}')", "You can also do some useful statistics on data with attached methods like corr for correlation coefficient.", "df.temperature.corr(df.dewpoint)", "We can also call a groupby on the data frame to start getting some summary information for each station.", "df.groupby('station').mean()", "Exercise\nCalculate the min, max, and standard deviation of the temperature field grouped by each station.", "# Calculate min\n\n\n# Calculate max\n\n\n# Calculate standard deviation\n", "Solution", "# %load solutions/calc_stats.py\n", "Now, let me show you how to do all of that and more in a single call.", "df.groupby('station').describe()", "Now let's suppose we're going to make a meteogram or similar and want to get all of the data for a single station.", "df.groupby('station').get_group('0CO').head().reset_index(drop=True)", "Exercise\n\nRound the temperature column to whole degrees.\nGroup the observations by temperature and use the count method to see how many instances of the rounded temperatures there are in the dataset.", "# Your code goes here\n", "Solution", "# %load solutions/temperature_count.py\n", "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
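The min/max/std exercise in the notebook above can also be answered with a single `agg` call per group. A self-contained sketch with made-up observations, since the workshop's ASOS file is not reproduced in this excerpt:

```python
import pandas as pd

df = pd.DataFrame({'station': ['TOP', 'TOP', 'OUN', 'OUN'],
                   'temperature': [23.0, 25.0, 20.0, 18.0]})

# One groupby plus agg computes several statistics per station at once,
# returning a DataFrame with one column per statistic.
stats = df.groupby('station')['temperature'].agg(['min', 'max', 'std'])
print(stats.loc['TOP', 'min'])  # 23.0
print(stats.loc['OUN', 'max'])  # 20.0
```

`describe()` goes further and adds count, mean, and quartiles, but `agg` keeps the output down to exactly the statistics you asked for.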
jo-c-2017/DS_Projects
JC_inferential_statistics_ex2.ipynb
apache-2.0
[ "Examining Racial Discrimination in the US Job Market\nBackground\nRacial discrimination continues to be pervasive in cultures throughout the world. Researchers examined the level of racial discrimination in the United States labor market by randomly assigning identical résumés to black-sounding or white-sounding names and observing the impact on requests for interviews from employers.\nData\nIn the dataset provided, each row represents a resume. The 'race' column has two values, 'b' and 'w', indicating black-sounding and white-sounding. The column 'call' has two values, 1 and 0, indicating whether the resume received a call from employers or not.\nNote that the 'b' and 'w' values in race are assigned randomly to the resumes when presented to the employer.\nExercises\nYou will perform a statistical analysis to establish whether race has a significant impact on the rate of callbacks for resumes.\nAnswer the following questions in this notebook below and submit to your Github account. \n\nWhat test is appropriate for this problem? Does CLT apply?\nWhat are the null and alternate hypotheses?\nCompute margin of error, confidence interval, and p-value.\nWrite a story describing the statistical significance in the context or the original problem.\nDoes your analysis mean that race/name is the most important factor in callback success? Why or why not? 
If not, how would you amend your analysis?\n\nYou can include written notes in notebook cells using Markdown: \n - In the control panel at the top, choose Cell > Cell Type > Markdown\n - Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet\nResources\n\nExperiment information and data source: http://www.povertyactionlab.org/evaluation/discrimination-job-market-united-states\nScipy statistical methods: http://docs.scipy.org/doc/scipy/reference/stats.html \nMarkdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet", "import pandas as pd\nimport numpy as np\nimport math\nfrom scipy import stats\nimport matplotlib.pyplot as plt\n\ndata = pd.io.stata.read_stata('data/us_job_market_discrimination.dta')\ndata.head(5)", "(1) The callback rate for each group (black-sounding and white-sounding names) follows a binomial distribution, which approaches a normal distribution when np>10 and n(1-p)>10 (see below). Therefore, the difference in callback rates follows a normal distribution, and the CLT can be applied.", "n_b = data[data.race=='b'].call.count() #total number of black-sounding names\np_b = np.sum(data[data.race=='b'].call)/n_b #callback rate for black-sounding names\nn_w = data[data.race=='w'].call.count() #total number of white-sounding names\np_w = np.sum(data[data.race=='w'].call)/n_w #callback rate for white-sounding names \nprint('np for black-sounding names: ' + str(n_b * p_b))\nprint('n(1-p) for black-sounding names: ' + str(n_b * (1-p_b))) \nprint('np for white-sounding names: ' + str(n_w * p_w))\nprint('n(1-p) for white-sounding names: ' + str(n_w * (1-p_w)))", "(2) A two-sample z-test is appropriate to test if there is a significant difference in callback rate $p$ between resumes with black-sounding and white-sounding names:\n$H_0: p_B = p_W$\n$H_A: p_B \\neq p_W$\nIn this case, the p-value is 3.86e-05, the margin of error is 0.015, and the 95% confidence interval is [-0.047, -0.017]. 
Therefore, we are able to reject the null hypothesis that the callback rates for resumes with black-sounding and white-sounding names are the same.", "mean_diff = p_b - p_w\nse = math.sqrt((p_b* (1-p_b))/n_b + (p_w* (1-p_w))/n_w)\nz = abs(mean_diff)/se\np_z = (1-stats.norm.cdf(z))*2\nme = 1.96*se\nub = mean_diff + me\nlb = mean_diff - me\nprint('p value is: ' + str(p_z))\nprint('margin of error is: ' + str(me))\nprint('95 % Confidence Interval: [' + str(lb) + ', ' + str(ub) + ']')", "(3) In summary, there is a significant difference in callback rate between resumes with black-sounding and white-sounding names. The callback rate for resumes with black-sounding names is significantly lower than that for resumes with white-sounding names. However, this does not mean that race/name is the most important factor in callback rate. We can also evaluate if there are confounding variables (such as years of experience and education). If not, we can further evaluate if any other variables have more significant effects than race/name." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
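The by-hand z-test in the notebook above can be packaged into a reusable function. A sketch using only the standard library — `math.erfc` gives the two-sided normal tail, since 2·(1−Φ(|z|)) = erfc(|z|/√2), so no `scipy` call is needed (the counts below are hypothetical, not the study's):

```python
import math

def two_proportion_ztest(hits_a, n_a, hits_b, n_b):
    """Unpooled two-sample z-test for a difference in proportions.

    Returns (z, two_sided_p), mirroring the notebook's computation of
    se and (1 - norm.cdf(|z|)) * 2.
    """
    p_a, p_b = hits_a / float(n_a), hits_b / float(n_b)
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Hypothetical counts: 50/1000 callbacks in group A vs 100/1000 in group B.
z, p = two_proportion_ztest(50, 1000, 100, 1000)
print(round(z, 2))  # -4.26 (negative: group A has the lower rate)
```

Feeding in the study's own counts for each race group would reproduce the z and p values computed step by step in the notebook.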
Nikolay-Lysenko/presentations
hard_use_and_abuse/xgboost_vs_nested_shapes.ipynb
mit
[ "Introduction\nIn this presentation, some toy classification problems are studied. Their common property is that raw features to be used form inefficient representations, while a bit of feature engineering can result in guaranteed perfect scores. However, for the sake of curiosity, here features are not transformed and it is measured how well Gradient Boosting can predict class labels based on initial representations.\nAlthough the problems that are suggested here may look too artificial and impractical, some useful conclusions are drawn after the experiments.\nGeneral Preparations", "from itertools import product\n\nimport numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport mpl_toolkits.mplot3d.axes3d as axes3d\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import roc_auc_score\n\n# Startup settings can not suppress a warning from `xgboost` and so this is needed.\nimport warnings\nwith warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n import xgboost as xgb\n\nnp.random.seed(361)", "Concentric Spheres\nThis binary classification problem is very simple. There are several concentric (probably, high-dimensional) spheres and each of them is associated with one class only. This means that radius (i.e. 
distance between a point and the common center of all spheres) is a \"golden feature\" — a classifier trained only on it can achieve superior accuracy.\nNevertheless, the question is what xgboost is able to achieve if it is applied in a naive straightforward fashion without radius computation.", "def draw_from_unit_sphere(sample_size, n_dim):\n \"\"\"\n Draws `sample_size` random samples\n from uniform distribution on\n `n_dim`-dimensional unit sphere.\n \n The idea is to draw samples from an\n isotropic distribution (here, normal\n distribution) and then norm them.\n \n @type sample_size: int\n @type n_dim: int\n @rtype: numpy.ndarray\n \"\"\"\n normal_sample = np.random.normal(size=(sample_size, n_dim))\n radii = np.sqrt((normal_sample ** 2).sum(axis=1))\n radii = radii.reshape((radii.shape[0], 1))\n return normal_sample / radii", "Let us show that the above function works.", "sample_size = 300\ncircumference = draw_from_unit_sphere(sample_size, 2)\nthree_d_sphere = draw_from_unit_sphere(sample_size, 3)\n\nfig = plt.figure(figsize=(15, 8))\n\nax_one = fig.add_subplot(121)\nax_one.scatter(circumference[:, 0], circumference[:, 1])\nax_one.set_aspect('equal')\nax_one.set_title(\"Sample from circumference\")\n\nax_two = fig.add_subplot(122, projection='3d')\nax_two.scatter(three_d_sphere[:, 0], three_d_sphere[:, 1], three_d_sphere[:, 2])\nax_two.set_aspect('equal')\n_ = ax_two.set_title(\"Sample from 3D sphere\", y=1.075)\n\ndef draw_from_concentric_spheres(radii, n_dim, samples_per_sphere):\n \"\"\"\n Draws `sample_per_sphere` samples from\n uniform distribution on a sphere of\n a radius that is in `radii`. 
Then\n concatenates all such results.\n \n @type radii: list(float)\n @type n_dim: int\n @type samples_per_sphere: int\n @rtype: numpy.ndarray\n \"\"\"\n spheres = []\n for radius in radii:\n spheres.append(radius * draw_from_unit_sphere(samples_per_sphere, n_dim))\n spheres = np.vstack(spheres)\n return spheres\n\ndef synthesize_nested_spheres_dataset(radii_of_positives, radii_of_negatives,\n n_dim, samples_per_sphere):\n \"\"\"\n Creates dataset for a binary classification\n problem, where objects are drawn from\n concentric spheres and distance from the\n origin determines the class of an object.\n \n @type radii_of_positives: list(float)\n @type radii_of_negatives: list(float)\n @type n_dim: int\n @type samples_per_sphere: int\n @rtype: numpy.ndarray\n \"\"\"\n positives = draw_from_concentric_spheres(radii_of_positives, n_dim,\n samples_per_sphere)\n positives = np.hstack((positives, np.ones((positives.shape[0], 1))))\n negatives = draw_from_concentric_spheres(radii_of_negatives, n_dim,\n samples_per_sphere)\n negatives = np.hstack((negatives, np.zeros((negatives.shape[0], 1))))\n dataset = np.vstack((positives, negatives))\n return dataset\n\ndef evaluate_xgboost_performance(dataset, max_depth):\n \"\"\"\n Computes ROC-AUC score achieved by\n gradient-boosted trees of depth that\n is not more than `max_depth`.\n \n The reported score is measured on\n hold-out test set and number of\n estimators is determined by\n early stopping.\n \n @type dataset: numpy.ndarray\n @type max_depth: int\n @rtype: float\n \"\"\"\n # Prepare data.\n X_refit, X_test, y_refit, y_test = \\\n train_test_split(dataset[:, :-1], dataset[:, -1],\n stratify=dataset[:, -1], random_state=361)\n X_train, X_val, y_train, y_val = \\\n train_test_split(X_refit, y_refit,\n stratify=y_refit, random_state=361)\n dm_refit = xgb.DMatrix(X_refit, label=y_refit)\n dm_train = xgb.DMatrix(X_train, label=y_train)\n dm_val = xgb.DMatrix(X_val, label=y_val)\n dm_test = xgb.DMatrix(X_test, label=y_test)\n 
\n # Set hyperparameters.\n num_rounds = 3000\n hyperparams = {'max_depth': max_depth,\n 'subsample': 0.9,\n 'objective': 'binary:logistic'}\n early_stopping_rounds = 10\n learning_rates = [0.3] * 1000 + [0.2] * 1000 + [0.1] * 1000\n\n # Train model.\n bst = xgb.train(hyperparams, dm_train, num_rounds,\n early_stopping_rounds=early_stopping_rounds,\n evals=[(dm_train, 'train'), (dm_val, 'valid')],\n learning_rates=learning_rates,\n verbose_eval=500)\n num_rounds = bst.best_iteration\n learning_rates = learning_rates[:num_rounds]\n bst = xgb.train(hyperparams, dm_refit, num_rounds,\n evals=[(dm_refit, 'refit')],\n learning_rates=learning_rates,\n verbose_eval=500)\n \n # Evaluate performance.\n y_hat = bst.predict(dm_test)\n score = roc_auc_score(y_test, y_hat)\n return score", "Settings of the experiment are introduced in the below cell. It is possible to change them in order to see what happens.", "positive_radii = [10, 12, 14]\nnegative_radii = [11, 13]\n\ndims = [2, 3, 4, 5, 6]\nbase_sizes = [250, 500, 1000, 2000]\ncurse_adjustment_factor = 4 # Compensate curse of dimensionality.", "One thing that can be unclear is why curse_adjustment_factor is involved. 
The idea behind this is that increase of dimensionality has two-fold impact:\n\nthe density of data becomes lower;\nthe problem of shapes separation becomes harder itself.\n\nHere, the goal is to study only the latter impact and so curse_adjustment_factor is included in order to counterbalance decrease in data density.", "scores = []\nfor n_dim, sample_size in product(dims, base_sizes):\n print('\\n---')\n print(\"Dimensionality is {}, base size is {}\".format(n_dim, sample_size))\n adjusted_size = (curse_adjustment_factor ** (n_dim - 2)) * sample_size\n dataset = synthesize_nested_spheres_dataset(positive_radii, negative_radii,\n n_dim, adjusted_size)\n # Decision stumps work poorly here and so deeper trees are required.\n score = evaluate_xgboost_performance(dataset, max_depth=10)\n scores.append({'n_dim': n_dim, 'sample_size': sample_size, 'score': score})\nprint('\\n')\n\nscores_df = pd.DataFrame(scores)\nscores_df\n\nfig = plt.figure(figsize=(10, 10))\nax = fig.add_subplot(111)\nfor idx, group in scores_df.groupby(['sample_size']):\n name_for_legend = 'unadjusted sample size: {}'.format(idx)\n group[name_for_legend] = group['score']\n group.plot('n_dim', name_for_legend, ax=ax, marker='o')\nax.set_xlim(min(dims) - 0.25, max(dims) + 0.25)\nax.grid(True)\nax.set_xlabel('Number of dimensions')\nax.set_ylabel('ROC-AUC')\n_ = ax.set_title('Performance of XGBoost in nested spheres classification')", "Although every value used in the above plot is volatile, some general tendencies can be easily seen. The higher number of dimensions is, the lower ROC-AUC score is. Also the higher number of labelled examples for a particular dimensionality is, the better performance is.\nNested Surfaces of Revolution with Spheres as Generatrixes\nThis binary classification problem is a bit more involved. 
There are several $n$-dimensional nested shapes that are surfaces of revolution constructed by rotation of (hyper)spheres of different radii (called minor radii) along a circumference of a fixed radius (called major radius) centered at the origin. For example, if $n = 3$, the shapes are nested tori.\nAgain, each shape is associated with one class only. Thus, a \"golden feature\" is distance between a point and the circumference of revolution. However, like in the previous problem, only coordinates in $n$-dimensional space are used.", "def draw_from_surface_of_sphere_revolution(major_radius, minor_radius,\n n_dim, sample_size):\n \"\"\"\n Draws `sample_size` random samples\n from uniform distribution on\n `n_dim`-dimensional surface of\n revolution with hypersphere as\n generatrix.\n \n This hypersphere has radius equal to\n `minor_radius`, while the circumference\n of revolution has radius `major_radius`\n and lies within a linear span of the\n first two basis vectors (denoted as\n x and y respectively).\n \n @type major_radius: float\n @type minor_radius: float\n @type n_dim: int\n @type sample_size: int\n @rtype: numpy.ndarray\n \"\"\"\n try:\n assert n_dim > 2\n except AssertionError:\n raise ValueError(\"Number of dimensions must be 3 or greater.\")\n try:\n assert major_radius > minor_radius\n except AssertionError:\n raise ValueError(\"Major radius must be greater than minor radius.\")\n\n revolution_part = draw_from_unit_sphere(sample_size, 2)\n sphere_part = draw_from_unit_sphere(sample_size, n_dim - 1)\n xy_coefficients = major_radius + minor_radius * sphere_part[:, 0]\n xy_coefficients = xy_coefficients.reshape((xy_coefficients.shape[0], 1))\n projection_on_xy = xy_coefficients * revolution_part\n projection_on_other_axes = minor_radius * sphere_part[:, 1:]\n return np.hstack((projection_on_xy, projection_on_other_axes))", "Let us show that the above function works. 
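Before plotting, the geometry can also be checked numerically: by construction, every generated point should lie exactly `minor_radius` away from the circumference of revolution. The sketch below is standalone rather than part of the original notebook; it re-implements the sampler compactly under hypothetical names (`unit_sphere`, `sample_surface`) with arbitrary radii:

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_sphere(n, d):
    # same trick as draw_from_unit_sphere: normalize isotropic Gaussian draws
    g = rng.normal(size=(n, d))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

def sample_surface(major_radius, minor_radius, n_dim, n):
    revolution = unit_sphere(n, 2)           # direction in the plane of revolution
    generatrix = unit_sphere(n, n_dim - 1)   # position on the generating sphere
    xy_scale = major_radius + minor_radius * generatrix[:, :1]
    return np.hstack([xy_scale * revolution, minor_radius * generatrix[:, 1:]])

pts = sample_surface(4.0, 1.5, 3, 1000)

# distance from each point to the circumference of radius 4 in the xy-plane
xy_norm = np.linalg.norm(pts[:, :2], axis=1)
dist = np.sqrt((xy_norm - 4.0) ** 2 + (pts[:, 2:] ** 2).sum(axis=1))
print(np.allclose(dist, 1.5))  # every point sits minor_radius away, as intended
```

The check works because the squared distance decomposes into the radial part in the xy-plane plus the remaining coordinates, which together reconstruct the generating sphere's radius.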
Why not draw a sample from a torus?", "torus = draw_from_surface_of_sphere_revolution(major_radius=4, minor_radius=1.5,\n n_dim=3, sample_size=300)\n\nfig = plt.figure(figsize=(15, 8))\n\nax_one = fig.add_subplot(121, projection='3d')\nax_one.scatter(torus[:, 0], torus[:, 1], torus[:, 2])\nax_one.set_aspect('equal')  # It does not work properly, so a hack is below.\nax_one.set_zlim(torus[:, 0].min() - 1, torus[:, 0].max() + 1)\nax_one.set_title(\"Sample from torus (in 3D)\", y=1.075)\n\nax_two = fig.add_subplot(122)\nax_two.scatter(torus[:, 0], torus[:, 1])\nax_two.set_aspect('equal')\nax_two.grid(True)\n_ = ax_two.set_title(\"Sample from torus (projection on the first two axes)\")\n\ndef draw_from_nested_surfaces_of_sphere_revolution(major_radius, minor_radii,\n                                                   n_dim, samples_per_surface):\n    \"\"\"\n    Draws from several uniform\n    distributions on nested\n    surfaces of revolution. \n    \n    @type major_radius: float\n    @type minor_radii: list(float)\n    @type n_dim: int\n    @type samples_per_surface: int\n    @rtype: numpy.ndarray\n    \"\"\"\n    surfaces = []\n    for minor_radius in minor_radii:\n        surfaces.append(\n            draw_from_surface_of_sphere_revolution(major_radius, minor_radius,\n                                                   n_dim, samples_per_surface))\n    surfaces = np.vstack(surfaces)\n    return surfaces\n\ndef synthesize_nested_surfaces_of_revolution(major_radius,\n                                             radii_of_positives, radii_of_negatives,\n                                             n_dim, samples_per_surface):\n    \"\"\"\n    Creates dataset for a binary classification\n    problem, where objects are drawn from\n    nested surfaces of revolution and distance\n    between a point and the circumference of\n    revolution determines the class of the point.\n    \n    @type major_radius: float\n    @type radii_of_positives: list(float)\n    @type radii_of_negatives: list(float)\n    @type n_dim: int\n    @type samples_per_surface: int\n    @rtype: numpy.ndarray\n    \"\"\"\n    positives = draw_from_nested_surfaces_of_sphere_revolution(\n        major_radius, radii_of_positives, n_dim, samples_per_surface)\n    positives = np.hstack((positives, 
np.ones((positives.shape[0], 1))))\n negatives = draw_from_nested_surfaces_of_sphere_revolution(\n major_radius, radii_of_negatives, n_dim, samples_per_surface)\n negatives = np.hstack((negatives, np.zeros((negatives.shape[0], 1))))\n dataset = np.vstack((positives, negatives))\n return dataset", "Settings of the experiment are introduced in the below cell. It is possible to change them in order to see what happens.", "major_radius = 100\npositive_radii = [10, 12, 14]\nnegative_radii = [11, 13]\n\ndims = [3, 4, 5, 6]\nbase_sizes = [1000, 2000, 4000]\ncurse_adjustment_factor = 4 # As before, compensate curse of dimensionality.\n\nscores = []\nfor n_dim, sample_size in product(dims, base_sizes):\n print('\\n---')\n print(\"Dimensionality is {}, base size is {}\".format(n_dim, sample_size))\n adjusted_size = (curse_adjustment_factor ** (n_dim - 2)) * sample_size\n dataset = synthesize_nested_surfaces_of_revolution(\n major_radius, positive_radii, negative_radii, n_dim, adjusted_size)\n score = evaluate_xgboost_performance(dataset, max_depth=25)\n scores.append({'n_dim': n_dim, 'sample_size': sample_size, 'score': score})\nprint('\\n')\n\nscores_df = pd.DataFrame(scores)\nscores_df\n\nfig = plt.figure(figsize=(10, 10))\nax = fig.add_subplot(111)\nfor idx, group in scores_df.groupby(['sample_size']):\n name_for_legend = 'unadjusted sample size: {}'.format(idx)\n group[name_for_legend] = group['score']\n group.plot('n_dim', name_for_legend, ax=ax, marker='o')\nax.set_xlim(min(dims) - 0.25, max(dims) + 0.25)\nax.grid(True)\nax.set_xlabel('Number of dimensions')\nax.set_ylabel('ROC-AUC')\n_ = ax.set_title('Performance of XGBoost in classification of points from ' +\n 'nested surfaces of revolution')", "Suddenly, the higher the number of dimensions is, the higher ROC-AUC score is. One potential explanation for this is as follows. 
Probably, Gradient Boosting is starved for labelled examples in this problem, and curse_adjustment_factor is big enough that it not only compensates for the growth of dimensionality but also provides enough data to train higher-capacity models.\nNow, let us verify that, indeed, once feature engineering is applied, the problem becomes trivial.", "def generate_golden_feature_for_surfaces_of_revolution(dataset, major_radius):\n    \"\"\"\n    For each point from `dataset` computes\n    distance between the point and the\n    circumference of revolution which has\n    radius `major_radius` and lies within\n    a span of the first two basis vectors\n    (denoted as x and y respectively).\n    \n    @type dataset: numpy.ndarray\n    @type major_radius: float\n    @rtype: numpy.ndarray\n    \"\"\"\n    norms_of_xy_proj = np.sqrt((dataset[:, :2] ** 2).sum(axis=1))\n    norms_of_xy_proj = norms_of_xy_proj.reshape((norms_of_xy_proj.shape[0], 1))\n    normed_xy_proj = dataset[:, :2] / norms_of_xy_proj\n    nearest_points_from_circumference = major_radius * normed_xy_proj\n    other_coordinates = np.zeros((dataset.shape[0], dataset.shape[1] - 3))\n    nearest_points_from_circumference = np.hstack(\n        (nearest_points_from_circumference, other_coordinates))\n    golden_feature = np.sqrt(\n        ((dataset[:, :-1] - nearest_points_from_circumference) ** 2).sum(axis=1))\n    return golden_feature\n\ngolden_feature = generate_golden_feature_for_surfaces_of_revolution(dataset,\n                                                                    major_radius)\ngolden_feature = np.around(golden_feature, decimals=2)\nnp.unique(golden_feature)\n\ndf = pd.DataFrame(np.vstack((golden_feature, dataset[:, -1])).T,\n                  columns=['golden_feature', 'target'])\ndf.groupby(['golden_feature']).agg({'target': [min, max]})", "This means that the class label can be reconstructed with 100% accuracy given only the golden feature. 
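The triviality claim can be illustrated without any boosting machinery: with the golden feature alone, a handful of axis-aligned thresholds (exactly what a shallow decision tree expresses) separates the classes perfectly. This standalone sketch reuses the minor radii from the settings cell but is otherwise hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# golden-feature values and labels per shape: positives at 10/12/14, negatives at 11/13
radii = np.array([10.0, 11.0, 12.0, 13.0, 14.0])
labels = np.array([1, 0, 1, 0, 1])

# a slightly noisy sample of golden-feature measurements
idx = rng.integers(0, len(radii), size=2000)
x = radii[idx] + rng.normal(scale=0.05, size=2000)
y = labels[idx]

# four thresholds split the line into five intervals, one per shape;
# a balanced tree realizes this with depth ceil(log2(5)) = 3
thresholds = np.array([10.5, 11.5, 12.5, 13.5])
pred = labels[np.searchsorted(thresholds, x)]
print((pred == y).mean())  # perfect accuracy from the single engineered feature
```

Each threshold is a single split on the golden feature, which is why a tree of depth 3 or 4 suffices once the representation is right.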
Moreover, even a single decision tree of depth 4 is able to do it.\nConclusion\nAlthough this presentation deals with toy classification problems on synthetic datasets, two meaningful observations can be made:\n\n\nThere is a heuristic that Gradient Boosting works better with decision stumps than with deep decision trees. However, the theory behind this heuristic is sometimes forgotten. Gradient Boosting requires base estimators that perform better than random guessing, and it cannot significantly boost classifiers that are already strong. This means that the heuristic is applicable when decision stumps outperform random guessing, but misleading otherwise. In the problems considered here, decision stumps cannot improve on random guessing, and so learning stalls.\n\n\nAccording to Wolpert's no-free-lunch theorem, no machine learning algorithm can solve every problem excellently. At first glance, the problems from this presentation look like ones that Gradient Boosting should solve well: the data are dense and tabular, and the number of observations is not small. Nevertheless, an inefficient representation does not allow Gradient Boosting to learn the true dependency. That being said, data understanding and feature engineering are important stages of data-driven modelling." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
glasperfan/thesis
bach_code/legacy/NLL Curves.ipynb
apache-2.0
[ "NLL Curves", "%matplotlib inline\nimport numpy as np\nimport scipy as sp\nimport matplotlib as mpl\nimport matplotlib.cm as cm\nimport matplotlib.pyplot as plt\nimport pandas as pd\npd.set_option('display.width', 500)\npd.set_option('display.max_columns', 100)\npd.set_option('display.notebook_repr_html', True)\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\nsns.set_context(\"poster\")\n\nwith open('learning_rates.txt', 'r') as f:\n    lines = f.readlines()\n\nvalues = []\nfor l in lines:\n    if \",\" in l:\n        values.append([float(v) for v in l.split(\",\")])\n    else:\n        values.append(float(l))\n\nlearning_rates = []\nsizes = []\nnlls = []\nfor idx, v in enumerate(values):\n    if idx % 21 == 0:\n        learning_rates.append(v[1])\n        sizes.append(v[0])\n    elif idx % 21 == 1:\n        nlls.append(values[idx:idx+20])\n    else:\n        pass", "Graph training error (average NLL) as a function of epochs\nLR = learning rate {0.1, 0.01, 0.001}\nSZ = size of the hidden layer and the embedding size {100, 200, 250}", "f, ax = plt.subplots(3,3, sharex=True)\nX = range(1, 21)\nfor i in range(len(nlls)):\n    a = ax[i // 3][i % 3]\n    a.plot(X, nlls[i])\n    a.set_title(\"LR: %s, SZ: %s\" % (learning_rates[i], sizes[i]))\n    a.set_ylabel(\"Average NLL\")\n    a.set_xlabel(\"Epochs\")\n    a.set_ylim([0.5, 2.6])\nplt.tight_layout()", "Conclusion\nBest performance with a larger embedding size (250) and a learning rate of 0.01. The concern now is overfitting.", "with open('test.txt', 'r') as f:\n    data = f.readlines()\n\ndata = [x.split('\\t')[:2] for x in data]\ndata = [(int(x), float(y)) for (x,y) in data]\nx = [d[0] for d in data]\ny = [d[1] for d in data]\nplt.plot(x, y)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
taylort7147/udacity-projects
customer_segments/customer_segments.ipynb
mit
[ "Machine Learning Engineer Nanodegree\nUnsupervised Learning\nProject: Creating Customer Segments\nWelcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. \n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\nGetting Started\nIn this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.\nThe dataset for this project can be found on the UCI Machine Learning Repository. 
For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.\nRun the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.", "# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nfrom IPython.display import display # Allows the use of display() for DataFrames\n\n# Import supplementary visualizations code visuals.py\nimport visuals as vs\n\n# Pretty display for notebooks\n%matplotlib inline\n\n# Load the wholesale customers dataset\ntry:\n data = pd.read_csv(\"customers.csv\")\n data.drop(['Region', 'Channel'], axis = 1, inplace = True)\n print \"Wholesale customers dataset has {} samples with {} features each.\".format(*data.shape)\nexcept:\n print \"Dataset could not be loaded. Is the dataset missing?\"", "Data Exploration\nIn this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.\nRun the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. 
Consider what each category represents in terms of products you could purchase.", "# Display a description of the dataset\ndisplay(data.describe())", "Implementation: Selecting Samples\nTo get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.", "# TODO: Select three indices of your choice you wish to sample from the dataset\nimport random\n\nrandom.seed(14)\nindices = [random.randint(0, data.shape[0]) for x in range(3)]\nsampleIndices = indices\nprint(\"Indices: {}\".format(indices))\n\n# Create a DataFrame of the chosen samples\nsamples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)\nprint \"Chosen samples of wholesale customers dataset:\"\ndisplay(samples)", "Question 1\nConsider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.\nWhat kind of establishment (customer) could each of the three samples you've chosen represent?\nHint: Examples of establishments include places like markets, cafes, and retailers, among many others. 
Avoid using names for establishments, such as saying \"McDonalds\" when describing a sample customer as a restaurant.\nAnswer:\n| | Index | Establishment | Reasoning \n|:-------:|:---------:|:-------------------------:|:-------------------\n| 0 | 47 | Large supermarket | Sales for Fresh, Milk, Grocery, and Detergents_Paper are well over the 75% quartile.\n| 1 | 309 | Hotel | There are proportionally high sales in Milk, Grocery, and Detergents_Paper, all greater than 75% of the population.\n| 2 | 287 | Restaurant | Fresh, Frozen, and Delicatessen sales are all greater than the median.\nImplementation: Feature Relevance\nOne interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.\nIn the code block below, you will need to implement the following:\n - Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.\n - Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.\n - Use the removed feature as your target label. 
Set a test_size of 0.25 and set a random_state.\n - Import a decision tree regressor, set a random_state, and fit the learner to the training data.\n - Report the prediction score of the testing set using the regressor's score function.", "from sklearn.cross_validation import train_test_split\nfrom sklearn.tree import DecisionTreeRegressor\n\ndef find_relevance(data, target_label):\n    # TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature\n    new_data = data.drop([target_label], axis=1, inplace=False)\n    target = data[target_label]\n\n\n    # TODO: Split the data into training and testing sets using the given feature as the target\n    X_train, X_test, y_train, y_test = train_test_split(new_data, target, test_size=0.25, random_state=14)\n\n    # TODO: Create a decision tree regressor and fit it to the training set\n    regressor = DecisionTreeRegressor(random_state=14)\n    regressor.fit(X_train, y_train)\n\n    # TODO: Report the score of the prediction using the testing set\n    score = regressor.score(X_test, y_test)\n    return score\nfor target_label in data.columns:\n    score = find_relevance(data, target_label)\n    print(\"{:>20s}: {:+0.3f}\".format(target_label, score))", "Question 2\nWhich feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?\nHint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data.\nAnswer:\nI attempted to predict the Milk feature. The reported score was 0.397. While there is some correlation, it is not strong -- 39.7% of the variation is explained using the other features. 
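For context, the score reported by `regressor.score` is the coefficient of determination, which compares the model's squared error against a constant predictor that always outputs the mean; that is why it can dip below zero. A tiny numpy illustration (unrelated to the customer data, written for modern Python):

```python
import numpy as np

def r2(y_true, y_pred):
    # 1 - (residual sum of squares) / (total sum of squares)
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
print(r2(y, y))                                # 1.0 for a perfect fit
print(r2(y, np.full(4, y.mean())))             # 0.0 for predicting the mean
print(r2(y, np.array([4.0, 3.0, 2.0, 1.0])))   # -3.0: worse than the mean
```

So a score of 0.397 means the other features explain 39.7% of Milk's variance beyond what the mean alone would.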
This feature may be useful in identifying customers' spending habits, though clearly not as important as some of the other features (Fresh, Frozen, and Delicatessen).\nVisualize Feature Distributions\nTo get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.", "# Produce a scatter matrix for each pair of features in the data\npd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');", "Question 3\nAre there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?\nHint: Is the data normally distributed? Where do most of the data points lie? \nAnswer:\nThe following pairs show some correlation:\nMilk & Grocery<br>\nMilk & Detergents_Paper<br>\nDetergents_Paper & Grocery\nMilk appears to be correlated with both Grocery and Detergents_Paper, which agrees with the suspicion that Milk is not completely necessary. \nNone of the data appears to be normally distributed, they are all skewed right. They are all centered around values < 10,000 with the exception of Fresh which is centered around 12,000. 
I see no distinction in terms of distribution between features that appear to be correlated with other features and those that don't.\nData Preprocessing\nIn this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often a critical step in ensuring that results you obtain from your analysis are significant and meaningful.\nImplementation: Feature Scaling\nIf data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.\nIn the code block below, you will need to implement the following:\n - Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function for this.\n - Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again, use np.log.", "# TODO: Scale the data using the natural logarithm\nlog_data = np.log(data)\n\n# TODO: Scale the sample data using the natural logarithm\nlog_samples = np.log(samples)\n\n# Produce a scatter matrix for each pair of newly-transformed features\npd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');", "Observation\nAfter applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. 
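The effect of the transform can also be quantified rather than eyeballed: the sample skewness of a right-skewed variable collapses toward zero after taking logarithms. A standalone numpy sketch on synthetic lognormal "spending" data (not the actual dataset):

```python
import numpy as np

rng = np.random.default_rng(42)

def skewness(x):
    # third standardized moment; ~0 for symmetric data, > 0 for a right skew
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

spend = rng.lognormal(mean=9.0, sigma=1.0, size=5000)  # heavy right skew

print(skewness(spend) > 1.0)                # strongly right-skewed before
print(abs(skewness(np.log(spend))) < 0.2)   # roughly symmetric after np.log
```

The same statistic computed on each column before and after `np.log` would make the visual impression from the scatter matrix concrete.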
For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).\nRun the code below to see how the sample data has changed after having the natural logarithm applied to it.", "# Display the log-transformed sample data\ndisplay(log_samples)", "Implementation: Outlier Detection\nDetecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many \"rules of thumb\" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identifying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.\nIn the code block below, you will need to implement the following:\n - Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.\n - Assign the value of the 75th percentile for the given feature to Q3. 
Again, use np.percentile.\n - Assign the calculation of an outlier step for the given feature to step.\n - Optionally remove data points from the dataset by adding indices to the outliers list.\nNOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!\nOnce you have performed this implementation, the dataset will be stored in the variable good_data.", "features = log_data.columns\noutlierLimitDict = {}\noutlierDict = {}\n\n# For each feature find the data points with extreme high or low values\nfor feature in features:\n # TODO: Calculate Q1 (25th percentile of the data) for the given feature\n Q1 = np.percentile(log_data[feature], 25)\n \n # TODO: Calculate Q3 (75th percentile of the data) for the given feature\n Q3 = np.percentile(log_data[feature], 75)\n \n # TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)\n iqr = Q3 - Q1\n step = 1.5 * iqr\n outlierLimitDict[feature] = (Q1 - step, Q3 + step)\n \n # Display the outliers\n outliers = log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))]\n for index in outliers.index:\n originalCount = outlierDict.get(index, 0)\n outlierDict[index] = originalCount + 1\n print \"Data points considered outliers for the feature '{}':\".format(feature)\n display(outliers)\n \n# Print indices of rows that are outliers for multiple features \nfor index in sorted(outlierDict.keys()):\n if outlierDict[index] > 1:\n print(\"{:3}: {}\".format(index, outlierDict[index]))\n \n# OPTIONAL: Select the indices for data points you wish to remove\noutliers = []\n\n# Remove the outliers, if any were specified\ngood_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)\n\n# Make sure samples don't contain these indices\nfor index in sampleIndices:\n if index in outliers:\n raise Exception(\"The samples contain an outlier (index {})\".format(index))\n \ndef color_point(row):\n if row.name in outliers:\n 
return \"red\"\n    if row.name in outlierDict.keys():\n        return \"green\"\n    return \"black\"\n    \npd.scatter_matrix(log_data, figsize = (14,8), diagonal = 'kde', alpha=1, lw=0, c=log_data.apply(color_point, axis=1));\n", "Question 4\nAre there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why. \nAnswer:\nThere were several rows that were considered outliers for more than one feature:\nRow | Outlier in N features\n:---:|:----:\n65 | 2\n66 | 2\n75 | 2\n128 | 2\n154 | 3\nI've chosen to keep all outliers rather than discard them. All the data points are feasible and do not appear to be erroneous inputs. The presence of the outliers affects both the results and assumptions about the data, so it is not legitimate to drop them, as it would be hiding a trend that may exist.\nFeature Transformation\nIn this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.\nImplementation: PCA\nNow that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. 
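The explained variance ratio has a compact linear-algebra form: after centering, the squared singular values of the data matrix are proportional to the per-component variances. A numpy-only sketch on synthetic data, mirroring what `pca.explained_variance_ratio_` reports (this does not use the customer data):

```python
import numpy as np

rng = np.random.default_rng(0)
# two features, one carrying most of the variance
X = rng.normal(size=(200, 2)) * np.array([3.0, 0.5])

Xc = X - X.mean(axis=0)                   # PCA always centers first
s = np.linalg.svd(Xc, compute_uv=False)   # singular values, largest first
ratio = s ** 2 / (s ** 2).sum()           # explained variance ratio per component

print(np.isclose(ratio.sum(), 1.0))  # ratios sum to one
print(ratio[0] > 0.9)                # the dominant direction explains most variance
```

Summing the leading entries of `ratio`, as the cell below does with `pca.explained_variance_ratio_`, gives the cumulative variance captured by the first few components.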
Note that a component (dimension) from PCA can be considered a new \"feature\" of the space; however, it is a composition of the original features present in the data.\nIn the code block below, you will need to implement the following:\n - Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.\n - Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.", "from sklearn.decomposition import PCA\n\n# TODO: Apply PCA by fitting the good data with the same number of dimensions as features\nn = min(good_data.shape)\npca = PCA(n_components=n)\npca.fit(good_data)\n\n# TODO: Transform log_samples using the PCA fit above\npca_samples = pca.transform(log_samples)\n\n# Generate PCA results plot\npca_results = vs.pca_results(good_data, pca)\n\n\nfor i in range(1,n+1):\n print(\"The total variance explained by the first {} principal component{} is {}.\".format(\n i,\n \"\" if i == 1 else \"s\",\n sum(pca.explained_variance_ratio_[0:i])\n ))", "Question 5\nHow much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.\nHint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the individual feature weights.\nAnswer:\nThe total variance explained by the first 2 principal components is 0.718945231737.\nThe total variance explained by the first 4 principal components is 0.931295845055.\nIn each of the dimensions in the visualizations, the largest bar represents the most important or most meaningful feature. There can be multiple meaningful features in a single dimension.
If I was to label the first four newly generated dimensions, they would be as follows:\nDimension | Label | Explanation\n:---------:|:----------------:|:---------------------\n 1 | Consumer retail spending | There's a strong positive weight on Detergents_Paper and fairly strong positive weights on Milk and Grocery, which is in line with the type of spending that occurs at retail stores.\n 2 | Commercial food service | There are similarly strong positive weights on Fresh, Frozen, and Delicatessen. This indicates that the dimension is driven by spending on food-related products. Since the weight is solely on food-related products, it indicates that the spending is done by customers in the food industry, rather than general consumers.\n 3 | Health-conscious spending | There is a strong positive weight on Fresh and a strong negative weight on \"Delicatessen\". This suggests that the more fresh goods (healthy) that are purchased, the less dessert goods (unhealthy) are purchased.\n 4 | \"On-the-Go\"-style food | A large positive weight on Frozen, a mild positive weight on Detergents_Paper, and mild negative weight on Fresh indicate spending on goods that are used by households, but don't have a dedicated cook. The large negative weight on Delicatessen bolsters that assumption in that consumers who rely on pre-made meals don't necessarily eat junk food.\nObservation\nRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. 
Consider if this is consistent with your initial interpretation of the sample points.", "# Display sample log-data after having a PCA transformation applied\ndisplay(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))", "Implementation: Dimensionality Reduction\nWhen using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.\nIn the code block below, you will need to implement the following:\n - Assign the results of fitting PCA in two dimensions with good_data to pca.\n - Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.\n - Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.", "# TODO: Apply PCA by fitting the good data with only two dimensions\npca = PCA(n_components=2)\n\n# TODO: Transform the good data using the PCA fit above\nreduced_data = pca.fit_transform(good_data)\n\n# TODO: Transform log_samples using the PCA fit above\npca_samples = pca.transform(log_samples)\n\n# Create a DataFrame for the reduced data\nreduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])", "Observation\nRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions.
Observe how the values for the first two dimensions remain unchanged when compared to a PCA transformation in six dimensions.", "# Display sample log-data after applying PCA transformation in two dimensions\ndisplay(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))", "Visualizing a Biplot\nA biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.\nRun the code cell below to produce a biplot of the reduced-dimension data.", "# Create a biplot\nvs.biplot(good_data, reduced_data, pca)", "Observation\nOnce we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point in the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories. \nFrom the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?\nClustering\nIn this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale. \nQuestion 6\nWhat are the advantages to using a K-Means clustering algorithm?
What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?\nAnswer:\nK-Means clustering requires less computational power than the Gaussian Mixture Model (GMM). K-Means also does not make assumptions about the distribution of the data. GMM is more robust in that it allows soft clustering, where one point may belong to multiple clusters to varying degrees. This helps find hidden relationships in data. It is also less prone to falling into local minima, which is a problem for K-Means.\nThe data that we have observed up until this point suggests that data points may belong to multiple clusters. For example, if we look at the 2<sup>nd</sup> and 3<sup>rd</sup> principal components, we see that Delicatessen is a strong indicator in both. If we use K-Means, we would force data points into one cluster or the other, which could negatively impact the analyze-and-assign steps of the next iteration and change the shape of the clusters. On the other hand, if we use GMM, we give the data points the option of belonging to either cluster and don't allow it to affect the shape of the clusters. Because the data is overlapping like this, I'm opting to use GMM.\nImplementation: Creating Clusters\nDepending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the \"goodness\" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar).
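To make the trade-off concrete, here is a minimal side-by-side sketch of the two algorithms on synthetic blobs (illustrative only; it uses the modern sklearn.mixture.GaussianMixture API rather than the older GMM class, and the exact silhouette values depend on the made-up data):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

# Three well-separated blobs, so both algorithms should recover them
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=42)

# K-Means gives hard assignments only
km_labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

# A Gaussian mixture gives both hard labels and soft per-cluster probabilities
gmm = GaussianMixture(n_components=3, random_state=42).fit(X)
gmm_labels = gmm.predict(X)        # hard assignments
gmm_probs = gmm.predict_proba(X)   # soft assignments: one probability per cluster

print("k-means silhouette:", silhouette_score(X, km_labels))
print("GMM silhouette:", silhouette_score(X, gmm_labels))
print("soft memberships sum to 1:", np.allclose(gmm_probs.sum(axis=1), 1.0))
```

The soft `predict_proba` output is what makes GMM attractive for overlapping clusters: a borderline customer is not forced wholesale into a single segment.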
Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.\nIn the code block below, you will need to implement the following:\n - Fit a clustering algorithm to the reduced_data and assign it to clusterer.\n - Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.\n - Find the cluster centers using the algorithm's respective attribute and assign them to centers.\n - Predict the cluster for each sample data point in pca_samples and assign them to sample_preds.\n - Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.\n - Assign the silhouette score to score and print the result.", "from sklearn.mixture import GMM\nfrom sklearn.metrics import silhouette_score\n\n# TODO: Apply your clustering algorithm of choice to the reduced data \ndef getGmmSilhouetteScore(n, data, samples):\n clusterer = GMM(n_components=n, random_state=14)\n clusterer.fit(data)\n\n # TODO: Predict the cluster for each data point\n preds = clusterer.predict(data)\n\n # TODO: Find the cluster centers\n centers = clusterer.means_\n\n # TODO: Predict the cluster for each transformed sample data point\n sample_preds = clusterer.predict(samples)\n\n # TODO: Calculate the mean silhouette coefficient for the number of clusters chosen\n score = silhouette_score(data, preds)\n return score, centers, preds, sample_preds\n\nbestSilhouetteScoreN = 0\nbestSilhouetteScore = -1\nmaxN = 10\nfor n in range(2, maxN):\n score, _, _, _ = getGmmSilhouetteScore(n, reduced_data, pca_samples)\n if score > bestSilhouetteScore:\n bestSilhouetteScore = score\n bestSilhouetteScoreN = n\n print(\"Silhouette score for n={}: {}\".format(n, score))\n\nscore, centers, preds, sample_preds = getGmmSilhouetteScore(bestSilhouetteScoreN, reduced_data, pca_samples)\nprint(\"\")\nprint(\"Best n is {} with a silhouette score of {}.\".format(bestSilhouetteScoreN, score))\n ", "Question 7\nReport
the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score? \nSilhouette score for n=2: 0.316017379116\nSilhouette score for n=3: 0.375222595239\nSilhouette score for n=4: 0.333662047955\nSilhouette score for n=5: 0.257867358339\nSilhouette score for n=6: 0.262324563865\nSilhouette score for n=7: 0.313909530829\nSilhouette score for n=8: 0.295714674659\nSilhouette score for n=9: 0.32036621781\nBest n is 3 with a silhouette score of 0.375222595239.\nCluster Visualization\nOnce you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.\nIn the code block below, you will need to implement the following:\n - Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.\n - Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.", "# TODO: Inverse transform the centers\nlog_centers = pca.inverse_transform(centers)\n\n# TODO: Exponentiate the centers\ntrue_centers = np.exp(log_centers)\n\n# Display the true centers\nsegments = ['Segment {}'.format(i) for i in range(0,len(centers))]\ntrue_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())\ntrue_centers.index = segments\ndisplay(true_centers)", "Question 8\nConsider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?\nHint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'.\nAnswer:\nConsidering the median and quartiles for the statistical description (since the mean is convoluted due to one or more outliers), my observations are as follows:\nIn Segment 0, Milk, Grocery, Detergents_Paper all lie very close to the 75% quartile, indicating that customers segments in this cluster tend toward every-day-use products - something you would find in retail stores and department stores.\nIn Segment 1, Milk, Grocery, and Detergents_Paper lie close to the 25% quartile, and Frozen lies closer to the 75% quartile. The rest fall close to the median. 
Based on this, the customer segment appears to belong to establishments with emphasis on food and desserts, such as restaurants and cafes.\nIn Segment 2, Fresh, Frozen, and Delicatessen lie at or below the 25% quartile, and the rest follow below the median. This segment is characterized by low overall spending in all 6 categories. Milk and Grocery are relatively high, indicating that this may be a store which sells small amounts of everyday groceries, like small markets or pharmacies.\n| | Fresh | Milk | Grocery | Frozen | Detergents_Paper | Delicatessen\n| ----------:|:--------------:|:----------------:|:---------------:|:---------------:|:------------------:|:-------------:\n| count | 440.000000 | 440.000000 | 440.000000 | 440.000000 | 440.000000 | 440.000000\n| mean | 12000.297727 | 5796.265909 | 7951.277273 | 3071.931818 | 2881.493182 | 1524.870455\n| std | 12647.328865 | 7380.377175 | 9503.162829 | 4854.673333 | 4767.854448 | 2820.105937\n| min | 3.000000 | 55.000000 | 3.000000 | 25.000000 | 3.000000 | 3.000000\n| 25% | 3127.750000 | 1533.000000 | 2153.000000 | 742.250000 | 256.750000 | 408.250000\n| 50% | 8504.000000 | 3627.000000 | 4755.500000 | 1526.000000 | 816.500000 | 965.500000\n| 75% | 16933.750000 | 7190.250000 | 10655.750000 | 3554.250000 | 3922.000000 | 1820.250000\n| max | 112151.000000 | 73498.000000 | 92780.000000 | 60869.000000 | 40827.000000 | 47943.000000\nQuestion 9\nFor each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?\nRun the code block below to find which cluster each sample point is predicted to be in.", "# Display the predictions\nfor i, pred in enumerate(sample_preds):\n print \"Sample point\", i, \"predicted to be in Cluster\", pred", "Answer:\nYes, sample points 0 and 2 agree with my earlier predictions. Samples 0 and 1 have higher than median sales in Milk, Grocery, and Detergents_Paper which put them close to Cluster 0's centroid.
I didn't mention hotel as an establishment for Cluster 0, but according to my description, it fits. Similarly, Sample 2's sales in Frozen are above the median, while Milk, Grocery, and Detergents_Paper are below the median, placing it closer to Cluster 1's centroid.\n| | Index | Establishment | Fresh | Milk | Grocery | Frozen | Detergents_Paper | Delicatessen\n|:-------:|:---------:|:-------------------:|:-----:|:-----:|:---------:|:--------:|:-----------------:|:------------:\n| 0 | 47 | Large supermarket | 44466 | 54259 | 55571 | 7782 | 24171 | 6465\n| 1 | 309 | Hotel | 918 | 20655 | 13567 | 1465 | 6846 | 806\n| 2 | 287 | Restaurant | 15354 | 2102 | 2828 | 8366 | 386 | 1027\nConclusion\nIn this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.\nQuestion 10\nCompanies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?\nHint: Can we assume the change affects all customers equally?
How can we determine which group of customers it affects the most?\nAnswer:\nA change of delivery service from 5 days a week to 3 days a week would negatively affect customers who go through the products they buy quickly, such as fresh food which can't be stored in bulk for long periods of time. This might negatively impact restaurants, cafes, and fresh food markets. On the other hand, it would not affect large supermarkets, retail stores, and other similar customers who buy in bulk because the products are not time-sensitive. I don't think this segment would react positively, but more likely would be neutral to the change.\nThis information can be used to randomly sample from each of the two segments to receive useful feedback as to whether or not each segment would react positively to the change.\nQuestion 11\nAdditional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.\nHow can the wholesale distributor label the new customers using only their estimated product spending and the customer segment data?\nHint: A supervised learner could be used to train on the original customers. What would be the target variable?\nAnswer:\nWe could train a supervised learner using the 6 spending categories as the features and the customer segment as the label.
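A rough sketch of that idea (the synthetic spending profiles and the choice of a random-forest classifier are my own assumptions, not part of the project):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the annual spending data: two made-up customer
# segments with different mean spending across the six categories
rng = np.random.RandomState(0)
cols = ['Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', 'Delicatessen']
seg0 = rng.normal(loc=[3000, 7000, 9000, 1000, 4000, 1000], scale=500, size=(200, 6))
seg1 = rng.normal(loc=[12000, 2000, 3000, 4000, 500, 1500], scale=500, size=(200, 6))
spending = pd.DataFrame(np.vstack([seg0, seg1]), columns=cols)
segment = np.array([0] * 200 + [1] * 200)   # engineered 'customer segment' label

# Train a supervised learner on (spending, segment); the target variable is
# the segment each existing customer was clustered into
X_train, X_test, y_train, y_test = train_test_split(
    spending, segment, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# New customers can now be assigned a segment from spending estimates alone
print("held-out accuracy:", clf.score(X_test, y_test))
```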
Then we could input the new customers and have the learner classify them, assigning each a label.\nVisualizing Underlying Distributions\nAt the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.\nRun the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.", "# Display the clustering results based on 'Channel' data\nvs.channel_results(reduced_data, outliers, pca_samples)", "Question 12\nHow well does the clustering algorithm and number of clusters you've chosen compare to this underlying distribution of Hotel/Restaurant/Cafe customers to Retailer customers? Are there customer segments that would be classified as purely 'Retailers' or 'Hotels/Restaurants/Cafes' by this distribution? Would you consider these classifications as consistent with your previous definition of the customer segments?\nAnswer:\nThe clustering split the data very similarly to the existing split that is exhibited with the Hotel/Restaurant/Cafe and Retailer split. It additionally classified the sample points correctly, according to this underlying distribution. The number of clusters was consistent as well.\nAccording to this distribution, there seem to be several Hotel/Restaurant/Cafe points that lie together with the bulk of the Retailer data points, which indicates that there may be some mixture of the two in this region.
\nOverall, the classifications in this distribution are consistent with the customer segments identified via clustering.\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
urgedata/pythondata
fbprophet/.ipynb_checkpoints/fbprophet_metrics-checkpoint.ipynb
mit
[ "This notebook covers using metrics to analyze the 'accuracy' of prophet models. In this notebook, we will extend the previous example (http://pythondata.com/forecasting-time-series-data-prophet-part-3/).\nImport necessary libraries", "import pandas as pd\nimport numpy as np\nfrom fbprophet import Prophet\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error\n\n%matplotlib inline\n \nplt.rcParams['figure.figsize']=(20,10)\nplt.style.use('ggplot')", "Read in the data\nRead the data in from the retail sales CSV file in the examples folder then set the index to the 'date' column. We are also parsing dates in the data file.", "sales_df = pd.read_csv('../examples/retail_sales.csv', index_col='date', parse_dates=True)\n\nsales_df.head()", "Prepare for Prophet\nAs explained in previous prophet posts, for prophet to work, we need to change the names of these columns to 'ds' and 'y'.", "df = sales_df.reset_index()\n\ndf.head()", "Let's rename the columns as required by fbprophet. Additioinally, fbprophet doesn't like the index to be a datetime...it wants to see 'ds' as a non-index column, so we won't set an index differnetly than the integer index.", "df=df.rename(columns={'date':'ds', 'sales':'y'})\n\ndf.head()", "Now's a good time to take a look at your data. Plot the data using pandas' plot function", "df.set_index('ds').y.plot()", "Running Prophet\nNow, let's set prophet up to begin modeling our data using our promotions dataframe as part of the forecast\nNote: Since we are using monthly data, you'll see a message from Prophet saying Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this. 
This is OK since we are working with monthly data, but you can override this by passing weekly_seasonality=True in the instantiation of Prophet.", "model = Prophet(weekly_seasonality=True)\nmodel.fit(df);", "We've instantiated the model, now we need to build some future dates to forecast into.", "future = model.make_future_dataframe(periods=24, freq = 'm')\nfuture.tail()", "To forecast this future data, we need to run it through Prophet's model.", "forecast = model.predict(future)", "The resulting forecast dataframe contains quite a bit of data, but we really only care about a few columns. First, let's look at the full dataframe:", "forecast.tail()", "We really only want to look at yhat, yhat_lower and yhat_upper, so we can do that with:", "forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()", "Plotting Prophet results\nProphet has a plotting mechanism called plot. This plot functionality draws the original data (black dots), the model (blue line) and the error of the forecast (shaded blue area).", "model.plot(forecast);", "Personally, I'm not a fan of this visualization but I'm not going to build my own...you can see how I do that here:\nhttps://github.com/urgedata/pythondata/blob/master/fbprophet/fbprophet_part_one.ipynb.\nAdditionally, prophet lets us take a look at the components of our model, including the holidays.
This component plot is an important plot as it lets you see the components of your model including the trend and seasonality (identified in the yearly pane).", "model.plot_components(forecast);", "Now that we have our model, let's take a look at how it compares to our actual values using a few different metrics - R-Squared and Mean Squared Error (MSE).\nTo do this, we need to build a combined dataframe with yhat from the forecasts and the original 'y' values from the data.", "metric_df = forecast.set_index('ds')[['yhat']].join(df.set_index('ds').y).reset_index()\n\nmetric_df.tail()", "You can see from the above, that the last part of the dataframe has \"NaN\" for 'y'...that's fine because we are only concerned about checking the forecast values versus the actual values so we can drop these \"NaN\" values.", "metric_df.dropna(inplace=True)\n\nmetric_df.tail()", "Now let's take a look at our R-Squared value", "r2_score(metric_df.y, metric_df.yhat)", "An r-squared value of 0.99 is amazing (and probably too good to be true, which tells me this data is most likely overfit).", "mean_squared_error(metric_df.y, metric_df.yhat)", "That's a large MSE value...and confirms my suspicion that this data is overfit and won't likely hold up well into the future. Remember...for MSE, closer to zero is better.\nNow...let's see what the Mean Absolute Error (MAE) looks like.", "mean_absolute_error(metric_df.y, metric_df.yhat)", "Not good. Not good at all. BUT...the purpose of this particular post is to show some usage of R-Squared, MAE and MSE as metrics and I think we've done that. \nI can tell you from experience that part of the problem with this particular data is that it's monthly and there aren't that many data points to start with (only 72 data points...not ideal for modeling)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/ja/tutorials/text/nmt_with_attention.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "アテンションを用いたニューラル機械翻訳\nNote: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳はベストエフォートであるため、この翻訳が正確であることや英語の公式ドキュメントの 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリtensorflow/docsにプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 docs-ja@tensorflow.org メーリングリストにご連絡ください。\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/text/nmt_with_attention\">\n <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />\n View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/text/nmt_with_attention.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/text/nmt_with_attention.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/text/nmt_with_attention.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nこのノートブックでは、スペイン語から英語への翻訳を行う Sequence to Sequence (seq2seq) 
モデルを訓練します。このチュートリアルは、 Sequence to Sequence モデルの知識があることを前提にした上級編のサンプルです。\nこのノートブックのモデルを訓練すると、\"¿todavia estan en casa?\" のようなスペイン語の文を入力して、英訳: \"are you still at home?\" を得ることができます。\nこの翻訳品質はおもちゃとしてはそれなりのものですが、生成されたアテンションの図表の方が面白いかもしれません。これは、翻訳時にモデルが入力文のどの部分に注目しているかを表しています。\n<img src=\"https://tensorflow.org/images/spanish-english.png\" alt=\"spanish-english attention plot\">\nNote: このサンプルは P100 GPU 1基で実行した場合に約 10 分かかります。", "import tensorflow as tf\n\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\nfrom sklearn.model_selection import train_test_split\n\nimport unicodedata\nimport re\nimport numpy as np\nimport os\nimport io\nimport time", "データセットのダウンロードと準備\nここでは、http://www.manythings.org/anki/ で提供されている言語データセットを使用します。このデータセットには、次のような書式の言語翻訳ペアが含まれています。\nMay I borrow this book? ¿Puedo tomar prestado este libro?\nさまざまな言語が用意されていますが、ここでは英語ースペイン語のデータセットを使用します。利便性を考えてこのデータセットは Google Cloud 上に用意してありますが、ご自分でダウンロードすることも可能です。データセットをダウンロードしたあと、データを準備するために下記のようないくつかの手順を実行します。\n\nそれぞれの文ごとに、開始 と 終了 のトークンを付加する\n特殊文字を除去して文をきれいにする\n単語インデックスと逆単語インデックス(単語 → id と id → 単語のマッピングを行うディクショナリ)を作成する\n最大長にあわせて各文をパディングする", "# ファイルのダウンロード\npath_to_zip = tf.keras.utils.get_file(\n 'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',\n extract=True)\n\npath_to_file = os.path.dirname(path_to_zip)+\"/spa-eng/spa.txt\"\n\n# ユニコードファイルを ascii に変換\ndef unicode_to_ascii(s):\n return ''.join(c for c in unicodedata.normalize('NFD', s)\n if unicodedata.category(c) != 'Mn')\n\n\ndef preprocess_sentence(w):\n w = unicode_to_ascii(w.lower().strip())\n\n # 単語とそのあとの句読点の間にスペースを挿入\n # 例: \"he is a boy.\" => \"he is a boy .\"\n # 参照:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation\n w = re.sub(r\"([?.!,¿])\", r\" \\1 \", w)\n w = re.sub(r'[\" \"]+', \" \", w)\n\n # (a-z, A-Z, \".\", \"?\", \"!\", \",\") 以外の全ての文字をスペースに置き換え\n w = re.sub(r\"[^a-zA-Z?.!,¿]+\", \" \", w)\n\n w = 
w.rstrip().strip()\n\n # adding a start and an end token to the sentence\n # so that the model knows when to start and stop predicting\n w = '<start> ' + w + ' <end>'\n return w\n\nen_sentence = u\"May I borrow this book?\"\nsp_sentence = u\"¿Puedo tomar prestado este libro?\"\nprint(preprocess_sentence(en_sentence))\nprint(preprocess_sentence(sp_sentence).encode('utf-8'))\n\n# 1. Remove the accents\n# 2. Clean the sentences\n# 3. Return word pairs in the format: [ENGLISH, SPANISH]\ndef create_dataset(path, num_examples):\n lines = io.open(path, encoding='UTF-8').read().strip().split('\\n')\n\n word_pairs = [[preprocess_sentence(w) for w in l.split('\\t')] for l in lines[:num_examples]]\n\n return zip(*word_pairs)\n\nen, sp = create_dataset(path_to_file, None)\nprint(en[-1])\nprint(sp[-1])\n\ndef max_length(tensor):\n return max(len(t) for t in tensor)\n\ndef tokenize(lang):\n lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(\n filters='')\n lang_tokenizer.fit_on_texts(lang)\n\n tensor = lang_tokenizer.texts_to_sequences(lang)\n\n tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,\n padding='post')\n\n return tensor, lang_tokenizer\n\ndef load_dataset(path, num_examples=None):\n # creating cleaned input, output pairs\n targ_lang, inp_lang = create_dataset(path, num_examples)\n\n input_tensor, inp_lang_tokenizer = tokenize(inp_lang)\n target_tensor, targ_lang_tokenizer = tokenize(targ_lang)\n\n return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer", "Limit the size of the dataset to experiment faster (optional)\nTraining on the complete dataset of more than 100,000 sentences takes a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data).", "# Try experimenting with the size of that dataset\nnum_examples = 30000\ninput_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples)\n\n# Calculate the max_length of the target tensors\nmax_length_targ, max_length_inp = max_length(target_tensor), max_length(input_tensor)\n\n# Creating training and validation sets using an 80-20 split\ninput_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)\n\n# 
Show the lengths\nprint(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))\n\ndef convert(lang, tensor):\n for t in tensor:\n if t!=0:\n print (\"%d ----> %s\" % (t, lang.index_word[t]))\n\nprint (\"Input Language; index to word mapping\")\nconvert(inp_lang, input_tensor_train[0])\nprint ()\nprint (\"Target Language; index to word mapping\")\nconvert(targ_lang, target_tensor_train[0])", "Create a tf.data dataset", "BUFFER_SIZE = len(input_tensor_train)\nBATCH_SIZE = 64\nsteps_per_epoch = len(input_tensor_train)//BATCH_SIZE\nembedding_dim = 256\nunits = 1024\nvocab_inp_size = len(inp_lang.word_index)+1\nvocab_tar_size = len(targ_lang.word_index)+1\n\ndataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)\ndataset = dataset.batch(BATCH_SIZE, drop_remainder=True)\n\nexample_input_batch, example_target_batch = next(iter(dataset))\nexample_input_batch.shape, example_target_batch.shape", "Write the encoder and decoder model\nImplement an encoder-decoder model with attention as described in TensorFlow's Neural Machine Translation (seq2seq) tutorial. This example uses a more recent set of APIs. This notebook implements the attention equations from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism, which is then used by the decoder to predict the next word in the sentence. The image and formulas below are an example of the attention mechanism from Luong's paper.\n<img src=\"https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg\" width=\"500\" alt=\"attention mechanism\">\nThe input is put through an encoder model, which gives us the encoder output of shape (batch_size, max_length, hidden_size) and the encoder hidden state of shape (batch_size, hidden_size).\nHere are the equations that are implemented:\n<img src=\"https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg\" alt=\"attention equation 0\" width=\"800\">\n<img src=\"https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg\" alt=\"attention equation 1\" width=\"800\">\nThis tutorial uses Bahdanau attention for the encoder. Let's decide on notation before writing the simplified form:\n\nFC = Fully connected (dense) layer\nEO = Encoder output\nH = hidden state\nX = input to the decoder\n\nThe pseudo-code is as follows:\n\nscore = FC(tanh(FC(EO) + 
FC(H)))\nattention weights = softmax(score, axis = 1). Softmax is applied to the last axis by default, but here we want to apply it to the first axis, since the shape of score is (batch_size, max_length, hidden_size). max_length is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.\ncontext vector = sum(attention weights * EO, axis = 1). We set axis = 1 for the same reason as above.\nembedding output = the input X to the decoder is passed through an embedding layer.\nmerged vector = concat(embedding output, context vector)\nThis merged vector is then given to the GRU.\n\nThe shapes of the vectors at each step are specified in the comments in the code:", "class Encoder(tf.keras.Model):\n def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):\n super(Encoder, self).__init__()\n self.batch_sz = batch_sz\n self.enc_units = enc_units\n self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n self.gru = tf.keras.layers.GRU(self.enc_units,\n return_sequences=True,\n return_state=True,\n recurrent_initializer='glorot_uniform')\n\n def call(self, x, hidden):\n x = self.embedding(x)\n output, state = self.gru(x, initial_state = hidden)\n return output, state\n\n def initialize_hidden_state(self):\n return tf.zeros((self.batch_sz, self.enc_units))\n\nencoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)\n\n# sample input\nsample_hidden = encoder.initialize_hidden_state()\nsample_output, sample_hidden = encoder(example_input_batch, sample_hidden)\nprint ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))\nprint ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))\n\nclass BahdanauAttention(tf.keras.layers.Layer):\n def __init__(self, units):\n super(BahdanauAttention, self).__init__()\n self.W1 = tf.keras.layers.Dense(units)\n self.W2 = tf.keras.layers.Dense(units)\n self.V = tf.keras.layers.Dense(1)\n\n def call(self, query, values):\n # hidden shape == (batch_size, hidden size)\n # hidden_with_time_axis shape == (batch_size, 1, hidden size)\n # we are doing this to perform addition when calculating the score\n hidden_with_time_axis = tf.expand_dims(query, 1)\n\n 
# score shape == (batch_size, max_length, 1)\n # we get 1 at the last axis because we are applying score to self.V\n # the shape of the tensor before applying self.V is (batch_size, max_length, units)\n score = self.V(tf.nn.tanh(\n self.W1(values) + self.W2(hidden_with_time_axis)))\n\n # attention_weights shape == (batch_size, max_length, 1)\n attention_weights = tf.nn.softmax(score, axis=1)\n\n # context_vector shape after sum == (batch_size, hidden_size)\n context_vector = attention_weights * values\n context_vector = tf.reduce_sum(context_vector, axis=1)\n\n return context_vector, attention_weights\n\nattention_layer = BahdanauAttention(10)\nattention_result, attention_weights = attention_layer(sample_hidden, sample_output)\n\nprint(\"Attention result shape: (batch size, units) {}\".format(attention_result.shape))\nprint(\"Attention weights shape: (batch_size, sequence_length, 1) {}\".format(attention_weights.shape))\n\nclass Decoder(tf.keras.Model):\n def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):\n super(Decoder, self).__init__()\n self.batch_sz = batch_sz\n self.dec_units = dec_units\n self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n self.gru = tf.keras.layers.GRU(self.dec_units,\n return_sequences=True,\n return_state=True,\n recurrent_initializer='glorot_uniform')\n self.fc = tf.keras.layers.Dense(vocab_size)\n\n # used for attention\n self.attention = BahdanauAttention(self.dec_units)\n\n def call(self, x, hidden, enc_output):\n # enc_output shape == (batch_size, max_length, hidden_size)\n context_vector, attention_weights = self.attention(hidden, enc_output)\n\n # x shape after passing through embedding == (batch_size, 1, embedding_dim)\n x = self.embedding(x)\n\n # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)\n x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)\n\n # passing the concatenated vector to the GRU layer\n output, state = self.gru(x)\n\n # output shape == (batch_size * 1, hidden_size)\n output = tf.reshape(output, (-1, output.shape[2]))\n\n # output shape == (batch_size, vocab)\n 
x = self.fc(output)\n\n return x, state, attention_weights\n\ndecoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)\n\nsample_decoder_output, _, _ = decoder(tf.random.uniform((64, 1)),\n sample_hidden, sample_output)\n\nprint ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))", "Define the optimizer and the loss function", "optimizer = tf.keras.optimizers.Adam()\nloss_object = tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True, reduction='none')\n\ndef loss_function(real, pred):\n mask = tf.math.logical_not(tf.math.equal(real, 0))\n loss_ = loss_object(real, pred)\n\n mask = tf.cast(mask, dtype=loss_.dtype)\n loss_ *= mask\n\n return tf.reduce_mean(loss_)", "Checkpoints (object-based saving)", "checkpoint_dir = './training_checkpoints'\ncheckpoint_prefix = os.path.join(checkpoint_dir, \"ckpt\")\ncheckpoint = tf.train.Checkpoint(optimizer=optimizer,\n encoder=encoder,\n decoder=decoder)", "Training\n\nPass the input through the encoder, which returns the encoder output and the encoder hidden state\nThe encoder output, the encoder hidden state and the decoder input (which is the start token) are passed to the decoder\nThe decoder returns the predictions and the decoder hidden state\nThe decoder hidden state is then passed back into the model and the predictions are used to calculate the loss\nTeacher forcing is used to decide the next input to the decoder\nTeacher forcing is the technique where the target word is passed as the next input to the decoder\nThe final step is to calculate the gradients, give them to the optimizer and backpropagate", "@tf.function\ndef train_step(inp, targ, enc_hidden):\n loss = 0\n\n with tf.GradientTape() as tape:\n enc_output, enc_hidden = encoder(inp, enc_hidden)\n\n dec_hidden = enc_hidden\n\n dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)\n\n # Teacher forcing - feeding the target as the next input\n for t in range(1, targ.shape[1]):\n # passing enc_output to the decoder\n predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)\n\n loss += loss_function(targ[:, t], predictions)\n\n # using teacher forcing\n dec_input = tf.expand_dims(targ[:, t], 1)\n\n batch_loss = (loss / int(targ.shape[1]))\n\n variables = encoder.trainable_variables + decoder.trainable_variables\n\n gradients = tape.gradient(loss, variables)\n\n 
optimizer.apply_gradients(zip(gradients, variables))\n\n return batch_loss\n\nEPOCHS = 10\n\nfor epoch in range(EPOCHS):\n start = time.time()\n\n enc_hidden = encoder.initialize_hidden_state()\n total_loss = 0\n\n for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):\n batch_loss = train_step(inp, targ, enc_hidden)\n total_loss += batch_loss\n\n if batch % 100 == 0:\n print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,\n batch,\n batch_loss.numpy()))\n # saving (checkpointing) the model every 2 epochs\n if (epoch + 1) % 2 == 0:\n checkpoint.save(file_prefix = checkpoint_prefix)\n\n print('Epoch {} Loss {:.4f}'.format(epoch + 1,\n total_loss / steps_per_epoch))\n print('Time taken for 1 epoch {} sec\\n'.format(time.time() - start))", "Translate\n\nThe evaluate function is similar to the training loop, except that we don't use teacher forcing here. The input to the decoder at each time step is its previous prediction, along with the hidden state and the encoder output.\nStop predicting when the model predicts the end token.\nAnd store the attention weights for every time step.\n\nNote: The encoder output is calculated only once for one input.", "def evaluate(sentence):\n attention_plot = np.zeros((max_length_targ, max_length_inp))\n\n sentence = preprocess_sentence(sentence)\n\n inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]\n inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],\n maxlen=max_length_inp,\n padding='post')\n inputs = tf.convert_to_tensor(inputs)\n\n result = ''\n\n hidden = [tf.zeros((1, units))]\n enc_out, enc_hidden = encoder(inputs, hidden)\n\n dec_hidden = enc_hidden\n dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)\n\n for t in range(max_length_targ):\n predictions, dec_hidden, attention_weights = decoder(dec_input,\n dec_hidden,\n enc_out)\n\n # storing the attention weights to plot later on\n attention_weights = tf.reshape(attention_weights, (-1, ))\n attention_plot[t] = attention_weights.numpy()\n\n predicted_id = tf.argmax(predictions[0]).numpy()\n\n result += targ_lang.index_word[predicted_id] + ' '\n\n if targ_lang.index_word[predicted_id] == '<end>':\n return result, sentence, attention_plot\n\n # the predicted ID 
is fed back into the model\n dec_input = tf.expand_dims([predicted_id], 0)\n\n return result, sentence, attention_plot\n\n# function for plotting the attention weights\ndef plot_attention(attention, sentence, predicted_sentence):\n fig = plt.figure(figsize=(10,10))\n ax = fig.add_subplot(1, 1, 1)\n ax.matshow(attention, cmap='viridis')\n\n fontdict = {'fontsize': 14}\n\n ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)\n ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)\n\n ax.xaxis.set_major_locator(ticker.MultipleLocator(1))\n ax.yaxis.set_major_locator(ticker.MultipleLocator(1))\n\n plt.show()\n\ndef translate(sentence):\n result, sentence, attention_plot = evaluate(sentence)\n\n print('Input: %s' % (sentence))\n print('Predicted translation: {}'.format(result))\n\n attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]\n plot_attention(attention_plot, sentence.split(' '), result.split(' '))", "Restore the latest checkpoint and test", "# restoring the latest checkpoint in checkpoint_dir\ncheckpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))\n\ntranslate(u'hace mucho frio aqui.')\n\ntranslate(u'esta es mi vida.')\n\ntranslate(u'¿todavia estan en casa?')\n\n# an example of a wrong translation\ntranslate(u'trata de averiguarlo.')", "Next steps\n\nDownload a different dataset and experiment with translations, for example English to German, or English to French.\nExperiment with training on a larger dataset, or training for more epochs." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
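The Bahdanau attention pseudo-code in the notebook above (score = FC(tanh(FC(EO) + FC(H))), softmax over the time axis, weighted sum as the context vector) can be sketched without TensorFlow. This is a toy scalar version for illustration only: `w1`, `w2` and `v` are made-up scalar weights standing in for the Dense layers, and the hidden size is collapsed to 1.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def bahdanau_context(enc_outputs, w1, w2, v, hidden):
    """Scalar sketch of the pseudo-code:
    score_t = v * tanh(w1 * EO_t + w2 * H)
    weights  = softmax(scores)        (over the time axis)
    context  = sum_t weights_t * EO_t
    """
    scores = [v * math.tanh(w1 * eo + w2 * hidden) for eo in enc_outputs]
    weights = softmax(scores)
    context = sum(w * eo for w, eo in zip(weights, enc_outputs))
    return context, weights

# three encoder outputs for a 3-step input sequence
context, weights = bahdanau_context([0.5, -1.0, 2.0], w1=0.3, w2=0.2, v=1.5, hidden=0.7)
```

Because the weights come out of a softmax, they are positive and sum to one, so the context vector is a convex combination of the encoder outputs.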
dipanjank/ml
text_classification_and_clustering/step_2_classification_of_sample_dataset.ipynb
gpl-3.0
[ "<h1 align=\"center\">Level and Group Classification on Small Sample Dataset</h1>\n\nWe have two classification tasks:\n\nPredict the level, which ranges from 1-16.\nPredict the group of a given text, given this mapping from levels to group:\nLevels 1-3 = Group A1\nLevels 4-6 = Group A2\nLevels 7-9 = Group B1\nLevels 10-12 = Group B2\nLevels 13-15 = Group C1\nLevel 16 = Group C2", "%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\n\nimport pandas as pd \nimport numpy as np\nimport seaborn as sns", "Map Level to Group\nHere we load the saved pickle file containing the DataFrame for the entire dataset. Then we map the level column to group so we have the target labels for both classification tasks.", "raw_input = pd.read_pickle('input.pkl')\n\ngp_mapper = {\n 1: 'A1', 2: 'A1', 3: 'A1',\n 4: 'A2', 5: 'A2', 6: 'A2',\n 7: 'B1', 8: 'B1', 9: 'B1',\n 10: 'B2', 11: 'B2', 12: 'B2',\n 13: 'C1', 14: 'C1', 15: 'C1',\n 16: 'C2'\n}\n\nraw_input = raw_input.assign(group=raw_input.level.map(gp_mapper)) \n\nraw_input.info()\n\nraw_input.head()", "Train-test Split\nHere, we split the raw input into train (80%) and test (20%) sets. 
From the train set, we take 1000 samples for each level to construct a small sample dataset that we can experiment quickly on.", "from sklearn.model_selection import train_test_split\n\n# Split the index of the `raw_input` DataFrame into train and test, then use them to split the DataFrame.\ntrain_idx, test_idx = train_test_split( \n raw_input.index, \n test_size=0.2,\n stratify=raw_input.level, \n shuffle=True,\n random_state=0)\n\ntrain_df, test_df = raw_input.loc[train_idx], raw_input.loc[test_idx]\ntrain_df.to_pickle('train_full.pkl')\ntest_df.to_pickle('test.pkl')\n\n# Small sample dataset from the train set using 1000 elements per level\ntrain_df_small = train_df.groupby('level').apply(lambda g: g.sample(n=1000, replace=False, random_state=1234))\ntrain_df_small.index = train_df_small.index.droplevel(0)\n\ntrain_df_small.to_pickle('train_small.pkl')", "For the rest of this notebook, we use the small sample dataset as input.", "raw_input = pd.read_pickle('train_small.pkl')", "Check for Class Imbalance", "level_counts = raw_input.level.value_counts().sort_index()\ngroup_counts = raw_input.group.value_counts().sort_index()\n\n_, ax = plt.subplots(1, 2, figsize=(10, 5))\n\n_ = level_counts.plot(kind='bar', title='Counts per Level', ax=ax[0], rot=0)\n_ = group_counts.plot(kind='bar', title='Counts per Group', ax=ax[1], rot=0)\n\nplt.tight_layout()", "Level Classification Based on Text", "import nltk\nnltk.download('stopwords')\nnltk.download('punkt')\nfrom nltk.corpus import stopwords\n\nen_stopwords = set(stopwords.words('english'))\nprint(en_stopwords)\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import cross_val_predict, StratifiedKFold\n\n\ndef classify_v1(input_df, target_label='level'):\n \"\"\"\n Build a classifier for the `target_label` column in the DataFrame `input_df` 
using the `text` column. \n Return the (labels, predicted_labels) tuple. \n Use a 10-fold Stratified K-fold cross-validator to generate the out-of-sample predictions.\"\"\"\n \n assert target_label in input_df.columns\n \n counter = TfidfVectorizer(\n ngram_range=(1, 2), \n stop_words=en_stopwords, \n max_df=0.4, \n min_df=25, \n max_features=3000, \n sublinear_tf=True\n )\n\n scaler = StandardScaler(with_mean=False)\n model = LogisticRegression(penalty='l2', max_iter=200, random_state=4321)\n pipeline = make_pipeline(counter, scaler, model)\n\n cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1234)\n\n # use the function arguments rather than the global `raw_input`,\n # so that `target_label` is actually honoured\n X = input_df.text\n y = input_df.loc[:, target_label]\n y_pred = cross_val_predict(pipeline, X=X.values, y=y.values, cv=cv, n_jobs=16, verbose=2)\n y_pred = pd.Series(index=input_df.index.copy(), data=y_pred)\n \n return y.copy(), y_pred\n\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\n\n\ndef display_results(y, y_pred):\n \"\"\"Given some predictions y_pred for a target label y, \n display the precision/recall/f1 score and the confusion matrix.\"\"\"\n \n report = classification_report(y_pred, y)\n print(report)\n\n level_values = y.unique()\n level_values.sort()\n cm = confusion_matrix(y_true=y, y_pred=y_pred.values, labels=level_values)\n cm = pd.DataFrame(index=level_values, columns=level_values, data=cm)\n\n fig, ax = plt.subplots(1, 1, figsize=(10, 8))\n ax = sns.heatmap(cm, annot=True, ax=ax, fmt='d')\n\n%%time\nlevels, levels_predicted = classify_v1(raw_input, target_label='level')\ndisplay_results(levels, levels_predicted)", "Misclassifications\nHere we look at the misclassified samples to try to understand why the tf-idf model doesn't work for them.", "# assign the predicted level as a column to the input data\ninput_with_preds = raw_input.assign(level_predicted=levels_predicted)\ninput_with_preds.head()", "We can identify the misclassified examples by the condition\ninput_with_preds.level != 
input_with_preds.level_predicted\n\nFor these rows, we want to identify the pair ('level', 'level_predicted') that produces the greatest number of mismatches, because addressing these will produce the biggest improvement in the overall score.", "misclassifications = input_with_preds[input_with_preds.level != input_with_preds.level_predicted]\nm_counts = misclassifications.groupby(by=['level', 'level_predicted'])['text'].count()\nm_counts.sort_values(ascending=False).head(8)", "As an example, we investigate the misclassifications between levels 7 and 8.", "cond = (misclassifications.level.isin([7, 8])) & (misclassifications.level_predicted.isin([7, 8]))\nmis_sample = misclassifications.loc[cond, ['topic_text', 'topic_id', 'text', 'level', 'level_predicted']]\nmis_sample.groupby(['topic_id', 'topic_text', 'level', 'level_predicted'])['text'].count().sort_values(ascending=False)", "So, most of the misclassifications for true level 7 occur for the topic \"Planning for the future\", whereas for level 8, it is \n\"Making a 'to do' list of your dreams\". Intuitively, this makes sense. 
These two topics are similar, so the word frequency distributions could very well be similar.\nNext we extract wordcount tf-idf matrices for a subset of these articles and compare different aspects of them.", "from sklearn.feature_extraction.text import CountVectorizer\n\ndef calc_bow_matrix_for_topic_id(df, topic_id, limit=5):\n \"\"\"Return a dense DataFrame of Word counts with words as index, article IDs as columns.\"\"\"\n all_texts = df[df.topic_id == topic_id].text.head(limit)\n\n cv = CountVectorizer(stop_words=en_stopwords)\n t = cv.fit_transform(all_texts.values)\n words = cv.get_feature_names()\n\n tf_idf_matrix = pd.DataFrame(index=all_texts.index.copy(), columns=words, data=t.todense()).T\n return tf_idf_matrix\n\ntid_50, tid_59 = map(lambda x: calc_bow_matrix_for_topic_id(mis_sample, x), [50, 59])\n\ntid_50.head(20)\n\ntid_59.head(20)\n\nuncommon_words = tid_50.index.symmetric_difference(tid_59.index).tolist()\nprint(uncommon_words)", "So the word count matrix is extremely sparse and a fair amount of words only appear in one set of articles and not the other. Based on that, I concluded that the presence / absence of rare words could be a better indicator of level instead of tf-idf. \nNext we re-run the model evaluation step using binary valued features indicating presence / absence.\nImproving the Model", "from sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import cross_val_predict, StratifiedKFold\n\n\ndef classify_v2(input_df, target_label='level'):\n \"\"\"\n Build a classifier for the `target_label` column in the DataFrame `input_df` using the `text` column. \n Return the (labels, predicted_labels) tuple. 
\n Use a 10-fold Stratified K-fold cross-validator to generate the out-of-sample predictions.\"\"\"\n \n assert target_label in input_df.columns\n \n counter = CountVectorizer(\n lowercase=True, \n stop_words=en_stopwords, \n ngram_range=(1, 1), \n min_df=5,\n max_df=0.4,\n binary=True)\n\n model = LogisticRegression(\n penalty='l2', \n max_iter=200, \n multi_class='multinomial', \n solver='lbfgs', \n verbose=True,\n random_state=4321)\n \n pipeline = make_pipeline(counter, model)\n cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1234)\n\n X = input_df.text\n y = input_df.loc[:, target_label]\n y_pred = cross_val_predict(pipeline, X=X.values, y=y.values, cv=cv, n_jobs=10, verbose=2)\n y_pred = pd.Series(index=raw_input.index.copy(), data=y_pred)\n\n return y.copy(), y_pred\n\n%%time\nlevels, levels_predicted = classify_v2(raw_input, target_label='level')\ndisplay_results(levels, levels_predicted)", "Using binary features the composite f1-score has improved to 0.87 from 0.85.\nGroup Classification Based on Text\nThis is essentially the same classification problem as the level classification but with collapsed categories, so my intuition is that the same feature-classifier combination will work well and should produce slightly better performance.", "%%time\n\ngroups, groups_predicted = classify_v2(raw_input, target_label='group')\ndisplay_results(groups, groups_predicted)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
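The notebook above improves its score by switching from tf-idf counts to binary presence/absence features (`CountVectorizer(binary=True)`). The idea can be sketched in plain Python; this is a toy stand-in, not scikit-learn's actual implementation, using simple lowercase whitespace tokenization:

```python
def binary_bow(docs):
    """Toy presence/absence bag-of-words, mimicking the effect of
    CountVectorizer(binary=True): each feature is 1 if the word occurs
    in the document at all, 0 otherwise."""
    # build a sorted vocabulary over all documents
    vocab = sorted({w for doc in docs for w in doc.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    rows = []
    for doc in docs:
        row = [0] * len(vocab)
        for w in set(doc.lower().split()):  # set() -> repeats still coded as 1
            row[index[w]] = 1
        rows.append(row)
    return vocab, rows

vocab, X = binary_bow(["the cat sat", "the cat cat ran"])
```

Note that the repeated word "cat" in the second document is still encoded as 1, which is exactly the property that made rare-word presence a better signal than raw frequency in the analysis above.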
katelynneese/dmdd
.ipynb_checkpoints/Testing Simulation_AM-checkpoint.ipynb
mit
[ "import dmdd\nimport numpy as np\n%matplotlib inline\n\n\nreload(dmdd)", "This cell demonstrates that dRdQ_AM has the same value as dRdQ at day 0 and day 365, when v_lag = 220.", "print dmdd.dRdQ_AM(Q = [100.], sigma_si = 75.5)\nprint dmdd.rate_UV.dRdQ(Q = np.asarray([100.]), sigma_si = 75.5)\nprint dmdd.dRdQ_AM(Q = [100.], sigma_si = 75.5, time = 365)\n\n", "This cell demonstrates that rate_UV.dRdQ can receive multiple Qs and return them as an array.\nThe following cell shows the same thing, but for dRdQ_AM.", "dmdd.rate_UV.dRdQ(Q = np.array([50., 60., 70., 80., 90., 100.]), sigma_si = 75.5)\n\n# demonstrates that rate_UV.dRdQ can take multiple Q's\n\ndmdd.dRdQ_AM(Q = [10., 30., 50., 100.], sigma_si = 75.5, time = 0)\n# demonstrates that dRdQ_AM can take multiple Q's as a list", "This cell shows the relative progression of the rate over various times for fixed parameters.", "print dmdd.dRdQ_AM(sigma_si = 75.5, time = 50)\nprint dmdd.dRdQ_AM(sigma_si = 75.5, time = 100)\nprint dmdd.dRdQ_AM(sigma_si = 75.5, time = 150)\nprint dmdd.dRdQ_AM(sigma_si = 75.5, time = 200)\nprint dmdd.dRdQ_AM(sigma_si = 75.5, time = 250)\nprint dmdd.dRdQ_AM(sigma_si = 75.5, time = 300)\nprint dmdd.dRdQ_AM(sigma_si = 75.5, time = 350)", "Demonstration of the integral function", "dmdd.integral(1., 100., 0., 365., sigma_si = 75.5)", "Testing the integral function on a known integral with an exact value of 1/6.", "def funct1(Q, time, sigma_si, sigma_anapole, mass, element, v_amplitude):\n return (Q**2)*(time)\n# has to be defined like this due to the way integral is defined, but still returns the correct answer\n\ndmdd.integral(0, 1, 0, 1, function = funct1)", "Model and experiment to be used in simulations", "# shortcut for scattering models corresponding to rates coded in rate_UV:\nanapole_model = dmdd.UV_Model('Anapole', ['mass','sigma_anapole'])\nSI_model = dmdd.UV_Model('SI', ['mass','sigma_si'])\n\nprint 'model: {}, parameters: {}.'.format(anapole_model.name, anapole_model.param_names)\nprint 
'model: {}, parameters: {}.'.format(SI_model.name, SI_model.param_names)\n\n# intialize an Experiment with XENON target, to be passed to Simulation_AM:\nxe = dmdd.Experiment('1xe', 'xenon', 5, 150, 1000, dmdd.eff.efficiency_unit, energy_resolution=True)\n\n\n\n", "Attempting to run Simulation_AM", "xe = dmdd.Simulation_AM('AM_xenon', xe, SI_model, \n {'mass':50.,'sigma_si':75.5}, Qmin = np.asarray([5.]), \n Qmax = np.asarray([150.]), \n Tmin = 0, Tmax = 365, sigma_si = 75.5, \n element = 'xenon', force_sim = True)\n\n\n\n\n\nxe = dmdd.Simulation_AM('AM_xenon', xe, anapole_model, \n {'mass':50.,'sigma_anapole':44.25}, Qmin = np.asarray([5.]), \n Qmax = np.asarray([150.]), \n Tmin = 0, Tmax = 365, sigma_anapole = 44.25, \n element = 'xenon', force_sim = True)\n\n\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
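The sanity check in the notebook above integrates Q²·t over the unit square in (Q, time), which has the exact value 1/6. The same check can be reproduced with a simple midpoint-rule double integral; this is a generic numerical sketch, not dmdd's actual `integral` implementation:

```python
def midpoint_double_integral(f, q_lo, q_hi, t_lo, t_hi, n=200):
    """Midpoint-rule approximation of the double integral of f(q, t)
    over the rectangle [q_lo, q_hi] x [t_lo, t_hi], using an n x n grid."""
    dq = (q_hi - q_lo) / n
    dt = (t_hi - t_lo) / n
    total = 0.0
    for i in range(n):
        q = q_lo + (i + 0.5) * dq  # midpoint of the i-th q cell
        for j in range(n):
            t = t_lo + (j + 0.5) * dt  # midpoint of the j-th t cell
            total += f(q, t)
    return total * dq * dt

# Same known integral as the notebook's test: integral of Q**2 * t
# over [0, 1] x [0, 1] is exactly (1/3) * (1/2) = 1/6.
approx = midpoint_double_integral(lambda q, t: q * q * t, 0.0, 1.0, 0.0, 1.0)
```

With a 200×200 grid, the midpoint rule lands within about 10⁻⁶ of the exact value, which is why 1/6 makes a convenient regression check.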
SylvainCorlay/bqplot
examples/Tutorials/Linking Plots With Widgets.ipynb
apache-2.0
[ "Building interactive plots using bqplot and ipywidgets\n\nbqplot is built on top of the ipywidgets framework\nipywidgets and bqplot widgets can be seamlessly integrated to build interactive plots\nbqplot figure widgets can be stacked with UI controls available in ipywidgets by using Layout classes (Box, HBox, VBox) in ipywidgets\n(Note that only Figure objects (not Mark objects) inherit from the DOMWidget class and can be combined with other widgets from ipywidgets)\nTrait attributes of widgets can be linked using callbacks. Callbacks should be registered using the observe method\n\nPlease follow these links for detailed documentation on:\n1. Layout and Styling of Jupyter Widgets\n* Linking Widgets\n<br>Let's look at examples of linking plots with UI controls", "import numpy as np\n\nimport ipywidgets as widgets\nimport bqplot.pyplot as plt", "Update the plot on a button click", "y = np.random.randn(100).cumsum() # simple random walk\n\n# create a button\nupdate_btn = widgets.Button(description='Update', button_style='success')\n\n# create a figure widget\nfig1 = plt.figure(animation_duration=750)\nline = plt.plot(y)\n\n# define an on_click function\ndef on_btn_click(btn):\n # update the y attribute of line mark\n line.y = np.random.randn(100).cumsum() # another random walk\n\n# register the on_click function\nupdate_btn.on_click(on_btn_click)\n \n# stack button and figure using VBox\nwidgets.VBox([fig1, update_btn])", "Let's look at an example where we link a plot to a dropdown menu", "import pandas as pd\n\n# create a dummy time series for 5 dummy stock tickers\ndates = pd.date_range(start='20180101', end='20181231')\nn = len(dates)\ntickers = list('ABCDE')\nprices = pd.DataFrame(np.random.randn(n, 5).cumsum(axis=0), columns=tickers)\n\n# create a dropdown menu for tickers\ndropdown = widgets.Dropdown(description='Ticker', options=tickers)\n\n# create figure for plotting time series\ncurrent_ticker = dropdown.value\nfig_title_tmpl = '\"{}\" Time Series' # string 
template for title of the figure \nfig2 = plt.figure(title=fig_title_tmpl.format(current_ticker))\nfig2.layout.width = '900px'\ntime_series = plt.plot(dates, prices[current_ticker])\nplt.xlabel('Date')\nplt.ylabel('Price')\n\n# 1. create a callback which updates the plot when dropdown item is selected\ndef update_plot(*args):\n selected_ticker = dropdown.value\n \n # update the y attribute of the mark by selecting \n # the column from the price data frame\n time_series.y = prices[selected_ticker]\n \n # update the title of the figure\n fig2.title = fig_title_tmpl.format(selected_ticker)\n\n# 2. register the callback by using the 'observe' method\ndropdown.observe(update_plot, 'value')\n\n# stack the dropdown and fig widgets using VBox\nwidgets.VBox([dropdown, fig2])", "Let's now create a scatter plot where we select X and Y data from the two dropdown menus", "# create two dropdown menus for X and Y attributes of scatter\nx_dropdown = widgets.Dropdown(description='X', options=tickers, value='A')\ny_dropdown = widgets.Dropdown(description='Y', options=tickers, value='B')\n\n# create figure for plotting the scatter\nx_ticker = x_dropdown.value\ny_ticker = y_dropdown.value\n\n# set up fig_margin to allow space to display color bar\nfig_margin = dict(top=20, bottom=40, left=60, right=80)\nfig3 = plt.figure(animation_duration=1000, fig_margin=fig_margin)\n\n# custom axis options for color data\naxes_options = {'color': {'tick_format': '%m/%y', \n 'side': 'right',\n 'num_ticks': 5}}\nscatter = plt.scatter(x=prices[x_ticker], \n y=prices[y_ticker],\n color=dates, # represent chronology using color scale\n stroke='black',\n colors=['red'],\n default_size=32,\n axes_options=axes_options)\nplt.xlabel(x_ticker)\nplt.ylabel(y_ticker)\n\n# 1. 
create a callback which updates the plot when dropdown item is selected\ndef update_scatter(*args):\n x_ticker = x_dropdown.value\n y_ticker = y_dropdown.value\n \n # update the x and y attributes of the mark by selecting\n # the column from the price data frame\n with scatter.hold_sync():\n scatter.x = prices[x_ticker]\n scatter.y = prices[y_ticker]\n \n # update the title of the figure\n plt.xlabel(x_ticker)\n plt.ylabel(y_ticker)\n\n# 2. register the callback by using the 'observe' method\nx_dropdown.observe(update_scatter, 'value')\ny_dropdown.observe(update_scatter, 'value')\n\n# stack the dropdown and fig widgets using VBox\nwidgets.VBox([widgets.HBox([x_dropdown, y_dropdown]), fig3])", "In the example below, we'll look at plots of trigonometic functions", "funcs = dict(sin=np.sin, cos=np.cos, tan=np.tan, sinh=np.sinh, tanh=np.tanh)\ndropdown = widgets.Dropdown(options=funcs, description='Function')\n\nfig = plt.figure(title='sin(x)', animation_duration=1000)\n\n# create x and y data attributes for the line chart\nx = np.arange(-10, 10, .1)\ny = np.sin(x)\n\nline = plt.plot(x, y ,'m')\n\ndef update_line(*args):\n f = dropdown.value\n fig.title = f'{f.__name__}(x)'\n line.y = f(line.x)\n \ndropdown.observe(update_line, 'value')\n\nwidgets.VBox([dropdown, fig])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
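The linking pattern used throughout the bqplot notebook above — register a callback with `observe`, fire it when a trait's value changes — is an instance of the observer pattern. The following is a minimal dependency-free sketch of that idea, not ipywidgets' actual traitlets machinery; the `Observable` class and its `change` dict are made up for illustration:

```python
class Observable:
    """Minimal stand-in for a widget trait: callbacks registered with
    observe() fire whenever `value` actually changes."""
    def __init__(self, value):
        self._value = value
        self._callbacks = []

    def observe(self, callback):
        self._callbacks.append(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        old, self._value = self._value, new
        if new != old:  # only notify on a real change
            change = {'old': old, 'new': new}
            for cb in self._callbacks:
                cb(change)

# usage mirroring the dropdown examples above
dropdown = Observable('A')
log = []
dropdown.observe(lambda change: log.append(change['new']))
dropdown.value = 'B'
dropdown.value = 'B'  # no change -> callback not fired again
dropdown.value = 'C'
```

Suppressing notifications when the value is unchanged is the same behavior the notebooks rely on: setting `line.y` or `fig.title` only triggers a redraw when something actually changed.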
eds-uga/csci1360e-su17
lectures/L18.ipynb
mit
[ "Lecture 18: Natural Language Processing II\nCSCI 1360E: Foundations for Informatics and Analytics\nOverview and Objectives\nLast week, we introduced the concept of natural language processing, and in particular the \"bag of words\" model for representing and quantifying text for later analysis. In this lecture, we'll expand on those topics, including some additional preprocessing and text representation methods. By the end of this lecture, you should be able to\n\nImplement several preprocessing techniques like stemming, stopwords, and minimum counts\nUnderstand the concept of feature vectors in natural language processing\nCompute inverse document frequencies to up or down-weight term frequencies\n\nPart 1: Feature Vectors\nThe \"bag of words\" model: why do we do it? what does it give us?\nIt's a way of representing documents in a format that is convenient and amenable to sophisticated analysis.\nYou're interested in blogs. Specifically, you're interested in how blogs link to each other. Do politically-liberal blogs tend to link to politically-conservative blogs, and vice versa? Or do they mostly link to themselves?\nImagine you have a list of a few hundred blogs. To get their political leanings, you'd need to analyze the blogs and see how similar they are.\nTo do that, you need some notion of similarity...\nWe need to be able to represent the blogs as feature vectors.\nIf you can come up with a quantitative representation of your \"thing\" of interest, then you can compare it to other instances of that thing.\nThe bag-of-words model is just one way of turning a document into a feature vector that can be used in analysis. 
By considering each blog to be a single document, you can therefore convert each blog to its own bag-of-words and compare them directly.\n(in fact, this has actually been done)\n\nhttp://waxy.org/2008/10/memeorandum_colors/\n\nIf you have some data point $\\vec{x}$ that is an $n$-dimensional vector (pictured above: a three-dimensional vector), each dimension is a single feature.\n(Hint: what does this correspond to with NumPy arrays?)\nTherefore, a bag-of-words model is just a way of representing a document as a vector, where the dimensions are word counts!\n\nPictured above are three separate documents, and the number of times each of the words appears is given by the height of the histogram bar. Stare at this until you get some understanding of what's happening--these are three documents that share the same words (as you can see, they have the same x-axes), but what differs are the relative heights of the bars, meaning they have different values along the x-axes.\nOf course there are other ways of representing documents as vectors, but bag-of-words is the easiest.\nPart 2: Text Preprocessing\nWhat is \"preprocessing\"?\nName some preprocessing techniques with text we've covered!\n\nLower case (or upper case) everything\nSplit into single words\nRemove trailing whitespace (spaces, tabs, newlines)\n\nThere are a few more that can be very powerful.\nTo start, let's go back to the Alice in Wonderland example from the previous lecture, but this time, we'll add a few more books for comparison:\n\nPride and Prejudice, by Jane Austen\nFrankenstein, by Mary Shelley\nBeowulf, by Lesslie Hall\nThe Adventures of Sherlock Holmes, by Sir Arthur Conan Doyle\nThe Adventures of Tom Sawyer, by Mark Twain\nThe Adventures of Huckleberry Finn, by Mark Twain\n\nHopefully this variety should give us a good idea what we're dealing with!\nFirst, we'll read all the books' raw contents into a dictionary.", "books = {} # We'll use a dictionary to store all the text from the books.\nfiles = 
['Lecture18/alice.txt',\n 'Lecture18/pride.txt',\n 'Lecture18/frank.txt',\n 'Lecture18/bwulf.txt',\n 'Lecture18/holmes.txt',\n 'Lecture18/tom.txt',\n 'Lecture18/finn.txt']\n\nfor f in files:\n # This weird line just takes the part of the filename between the \"/\" and \".\" as the dict key.\n prefix = f.split(\"/\")[-1].split(\".\")[0]\n try:\n with open(f, \"r\", encoding = \"ISO-8859-1\") as descriptor:\n books[prefix] = descriptor.read()\n except:\n print(\"File '{}' had an error!\".format(f))\n books[prefix] = None\n\n# Here you can see the dict keys (i.e. the results of the weird line of code in the last cell)\nprint(books.keys())", "Just like before, let's go ahead and lower case everything, strip out whitespace, then count all the words.", "def preprocess(book):\n # First, lowercase everything.\n lower = book.lower()\n \n # Second, split into lines.\n lines = lower.split(\"\\n\")\n \n # Third, split each line into words.\n words = []\n for line in lines:\n words.extend(line.strip().split(\" \"))\n\n # That's it!\n return count(words)\n\nfrom collections import defaultdict # Our good friend from the last lecture, defaultdict!\n\ndef count(words):\n counts = defaultdict(int)\n for w in words:\n counts[w] += 1\n return counts\n\ncounts = {}\nfor k, v in books.items():\n counts[k] = preprocess(v)", "Let's see how our basic preprocessing techniques from the last lecture worked out.", "from collections import Counter\n\ndef print_results(counts):\n for key, bag_of_words in counts.items():\n word_counts = Counter(bag_of_words)\n mc_word, mc_count = word_counts.most_common(1)[0]\n print(\"'{}' has {} unique words, and the most common is '{}', occurring {} times.\"\n .format(key, len(bag_of_words.keys()), mc_word, mc_count))\nprint_results(counts)", "Yeesh.\nNot only are the most common words among the most boring (\"the\"? \"and\"?), but there are occasions where the most common word isn't even a word, but rather a blank space. 
(How do you think that could happen?)\nStop words\nA great first step is to implement stop words. (I used this list of 319 stop words)", "with open(\"Lecture18/stopwords.txt\", \"r\") as f:\n lines = f.read().split(\"\\n\")\n stopwords = [w.strip() for w in lines]\nprint(stopwords[:5])", "Now we'll augment our preprocess function to include stop word processing.", "def preprocess_v2(book, stopwords): # Note the \"_v2\"--this is a new function!\n # First, lowercase everything.\n lower = book.lower()\n \n # Second, split into lines.\n lines = lower.split(\"\\n\")\n \n # Third, split each line into words.\n words = []\n for line in lines:\n tokens = line.strip().split(\" \")\n \n # Check for stopwords.\n for t in tokens:\n if t in stopwords: continue # This \"continue\" SKIPS the stopword entirely!\n words.append(t)\n\n # That's it!\n return count(words)", "Now let's see what we have!", "counts = {}\nfor k, v in books.items():\n counts[k] = preprocess_v2(v, stopwords)\n \nprint_results(counts)", "Well, this seems even worse! What could we try next?\nMinimum length\nPretty straightforward: cut out all the words under a certain length; say, 2. 
After all--how many words do you know that are semantically super-important to a book, and yet are no more than 2 letters long?\n(I'm sure you can think of a few; my point being, they're so few they're unlikely to matter much)", "def preprocess_v3(book, stopwords): # We've reached \"_v3\"!\n # First, lowercase everything.\n lower = book.lower()\n \n # Second, split into lines.\n lines = lower.split(\"\\n\")\n \n # Third, split each line into words.\n words = []\n for line in lines:\n tokens = line.strip().split(\" \")\n \n # Check for stopwords.\n for t in tokens:\n if t in stopwords or len(t) <= 2: continue # Skip stopwords AND words of 2 or fewer letters\n words.append(t)\n\n # That's it!\n return count(words)", "Maybe this will be better?", "counts = {}\nfor k, v in books.items():\n counts[k] = preprocess_v3(v, stopwords)\n \nprint_results(counts)", "Ooh! Definite improvement! Though clearly, punctuation is getting in the way; you can see it in at least two of the top words in the list above.\nWe spoke last time about how removing punctuation could be a little dangerous; what if the punctuation is inherent to the meaning of the word (i.e., a contraction)? 
\nHere, we'll compromise a little: we'll get the \"easy\" punctuation, like exclamation marks, periods, and commas, and leave the rest.", "def preprocess_v4(book, stopwords):\n # First, lowercase everything.\n lower = book.lower()\n \n # Second, split into lines.\n lines = lower.split(\"\\n\")\n \n # Third, split each line into words.\n words = []\n for line in lines:\n tokens = line.strip().split(\" \")\n \n # Check for stopwords.\n for t in tokens:\n\n # Cut off any end-of-sentence punctuation.\n if t.endswith(\",\") or t.endswith(\"!\") or t.endswith(\".\") or \\\n t.endswith(\":\") or t.endswith(\";\") or t.endswith(\"?\"):\n t = t[:-1] # This says: take everything except the last letter\n\n if t in stopwords or len(t) <= 2: continue\n words.append(t)\n\n # That's it!\n return count(words)", "Alright, let's check it out again.", "counts = {}\nfor k, v in books.items():\n counts[k] = preprocess_v4(v, stopwords)\n \nprint_results(counts)", "Now we're getting somewhere! But this introduces a new concept--in looking at this list, wouldn't you say that \"says\" and \"said\" are probably, semantically, more or less the same word?\nStemming\nStemming is the process by which we convert words with similar meaning into the same word, so their similarity is reflected in our analysis. Words like \"imaging\" and \"images\", or \"says\" and \"said\" should probably be considered the same thing.\nTo do this, we'll need an external Python package: the Natural Language Toolkit, or NLTK. 
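Before using NLTK, the core idea can be illustrated with a deliberately crude toy suffix-stripper (a sketch only -- the function and suffix list below are made up for illustration; real stemmers such as NLTK's SnowballStemmer handle far more cases and exceptions):

```python
def toy_stem(word):
    """Crudely strip a few common English suffixes. Illustration only!"""
    for suffix in ("ing", "ed", "es", "s"):
        # Only strip if a reasonably long stem remains.
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

words = ["says", "walked", "walking", "walks", "images"]
print([toy_stem(w) for w in words])  # → ['say', 'walk', 'walk', 'walk', 'imag']
```

Notice that irregular forms like "said" slip through untouched even here; suffix-based stemmers share that limitation. NLTK's stemmers do the same job far more carefully.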
(it's installed on JupyterHub, so go ahead and play with it!)", "import nltk # This package!\n\ndef preprocess_v5(book, stopwords):\n lower = book.lower()\n lines = lower.split(\"\\n\")\n \n # Create the stemmer.\n stemmer = nltk.stem.SnowballStemmer('english')\n\n words = []\n for line in lines:\n tokens = line.strip().split(\" \")\n \n for t in tokens:\n if t.endswith(\",\") or t.endswith(\"!\") or t.endswith(\".\") or \\\n t.endswith(\":\") or t.endswith(\";\") or t.endswith(\"?\"):\n t = t[:-1]\n\n if t in stopwords or len(t) <= 2: continue\n stemmed = stemmer.stem(t) # This is all that is required--nltk does the rest!\n words.append(stemmed)\n\n # That's it!\n return count(words)", "How did this go?", "counts = {}\nfor k, v in books.items():\n counts[k] = preprocess_v5(v, stopwords)\n \nprint_results(counts)", "Well, this kinda helped--\"says\" was reduced to \"say\", and its count clearly increased from the 628 it was before, meaning stemmed versions that were previously viewed as different words were merged. But \"said\" is still there; clearly, there are limitations to this stemmer.\nAs one final step--it's convenient sometimes to simply drop words that occur only once or twice. 
This can dramatically help with processing time, as quite a few words (usually proper nouns) will only be seen a few times.", "def preprocess_v6(book, stopwords):\n lower = book.lower()\n lines = lower.split(\"\\n\")\n \n # Create the stemmer.\n stemmer = nltk.stem.SnowballStemmer('english')\n\n words = []\n for line in lines:\n tokens = line.strip().split(\" \")\n \n for t in tokens:\n if t.endswith(\",\") or t.endswith(\"!\") or t.endswith(\".\") or \\\n t.endswith(\":\") or t.endswith(\";\") or t.endswith(\"?\"):\n t = t[:-1]\n\n if t in stopwords or len(t) <= 2: continue\n stemmed = stemmer.stem(t)\n words.append(stemmed)\n\n # Only keep words that were observed more than once.\n word_counts = count(words)\n return {k: v for k, v in word_counts.items() if v > 1}", "One final check:", "counts = {}\nfor k, v in books.items():\n counts[k] = preprocess_v6(v, stopwords)\n \nprint_results(counts)", "The most common words and their counts haven't changed, but hopefully you can see there's a big difference in the number of unique words!\n\nFrankenstein: 5256 to 3053\nBeowulf: 6871 to 2543\nAlice in Wonderland: 3174 to 1148\nTom Sawyer: 7596 to 3459\nPride and Prejudice: 5848 to 3059\nSherlock Holmes: 8004 to 3822\nHuckleberry Finn: 8070 to 3521\n\nNow that we have the document vectors in bag of words format, fully preprocessed, we can do some analysis, right?\nFirst, let's step back and discuss again what these features actually mean.\n\n\nWhat are the features?\n\n\nHow do we use these features to differentiate between and identify similar documents?\n\n\nWhat are the implicit assumptions being made using the model as we've computed it so far?\n\n\nHere's food for thought: if we are assuming high word count = high importance, what do we do about those two books that both have \"said\" as their most frequent word?\nWe could just add it to our stop list. But as we saw last time, another word that may only be slightly more meaningful will take its place. 
How do we know when to stop adding words to our stoplist?\nMaybe, like we did with cutting out words that only appeared once, we could cut out the top 5% of words. But then why not 4% or 6%? Where do we set that threshold?\nWhat is it we're really hoping these word counts tell us?\nPart 3: Term Frequency-Inverse Document Frequency (TF-IDF)\nSo far, what we've computed in our bag of words, is term frequencies.\nEach individual word is a term, and we've simply counted how many times those terms appeared, or how frequent they were.\nThe underlying assumption of using word frequencies as features is that some number of shared words will appear across ALL the documents, but vary considerably in how frequently they appear. Remember our feature vectors plot from earlier in the lecture:\n\nYou'll notice the x-axes for these three plots are the same! It's the relative frequencies of those x-values that differ over the three documents. That's what we're hoping for.\nBut since documents, in general, don't have an upper limit to the number of words they can have, you can run into the following situation:", "document1 = [\"Happy\", \"Halloween\"]\ndocument2 = [\"Happy\", \"Happy\", \"Happy\", \"Halloween\", \"Halloween\", \"Halloween\"]", "This is an admittedly blunt example, but using just term frequencies, which document would you say would be more important if I searched for the word \"Halloween\"? What should the relative importance of the word \"Halloween\" to these two documents be?\nTo handle this problem, we combine term frequency (TF) with inverse document frequency (IDF).\nThis quantity consists of two terms (TF and IDF) that are multiplied together.\nYou already know how to compute TF: just the count of how many times $word_i$ appears in document $d$ (though this is usually normalized by the total number of words in the document, so it becomes a probability!).\nIDF is a little more involved. 
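Here is the normalized-TF idea as a quick sketch (the helper name is illustrative), applied to the two "Halloween" documents from above -- the IDF part comes next:

```python
from collections import Counter

def term_frequency(doc):
    """Normalized term frequency: count of each word divided by total words in the doc."""
    counts = Counter(doc)
    total = len(doc)
    return {word: count / total for word, count in counts.items()}

document1 = ["Happy", "Halloween"]
document2 = ["Happy", "Happy", "Happy", "Halloween", "Halloween", "Halloween"]
print(term_frequency(document1)["Halloween"])  # 0.5
print(term_frequency(document2)["Halloween"])  # 0.5
```

Normalization makes "Halloween" equally weighty in both documents, which shows why TF alone isn't enough -- the IDF factor is what will differentiate terms across the collection.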
To compute it, you need access to all the documents.\nIDF is a fraction:\n\n\nThe numerator is the number of total documents you're using (in our case: 7 books)\n\n\nThe denominator is the number of documents that contain the term you're looking at, so this number should always be less than or equal to the numerator\n\n\nLet's think about this.\n\n\nIf a word occurs frequently in a document, it will have a high count (or high probability, if you normalized by total words).\n\n\nIf a word occurs in only one document (out of $N$), the IDF term will be $N / 1$, multiplying the TF term by $N$.\n\n\nOn the other hand, if the word occurs in every document (think: stop word!), then the IDF term will be $N / N$, or 1, effectively leaving the TF term alone.\n\n\nThis has the effect of weighting the words by how often they're found across all documents.\nTo do the TF-IDF computation, let's first convert our counts to a matrix: the rows are documents (so there will be 7 rows), and the columns are the counts for each word.", "import numpy as np\n\nall_words = set([w for d in counts.values() for w in d.keys()])\ncount_matrix = np.zeros(shape = (len(counts.keys()), len(all_words)))\n\nfor i, word in enumerate(all_words):\n for j, doc in enumerate(counts.keys()):\n doc_counts = counts[doc]\n if word in doc_counts:\n count_matrix[j][i] = doc_counts[word]\n\nprint(count_matrix[0, :5])", "Now we'll compute the inverse document frequencies.", "tfidf = np.zeros(shape = (len(counts.keys()), len(all_words)))\n\nnormalizers = count_matrix.sum(axis = 1)\nnum_docs = len(counts.keys())\n\nfor j in range(count_matrix.shape[1]):\n column = count_matrix[:, j]\n greater_than_zero = len(column[column > 0])\n tf = column / normalizers\n idf = num_docs / greater_than_zero\n tfidf[:, j] = tf * idf", "TF-IDF may have a fancy name and a cool looking acronym, but it's still a glorified word count, just with weights.\nWord counts are nothing more than histograms, so we should be able to make some bar 
plots of our 7 TF-IDF vectors, one for each book.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nx = np.arange(len(all_words)) + 1\nbooks = list(counts.keys())\n\nfor i in range(tfidf.shape[0]):\n book_tfidf = tfidf[i]\n \n plt.subplot(2, 4, i + 1)\n plt.plot(x, book_tfidf)\n plt.title(books[i])\n plt.ylim([0, 0.12])\n plt.xlim([0, x.shape[0] + 1])\n plt.xticks([])\n plt.yticks([])", "One final bit of introspective magic: the above histograms are great, but they don't really give us a good intuition for what's really going on in these books.\nEver seen a word cloud before?", "from wordcloud import WordCloud\n\nfor i in range(tfidf.shape[0]):\n book_tfidf = tfidf[i]\n freqs = []\n for j, word in enumerate(all_words):\n if book_tfidf[j] > 0:\n freqs.append((word, book_tfidf[j]))\n wc = WordCloud().generate_from_frequencies(freqs)\n\n #plt.subplot(4, 2, i + 1)\n plt.figure(i)\n plt.imshow(wc)\n plt.title(books[i])\n plt.axis(\"off\")", "Review Questions\nSome questions to discuss and consider:\n1: There are many machine learning algorithms that rely on the probability of a single word, $P(w)$, under some condition (e.g. the probability of a word in a conservative or liberal blog). Explain what this would involve.\n2: So far we've considered only nonzero word counts. When you combine all the unique words in a collection of documents together, it's possible (even likely) that quite a few of the words will have counts of 0 in some of the documents. Why is this? What problems might this cause for the later analyses?\n3: Another point we haven't even touched on, but which can really generate very powerful features, is the inclusion of $n$-grams. If you consider $n = 1$ to be the case of individual words as we've used them throughout the last two lectures, with $n = 2$ we instead consider a single \"token\" to be every sequence of 2 consecutive words. What advantages and disadvantages would this approach have (e.g. 
$n = 2$, or bigrams), over using single words ($n = 1$, or unigrams)?\nCourse Administrivia\n\n\nHow is Assignment 8 going? Due TONIGHT! Post any questions to the Slack #questions channel!\n\n\nAssignment 9 will be out tomorrow. Second-to-last homework!\n\n\nAdditional Resources\n\nGrus, Joel. Data Science from Scratch, Chapter 9. 2015. ISBN-13: 978-1491901427\nSaha, Amit. Doing Math with Python, Chapter 3. 2015. ISBN-13: 978-1593276409\nRichert, Willi and Coelho, Luis Pedro. Building Machine Learning Systems with Python, Chapter 2. 2013. ISBN-13: 978-1782161400" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
d-li14/CS231n-Assignments
assignment2/BatchNormalization.ipynb
gpl-3.0
[ "Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\nThe authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. 
A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n[3] Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.", "# As usual, a bit of setup\nfrom __future__ import print_function\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)", "Batch normalization: Forward\nIn the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. 
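As a rough guide to what the training-time math boils down to, here is a naive NumPy sketch (an assumed simplification, not the assignment's API: it omits the running averages, the test-mode branch, and the cache of intermediates your implementation must return for the backward pass):

```python
import numpy as np

def batchnorm_train_sketch(x, gamma, beta, eps=1e-5):
    """Normalize each feature of minibatch x to zero mean / unit variance,
    then apply the learnable scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)                 # per-feature mean over the minibatch
    var = x.var(axis=0)                 # per-feature variance over the minibatch
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(100, 3) * 4 + 10
out = batchnorm_train_sketch(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0))  # close to zero
print(out.std(axis=0))   # close to one
```

Your actual batchnorm_forward additionally has to maintain the running statistics in bn_param (for test mode) and return the intermediates needed by the backward pass.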
Once you have done so, run the following to test your implementation.", "# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization\n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint(' means: ', a.mean(axis=0))\nprint(' stds: ', a.std(axis=0))\n\n# Means should be close to zero and stds close to one\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})\nprint(' mean: ', a_norm.mean(axis=0))\nprint(' std: ', a_norm.std(axis=0))\n\n# Now means should be close to beta and stds close to gamma\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint('After batch normalization (nontrivial gamma, beta)')\nprint(' means: ', a_norm.mean(axis=0))\nprint(' stds: ', a_norm.std(axis=0))\n\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch normalization 
(test-time):')\nprint(' means: ', a_norm.mean(axis=0))\nprint(' stds: ', a_norm.std(axis=0))", "Batch Normalization: backward\nNow implement the backward pass for batch normalization in the function batchnorm_backward.\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\nOnce you have finished, run the following to numerically check your backward pass.", "# Gradient check batchnorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))", "Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. 
For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.\nSurprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\nNOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.", "np.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))", "Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.\nConcretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. 
Once you are done, run the following to gradient-check your implementation.\nHINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.", "np.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n use_batchnorm=True)\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()", "Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.", "np.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nbn_solver.train()\n\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nsolver.train()", "Run the following to visualize the results from 
the two networks trained above. You should find that using batch normalization helps the network to converge much faster.", "plt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 1)\nplt.plot(solver.loss_history, 'o', label='baseline')\nplt.plot(bn_solver.loss_history, 'o', label='batchnorm')\n\nplt.subplot(3, 1, 2)\nplt.plot(solver.train_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')\n\nplt.subplot(3, 1, 3)\nplt.plot(solver.val_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()", "Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. 
The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.", "np.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers = {}\nsolvers = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers[weight_scale] = solver\n\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))\n \n best_val_accs.append(max(solvers[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization 
scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(10, 15)\nplt.show()", "Question:\nDescribe the results of this experiment, and try to give a reason why the experiment gave the results that it did.\nAnswer:\nA network with batch normalization layers is more robust to bad weight initialization, whereas in that case a vanilla fully-connected network is likely to suffer a considerable loss in accuracy. Moreover, normalizing the affine outputs to be centered with unit variance keeps their distribution more homogeneous across layers, so the training process converges faster." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]