| repo_name | path | license | cells | types |
|---|---|---|---|---|
Merinorus/adaisawesome
|
Homework/03 - Interactive Viz/HW3_Interactive_Viz.ipynb
|
gpl-3.0
|
[
"(1) Build a Choropleth map which shows intuitively (i.e., use colors wisely) how much grant money goes to each Swiss canton. First we will need to get the data ready, which means determining which canton each university is in and summing up the grant amounts by canton.",
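The "sum the grant amounts by canton" step above boils down to a single pandas group-and-sum. A minimal sketch on an invented mini-frame (the column names `Canton Shortname` and `Approved Amount` match the ones this notebook builds later; the values are made up):

```python
import pandas as pd

# Invented mini-frame mirroring the columns this notebook builds later
df = pd.DataFrame({
    'Canton Shortname': ['GE', 'GE', 'VD', 'ZH'],
    'Approved Amount': [100000.0, 50000.0, 75000.0, 200000.0],
})

# Total grant money per canton, ready to feed a choropleth
sums = df.groupby('Canton Shortname')['Approved Amount'].sum()
print(sums.to_dict())  # {'GE': 150000.0, 'VD': 75000.0, 'ZH': 200000.0}
```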
"import pandas as pd\nimport numpy as np\n# We will read json files, for instance API keys stored in our computers for using Google Maps API, so they're not publicly visible\nimport json\n# Geolocation\nimport geopy\nfrom geopy.geocoders import geonames\nimport math\nimport logging\n\np3_grant_export_data = pd.read_csv(\"P3_GrantExport.csv\", sep=\";\")\np3_grant_export_data.head()\n\n# Here is the total number of rows we will have to deal with\nlen(p3_grant_export_data.index)\n\n# We keep only the rows which mention how much money has been granted (the amount column starts by a number)\n# ie : we keep rows where the 'Approved Amount' column starts with a number\np3_grant_export_data = p3_grant_export_data[p3_grant_export_data['Approved Amount'].apply(lambda x : x[0].isdigit())]\n\n# Almost 200k rows have been removed\np3_grant_export_data.size\n\n# We don't need this data\np3_grant_export_data = p3_grant_export_data.drop(p3_grant_export_data.columns[[0]], axis = 1)\np3_grant_export_data = p3_grant_export_data.drop(['Project Title', 'Project Title English', 'Responsible Applicant', 'Discipline Number', 'Discipline Name', 'Discipline Name Hierarchy', 'Keywords'], axis=1)\np3_grant_export_data.size",
"First, we will locate projects according to the University name.\nWe will ignore all projects in which the University is not mentioned: we assume that if it's not, the project is probably outside Switzerland.\nIf we have the time, a better solution would be to take the institution's location into account as well.",
"# Removing rows in which University is not mentioned\n# p3_grant_export_data = p3_grant_export_data.dropna(subset=['University'])\n# p3_grant_export_data.size\n\np3_grant_export_data",
"Using only university names as a parameter for the geolocator isn't enough, because it yields results for only about half of the rows.\nA better idea is to use the university name first and, if there is no result, fall back to the institution name.\nWe will therefore take both university and institution names into account in our search:\n1) Create initial data containers:\n - a key-value (name-canton) dictionary for universities and institutions:\n - ['University', 'Canton'] and ['Institution', 'Canton']\n - a table that contains all cantons that have been found\n2) Go through the dataframe:\n - Check if the university name exists in our index. If not, geolocate the address\n - Check if the address is in Switzerland; otherwise the canton will be considered 'None'\n - If the university address is not found, try to find it with the institution name, the same way as above\n - Extract the canton of the address (if an address was found and it's in Switzerland). If there is no canton, use a 'None' canton\n - Add the canton name to the dictionary (or something like 'None' if no canton has been found), so a university or institution that has already been located won't have to be geolocated again\n - Add the canton to the canton table\n3) Add the canton table to the above dataframe so that rows match up with their universities or institutions",
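The cache-before-geocode idea in step 2 can be sketched without any real API. In this sketch, `geocode_stub` is a hypothetical stand-in for the geopy call and the name/canton pairs are invented; the point is that each distinct name triggers at most one lookup:

```python
# Minimal sketch of the caching strategy, with a stub in place of the
# real geopy/Google Maps geocoder (geocode_stub and its data are hypothetical).
calls = []

def geocode_stub(name):
    calls.append(name)  # count the "real" requests we would have made
    return {'EPFL': 'VD', 'ETHZ': 'ZH'}.get(name)  # canton shortname or None

university_canton_dict = {}

def canton_for(university):
    if university not in university_canton_dict:  # only geocode unseen names
        university_canton_dict[university] = geocode_stub(university)
    return university_canton_dict[university]

cantons = [canton_for(u) for u in ['EPFL', 'ETHZ', 'EPFL', 'EPFL']]
print(cantons)     # ['VD', 'ZH', 'VD', 'VD']
print(len(calls))  # 2 -- only the two distinct names were geocoded
```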
"# Let's start by creating our geolocator. We will use Google Maps API :\ngooglemapsapikeyjson = json.loads(open('google_maps_api_keys.json').read())\n# We might need several API keys, to make a potentially huge number of requests\ngooglemapsapikeys = googlemapsapikeyjson['keys']\n\n# Specifying the region for the geolocator: for instance, University of Geneva might be localized in the US !\ntest_geolocator = geopy.geocoders.GoogleV3(api_key=googlemapsapikeys[0])\ntest_university_geneva = test_geolocator.geocode(\"University of Geneva\", region='ch')\ntest_university_geneva",
"Now let's start by creating the indexes for universities and institutions:",
"try:\n    university_canton_dict = json.loads(open('university_canton_dict.json').read())\nexcept FileNotFoundError:\n    print(\"The dictionary for universities has not been saved yet. Let's create a new dictionary.\")\n    university_canton_dict = {}\n\ntry:\n    institution_canton_dict = json.loads(open('institution_canton_dict.json').read())\nexcept FileNotFoundError:\n    print(\"The dictionary for institutions has not been saved yet. Let's create a new dictionary.\")\n    institution_canton_dict = {}\n",
"We expect some dirty values in the dataframe, so we anticipate the problems:",
"# We can already add the values in our dataframe that won't lead to an address\nuniversity_canton_dict['Nicht zuteilbar - NA'] = {'long_name': 'N/A', 'short_name': 'N/A'} # \"Nicht zuteilbar\" is German for \"not assignable\"\ninstitution_canton_dict['NaN'] = {'long_name': 'N/A', 'short_name': 'N/A'}\ninstitution_canton_dict['nan'] = {'long_name': 'N/A', 'short_name': 'N/A'}",
"We will need to log the next steps in order to debug the geolocation code easily.\nSince it's hard to create a log file from IPython directly, we adapted the following code. Basically, it writes to a file named geolocation.log",
"# set root logger level\nroot_logger = logging.getLogger()\nroot_logger.setLevel(logging.DEBUG)\n\n# setup custom logger\nlogger = logging.getLogger(__name__)\nhandler = logging.FileHandler('geolocation.log')\n\nformatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')\nhandler.setFormatter(formatter)\nlogger.addHandler(handler)\n\n# log\nlogger.info('This file is used to debug the next code part related to geolocation of universities/institutions')",
"It's rather dirty, but we will need more than one API key to make all the requests our data requires.\nSo we created several Google API keys, and we switch keys each time the current one can no longer be used!\nHere is the main code to get all the cantons that we will associate with our dataframe:",
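The key-rotation idea can be shown in isolation: try each key in turn when a quota error occurs, and give up only when none is left. Here `QuotaExceeded` and `fake_geocode` are hypothetical stand-ins for geopy's `GeocoderQuotaExceeded` and the real API call:

```python
# Sketch of rotating API keys on quota errors (all names here are stand-ins).
class QuotaExceeded(Exception):
    pass

keys = ['key-A', 'key-B', 'key-C']
exhausted = {'key-A'}  # pretend the first key is over its daily limit

def fake_geocode(key, address):
    if key in exhausted:
        raise QuotaExceeded(key)
    return 'located ' + address + ' with ' + key

def geocode_with_rotation(address):
    for key in keys:  # switch keys until one works
        try:
            return fake_geocode(key, address)
        except QuotaExceeded:
            continue
    raise RuntimeError('out of API keys')  # mirrors the notebook's final failure

result = geocode_with_rotation('University of Geneva')
print(result)  # located University of Geneva with key-B
```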
"# We create tables that will contain every canton we find, so we'll be able to match them with the dataframe at the end.\nlogger.debug('Beginning of geolocation: creating canton tables')\ncanton_shortname_table = [] # eg: VD\ncanton_longname_table = [] # eg: Vaud\n\n# number of rows analysed. Can be limited for debugging (eg: 10) because the number of requests to the Google Maps API is limited!\nMAX_ROWS = math.inf # values between 0 and math.inf\nrow_counter = 0 # will be incremented each time we iterate over a row\n\n# maximum duration of a query to the geocoder, in seconds\ngeocoder_timeout = 5\n\n# We're going to use more than one API key if we want to make all the requests!\n# Keys are referenced in a table, so we start with the first key:\nAPIkeynumber = 0\n\n# This function makes the geolocator \"stubborn\": it uses all the keys that are available, and if it gets a timeout error it just tries again!\ndef stubborn_geocode(geolocator, address):\n    global APIkeynumber\n\n    try:\n        geolocator = geopy.geocoders.GoogleV3(api_key=googlemapsapikeys[APIkeynumber])\n        return geolocator.geocode(address, region='ch', timeout=geocoder_timeout)\n\n    except geopy.exc.GeocoderTimedOut:\n        print(\"Error: the geocoder timed out. Let's try again...\")\n        return stubborn_geocode(geolocator, address)\n\n    except geopy.exc.GeocoderQuotaExceeded:\n        print(\"Error: the given key has gone over the request limit for the 24-hour period, or has submitted too many requests in too short a period of time. Let's try again with a different key...\")\n        APIkeynumber = APIkeynumber + 1\n\n        try:\n            print(\"Trying API key n°\" + str(APIkeynumber) + \"...\")\n            return stubborn_geocode(geolocator, address)\n\n        except IndexError:\n            print(\"Error: out of API keys! We need to request another API key from Google :(\")\n            print(\"When you get a new API key, add it to the json file containing the other keys.\")\n            # We have to stop there... the error will be raised and the execution stopped.\n            raise\n\n\n# Go through the dataframe that contains all universities and institutions\nfor index, row in p3_grant_export_data.iterrows():\n    logger.debug(\"Iterating over row n°\" + str(row_counter) + \":\")\n    # initialize the variables that will contain the canton name for the current row\n    canton_longname = 'N/A'\n    canton_shortname = 'N/A'\n    # Check if the university name exists in our index\n    university_name = row['University']\n    institution_name = row['Institution']\n    if university_name in university_canton_dict:\n        # The university has already been located. Let's add the canton to the canton table\n        if university_canton_dict[university_name]['long_name'] is not None:\n            logger.debug('University already exists in dictionary (' + university_canton_dict[university_name]['long_name'] + ')')\n        else:\n            logger.debug('University already exists in dictionary, but no canton is associated with it (it might be outside Switzerland).')\n\n        canton_longname = university_canton_dict[university_name]['long_name']\n        canton_shortname = university_canton_dict[university_name]['short_name']\n\n    elif institution_name in institution_canton_dict:\n        # The institution has already been located, so we add its canton to the canton table\n        logger.debug(\"University wasn't found, but institution already exists in dictionary (\" + str(institution_canton_dict[institution_name]['long_name']) + \")\")\n\n        canton_longname = institution_canton_dict[institution_name]['long_name']\n        canton_shortname = institution_canton_dict[institution_name]['short_name']\n\n    else:\n        # Neither the university nor the institution has been located yet, so we have to geolocate it\n        logger.debug(str(university_name) + ' / ' + str(institution_name) + ' not found in dictionaries, geolocating...')\n        adr = stubborn_geocode(test_geolocator, university_name)\n        if adr is None:\n            # No address has been found for this University, so we do the same with the Institution\n            adr = stubborn_geocode(test_geolocator, institution_name)\n\n        # Now the address should have been found, either by locating the university or the institution\n        if adr is not None:\n            # Check if it's a Swiss address and find the right canton\n            try:\n                swiss_address = False\n                for i in adr.raw['address_components']:\n                    if i[\"types\"][0] == \"country\" and i[\"long_name\"] == \"Switzerland\":\n                        # The address is located in Switzerland\n                        swiss_address = True\n                # We go on only if we found a Swiss address. Otherwise, there is no point in continuing.\n                if swiss_address:\n                    for i in adr.raw['address_components']:\n                        if i[\"types\"][0] == \"administrative_area_level_1\":\n                            # We found a canton!\n                            canton_longname = i['long_name']\n                            canton_shortname = i['short_name']\n                            break\n\n            except IndexError:\n                # This error comes from the line:\n                # if i[\"types\"][0] == \"country\" and i[\"long_name\"] == \"Switzerland\":\n                # We assume the address doesn't match the requirements, so it should not be located in Switzerland.\n                # Thus, we just skip it and look at the next address.\n                print(\"IndexError: no canton found for the current row\")\n\n            except KeyError:\n                print(\"KeyError: no canton found for the current row\")\n                print(\"Current item: n°\" + str(len(canton_shortname_table)))\n                # The address doesn't behave as expected. There are two possibilities:\n                # - The address doesn't contain the field related to the canton\n                # - The address doesn't contain the field related to the country\n                # So we don't consider this address a Swiss one and we give up on it.\n\n    # Let's add what we found about the canton!\n    # If we didn't find any canton for the current university/institution, this just appends 'N/A' to the tables.\n    logger.debug(\"Appending canton to the table: \" + canton_longname)\n    canton_shortname_table.append(canton_shortname)\n    canton_longname_table.append(canton_longname)\n\n    # We also add it to the university/institution dictionaries, in order to limit the number of requests\n    university_canton_dict[university_name] = {}\n    university_canton_dict[university_name]['short_name'] = canton_shortname\n    university_canton_dict[university_name]['long_name'] = canton_longname\n    institution_canton_dict[institution_name] = {}\n    institution_canton_dict[institution_name]['short_name'] = canton_shortname\n    institution_canton_dict[institution_name]['long_name'] = canton_longname\n\n    row_counter = row_counter + 1\n    if row_counter >= MAX_ROWS:\n        print(\"Maximum number of rows reached! (\" + str(MAX_ROWS) + \")\")\n        print(\"Increase the MAX_ROWS variable to analyse more locations\")\n        print(\"No limit: MAX_ROWS = math.inf\")\n        break\n\n\n# We have the table containing all cantons!\nlen(canton_shortname_table)\n\ncanton_longname_table\n\n# We save the dictionary of cantons associated with universities.\n# Thus we won't need to repeat requests already made to Google Maps next time we run this notebook!\nwith open('university_canton_dict.json', 'w') as fp:\n    json.dump(university_canton_dict, fp, indent=4)\nuniversity_canton_dict\n\n# We save the dictionary of cantons/institutions as well\nwith open('institution_canton_dict.json', 'w') as fp:\n    json.dump(institution_canton_dict, fp, indent=4)\ninstitution_canton_dict\n\ncanton_shortname_series = pd.Series(canton_shortname_table, name='Canton Shortname')\ncanton_shortname_series.size\n\ncanton_longname_series = pd.Series(canton_longname_table, name='Canton Longname')\ncanton_longname_series.size\n\nlen(p3_grant_export_data.index)\n\n# Reindex the dataframe so it matches up with the cantons\np3_grant_export_data_reindex = p3_grant_export_data.reset_index(drop=True)\np3_grant_export_data_reindex\n\n# Let's add the cantons to our dataframe!\np3_grant_cantons = pd.concat([p3_grant_export_data_reindex, canton_longname_series, canton_shortname_series], axis=1)\np3_grant_cantons",
"Now we have the cantons associated with the universities/institutions :)\nWe save the dataframe in several formats, just in case, so we can use them in another notebook.",
"try:\n    p3_grant_cantons.to_csv('P3_Cantons.csv', encoding='utf-8')\nexcept PermissionError:\n    print(\"Couldn't access the file. Maybe close Excel and try again :)\")\n\np3_grant_cantons_json = p3_grant_cantons.to_json()\nwith open('P3_cantons.json', 'w') as fp:\n    json.dump(p3_grant_cantons_json, fp, indent=4)\n\n# The pickle format seems convenient to work with in Python; we'll use it to transfer data to another notebook\np3_grant_cantons.to_pickle('P3_Cantons.pickle')",
"This is the end of the first part. Now that we have linked universities and institutions to cantons, we can start working with the map!\n(2) In this part of the exercise, we need to put the data we have gathered about the funding levels of universities in the different cantons onto a canton map. We will do so using Folium, starting from the example TopoJSON map it provides.",
"import folium\n\n# Import the Switzerland map (from the folium example notebook)\ntopo_geo = r'ch-cantons.topojson.json'\n\n# Import our csv file with all of the values for the amounts of the grants\ngrants_data = pd.read_csv('P3_Cantons_Sum.csv')\n#grants_data['Approved Amount'] = (grants_data['Approved Amount']).astype(int)\n\nmissing_cantons = pd.Series(['UR','OW','NW','GL','BL','AR','AI','JU'], name='Canton Shortname')\nmissing_cantons_zeros = pd.Series([0,0,0,0,0,0,0,0], name='Approved Amount')\nmissing_cantons_df = pd.DataFrame([missing_cantons, missing_cantons_zeros]).T\ngrants_data_all_cantons = grants_data.append(missing_cantons_df)\ngrants_data_all_cantons = grants_data_all_cantons.reset_index(drop=True)\n\ngrants_data_all_cantons['Approved Amount'] = grants_data_all_cantons['Approved Amount']/10000000\n\ngrants_data_all_cantons\n\n# We need to be able to extract the id of each canton from the topo file\nfrom pprint import pprint\n\nwith open(topo_geo) as data_file:\n    data = json.load(data_file)\n#pprint(data)\n\ndata['objects']['cantons']['geometries'][25]['id']\n\n#len(data['objects']['cantons']['geometries'])\n\n# We need to overlay the Swiss topo file on the generic Folium map\nch_map = folium.Map(location=[46.9, 8.3], tiles='Mapbox Bright', zoom_start=7)\n#folium.TopoJson(open(topo_geo), 'objects.cantons', name='topojson').add_to(ch_map)\n#folium.LayerControl().add_to(ch_map)\n\nch_map.geo_json(geo_path=topo_geo,\n                data=grants_data_all_cantons,\n                columns=['Canton Shortname', 'Approved Amount'],\n                key_on='feature.id',\n                topojson='objects.cantons',\n                fill_color='YlGnBu',\n                fill_opacity=0.7,\n                line_opacity=0.5,\n                legend_name='Total Grant Amount by Canton (tens of millions of CHF) (1970+)',\n                threshold_scale=[0,0.01,10,150,300,400],\n                reset=True)\nch_map.save('swiss.html')\nch_map"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
WNoxchi/Kaukasos
|
FAI_old/lesson1/lesson1.ipynb
|
mit
|
[
"Using Convolutional Neural Networks\nWelcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.\nIntroduction to this week's task: 'Dogs vs Cats'\nWe're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle website, when this competition was launched (end of 2013): \"State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task\". So if we can beat 80%, then we will be at the cutting edge as of 2013!\nBasic setup\nThere isn't too much to do to get started - just a few simple configuration steps.\nThis shows plots in the web page itself - we always want to use this when using Jupyter notebook:",
"%matplotlib inline",
"Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)",
"# path = \"data/dogscats/\"\npath = \"data/\"\n#path = \"data/dogscats/sample/\"",
"A few basic libraries that we'll need for the initial exercises:",
"from __future__ import division,print_function\n\nimport os, json\nfrom glob import glob\nimport numpy as np\nnp.set_printoptions(precision=4, linewidth=100)\nfrom matplotlib import pyplot as plt",
"We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.",
"LESSON_HOME_DIR = os.getcwd()\nimport sys\nsys.path.insert(1, os.path.join(LESSON_HOME_DIR, '../utils'))\nimport utils; reload(utils)\nfrom utils import plots",
"Use a pretrained VGG model with our Vgg16 class\nOur first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.\nWe have created a python class, Vgg16, which makes using the VGG 16 model very straightforward. \nThe punchline: state of the art custom model in 7 lines of code\nHere's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.",
"# As large as you can, but no larger than 64 is recommended.\n# If you have an older or cheaper GPU, you'll run out of memory, so you'll have to decrease this.\nbatch_size=64\n\n# Import our class, and instantiate\nimport vgg16; reload(vgg16)\nfrom vgg16 import Vgg16\n\nvgg = Vgg16()\n# Grab a few images at a time for training and validation.\n# NB: They must be in subdirectories named based on their category\nbatches = vgg.get_batches(path+'train', batch_size=batch_size)\nval_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)\nvgg.finetune(batches)\nvgg.fit(batches, val_batches, nb_epoch=1)",
"The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.\nLet's take a look at how this works, step by step...\nUse Vgg16 for basic image recognition\nLet's start off by using the Vgg16 class to recognise the main imagenet category for each image.\nWe won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.\nFirst, create a Vgg16 object:",
"vgg = Vgg16()",
"Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.\nLet's grab batches of data from our training folder:",
"batches = vgg.get_batches(path+'train', batch_size=4)",
"(BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)\nBatches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.",
"imgs,labels = next(batches)",
"As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where the array contains just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.\nThe arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.",
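One hot encoding is simple enough to show in a few lines. A minimal illustration for the two Dogs vs Cats categories (the helper `one_hot` is ours, not part of the course utilities):

```python
# Map each label to a vector with a single 1 at the category's position
categories = ['cat', 'dog']

def one_hot(label):
    return [1 if c == label else 0 for c in categories]

print(one_hot('cat'))  # [1, 0]
print(one_hot('dog'))  # [0, 1]
```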
"plots(imgs, titles=labels)",
"We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.",
"vgg.predict(imgs, True)",
"The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four:",
"vgg.classes[:4]",
"(Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)\nUse our Vgg16 class to finetune a Dogs vs Cats model\nTo change our model so that it outputs \"cat\" vs \"dog\", instead of one of 1,000 very specific categories, we need to use a process called \"finetuning\". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.\nHowever, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().\nWe create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.",
"batch_size=64\n\nbatches = vgg.get_batches(path+'train', batch_size=batch_size)\nval_batches = vgg.get_batches(path+'valid', batch_size=batch_size)",
"Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.",
"vgg.finetune(batches)",
"Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)",
"vgg.fit(batches, val_batches, nb_epoch=1)",
"That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.\nNext up, we'll dig one level deeper to see what's going on in the Vgg16 class.\nCreate a VGG model from scratch in Keras\nFor the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.\nModel setup\nWe need to import all the modules we'll be using from numpy, scipy, and keras:",
"from numpy.random import random, permutation\nfrom scipy import misc, ndimage\nfrom scipy.ndimage.interpolation import zoom\n\nimport keras\nfrom keras import backend as K\nfrom keras.utils.data_utils import get_file\nfrom keras.models import Sequential, Model\nfrom keras.layers.core import Flatten, Dense, Dropout, Lambda\nfrom keras.layers import Input\nfrom keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D\nfrom keras.optimizers import SGD, RMSprop\nfrom keras.preprocessing import image",
"Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.",
"FILES_PATH = 'http://www.platform.ai/models/'; CLASS_FILE='imagenet_class_index.json'\n# Keras' get_file() is a handy function that downloads files, and caches them for re-use later\nfpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')\nwith open(fpath) as f: class_dict = json.load(f)\n# Convert dictionary with string indexes into an array\nclasses = [class_dict[str(i)][1] for i in range(len(class_dict))]",
"Here are a few examples of the categories we just imported:",
"classes[:5]",
"Model creation\nCreating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.\nVGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:",
"def ConvBlock(layers, model, filters):\n for i in range(layers): \n model.add(ZeroPadding2D((1,1)))\n model.add(Convolution2D(filters, 3, 3, activation='relu'))\n model.add(MaxPooling2D((2,2), strides=(2,2)))",
"...and here's the fully-connected definition.",
"def FCBlock(model):\n model.add(Dense(4096, activation='relu'))\n model.add(Dropout(0.5))",
"When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:",
"# Mean of each channel as provided by the VGG researchers\nvgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))\n\ndef vgg_preprocess(x):\n    x = x - vgg_mean  # subtract the per-channel mean\n    return x[:, ::-1]  # reverse the channel axis: rgb -> bgr",
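To see numerically what this preprocessing does, here is the same function applied to a tiny fake batch of shape (1, 3, 1, 1), i.e. a single 1x1 "image" (the pixel values are made up):

```python
import numpy as np

# Same preprocessing as in the notebook, demonstrated on a tiny fake batch
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3, 1, 1))

def vgg_preprocess(x):
    x = x - vgg_mean   # zero-center each channel
    return x[:, ::-1]  # reverse the channel axis: rgb -> bgr

# One fake 1x1 RGB image whose R, G, B channels sit 1, 2, 3 above the mean
x = (vgg_mean + np.array([1.0, 2.0, 3.0]).reshape((3, 1, 1)))[np.newaxis]
out = vgg_preprocess(x)
print(out.ravel())  # channels are now centered and in B, G, R order: ~[3, 2, 1]
```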
"Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!",
"def VGG_16():\n model = Sequential()\n model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))\n\n ConvBlock(2, model, 64)\n ConvBlock(2, model, 128)\n ConvBlock(3, model, 256)\n ConvBlock(3, model, 512)\n ConvBlock(3, model, 512)\n\n model.add(Flatten())\n FCBlock(model)\n FCBlock(model)\n model.add(Dense(1000, activation='softmax'))\n return model",
"We'll learn about what these different blocks do later in the course. For now, it's enough to know that:\n\nConvolution layers are for finding patterns in images\nDense (fully connected) layers are for combining patterns across an image\n\nNow that we've defined the architecture, we can create the model like any python object:",
"model = VGG_16()",
"As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem. \nDownloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.",
"fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')\nmodel.load_weights(fpath)",
"Getting imagenet predictions\nThe setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.",
"batch_size = 4",
"Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:",
"def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, \n batch_size=batch_size, class_mode='categorical'):\n return gen.flow_from_directory(path+dirname, target_size=(224,224), \n class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)",
"From here we can use exactly the same steps as before to look at predictions from the model.",
"batches = get_batches('train', batch_size=batch_size)\nval_batches = get_batches('valid', batch_size=batch_size)\nimgs,labels = next(batches)\n\n# This shows the 'ground truth'\nplots(imgs, titles=labels)",
"The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.",
"def pred_batch(imgs):\n preds = model.predict(imgs)\n idxs = np.argmax(preds, axis=1)\n\n print('Shape: {}'.format(preds.shape))\n print('First 5 classes: {}'.format(classes[:5]))\n print('First 5 probabilities: {}\\n'.format(preds[0, :5]))\n print('Predictions prob/class: ')\n \n for i in range(len(idxs)):\n idx = idxs[i]\n print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))\n\npred_batch(imgs)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
vitojph/2016progpln
|
examen/progpln-examen-feb.ipynb
|
mit
|
[
"Exam: Programming for Natural Language Processing\nDegree in Linguistics and Applied Languages, UCM\nFebruary 9, 2017\ntl;dr\nWe are going to analyze a collection of tweets in English posted during a football game.\nContext\nLast Sunday the 51st edition of the Superbowl took place: the grand final of the NFL American football championship. The game pitted the New England Patriots (the favorites, from the East Coast, led by Tom Brady) against the Atlanta Falcons (the challengers, from the South, led by Matt Ryan).\n\nAs in any final, the outcome was unpredictable a priori and either team could have won. But that game turned out to be unforgettable, because it began with the underdog sweeping the favorite aside and with a Brady who could not get anything right. At halftime, the scoreboard showed an unexpected 3 - 28 and everything suggested the Falcons would win their first ring.\n\nBut in the second half Brady came back to life... and his team started scoring again and again... with the Falcons knocked out. The Patriots managed to turn the score around and won their fifth Superbowl 34 - 28. Brady was named MVP of the game and hailed as the best quarterback in history.\n\nAs you can imagine, all these swings will give us plenty to work with when analyzing a corpus of Twitter messages. During the first half, you can expect to find messages supporting Atlanta and mocking New England and its players, who were not in great form. But by the end of the game, after the comeback, the opinions and the mockery will have switched sides.\nSince both Tom Brady and his coach, Bill Belichick, had publicly declared their support for Donald Trump during the presidential election, you are very likely to find messages about this and mentions of Democrats and Republicans.\nFinally, Lady Gaga performed during the halftime show, and she stirs up passions in her own way, so there will probably be mentions of other music divas and comparisons with past performances.\n\nThe data\nThe file 2017-superbowl-tweets.tsv located in the /opt/textos/ directory contains a chronologically ordered sample of messages written in English and posted before, during, and after the game. All the messages contain the hashtag #superbowl. Make a copy of this file in the notebooks directory of your personal space.\nThe file is actually a table with four tab-separated columns, containing one line per tweet with the following format:\ntweet_id publication_datetime tweet_author message_text\n\nThe following cell opens the file for reading and loads the messages into the tweets list. Modify the code so that the path points to your local copy of the file.",
"tweets = []\nRUTA = ''\nfor line in open(RUTA).readlines():\n tweets.append(line.split('\\t'))",
"Note the structure of the list: it is a list of four-element lists, one element per field. You can check that the file loaded correctly in the following cell:",
"ultimo_tweet = tweets[-1]\nprint('id =>', ultimo_tweet[0])\nprint('fecha =>', ultimo_tweet[1])\nprint('autor =>', ultimo_tweet[2])\nprint('texto =>', ultimo_tweet[3])",
"Down to business\nFrom here on you can carry out different kinds of analysis. Add as many cells as you need to try, for example:\n\ncomputing various statistics over the collection: number of messages, message length, presence of hashtags and emojis, etc.\nnumber of user mentions, mention frequency, author frequency\ncomputing statistics about users: mentions, messages per user, etc.\ncomputing statistics about the hashtags\ncomputing statistics about the URLs present in the messages\ncomputing statistics about the emojis and emoticons in the messages\nautomatically extracting the named entities that appear in the messages and their frequency\nprocessing the messages to extract and analyze opinions: computing the subjectivity and polarity of the messages\nextracting the named entities that stir up the most passion, and who are the most loved and the most hated, according to the polarity of the messages\nchecking whether the polarity of any entity changes radically as the game progresses\nanything else that occurs to you :-P\n\nRemember that you have at your disposal the Natural Language Processing libraries we used during the course, and that you may use your class notes and any other material you find on the internet. If you need an extra library, let me know and we will install it right away. You can also use command-line tools (from this notebook or by connecting over SSH).\nIt's your turn. Good luck! ;-)",
"# write your code below\n"
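As a starting point for the hashtag and mention statistics listed above, here is a minimal sketch using `re` and `collections.Counter`. The three sample messages are hypothetical stand-ins for the `tweets` list loaded earlier; the regular expressions are deliberately simple.

```python
import re
from collections import Counter

# Hypothetical sample messages standing in for the real tweets list
textos = [
    "Go @Falcons! #superbowl #RiseUp",
    "@TomBrady can't catch a break #superbowl",
    "What a comeback by the @Patriots #superbowl #GoPats",
]

# Count hashtags case-insensitively, and user mentions as written
hashtags = Counter(tag.lower() for t in textos for tag in re.findall(r"#\w+", t))
menciones = Counter(m for t in textos for m in re.findall(r"@\w+", t))

print(hashtags.most_common(2))
print(menciones)
```

The same two `Counter` objects, built over the full corpus, answer several of the exam questions at once (hashtag frequency, mention frequency, most-mentioned users).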
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
compsocialscience/summer-institute
|
2018/materials/boulder/day3-networks/Day 3 - Case Study - Networkx.ipynb
|
mit
|
[
"#!pip3 install networkx\n#!pip3 install matplotlib\n#!pip3 install numpy\n\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport re",
"Network Analysis\nThis group exercise is designed to develop an understanding of basic network measures and to start participants thinking about interesting research questions that can be enabled by network science.\n<ol>\n <li>Divide yourselves into groups of four by counting off in order around the room.</li>\n <li>For 10 minutes, explore the <a href=\"https://icon.colorado.edu/#!/networks\">Index of Complex Networks (ICON)</a> database and identify a network your group might like to investigate further. (If someone in your group has a network ready that you'd all like to analyze, feel free to work on that network instead.)</li>\n <li>Write code to import this network into Python. Play with the <a href=\"https://networkx.github.io/documentation/stable/reference/algorithms/index.html\">built-in functionality</a> of `networkx`. (See the code below for help with this step.)</li>\n <li>For 15 minutes, identify a possible research question using this data. Evaluate the strengths and weaknesses of this data.</li>\n <li>Outline a research design that could be used to address the weaknesses of the data you collected (e.g. think about possible data sets you could combine with this network), or otherwise improve your ability to answer the research question.</li>\n</ol>\n\nThere is only one requirement: the group member with the least amount of experience coding should be responsible for typing the code into a computer. After 40 minutes you should be prepared to give a 3 minute presentation of your work. Remember that these daily exercises are a way for you to get to know each other better and to explore research areas that may be new to you; they are not expected to be fully fleshed-out research projects.\nImporting ICON data\nVisit the ICON website (<a href=\"https://icon.colorado.edu/#!/networks\">link</a>). You can search the index using the checkboxes under the tabs \"network domain,\" \"subdomain,\" \"graph properties,\" and \"size\". 
You can also type in keywords related to the network you would like to find. Here is a screenshot:\n<img src=\"https://user-images.githubusercontent.com/6633242/45270410-79e66a00-b45a-11e8-83df-852d919cdcec.png\"></img>\nTo download a network, click the small yellow downward arrow and follow the link listed under \"source\". Importing this data into Python using networkx will depend on the file type of the network you download. (Check out the <a href=\"https://networkx.github.io/documentation/stable/reference/readwrite/index.html\">package's documentation</a> for how to import networks from different file types.) \nHere's what it looks like to import the Zachary Karate Club from the edgelist provided:",
"with open('karate_edges_77.txt', 'rb') as file: \n karate_club = nx.read_edgelist(file) # Read in the edges\n\ngroups = {}\nwith open('karate_groups.txt', 'r') as file:\n for line in file:\n [node, group] = re.split(r'\\t+', line.strip())\n groups[node] = int(group)\n\nnx.set_node_attributes(karate_club, name = 'group', values = groups) # Add attributes to the nodes (e.g. group membership)",
"Introduction to networkx\nFor very small networks, it can be helpful to visualize the nodes and edges. Below we have colored the nodes with respect to their group within the karate club.",
"position = nx.spring_layout(karate_club)\nnx.draw_networkx_labels(karate_club, pos = position) \n\ncolors = [] # Color the nodes according to their group\nfor attr in nx.get_node_attributes(karate_club, 'group').values():\n if attr == 1: colors.append('blue') \n else: colors.append('green')\n \nnx.draw(karate_club, position, node_color = colors) # Visualize the graph ",
"A natural question you might ask about a network is: which are the most \"important\" nodes? There are many definitions of network importance or centrality. Here let's just consider one of the most straightforward measures: degree centrality -- the number of edges that start or end at a given node.",
"print([(n, karate_club.degree(n)) for n in karate_club.nodes()])",
"NetworkX can be used to return a normalized (divided by the maximum possible degree of the network) degree centrality for all nodes in the network.",
"degrees = nx.degree_centrality(karate_club)\nprint(degrees)",
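To see what the normalization does, here is a small self-contained sketch on toy graphs: the values returned by `degree_centrality` are each node's degree divided by `n - 1`, the maximum possible degree.

```python
import networkx as nx

# In a complete graph on 4 nodes every node has degree 3, the maximum
# possible (n - 1 = 3), so every normalized centrality is exactly 1.0.
K4 = nx.complete_graph(4)
print(nx.degree_centrality(K4))

# In a path on 4 nodes the endpoints have degree 1 and the inner nodes
# degree 2, giving centralities of 1/3 and 2/3 respectively.
P4 = nx.path_graph(4)
print(nx.degree_centrality(P4))
```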
"From both measures, we can see that nodes 1 and 34 have the highest degree. (These happen to be the leaders of the two groups within the club.)\nOn large networks, you might want to look at the degree distribution of your network ...",
"# Enron email data set: http://snap.stanford.edu/data/email-Enron.html.\n# (You can search \"Email network (Enron corpus)\" in ICON.)\nwith open('email_enron.txt', 'rb') as file: \n enron = nx.read_edgelist(file, comments='#') # Read in the edges\n\nprint(\"Enron network contains {0} nodes, and {1} edges.\".format(len(enron.nodes()), len(enron.edges())))\n\ndegree_sequence = list(dict(enron.degree()).values()) \nprint(\"Average degree: {0}, Maximum degree: {1}\".format(np.mean(degree_sequence), max(degree_sequence)))\n\nplt.hist(degree_sequence, bins=30) # Plots histogram of degree sequence\nplt.show()",
"Another network feature you might like to know is how assortative or modular the network is. Another way of asking this is: how likely are similar nodes to be connected to each other? This similarity can be measured along any number of node attributes. Here we ask: how much more likely are nodes from the same group within the karate club to be connected to each other than we would expect at random?",
"assort = nx.attribute_assortativity_coefficient(karate_club, 'group') \nprint(\"Assortativity coefficient: {0}\".format(assort))",
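To build intuition for the coefficient's scale, here is a toy example (a hypothetical four-node graph, not part of the karate data): when edges only ever connect nodes of the same group the coefficient is 1.0, and adding a between-group edge pulls it down.

```python
import networkx as nx

# Perfectly assortative: both edges stay within a group.
G = nx.Graph([(0, 1), (2, 3)])
nx.set_node_attributes(G, {0: 'a', 1: 'a', 2: 'b', 3: 'b'}, name='group')
print(nx.attribute_assortativity_coefficient(G, 'group'))  # 1.0

# One between-group edge lowers the coefficient below 1.
G.add_edge(1, 2)
print(nx.attribute_assortativity_coefficient(G, 'group'))
```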
"You can also add edge attributes, either all at once using set_edge_attributes (like we did above for set_node_attributes), or on an edge by edge basis as shown below. The shortest path between two nodes using that weight can then be calculated.",
"# Example borrowed from: https://www.cl.cam.ac.uk/teaching/1314/L109/tutorial.pdf\ng = nx.Graph()\ng.add_edge('a', 'b', weight=0.1)\ng.add_edge('b', 'c', weight=1.5)\ng.add_edge('a', 'c', weight=1.0)\ng.add_edge('c', 'd', weight=2.2)\n\nprint(nx.shortest_path(g, 'b', 'd'))\nprint(nx.shortest_path(g, 'b', 'd', weight='weight'))",
"Lastly, one might want to create a function on top of these networks. For example, to measure the average degree of a node's neighbors:",
"# Example borrowed from: https://www.cl.cam.ac.uk/teaching/1314/L109/tutorial.pdf\ndef avg_neigh_degree(g):\n data = {}\n for n in g.nodes():\n if g.degree(n):\n data[n] = float(sum(g.degree(i) for i in g[n]))/g.degree(n) \n return data\n\navg_neigh_degree(g) # Can you confirm that this is returning the correct results?",
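One way to answer the question in the comment above is a quick cross-check against networkx's built-in `average_neighbor_degree`, which computes the same measure. This sketch rebuilds the small four-node graph so the cell is self-contained.

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([('a', 'b'), ('b', 'c'), ('a', 'c'), ('c', 'd')])

# Hand computation: 'd' has a single neighbor, 'c', whose degree is 3,
# so its average neighbor degree is 3.0; for 'a' it is (2 + 3) / 2 = 2.5.
manual = {n: sum(g.degree(i) for i in g[n]) / g.degree(n) for n in g.nodes()}
print(manual)

# networkx's built-in version of the same measure, for comparison.
print(nx.average_neighbor_degree(g))
```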
"Now, run a similar analysis on the network you have chosen from ICON (or a network that a group member has ready). Feel free not to be confined by the networkx functionality shown above; tap into your group's expertise and academic disciplines to identify other network measures you might be interested in."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
scikit-optimize/scikit-optimize.github.io
|
dev/notebooks/auto_examples/plots/partial-dependence-plot.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Partial Dependence Plots\nSigurd Carlsen Feb 2019\nHolger Nahrstaedt 2020\n.. currentmodule:: skopt\nPlot objective now supports optional use of partial dependence as well as\ndifferent methods of defining parameter values for dependency plots.",
"print(__doc__)\nimport sys\nfrom skopt.plots import plot_objective\nfrom skopt import forest_minimize\nimport numpy as np\nnp.random.seed(123)\nimport matplotlib.pyplot as plt",
"Objective function\nPlot objective now supports optional use of partial dependence as well as\ndifferent methods of defining parameter values for dependency plots",
"# Here we define a function that we evaluate.\ndef funny_func(x):\n s = 0\n for i in range(len(x)):\n s += (x[i] * i) ** 2\n return s",
"Optimisation using decision trees\nWe run forest_minimize on the function",
"bounds = [(-1, 1.), ] * 3\nn_calls = 150\n\nresult = forest_minimize(funny_func, bounds, n_calls=n_calls,\n base_estimator=\"ET\",\n random_state=4)",
"Partial dependence plot\nHere we see an example of using partial dependence. Even when setting\nn_points all the way down to 10 from the default of 40, this method is\nstill very slow. This is because partial dependence calculates 250 extra\npredictions for each point on the plots.",
"_ = plot_objective(result, n_points=10)",
"It is possible to change the location of the red dot, which normally shows\nthe position of the found minimum. We can set it to 'expected_minimum',\nwhich is the minimum value of the surrogate function, obtained by a\nminimum search method.",
"_ = plot_objective(result, n_points=10, minimum='expected_minimum')",
"Plot without partial dependence\nHere we plot without partial dependence. We see that it is a lot faster.\nAlso the values for the other parameters are set to the default \"result\"\nwhich is the parameter set of the best observed value so far. In the case\nof funny_func this is close to 0 for all parameters.",
"_ = plot_objective(result, sample_source='result', n_points=10)",
"Modify the shown minimum\nHere we try setting the minimum parameters to something other than\n\"result\". First we try \"expected_minimum\", which is the set of\nparameters that gives the minimum value of the surrogate function,\nusing scipy's minimum search method.",
"_ = plot_objective(result, n_points=10, sample_source='expected_minimum',\n minimum='expected_minimum')",
"\"expected_minimum_random\" is a naive way of finding the minimum of the\nsurrogate by only using random sampling:",
"_ = plot_objective(result, n_points=10, sample_source='expected_minimum_random',\n minimum='expected_minimum_random')",
"We can also specify how many initial samples are used for the two different\n\"expected_minimum\" methods. We set it to a low value in the next examples\nto showcase how it affects the minimum for the two methods.",
"_ = plot_objective(result, n_points=10, sample_source='expected_minimum_random',\n minimum='expected_minimum_random',\n n_minimum_search=10)\n\n_ = plot_objective(result, n_points=10, sample_source=\"expected_minimum\",\n minimum='expected_minimum', n_minimum_search=2)",
"Set a minimum location\nLastly, we can also define these parameters ourselves by passing a list\nas the minimum argument:",
"_ = plot_objective(result, n_points=10, sample_source=[1, -0.5, 0.5],\n minimum=[1, -0.5, 0.5])"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dnc1994/MachineLearning-UW
|
ml-foundations/backup/house-price/Predicting house prices.ipynb
|
mit
|
[
"Fire up graphlab create",
"import graphlab",
"Load some house sales data\nDataset is from house sales in King County, the region where the city of Seattle, WA is located.",
"sales = graphlab.SFrame('home_data.gl/')\n\nsales",
"Exploring the data for housing sales\nThe house price is correlated with the number of square feet of living space.",
"graphlab.canvas.set_target('ipynb')\nsales.show(view=\"Scatter Plot\", x=\"sqft_living\", y=\"price\")",
"Create a simple regression model of sqft_living to price\nSplit data into training and testing.\nWe use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).",
"train_data,test_data = sales.random_split(.8,seed=0)",
"Build the regression model using only sqft_living as a feature",
"sqft_model = graphlab.linear_regression.create(train_data, target='price', features=['sqft_living'])",
"Evaluate the simple model",
"print test_data['price'].mean()\n\nprint sqft_model.evaluate(test_data)",
"RMSE of about \\$255,170!\nLet's show what our predictions look like\nMatplotlib is a popular Python plotting library that we will use to visualize our predictions. You can install it with:\n'pip install matplotlib'",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(test_data['sqft_living'],test_data['price'],'.',\n test_data['sqft_living'],sqft_model.predict(test_data),'-')",
"Above: blue dots are original data, green line is the prediction from the simple regression.\nBelow: we can view the learned regression coefficients.",
"sqft_model.get('coefficients')",
"Explore other features in the data\nTo build a more elaborate model, we will explore using more features.",
"my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']\n\nsales[my_features].show()\n\nsales.show(view='BoxWhisker Plot', x='zipcode', y='price')",
"Pull the bar at the bottom to view more of the data. \n98039 is the most expensive zip code.\nBuild a regression model with more features",
"my_features_model = graphlab.linear_regression.create(train_data,target='price',features=my_features)\n\nprint my_features",
"Comparing the results of the simple model with adding more features",
"print sqft_model.evaluate(test_data)\nprint my_features_model.evaluate(test_data)",
"The RMSE goes down from \\$255,170 to \\$179,508 with more features.\nApply learned models to predict prices of 3 houses\nThe first house we will use is considered an \"average\" house in Seattle.",
"house1 = sales[sales['id']=='5309101200']\n\nhouse1",
"<img src=\"house-5309101200.jpg\">",
"print house1['price']\n\nprint sqft_model.predict(house1)\n\nprint my_features_model.predict(house1)",
"In this case, the model with more features provides a worse prediction than the simpler model with only 1 feature. However, on average, the model with more features is better.\nPrediction for a second, fancier house\nWe will now examine the predictions for a fancier house.",
"house2 = sales[sales['id']=='1925069082']\n\nhouse2",
"<img src=\"house-1925069082.jpg\">",
"print sqft_model.predict(house2)\n\nprint my_features_model.predict(house2)",
"In this case, the model with more features provides a better prediction. This behavior is expected here, because this house is more differentiated by features that go beyond its square feet of living space, especially the fact that it's a waterfront house. \nLast house, super fancy\nOur last house is a very large one owned by a famous Seattleite.",
"bill_gates = {'bedrooms':[8], \n 'bathrooms':[25], \n 'sqft_living':[50000], \n 'sqft_lot':[225000],\n 'floors':[4], \n 'zipcode':['98039'], \n 'condition':[10], \n 'grade':[10],\n 'waterfront':[1],\n 'view':[4],\n 'sqft_above':[37500],\n 'sqft_basement':[12500],\n 'yr_built':[1994],\n 'yr_renovated':[2010],\n 'lat':[47.627606],\n 'long':[-122.242054],\n 'sqft_living15':[5000],\n 'sqft_lot15':[40000]}",
"<img src=\"house-bill-gates.jpg\">",
"print my_features_model.predict(graphlab.SFrame(bill_gates))",
"The model predicts a price of over $13M for this house! But we expect the house to cost much more. (There are very few samples in the dataset of houses that are this fancy, so we don't expect the model to capture a perfect prediction here.)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
deepchem/deepchem
|
examples/tutorials/Multisequence_Alignments.ipynb
|
mit
|
[
"Multisequence Alignment (MSA)\nProteins are made up of sequences of amino acids chained together. Their amino acid sequence determines their structure and function. Finding proteins with similar sequences, or homologous proteins, is very useful in identifying the structures and functions of newly discovered proteins as well as identifying their ancestry. Below is an example of what a protein amino acid multisequence alignment may look like, taken from [2].\n\nColab\nThis tutorial and the rest in this sequence can be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.\n\nHH-suite\nThis tutorial will show you the basics of how to use hh-suite. hh-suite is an open source package for searching protein sequence alignments for homologous proteins. It is the current state of the art for building highly accurate multisequence alignments (MSA) from a single sequence or from MSAs.\nReferences:\n[1] Steinegger M, Meier M, Mirdita M, Vöhringer H, Haunsberger S J, and Söding J (2019) HH-suite3 for fast remote homology detection and deep protein annotation, BMC Bioinformatics, 473. doi: 10.1186/s12859-019-3019-7\n[2] Kunzmann, P., Mayer, B.E. & Hamacher, K. Substitution matrix based color schemes for sequence alignment visualization. BMC Bioinformatics 21, 209 (2020). https://doi.org/10.1186/s12859-020-3526-6\nSetup\nLet's start by importing the deepchem sequence_utils module and downloading a database to compare our query sequence to.\nhh-suite provides a set of HMM databases that will work with the software, which you can find here: http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs\ndbCAN is a good one for this tutorial because it is a relatively smaller download.",
"from deepchem.utils import sequence_utils\n\n%%bash\nmkdir hh\ncd hh \nmkdir databases; cd databases\nwget http://wwwuser.gwdg.de/~compbiol/data/hhsuite/databases/hhsuite_dbs/dbCAN-fam-V9.tar.gz\ntar xzvf dbCAN-fam-V9.tar.gz",
"Using hhsearch\nhhblits and hhsearch are the main functions in hhsuite which identify homologous proteins. They do this by calculating a profile hidden Markov model (HMM) from a given alignment and searching over a reference HMM proteome database using the Viterbi algorithm. Then the most similar HMMs are realigned and output to the user. To learn more, check out the original paper in the references above.\nRun a function from hhsuite with no parameters to read its documentation.",
"!hhsearch",
"Let's do an example. Say we have a protein which we want to compare to an MSA in order to identify any homologous regions. For this we can use hhsearch. \nNow let's take some protein sequence and search through the dbCAN database to see if we can find any potential homologous regions. First we will specify the sequence and save it as a FASTA or a3m file so that it is readable by hhsearch. I pulled this sequence from the example query.a3m in the hhsuite data directory.",
"with open('protein.fasta', 'w') as f:\n f.write(\"\"\"\n>Uncharacterized bovine protein (Fragment)\n--PAGGQCtgiWHLLTRPLRP--QGRLPGLRVKYVFLVWLGVFAGSWMAYTHYSSYAELCRGHICQVVICDQFRKGIISGSICQDLCHLHQVEWRTCLSSVPGQQVYSGLWQGKEVTIKCGIEESLNSKAGSDGAPRRELVLFDKPSRGTSIKEFREMTLSFLKANLGDLPSLPALVGRVLLMADFNKDNRVSLAEAKSVWALLQRNEFLLLLSLQEKEHASRLLGYCGDLYVTEGVPLSSWPGATLPPLLRPLLPPALHGALQQWLGPAWPWRAKIAMGLLEFVEDLFHGAYGNFYMCETTLANVGYTAKYDFRMADLQQVAPEAAVRRFLRGRRCEHSADCTYGRDCRAPCDTLMRQCKGDLVQPNLAKVCELLRDYLLPGAPAALRPELGKQLRTCTTLSGLASQVEAHHSLVLSHLKSLLWKEISDSRYT\n\"\"\")",
"Then we can call hhsearch, specifying the query sequence with the -i flag, the database to search through with -d, and the output with -o.",
"from deepchem.utils import sequence_utils\ndataset_path = 'protein.fasta'\ndata_dir = 'hh/databases'\nresults = sequence_utils.hhsearch(dataset_path,database='dbCAN-fam-V9', data_dir=data_dir)\n\n\n\n#open the results and print them\nf = open(\"protein.hhr\", \"r\")\nprint(f.read())",
"Two files are output and saved to the dataset directory, named after the query sequence: here protein.hhr and protein.a3m. The .hhr file is the hhsuite results file, a summary of the results; the .a3m file is the actual MSA.\nIn the hhr file, the 'Prob' column describes the estimated probability of the query sequence being at least partially homologous to the template. Probabilities of 95% or more are nearly certain, and probabilities of 30% or more call for closer consideration. The E-value tells you how many random matches with a better score would be expected if the searched database were unrelated to the query sequence. These results show that none of the sequences align well with our randomly chosen protein, which is to be expected because our query sequence was chosen at random.\nNow let's check the results if we use a sequence that we know will align with something in the dbCAN database. I pulled this protein from the dockerin.faa file in dbCAN.",
"with open('protein2.fasta', 'w') as f:\n f.write(\"\"\">dockerin,22,NCBI-Bacteria,gi|125972715|ref|YP_001036625.1|,162-245,0.033\nSCADLNGDGKITSSDYNLLKRYILHLIDKFPIGNDETDEGINDGFNDETDEDINDSFIEANSKFAFDIFKQISKDEQGKNVFIS\n\"\"\")\n \ndataset_path = 'protein2.fasta'\nsequence_utils.hhsearch(dataset_path,database='dbCAN-fam-V9', data_dir=data_dir)\n\n#open the results and print them\nf = open(\"protein2.hhr\", \"r\")\nprint(f.read())",
"As you can see, there are 2 sequences which are a match for our query sequence. \nUsing hhblits\nhhblits works in much the same way as hhsearch, but it is much faster and slightly less sensitive. This would be more suited to searching very large databases, or producing a MSA with multiple sequences instead of just one. Let's make use of that by using our query sequence to create an MSA. We could then use that MSA, with its family of proteins, to search a larger database for potential matches. This will be much more effective than searching a large database with a single sequence.\nWe will use the same dbCAN database. I will pull a glycoside hydrolase protein from UnipProt, so it will likely be related to some proteins in dbCAN, which has carbohydrate-active enzymes.\nThe option -oa3m will tell hhblits to output an MSA as an a3m file. The -n option specifies the number of iterations. This is recommended to keep between 1 and 4, we will try 2.",
"\n!wget -O protein3.fasta https://www.uniprot.org/uniprot/G8M3C3.fasta\n\ndataset_path = 'protein3.fasta'\nsequence_utils.hhblits(dataset_path,database='dbCAN-fam-V9', data_dir=data_dir)\n\n#open the results and print them\nf = open(\"protein3.hhr\", \"r\")\nprint(f.read())",
"We can see that the exact protein was found in dbCAN in hit 1, but also some highly related proteins were found in hits 1-5. This query.a3m MSA can then be useful if we want to search a larger database like UniProt or Uniclust because it includes this more diverse selection of related protein sequences. \nOther hh-suite functions\nhhsuite contains other functions which may be useful if you are working with MSA or HMMs. For more detailed information, see the documentation at https://github.com/soedinglab/hh-suite/wiki\nhhmake: Build an HMM from an input MSA\nhhfilter: Filter an MSA by max sequence identity, coverage, and other criteria\nhhalign: Calculate pairwise alignments etc. for two HMMs/MSAs\nhhconsensus: Calculate the consensus sequence for an A3M/FASTA input file\nreformat.pl: Reformat one or many MSAs\naddss.pl: Add PSIPRED predicted secondary structure to an MSA or HHM file\nhhmakemodel.pl: Generate MSAs or coarse 3D models from HHsearch or HHblits \nresults\nhhmakemodel.py: Generates coarse 3D models from HHsearch or HHblits results and modifies cif files such that they are compatible with MODELLER\nhhsuitedb.py: Build HHsuite database with prefiltering, packed MSA/HMM, and index files\nsplitfasta.pl: Split a multiple-sequence FASTA file into multiple single-sequence files\nrenumberpdb.pl: Generate PDB file with indices renumbered to match input sequence indices\nHHPaths.pm: Configuration file with paths to the PDB, BLAST, PSIPRED etc.\nmergeali.pl: Merge MSAs in A3M format according to an MSA of their seed sequences\npdb2fasta.pl: Generate FASTA sequence file from SEQRES records of globbed pdb files\ncif2fasta.py: Generate a FASTA sequence from the pdbx_seq_one_letter_code entry of the entity_poly of globbed cif files\npdbfilter.pl: Generate representative set of PDB/SCOP sequences from pdb2fasta.pl output\npdbfilter.py: Generate representative set of PDB/SCOP sequences from cif2fasta.py output\nCongratulations! 
Time to join the Community!\nCongratulations on completing this tutorial notebook! If you enjoyed working through the tutorial, and want to continue working with DeepChem, we encourage you to finish the rest of the tutorials in this series. You can also help the DeepChem community in the following ways:\nStar DeepChem on GitHub\nThis helps build awareness of the DeepChem project and the tools for open source drug discovery that we're trying to build.\nJoin the DeepChem Gitter\nThe DeepChem Gitter hosts a number of scientists, developers, and enthusiasts interested in deep learning for the life sciences. Join the conversation!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
maxis42/ML-DA-Coursera-Yandex-MIPT
|
1 Mathematics and Python/Homework/1 linear algebra and texts similarity and function approximation/Task 1.ipynb
|
mit
|
[
"Task 1: comparing sentences\n1. Read the sentences from a file and convert them to lowercase.",
"with open('sentences.txt', 'r') as fileSentences:\n dataSentences = list(fileSentences)\n\nfor line in dataSentences:\n print line\n\ndataSentencesLower = []\nfor i in xrange(len(dataSentences)):\n dataSentencesLower.append(dataSentences[i].lower())\n\nfor line in dataSentencesLower:\n print line",
"2. Tokenize the texts (split them into words) and remove empty tokens.",
"import re\n\ndataWords = []\nfor line in dataSentencesLower:\n dataWords.append(re.split('[^a-z]', line))\n \nfor line in dataWords:\n print line\n\ndataWordsCleared = [[] for i in xrange(len(dataWords))]\n\ni = 0\nfor line in dataWords:\n for word in line:\n if word != '':\n dataWordsCleared[i].append(word)\n i = i + 1\n\nfor line in dataWordsCleared:\n print line",
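As a design note: the split-then-filter pattern above (splitting on non-letters and then dropping the empty strings) can be collapsed into a single call by matching the words directly. A small sketch:

```python
import re

line = "cats and dogs, dogs and cats!"

# re.split('[^a-z]', ...) yields empty strings wherever two separators
# are adjacent (or at the ends), which then have to be filtered out.
print(re.split('[^a-z]', line))

# re.findall with the complementary pattern returns only the words.
print(re.findall('[a-z]+', line))  # ['cats', 'and', 'dogs', 'dogs', 'and', 'cats']
```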
"3. Build a list of all words.",
"dictWords = {}\n\ni = 0\nfor line in dataWordsCleared:\n for word in line:\n if word not in dictWords:\n dictWords[word] = i\n i += 1\n \nfor item in dictWords:\n print item, \": \", dictWords[item]\n\nimport pandas as pd\n\nprint len(dictWords)\n# number of distinct words in the file\n\nprint dictWords.keys()\n\nframeWords = pd.DataFrame(dictWords, xrange(len(dataWordsCleared)))\nrowsCnt, colCnt = frameWords.shape\nprint rowsCnt, colCnt\n\nfor i in xrange(rowsCnt):\n for j in xrange(colCnt):\n frameWords.ix[i, j] = 0\n \nframeWords\n\nprint len(dataWordsCleared)\nfor i in xrange(len(dataWordsCleared)):\n for word in dataWordsCleared[i]:\n frameWords.ix[i, word] += 1\n \nframeWords",
"6. Найти косинусное расстояние от предложения в самой первой строке до всех остальных.",
"from scipy.spatial.distance import cosine\n\ndistanceFromFirstSentence = []\n\nfor i in xrange(rowsCnt):\n distanceFromFirstSentence.append(cosine(frameWords.ix[0], frameWords.ix[i]))\n \nprint distanceFromFirstSentence\n\ndistanceFromFirstSentenceCopy = list(distanceFromFirstSentence)\ntwoClosestValues = [[-1, 0], [-1, 0]]\n\ndistanceFromFirstSentenceCopy.remove(min(distanceFromFirstSentenceCopy))\n\nfor i in xrange(2):\n twoClosestValues[i][1] = min(distanceFromFirstSentenceCopy)\n for j in xrange(len(distanceFromFirstSentence)):\n if twoClosestValues[i][1] == distanceFromFirstSentence[j]:\n twoClosestValues[i][0] = j\n distanceFromFirstSentenceCopy.remove(min(distanceFromFirstSentenceCopy))\n\ntwoClosestValues = sorted(twoClosestValues)\ntwoClosestValues",
"7. Запись ответа в файл.",
"with open('answer.txt', 'w') as fileAnswer:\n for i in xrange(len(twoClosestValues)):\n fileAnswer.write(str(twoClosestValues[i][0]) + ' ')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gloriakang/vax-sentiment
|
to_do/vax_temp/multigraph-analysis.ipynb
|
mit
|
[
"Plot graph and basic analysis\n\nMultiGraph\nnormal template for reading gml file\nfor undirected multigraph\n\narticle1.gml",
"import networkx as nx\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom glob import glob\n\n# read .gml file\ngraph = nx.read_gml('article0.gml')\n\n# read pajek file\n# graph = nx.read_pajek('article1.net')\n\n# plot spring layout\nplt.figure(figsize=(12,12))\nnx.draw_spring(graph, arrows=True, with_labels=True)\n\ninfo = nx.info(graph)\nprint info\n\nplt.figure(figsize=(12,12))\nnx.draw_circular(graph, arrows=True, with_labels=True)",
"Degree histogram\nReturn a list of the frequency of each degree value",
"# returns a list of frequencies of degrees;\n# The degree values are the index in the list.\nprint nx.degree_histogram(graph)\n\ndegree_sequence=sorted(nx.degree(graph).values(),reverse=True) # degree sequence\n#print \"Degree sequence\", degree_sequence\ndmax=max(degree_sequence)\n\nplt.loglog(degree_sequence,'b-',marker='o')\nplt.title(\"Degree rank plot\")\nplt.ylabel(\"degree\")\nplt.xlabel(\"rank\")\n\n# draw graph in inset\nplt.axes([0.45,0.45,0.45,0.45])\nGcc=sorted(nx.connected_component_subgraphs(graph), key = len, reverse=True)[0]\npos=nx.spring_layout(Gcc)\nplt.axis('off')\nnx.draw_networkx_nodes(Gcc,pos,node_size=20)\nnx.draw_networkx_edges(Gcc,pos,alpha=0.4)\n\nplt.show()",
"Density\nNotes: The density is 0 for a graph without edges and 1 for a complete graph. The density of multigraphs can be higher than 1. Self loops are counted in the total number of edges so graphs with self loops can have density higher than 1.",
"density = nx.density(graph)\nprint \"Density =\", density",
"Degree centrality\nDegree centrality for a node v is the fraction of nodes it is connected to",
"# get all the values of the dictionary, this returns a list of centrality scores\n# turn the list into a numpy array\n# take the mean of the numpy array\ndeg_cen = np.array(nx.degree_centrality(graph).values()).mean()\nprint \"Degree centrality =\", deg_cen",
"Closeness centrality\n\nCloseness centrality of a node u is the reciprocal of the sum of the shortest path distances from u to all n-1 other nodes. Since the sum of distances depends on the number of nodes in the graph, closeness is normalized by the sum of minimum possible distances n-1\nHigher values of closeness indicate higher centrality",
"clo_cen = np.array(nx.closeness_centrality(graph).values()).mean()\nprint \"Closeness centrality =\", clo_cen",
"Betweenness centrality\nBetweenness centrality of a node v is the sum of the fraction of all pairs shortest paths that pass through v\n- Compute the shortest-path betweenness centrality for nodes",
"nx.betweenness_centrality(graph)\nbet_cen = np.array(nx.betweenness_centrality(graph).values()).mean()\nprint \"Betweenness centrality =\", bet_cen",
"Current-flow betweenness centrality\n\nCurrent-flow betweenness centrality uses an electrical current model for information spreading in contrast to betweenness centrality which uses shortest paths.\nCurrent-flow betweenness centrality is also known as random-walk betweenness centrality",
"# graph must be connected\n# nx.current_flow_betweenness_centrality(graph)",
"Degree assortativity coefficient",
"deg_ac = nx.degree_assortativity_coefficient(graph)\nprint \"Degree assortativity coefficient =\", deg_ac",
"Degree pearson correlation coefficient\nAssortativity measures the similarity of connections in the graph with respect to the node negree\n- Returns r -- Assortativity of graph by degree",
"deg_pcc = nx.degree_pearson_correlation_coefficient(graph)\nprint \"Degree pearson correlation coefficient =\", deg_pcc\n\n## Clustering coefficient\n# (cannot be multigraph)\n# nx.average_clustering(graph)\n\n## Condensation\n# nx.condensation(graph)",
"Average node connectivity\nThe average connectivity \\bar{\\kappa} of a graph G is the average of local node connectivity over all pairs of nodes of G",
"#nx.edge_connectivity(graph)\n#nx.node_connectivity(graph)\navg_node_con = nx.average_node_connectivity(graph)\nprint \"Average node connectivity =\", avg_node_con",
"Closeness vitality\nCompute closeness vitality for nodes. Closeness vitality of a node is the change in the sum of distances between all node pairs when excluding that node.",
"# example\nG = nx.cycle_graph(3)\nnx.draw(G)\nnx.closeness_vitality(G)\n\n#nx.closeness_vitality(graph)\n\n# intersection_all()\n# return a new graph that contains only the edges that exist in all graphs\n# all supplied graphs must have the same node set",
"Summary",
"print info\nprint \"Density =\", density\nprint \"Degree centrality =\", deg_cen\nprint \"Closeness centrality =\", clo_cen\nprint \"Betweenness centrality =\", bet_cen\nprint \"Degree assortativity coefficient =\", deg_ac\nprint \"Degree pearson correlation coefficient =\", deg_pcc\nprint \"Average node connectivity =\", avg_node_con\n#print \"Closeness vitality =\""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ngcm/training-public
|
FEEG6016 Simulation and Modelling/09-Stochastic-DEs-Lab-1.ipynb
|
mit
|
[
"Stochastic Differential Equations: Lab 1",
"from IPython.core.display import HTML\ncss_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'\nHTML(url=css_file)",
"This background for these exercises is article of D Higham, An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations, SIAM Review 43:525-546 (2001).\nHigham provides Matlab codes illustrating the basic ideas at http://personal.strath.ac.uk/d.j.higham/algfiles.html, which are also given in the paper.\nFor random processes in python you should look at the numpy.random module. To set the initial seed (which you should not do in a real simulation, but allows for reproducible testing), see numpy.random.seed.\nBrownian processes\nA random walk or Brownian process or Wiener process is a way of modelling error introduced by uncertainty into a differential equation. The random variable representing the walk is denoted $W$. A single realization of the walk is written $W(t)$. We will assume that\n\nThe walk (value of $W(t)$) is initially (at $t=0$) $0$, so $W(0)=0$, to represent \"perfect knowledge\" there;\nThe walk is on average zero, so $\\mathbb{E}[W(t+h) - W(t)] = 0$, where the expectation value is\n$$\n \\mathbb{E}[W] = \\int_{-\\infty}^{\\infty} t W(t) \\, \\text{d}t\n$$\nAny step in the walk is independent of any other step, so $W(t_2) - W(t_1)$ is independent of $W(s_2) - W(s_1)$ for any $s_{1,2} \\ne t_{1,2}$.\n\nThese requirements lead to a definition of a discrete random walk: given the points ${ t_i }$ with $i = 0, \\dots, N$ separated by a uniform timestep $\\delta t$, we have - for a single realization of the walk - the definition\n$$\n\\begin{align}\n \\text{d}W_i &= \\sqrt{\\delta t} {\\cal N}(0, 1), \\\n W_i &= \\left( \\sum_{j=0}^{i-1} \\text{d}W_j \\right), \\\n W_0 &= 0\n\\end{align}\n$$\nHere ${\\cal N}(0, 1)$ means a realization of a normally distributed random variable with mean $0$ and standard deviation $1$: programmatically, the output of numpy.random.randn.\nWhen working with discrete Brownian processes, there are two things we can do.\n\nWe can think about a single realization at different timescales, by averaging 
over more points. E.g.\n$$\n W_i = \\left( \\sum_{j=0}^{i-1} \\sum_{k=0}^{p-1} \\text{d}W_{(p j + k)} \\right)\n$$\nis a Brownian process with timestep $p \\, \\delta t$.\nWe can think about multiple realizations by computing a new set of steps $\\text{d}W$, whilst keeping the same timestep.\n\nBoth viewpoints are important.\nTasks\n\nSimulate a single realization of a Brownian process over $[0, 1]$ using a step length $\\delta t = 1/N$ for $N = 500, 1000, 2000$. Use a fixed seed of 100. Compare the results.\nSimulate different realizations of a Brownian process with $\\delta t$ of your choice. Again, compare the results.",
"%matplotlib inline\nimport numpy\nfrom matplotlib import pyplot\nfrom matplotlib import rcParams\nrcParams['font.family'] = 'serif'\nrcParams['font.size'] = 16\nrcParams['figure.figsize'] = (12,6)\nfrom scipy.integrate import quad",
"Evaluate the function $u(W(t)) = \\sin^2(t + W(t))$, where $W(t)$ is a Brownian process, on $M$ Brownian paths for $M = 500, 1000, 2000$. Compare the average path for each $M$.\nThe average path at time $t$ should be given by\n$$\n\\begin{equation}\n \\int_{-\\infty}^{\\infty} \\frac{\\sin(t+s)^2 \\exp(-s^2 / 2t)}{\\sqrt{2 \\pi t}} \\,\\text{d}s.\n\\end{equation}\n$$",
"# This computes the exact solution!\n\nt_int = numpy.linspace(0.005, numpy.pi, 1000)\n\ndef integrand(x,t):\n return numpy.sin(t+x)**2*numpy.exp(-x**2/(2.0*t))/numpy.sqrt(2.0*numpy.pi*t)\n\nint_exact = numpy.zeros_like(t_int)\nfor i, t in enumerate(t_int):\n int_exact[i], err = quad(integrand, -numpy.inf, numpy.inf, args=(t,))",
"Stochastic integrals\nWe have, in eg finite elements or multistep methods for IVPs, written the solution of differential equations in terms of integrals. We're going to do the same again, so we need to integrate random variables. The integral of a random variable with respect to a Brownian process is written\n$$\n \\int_0^t G(s) \\, \\text{d}W_s,\n$$\nwhere the notation $\\text{d}W_s$ indicates that the step in the Brownian process depends on the (dummy) independent variable $s$.\nWe'll concentrate on the case $G(s) = W(s)$, so we're trying to integrate the Brownian process itself. If this were a standard, non-random variable, the answer would be \n$$\n \\int_0^t W(s) \\, \\text{d}W_s = \\frac{1}{2} \\left( W(t)^2 - W(0)^2 \\right).\n$$\nWhen we approximate the quadrature numerically than we would split the interval $[0, T]$ into strips (subintervals), approximate the integral on each subinterval by picking a point inside the interval, evaluating the integrand at that point, and weighting it by the width of the subinterval. In normal integration it doesn't matter which point within the subinterval we choose.\nIn the stochastic case that is not true. We pick a specific point $\\tau_i = a t_i + (1-a) t_{i-1}$ in the interval $[t_{i-1}, t_i]$. The value $a \\in [0, 1]$ is a constant that says where within each interval we are evaluating the integrand. 
We can then approximate the integral by\n\\begin{equation}\n \\int_0^T W(s) \\, dW_s \\approx \\sum_{i=1}^N W(\\tau_i) \\left[ W(t_i) - W(t_{i-1}) \\right] = S_N.\n\\end{equation}\nNow we can compute (using that the expectation of the products of $W$ terms is the covariance, which is the minimum of the arguments)\n\\begin{align}\n \\mathbb{E}(S_N) &= \\mathbb{E} \\left( \\sum_{i=1}^N W(\\tau_i) \\left[ W(t_i) - W(t_{i-1}) \\right] \\right) \\\n &= \\sum_{i=1}^N \\mathbb{E} \\left( W(\\tau_i) W(t_i) \\right) - \\mathbb{E} \\left( W(\\tau_i) W(t_{i-1}) \\right) \\\n &= \\sum_{i=1}^N (\\min(\\tau_i, t_i) - \\min(\\tau_i, t_{i-1})) \\\n &= \\sum_{i=1}^N (\\tau_i - t_{i-1}) \\\n &= (T - t_0) a.\n\\end{align}\nThe choice of evaluation point matters.\nSo there are multiple different stochastic integrals, each (effectively) corresponding to a different choice of $a$. There are two standard choices.\n\nIto: choose $a=0$.\nStratonovich: choose $a=1/2$.\n\nThese lead to\n$$\n \\int_0^t G(s) \\, \\text{d}W_s \\simeq_{\\text{Ito}} \\sum_{j=0}^{N-1} G(s_j, W(s_j)) \\left( W(s_{j+1}) - W(s_j) \\right) = \\sum_{j=0}^{N-1} G(s_j) \\text{d}W(s_{j})\n$$\nfor the Ito integral, and\n$$\n \\int_0^t G(s) \\, \\text{d}W_s \\simeq_{\\text{Stratonovich}} \\sum_{j=0}^{N-1} \\frac{1}{2} \\left( G(s_j, W(s_j)) + G(s_{j+1}, W(s_{j+1})) \\right) \\left( W(s_{j+1}) - W(s_j) \\right) = \\sum_{j=0}^{N-1} \\frac{1}{2} \\left( G(s_j, W(s_j)) + G(s_{j+1}, W(s_{j+1})) \\right) \\text{d}W(s_{j}).\n$$\nfor the Stratonovich integral.\nTasks\nWrite functions to compute the Itô and Stratonovich integrals of a function $h(t, W(t))$ of a given Brownian process $W(t)$ over the interval $[0, 1]$.",
"def ito(h, trange, dW):\n \"\"\"Compute the Ito stochastic integral given the range of t.\n \n Parameters\n ----------\n \n h : function\n integrand\n trange : list of float\n the range of integration\n dW : array of float\n Brownian increments\n seed : integer\n optional seed for the Brownian path\n Returns\n -------\n \n ito : float\n the integral\n \"\"\"\n \n return ito\n\ndef stratonovich(h, trange, dW):\n \"\"\"Compute the Stratonovich stochastic integral given the range of t.\n \n Parameters\n ----------\n \n h : function\n integrand\n trange : list of float\n the range of integration\n dW : array of float\n the Brownian increments\n \n Returns\n -------\n \n stratonovich : float\n the integral\n \"\"\"\n \n return stratonovich",
"Test the functions on $h = W(t)$ for various $N$. Compare the limiting values of the integrals.\nEuler-Maruyama's method\nNow we can write down a stochastic differential equation.\nThe differential form of a stochastic differential equation is\n$$\n \\frac{\\text{d}X}{\\text{d}t} = f(X) + g(X) \\frac{\\text{d}W}{\\text{d}t}\n$$\nand the comparable (and more useful) integral form is\n$$\n \\text{d}X = f(X) \\, \\text{d}t + g(X) \\text{d}W.\n$$\nThis has formal solution\n$$\n X(t) = X_0 + \\int_0^t f(X(s)) \\, \\text{d}s + \\int_0^t g(X(s)) \\, \\text{d}W_s.\n$$\nWe can use our Ito integral above to write down the Euler-Maruyama method\n$$\n X(t+h) \\simeq X(t) + h f(X(t)) + g(X(t)) \\left( W(t+h) - W(t) \\right) + {\\cal{O}}(h^p).\n$$\nWritten in discrete, subscript form we have\n$$\n X_{n+1} = X_n + h f_n + g_n \\, \\text{d}W_{n}\n$$\nThe order of convergence $p$ is an interesting and complex question.\nTasks\nApply the Euler-Maruyama method to the stochastic differential equation\n$$\n\\begin{equation}\n dX(t) = \\lambda X(t) + \\mu X(t) dW(t), \\qquad X(0) = X_0.\n\\end{equation}\n$$\nChoose any reasonable values of the free parameters $\\lambda, \\mu, X_0$.\nThe exact solution to this equation is $X(t) = X(0) \\exp \\left[ \\left( \\lambda - \\tfrac{1}{2} \\mu^2 \\right) t + \\mu W(t) \\right]$. Fix the timetstep and compare your solution to the exact solution.\nVary the timestep of the Brownian path and check how the numerical solution compares to the exact solution.\nConvergence\nWe have two ways of thinking about Brownian paths or processes. \nWe can fix the path (ie fix $\\text{d}W$) and vary the timescale on which we're looking at it: this gives us a single random path, and we can ask how the numerical method converges for this single realization. This is strong convergence.\nAlternatively, we can view each path as a single realization of a random process that should average to zero. 
We can then look at how the method converges as we average over a large number of realizations, also looking at how it converges as we vary the timescale. This is weak convergence.\nFormally, denote the true solution as $X(T)$ and the numerical solution for a given step length $h$ as $X^h(T)$. The order of convergence is denoted $p$.\nStrong convergence\n$$\n \\mathbb{E} \\left| X(T) - X^h(T) \\right| \\le C h^{p}\n$$\nFor Euler-Maruyama, expect $p=1/2$.\nWeak convergence\n$$\n \\left| \\mathbb{E} \\left( \\phi( X(T) ) \\right) - \\mathbb{E} \\left( \\phi( X^h(T) ) \\right) \\right| \\le C h^{p}\n$$\nFor Euler-Maruyama, expect $p=1$.\nTasks\nInvestigate the weak and strong convergence of your method, applied to the problem above."
] |
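A sketch of the Euler-Maruyama task above, with illustrative parameter values $\lambda = 2$, $\mu = 1$, $X_0 = 1$ (any reasonable choice works), compared on a single path against the exact solution quoted in the notebook:

```python
import numpy as np

np.random.seed(100)
lam, mu, X0 = 2.0, 1.0, 1.0
T, N = 1.0, 512
dt = T / N

dW = np.sqrt(dt) * np.random.randn(N)   # Brownian increments on the path
W = np.cumsum(dW)

# Euler-Maruyama: X_{n+1} = X_n + dt * f(X_n) + g(X_n) * dW_n
X = np.empty(N + 1)
X[0] = X0
for n in range(N):
    X[n + 1] = X[n] + dt * lam * X[n] + mu * X[n] * dW[n]

# exact solution evaluated on the same realization of W
X_exact = X0 * np.exp((lam - 0.5 * mu ** 2) * T + mu * W[-1])
rel_err = abs(X[-1] - X_exact) / X_exact
```

Halving `dt` (with the increments of the same path summed pairwise) should shrink the endpoint error roughly like $\sqrt{h}$, matching the strong-convergence claim.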
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
statkraft/shyft-doc
|
notebooks/api/api-material-not-used.ipynb
|
lgpl-3.0
|
[
"Starting with the Shyft api\nIntroduction\nAt its core, Shyft provides functionality through an API (Application Programming Interface). All the functionality of Shyft is available through this API.\nWe begin the tutorials by introducing the API as it provides the building blocks for the framework. Once you have a good understanding, you can move toward configured runs that make use of orchestation. To make use of configured runs, you need to understand how we 'serialize' configurations and input data through repositories.\nIn a separate of the simulation tutorials, we cover conducting a very simple simulation of an example catchment using configuration files. This is a typical use case, but assumes that you have a model well configured and ready for simulation. In practice, one is interested in working with the model, testing different configurations, and evaluating different data sources.\nThis is in fact a key idea of Shyft -- to make it simple to evaluate the impact of the selection of model routine on the performance of the simulation. In this notebook we walk through this lower level paradigm of working with the toolbox and using the Shyft api directly to conduct the simulations.\nThis notebook is guiding through the simulation process of a catchment. The following steps are described:\n1. Loading required python modules and setting path to SHyFT installation\n2. Building a Shyft model\n3. Running a Shyft simulation with updated parameters\n4. Activating the simulation only for selected catchments\n5. Setting up different input datasets\n6. Changing state collection settings\n7. Post processing and extracting results\n1. Loading required python modules and setting path to SHyFT installation\nFor the notebook tutorials we require several imports. In addition, be sure your shyft environment is correctly configured. This is required before importing shyft itself. Lastly, import the shyft classes and modules.",
"# Pure python modules and jupyter notebook functionality\n# first you should import the third-party python modules which you'll use later on\n# the first line enables that figures are shown inline, directly in the notebook\n%matplotlib inline\nimport os\nimport datetime as dt\nimport numpy as np\nfrom os import path\nimport sys\nfrom matplotlib import pyplot as plt\nfrom netCDF4 import Dataset\n\n# try to auto-configure the path. This will work in the case\n# that you have checked out the doc and data repositories\n# at same level. Make sure this is done **before** importing shyft\nshyft_data_path = path.abspath(\"../../../shyft-data\")\nif path.exists(shyft_data_path) and 'SHYFT_DATA' not in os.environ:\n os.environ['SHYFT_DATA']=shyft_data_path\n\n# shyft should be available either by it's install in python\n# or by PYTHONPATH set by user prior to starting notebook.\n# If you have cloned the repositories according to guidelines:\nshyft_path=path.abspath('../../../shyft')\nsys.path.insert(0,shyft_path)\n\nfrom shyft import api\nimport shyft\nimport shyft.api.pt_gs_k\nimport shyft.repository\nimport shyft.repository.netcdf\n\nprint(shyft.__path__)\nfor env in os.environ:\n if 'SHYFT' in env:\n print('{0}:\\n{1}'.format(env, os.environ[env]))\nfrom shyft import shyftdata_dir\nprint(shyftdata_dir)",
"2. Build a Shyft model\nThe first point of simulation is to define the model that you will create. In this example, we will use Shyft's pure api approach to create a model from scratch.\nThe simulation domain\nWhat is required to set up a simulation? At the most basic level of Shyft, we need to define the simulation domain / geometry. Shyft does not care about the specific shape of the cells. Shyft just needs a 'geocentroid location' and an area. We will create a container of this information as a first step to provide to one of Shyft's model types later.\nWe are going to be working with the data from the Nea-Nidelva catchment, example dataset. This is available in the shyft-data repository. Above, you should have set your SHYFT_DATA environment variable to point to this directory so that we can easily read the data.\nThe first thing to do is to take a look at the geography of our cells.",
"# load the data from the example datasets\ncell_data = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/cell_data.nc'))\n\n# plot the coordinates of the cell data provided\n# fetch the x- and y-location of the cells\nx = cell_data.variables['x'][:]\ny = cell_data.variables['y'][:]\nz = cell_data.variables['z'][:]\ncid = cell_data.variables['catchment_id'][:]\n\n# and make a quick catchment map...\n# using a scatter plot of the cells\nfig, ax = plt.subplots(figsize=(15,5))\ncm = plt.cm.get_cmap('rainbow')\nelv_col = ax.scatter(x, y, c=z, marker='.', s=40, lw=0, cmap=cm)\n# cm = plt.cm.get_cmap('gist_gray')\n# cid_col = ax.scatter(x, y, c=cid, marker='.', s=40, lw=0, alpha=0.4, cmap=cm)\nplt.colorbar(elv_col).set_label('catchment elevation [m]')\n# plt.colorbar(cid_col).set_label('catchment indices [id]')\nplt.title('Nea Nidelva Catchment')\n# print(set(cid))",
"Create a collection of simulation cells\nIn Shyft we work with 'cells', which is the basic simulation unit. In the example netcdf file, we provide the attributes for the cells we are going to plot. But you made need to extract this information from your own GIS, or other data. The essential variables that are minimally required include::\n\nx, generally an easting coordinate in UTM space, [meters]\ny, generally a northing coordinate in UTM space, [meters]\nz, elevation [meters]\narea, the area of the cell, [square meters]\nland cover type fractions (these are float values that sum to 1):\nglacier\nlake\nreservoir\nforest\nunspecified\n\n\ncatchment_id, an integer to associate the cell with a catchment\na radiation factor (set to 0.9 by default)\n\nIf you look at the netcdf file, you'll see these are included:",
"print(cell_data.variables.keys())\n",
"So the first step is to extract these from the netcdf file, and get them into the model.",
"# Let's first create a 'container' that will hold all of our model domains cells:\ncell_data_vector = api.GeoCellDataVector()\n# help(cell_data_vector)\n#help(api.GeoPoint)\n\nnum_cells = cell_data.dimensions['cell'].size\n\nfor i in range(num_cells):\n\n gp = api.GeoPoint(x[i], y[i], z[i]) # recall, we extracted x,y,z above\n cid = cell_data.variables['catchment_id'][i]\n cell_area = cell_data.variables['area'][i]\n\n # land fractions:\n glac = cell_data.variables['glacier-fraction'][i]\n lake = cell_data.variables['lake-fraction'][i]\n rsvr = cell_data.variables['reservoir-fraction'][i]\n frst = cell_data.variables['forest-fraction'][i]\n unsp = 1 - (glac + lake + rsvr + frst)\n \n land_cover_frac = api.LandTypeFractions(glac, lake, rsvr, frst, unsp)\n \n rad_fx = 0.9\n # note, for now we need to make sure we cast some types to pure python, not numpy\n geo_cell_data = api.GeoCellData(gp, float(cell_area), int(cid), rad_fx, land_cover_frac)\n\n cell_data_vector.append(geo_cell_data)\n\nnci.close()\n\n# now get the forcing data ready.\n# first create a region_environment object, the 'container' that will hold all \n# the forcing data sources\nre = api.ARegionEnvironment()\n\n# map the variable names in the netcdf file to the source types\nsource_map = {'precipitation' : (api.PrecipitationSource, re.precipitation),\n 'radiation' : (api.RadiationSource, re.radiation),\n 'temperature' : (api.TemperatureSource, re.temperature),\n 'wind_speed' : (api.WindSpeedSource, re.wind_speed), \n 'relative_humidity' : (api.RelHumSource, re.rel_hum) }\n\n# load the data from the example datasets\n# station_met = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/stations_met.nc'))\n\n# for station in station_met.groups.keys():\n# stn = station_met.groups[station]\n# print(stn)\n# time = api.UtcTimeVector([int(t) for t in stn.variables['time'][:]])\n# dt = time[1] - time[0] if len(time) > 1 else api.deltahours(1)\n# x = stn.x\n# y = stn.y\n# z = stn.z\n# gp = 
api.GeoPoint(x, y, z)\n# for var, (source, source_vec) in source_map.items():\n# if var in stn.variables.keys():\n# data = stn.variables[var][:]\n \n# time_axis = api.TimeAxis(int(time[0]), api.deltahours(dt), len(time)) \n# cts = api.TsFactory().create_time_point_ts(time_axis.total_period(), time, data, api.POINT_AVERAGE_VALUE)\n# # add it to the precipitation source\n# source_vec.append(source(gp, cts))\n\n### ANOTHER APPROACH\nre2 = api.ARegionEnvironment()\n# precip = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/precipitation.nc'))\n# rad = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/radiation.nc'))\n# windsp = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/wind_speed.nc'))\n# relhum = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/relative_humidity.nc'))\n# temp = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/temperature.nc'))\n\n# map the variable names in the netcdf file to the source types\nsource_map = {'precipitation' : ('precipitation.nc', api.PrecipitationSource, re2.precipitation),\n 'global_radiation' : ('radiation.nc', api.RadiationSource, re2.radiation),\n 'temperature' : ('temperature.nc', api.TemperatureSource, re2.temperature),\n 'wind_speed' : ('wind_speed.nc', api.WindSpeedSource, re2.wind_speed), \n 'relative_humidity' : ('relative_humidity.nc', api.RelHumSource, re2.rel_hum) }\n\n\nfor var, (file_name, source, source_vec) in source_map.items():\n nci = Dataset( os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/' + file_name))\n \n time = api.UtcTimeVector([int(t) for t in nci.variables['time'][:]])\n dt = time[1] - time[0] if len(time) > 1 else api.deltahours(1)\n for i in range(nci.dimensions['station'].size):\n x = nci.variables['x'][i]\n y = nci.variables['y'][i]\n z = nci.variables['z'][i]\n gp = api.GeoPoint(x, y, z)\n data = nci.variables[var][:, i]\n print(data)\n time_axis = api.TimeAxis(int(time[0]), 
api.deltahours(dt), len(time)) \n cts = api.TsFactory().create_time_point_ts(time_axis.total_period(), time, data, api.POINT_AVERAGE_VALUE)\n # add it to the precipitation source\n source_vec.append(source(gp, cts))\n\n\nre2.temperature.values_at_time(int(stn.variables['time'][100]))\n\n def create_dummy_region_environment(self, time_axis, mid_point):\n re = api.ARegionEnvironment()\n re.precipitation.append(self._create_constant_geo_ts(api.PrecipitationSource, mid_point, time_axis.total_period(), 5.0))\n re.temperature.append(self._create_constant_geo_ts(api.TemperatureSource, mid_point, time_axis.total_period(), 10.0))\n re.wind_speed.append(self._create_constant_geo_ts(api.WindSpeedSource, mid_point, time_axis.total_period(), 2.0))\n re.rel_hum.append(self._create_constant_geo_ts(api.RelHumSource, mid_point, time_axis.total_period(), 0.7))\n re.radiation = api.RadiationSourceVector() # just for testing BW compat\n re.radiation.append(self._create_constant_geo_ts(api.RadiationSource, mid_point, time_axis.total_period(), 300.0))\n return re\n \n \n \n def _create_constant_geo_ts(self, geo_ts_type, geo_point, utc_period, value):\n \"\"\"Create a time point ts, with one value at the start\n of the supplied utc_period.\"\"\"\n tv = api.UtcTimeVector()\n tv.push_back(utc_period.start)\n vv = api.DoubleVector()\n vv.push_back(value)\n cts = api.TsFactory().create_time_point_ts(utc_period, tv, vv, api.POINT_AVERAGE_VALUE)\n return geo_ts_type(geo_point, cts)\n\n# next, create the simulation dictionary\nRegionDict = {'region_model_id': 'demo', #a unique name identifier of the simulation\n 'domain': {'EPSG': 32633,\n 'nx': 400,\n 'ny': 80,\n 'step_x': 1000,\n 'step_y': 1000,\n 'lower_left_x': 100000,\n 'lower_left_y': 6960000},\n 'repository': {'class': shyft.repository.netcdf.cf_region_model_repository.CFRegionModelRepository,\n 'params': {'data_file': 'netcdf/orchestration-testdata/cell_data.nc'}},\n }",
"The first keys, are probably quite clear:\n\nstart_datetime: a string in the format: \"2013-09-01T00:00:00\"\nrun_time_step: an integer representing the time step of the simulation (in seconds), so for a daily step: 86400\nnumber_of_steps: an integer for how long the simulatoin should run: 365 (for a year long simulation)\nregion_model_id: a string to name the simulation: 'neanidelva-ptgsk'\n\nWe also need to know where the simulation is taking place. This information is contained in the domain:\n\nEPSG: an EPSG string to identify the coordinate system\nnx: number of 'cells' in the x direction\nny: number of 'cells' in the y direction\nstep_x: size of cell in x direction (m)\nstep_y: size of cell in y direction (m)\nlower_left_x: where (x) in the EPSG system the cells begin\nlower_left_y: where (y) in the EPSG system the cells begin\nrepository: a repository that can read the file containing data for the cells (in this case it will read a netcdf file)\n\nModel specification\nThe next dictionary provides information about the model that we would like to use in Shyft, or the 'Model Stack' as it is generally referred to. In this case, we are going to use the PTGSK model, and the rest of the dictionary provides the parameter values.",
"from shyft.api.pt_gs_k import PTGSKModel\nModelDict = {'model_t': PTGSKModel, # model to construct\n 'model_parameters': {\n 'actual_evapotranspiration':{\n 'ae_scale_factor': 1.5},\n 'gamma_snow':{\n 'calculate_iso_pot_energy': False,\n 'fast_albedo_decay_rate': 6.752787747748934,\n 'glacier_albedo': 0.4,\n 'initial_bare_ground_fraction': 0.04,\n 'max_albedo': 0.9,\n 'max_water': 0.1,\n 'min_albedo': 0.6,\n 'slow_albedo_decay_rate': 37.17325702015658,\n 'snow_cv': 0.4,\n 'tx': -0.5752881492890207,\n 'snowfall_reset_depth': 5.0,\n 'surface_magnitude': 30.0,\n 'wind_const': 1.0,\n 'wind_scale': 1.8959672005350063,\n 'winter_end_day_of_year': 100},\n 'kirchner':{ \n 'c1': -3.336197322290274,\n 'c2': 0.33433661533385695,\n 'c3': -0.12503959620315988},\n 'precipitation_correction': {\n 'scale_factor': 1.0},\n 'priestley_taylor':{'albedo': 0.2,\n 'alpha': 1.26},\n }\n } ",
"In this dictionary we define two variables:\n\nmodel_t: the import path to a shyft 'model stack' class\nmodel_parameters: a dictionary containing specific parameter values for a particular model class\n\nSpecifics of the model_parameters dictionary will vary based on which class is used.\nOkay, so far we have two dictionaries. One which provides information regarding our simulation domain, and a second which provides information on the model that we wish to run over the domain (e.g. in each of the cells). The next step, then, is to map these together and create a region_repo class.\nThis is achieved by using a repository, in this case, the CFRegionModelRepository we imported above.",
"region_repo = CFRegionModelRepository(RegionDict, ModelDict)",
"The region_model\n<div class=\"alert alert-info\">\n\n**TODO:** a notebook documenting the CFRegionModelRepository\n\n</div>\n\nThe first step in conducting a hydrologic simulation is to define the domain of the simulation and the model type which we would like to simulate. To do this we create a region_model object. Above we created dictionaries that contain this information, and we instantiated a class called teh region_repo. In this next step, we put it together so that we have a single object which we can work with \"at our fingertips\". You'll note above that we have pointed to a 'data_file' earlier when we defined the RegionDict. This data file contains all the required elements to fill the cells of our domain. The informaiton is contained in a single netcdf file\nBefore we go further, let's look briefly at the contents of this file:",
"cell_data_file = os.path.join(shyftdata_dir, 'netcdf/orchestration-testdata/cell_data.nc')\nprint(cell_data_file)\ncell_data = Dataset(cell_data_file)\nprint(cell_data)",
"You might be surprised to see the dimensions are 'cells', but recall that in Shyft everything is vectorized. Each 'cell' is an element within a domain, and each cell has associated variables:\n\nlocation: x, y, z\ncharacteristics: forest-fraction, reservoir-fraction, lake-fraction, glacier-fraction, catchment-id\n\nWe'll bring this data into our workspace via the region_model. Note that we have instantiated a region_repo class using one of the existing Shyft repositories, in this case one that was built for reading in the data as it is contained in the example shyft-data netcdf files: CFRegionModelRepository.\nNext, we'll use the region_repo.get_region_model method to get the region_model. Note the name 'demo', in this case is arbitrary. However, depending on how you create your repository, you can specify what region model to return using this string.\n<div class=\"alert alert-info\">\n\n\n**note:** *you are strongly encouraged to learn how to create repositories. This particular repository is just for demonstration purposes. In practice, one may use a repository that connects directly to a GIS service, a database, or some other data sets that contain the data required for simulations.*\n\n<div class=\"alert alert-warning\">\n\n**warning**: *also, please note that below we call the 'get_region_model' method as we instantiate the class. This behavior may change in the future.*\n\n</div>\n</div>",
"region_model = region_repo.get_region_model('demo')",
"Exploring the region_model\nSo we now have created a region_model, but what is it actually? This is a very fundamental class in Shyft. It is actually one of the \"model stacks\", such as 'PTGSK', or 'PTHSK'. Essentially, the region_model contains all the information regarding the simulation type and domain. There are many methods associated with the region_model and it will take time to understand all of them. For now, let's just explore a few key methods:\n\nbounding_region: provides information regarding the domain of interest for the simulation\ncatchment_id_map: indices of the various catchments within the domain\ncells: an instance of PTGSKCellAllVector that holds the individual cells for the simulation (note that this is type-specific to the model type)\nncore: an integer that sets the numbers of cores to use during simulation (Shyft is very greedy if you let it!)\ntime_axis: a shyft.api.TimeAxisFixedDeltaT class (basically contains information regarding the timing of the simulation)\n\nKeep in mind that many of these methods are more 'C++'-like than 'Pythonic'. This means, that in some cases, you'll have to 'call' the method. For example: region_model.bounding_region.epsg() returns a string. You can use tab-completion to explore the region_model further:",
"region_model.bounding_region.epsg()",
"You'll likely note that there are a number of intriguing fucntions, e.g. initialize_cell_environment or interpolate. But before we can go further, we need some more information. Perhaps you are wondering about forcing data. So far, we haven't said anything about model input or the time of the simulation, we've only set up a container that holds all the domain and model type information about our simulation. \nStill, we have made some progress. Let's look for instance at the cells:",
"cell_0 = region_model.cells[0]\nprint(cell_0.geo)",
"So you can see that so far, each of the cells in the region_model contain information regarding their LandTypeFractions, geolocation, catchment_id, and area. \nA particulary important attribute is region_model.region_env. This is a container for each cell that holds the \"environmental timeseries\", or forcing data, for the simulation. By \"tabbing\" from cell. you can see that each cell also has and env_ts attribute. These are containers customized to provide timeseries as required by the model type we selected, but there is no data yet. In this case we used the PTGSKModel (see the ModelDict). So for every cell in your simulation, there is a container prepared to accept the forcing data as the next cell shows.",
"#just so we don't see 'private' attributes\nprint([d for d in dir(cell_0.env_ts) if '_' not in d[0]]) \nregion_model.size()",
"Adding forcing data to the region_model\nClearly the next step is to add forcing data to our region_model object. Let's start by thinking about what kind of data we need. From above, where we looked at the env_ts attribute, it's clear that this particular model stack, PTGSKModel, requires:\n\nprecipitation\nradiation\nrelative humidity (rel_hum)\ntemperature\nwind speed\n\nWe have stored this information each in seperate netcdf files which each contain the observational series for a number of different stations. \n<div class=\"alert alert-warning\">\n\nAgain, these files **do not represent the recommended practice**, but are *only for demonstration purposes*. The idea here is just to demonstrate with an example repository, but *you should create your own to match **your** data*.\n\n</div>\n\nOur goal now is to populate the region_env. \n\"Sources\"\nWe use the term sources to define a location data may be coming from. You may also come across destinations. In both cases, it just means a file, database, service of some kind, etc. that is capable of providing data. Repositories are written to connect to sources. Following our earlier approach, we'll create another dictionary to define our data sources, but first we need to import another repository:",
"from shyft.repository.netcdf.cf_geo_ts_repository import CFDataRepository\n\nfrom shyft.repository.netcdf.cf_geo_ts_repository import CFDataRepository\nForcingData = {'sources': [\n \n {'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,\n 'params': {'epsg': 32633,\n 'selection_criteria': None,\n 'stations_met': 'netcdf/orchestration-testdata/precipitation.nc'},\n 'types': ['precipitation']},\n \n {'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,\n 'params': {'epsg': 32633,\n 'selection_criteria': None,\n 'stations_met': 'netcdf/orchestration-testdata/temperature.nc'},\n 'types': ['temperature']},\n \n {'params': {'epsg': 32633,\n 'selection_criteria': None,\n 'stations_met': 'netcdf/orchestration-testdata/wind_speed.nc'},\n 'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,\n 'types': ['wind_speed']},\n \n {'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,\n 'params': {'epsg': 32633,\n 'selection_criteria': None,\n 'stations_met': 'netcdf/orchestration-testdata/relative_humidity.nc'},\n 'types': ['relative_humidity']},\n \n {'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,\n 'params': {'epsg': 32633,\n 'selection_criteria': None,\n 'stations_met': 'netcdf/orchestration-testdata/radiation.nc'},\n 'types': ['radiation']}]\n }\n",
"Data Repositories\nIn another notebook, further information will be provided regarding the repositories. For the time being, let's look at this configuration dictionary that was created. It essentially just contains a list, keyed by the name \"sources\". This key is known in some of the tools that are built in the Shyft orchestration, so it is recommended to use it.\nEach item in the list is a dictionary for each of the source types, the keys in the dictionaries are: repository, params, and types. The general idea and concept is that in orchestration, the object keyed by repository is a class that is instantiated by passing the objects contained in params.\nLet's repeat that. From our Datasets dictionary, we get a list of \"sources\". Each of these sources contains a class (a repository) that is capable of getting the source data into Shyft. Whatever parameters that are required for the class to work, will be included in the \"sources\" dictionary. In our case, the params are quite simple, just a path to a netcdf file. But suppose our repository required credentials or other information for a database? This information could also be included in the params stanza of the dictionary.\nYou should explore the above referenced netcdf files that are available at the shyft-data git repository. These files contain the forcing data that will be used in the example simulation. Each one contains observational data from some stations in our catchment. Depending on how you write your repository, this data may be provided to Shyft in many different formats.\nLet's explore this concept further by getting the 'temperature' data:",
"# get the temperature sources:\ntmp_sources = [source for source in ForcingData['sources'] if 'temperature' in source['types']]\n\n# in this example there is only one\nt0 = tmp_sources[0]\n\n# We will now instantiate the repository with the parameters that are provided\n# in the dictionary. \n# Note the 'call' structure expects params to contain keyword arguments, and these\n# can be anything you want depending on how you create your repository\ntmp_repo = t0['repository'](**t0['params'])\n",
"tmp_repo is now an instance of the Shyft CFDataRepository, and this will provide Shyft with the data when it sets up a simulation by reading the data directly out of the file referenced in the 'source'. But that is just one repository, and we defined many in fact. Furthermore, you may have a heterogenous collection of data sources -- if for example you want to get your temperature from station data, but radiation from model output. You could define different repositories in the ForcingData dictionary.\nUltimately, we bundle all these repositories up into a new class called a GeoTsRepositoryCollection that we can use to populate the region_model.region_env with data.",
"# we'll actually create a collection of repositories, as we have different input types.\nfrom shyft.repository.geo_ts_repository_collection import GeoTsRepositoryCollection\n\ndef construct_geots_repo(datasets_config, epsg=None):\n \"\"\" iterates over the different sources that are provided \n and prepares the repository to read the data for each type\"\"\"\n geo_ts_repos = []\n src_types_to_extract = []\n for source in datasets_config['sources']:\n if epsg is not None:\n source['params'].update({'epsg': epsg})\n # note that here we are instantiating the different source repositories\n # to place in the geo_ts list \n geo_ts_repos.append(source['repository'](**source['params']))\n src_types_to_extract.append(source['types'])\n \n return GeoTsRepositoryCollection(geo_ts_repos, src_types_per_repo=src_types_to_extract)\n\n# instantiate the repository\ngeots_repo = construct_geots_repo(ForcingData)",
"geots_repo is now a \"geographic timeseries repository\", meaning that the timeseries it holds are spatially aware of their x,y,z coordinates (see CFDataRepository for details). It also has several methods. One in particular we are interested in is the get_timeseries method. However, before we can proceed, we need to define the period for the simulation.\nShyft TimeAxis\nTime in Shyft is handled with specialized C++ types for computational efficiency. These are custom built objects that are 'calendar' aware. But since in python, most like to use datetime objects, we create a function:",
"# next, create the simulation dictionary\nTimeDict = {'start_datetime': \"2013-09-01T00:00:00\",\n 'run_time_step': 86400, # seconds, daily\n 'number_of_steps': 365 # one year\n }\n\ndef time_axis_from_dict(t_dict)->api.TimeAxis:\n utc = api.Calendar()\n \n sim_start = dt.datetime.strptime(t_dict['start_datetime'], \"%Y-%m-%dT%H:%M:%S\")\n utc_start = utc.time(sim_start.year, sim_start.month, sim_start.day,\\\n sim_start.hour, sim_start.minute, sim_start.second)\n tstep = t_dict['run_time_step']\n nstep = t_dict['number_of_steps']\n time_axis = api.TimeAxis(utc_start, tstep, nstep)\n \n return time_axis\n\nta_1 = time_axis_from_dict(TimeDict)\nprint(f'1. {ta_1} \\n {ta_1.total_period()}')\n# or shyft-wise, ready tested, precise and less effort, two lines\nutc = api.Calendar() # 'Europe/Oslo' can be passed to calendar for time-zone\nta_2 = api.TimeAxis(utc.time(2013, 9, 1), api.deltahours(24), 365)\nprint(f'2. {ta_2} \\n {ta_2.total_period()}')",
"We now have an object that defines the time dimension for the simulation, and we will use this to initialize the region_model with the \"environmental timeseries\" or env_ts data. These containers will be given data from the appropriate repositories using the get_timeseries function. Following the templates in the shyft.repository.interfaces module, you'll see that the repositories should provide the capability to \"screen\" data based on time criteria and optinally* geo_location criteria.",
"# we can extract our \"bounding box\" based on the `region_model` we set up\nbbox = region_model.bounding_region.bounding_box(region_model.bounding_region.epsg())\n\nperiod = ta_1.total_period() #just defined above\n\n# required forcing data sets we want to retrieve\ngeo_ts_names = (\"temperature\", \"wind_speed\", \"precipitation\",\n \"relative_humidity\", \"radiation\")\n\nsources = geots_repo.get_timeseries( geo_ts_names, period, geo_location_criteria=bbox )",
"Now we have a new dictionary, called 'sources' that contains specialized Shyft api types specific to each forcing data type. You can look at one for example:",
"prec = sources['precipitation']\nprint(len(prec))\n",
"We can explore further and see each element is in itself an api.PrecipitationSource, which has a timeseries (ts). Recall from the first tutorial that we can easily convert the timeseries.time_axis into datetime values for plotting.\nLet's plot the precip of each of the sources:",
"fig, ax = plt.subplots(figsize=(15,10))\n\nfor pr in prec:\n t,p = [dt.datetime.utcfromtimestamp(t_.start) for t_ in pr.ts.time_axis], pr.ts.values\n ax.plot(t,p, label=pr.mid_point().x) #uid is empty now, but we reserve for later use\nfig.autofmt_xdate()\nax.legend(title=\"Precipitation Input Sources\")\nax.set_ylabel(\"precip[mm/hr]\")",
"Finally, the next step will take the data from the sources and connect it to our region_model.region_env class:",
"def get_region_environment(sources):\n region_env = api.ARegionEnvironment()\n region_env.temperature = sources[\"temperature\"]\n region_env.precipitation = sources[\"precipitation\"]\n region_env.radiation = sources[\"radiation\"]\n region_env.wind_speed = sources[\"wind_speed\"]\n region_env.rel_hum = sources[\"relative_humidity\"]\n return region_env\n\nregion_model.region_env = get_region_environment(sources)",
"And now our forcing data is connected to the region_model. We are almost ready to run a simulation. There is just one more step. We've connected the sources to the model, but remember that Shyft is a distributed modeling framework, and we've connected point data sources (in this case). So we need to get the data from the observed points to each cell. This is done through interpolation.\nShyft Interpolation\nIn Shyft there are predefined routines for interpolation. In the interp_config class below one quickly recognizes the same input source type keywords that are used as keys to the params dictionary. params is simply a dictionary of dictionaries which contains the parameters used by the interpolation model that is specific for each source type.",
"from shyft.repository.interpolation_parameter_repository import InterpolationParameterRepository\n\nclass interp_config(object):\n \"\"\" a simple class to provide the interpolation parameters \"\"\"\n\n def __init__(self):\n \n self.interp_params = {'precipitation': {'method': 'idw',\n 'params': {'distance_measure_factor': 1.0,\n 'max_distance': 600000.0,\n 'max_members': 10,\n 'scale_factor': 1.02}},\n 'radiation': {'method': 'idw',\n 'params': {'distance_measure_factor': 1.0,\n 'max_distance': 600000.0,\n 'max_members': 10}},\n 'relative_humidity': {'method': 'idw',\n 'params': {'distance_measure_factor': 1.0,\n 'max_distance': 600000.0,\n 'max_members': 10}},\n 'temperature': {'method': 'btk',\n 'params': {'nug': 0.5,\n 'range': 200000.0,\n 'sill': 25.0,\n 'temperature_gradient': -0.6,\n 'temperature_gradient_sd': 0.25,\n 'zscale': 20.0}},\n 'wind_speed': {'method': 'idw',\n 'params': {'distance_measure_factor': 1.0,\n 'max_distance': 600000.0,\n 'max_members': 10}}}\n\n def interpolation_parameters(self):\n return self.interp_params\n\nip_conf = interp_config()\nip_repo = InterpolationParameterRepository(ip_conf)\n\nregion_model.interpolation_parameter = ip_repo.get_parameters(0) #just a '0' for now",
"The next step is to set the intial states of the model using our last repository. This one, the GeneratedStateRepository will set empty default values.\nNow we are nearly ready to conduct a simulation. We just need to run a few methods to prepare the model and cells for the simulation. The region_model has a method called initalize_cell_environment that takes a time_axis type as input. We defined the time_axis above, so now we'll use it to initialize the model. At the same time, we'll set the initial_state. Then we can actually run a simulation!",
"from shyft.repository.generated_state_repository import GeneratedStateRepository\n\ninit_values = {'gs': {'acc_melt': 0.0,\n 'albedo': 0.65,\n 'alpha': 6.25,\n 'iso_pot_energy': 0.0,\n 'lwc': 0.1,\n 'sdc_melt_mean': 0.0,\n 'surface_heat': 30000.0,\n 'temp_swe': 0.0},\n 'kirchner': {'q': 0.01}}\n\n \nstate_generator = GeneratedStateRepository(region_model)#, init_values=init_values)\n\n# we need the state_repository to have the same size as the model\n#state_repo.n = region_model.size()\n# there is only 1 state (indexed '0')\ns0 = state_generator.get_state(0)\nnot_applied_list=region_model.state.apply_state( # apply state set the current state according to arguments\n cell_id_state_vector=s0, # ok, easy to get\n cids=[] # empty means apply all, if we wanted to only apply state for specific catchment-ids, this is where to put them\n)\nassert len(not_applied_list)==0, 'Ensure all states was matched and applied to the model'\nregion_model.initial_state=region_model.current_state # now we stash the current state to the initial state\n\nstate_generator.find_state?",
"Conduct the simulation\nWe now have a region_model that is ready for simulation. As we discussed before, we still need to get the data from our point observations interpolated to the cells, and we need to get the env_ts of each cell populated. But all the machinery is now in place to make this happen. \nTo summarize, we've created:\n\nregion_repo, a region repository that contains information related to region of simulation and the model to be used in the simulation. From this we get a region_model\ngeots_repo, a geo-timeseries repository that provides a mechanism to pull the data we require from our 'sources'.\ntime_axis, created from the TimeAxisFixedDeltaT class of shyft to provide the period of simulation.\nip_repo, an interpolation repository which provides all the required parameters for interpolating our data to the distributed cells -- following variable specific protocols/models.\nstate_repo, a GeneratedStateRepository used to provide our simulation an initial state.\n\nThe next step is simply to initialize the cell environment and run the interpolation. As a practive, before simulation we reset to the initial state (we're there already, but it is something you have to do before a new simulation), and then run the cells. First we'll initialize the cell environment:",
"region_model.initialize_cell_environment(ta_1)",
"As a habit, we have a quick \"sanity check\" function to see if the model is runnable. Itis recommended to have this function when you create 'run scripts'.",
"def runnable(reg_mod):\n \"\"\" returns True if model is properly configured \n **note** this is specific depending on your model's input data requirements \"\"\"\n return all((reg_mod.initial_state.size() > 0, reg_mod.time_axis.size() > 0,\n all([len(getattr(reg_mod.region_env, attr)) > 0 for attr in\n (\"temperature\", \"wind_speed\", \"precipitation\", \"rel_hum\", \"radiation\")])))\n\n# run the model, e.g. as you may configure it in a script:\nif runnable(region_model):\n \n region_model.interpolate(region_model.interpolation_parameter, region_model.region_env)\n region_model.revert_to_initial_state()\n region_model.run_cells()\nelse:\n print('Something wrong with model configuration.')\n\n\n ",
"Okay, so the simulation was run. Now we may be interested in looking at some of the output. We'll take a brief summary glance in the next section, and save a deeper dive into the simulation results for another notebook.\n3. Simulation results\nThe first step will be simply to look at the discharge results for each subcatchment within our simulation domain. For simplicity, we can use a pandas.DataFrame to collect the data from each catchment.",
"# Here we are going to extact data from the simulation.\n# We start by creating a list to hold discharge for each of the subcatchments.\n# Then we'll get the data from the region_model object\nimport pandas as pd\n# mapping of internal catch ID to catchment\ncatchment_id_map = region_model.catchment_id_map \n\n# First get the time-axis which we'll use as the index for the data frame\nta = region_model.time_axis\n# and convert it to datetimes\nindex = [dt.datetime.utcfromtimestamp(p.start) for p in ta]\n\n# Now we'll add all the discharge series for each catchment \ndata = {}\nfor cid in catchment_id_map:\n # get the discharge time series for the subcatchment\n q_ts = region_model.statistics.discharge([int(cid)])\n data[cid] = q_ts.values.to_numpy()\n\ndf = pd.DataFrame(data, index=index)\n# we can simply use:\nax = df.plot(figsize=(20,15))\nax.legend(title=\"Catch. ID\")\nax.set_ylabel(\"discharge [m3 s-1]\")",
"Okay, that was simple. Let's look at the timeseries in some individual cells. The following is a bit of a contrived example, but it shows some aspects of the api. We'll plot the temperature series of all the cells in one sub-catchment, and color them by elevation. This doesn't necessarily show anything about the simulation, per se, but rather results from the interpolation step.",
"from matplotlib.cm import jet as jet\nfrom matplotlib.colors import Normalize\n\n# get all the cells for one sub-catchment with 'id' == 1228\nc1228 = [c for c in region_model.cells if c.geo.catchment_id() == 1228]\n\n# for plotting, create an mpl normalizer based on min,max elevation\nelv = [c.geo.mid_point().z for c in c1228]\nnorm = Normalize(min(elv), max(elv))\n\n#plot with line color a function of elevation\nfig, ax = plt.subplots(figsize=(15,10))\n\n# here we are cycling through each of the cells in c1228\nfor dat,elv in zip([c.env_ts.temperature.values for c in c1228], [c.mid_point().z for c in c1228]):\n ax.plot(dat, color=jet(norm(elv)), label=int(elv))\n \n \n# the following is just to plot the legend entries and not related to Shyft\nhandles, labels = ax.get_legend_handles_labels()\n\n# sort by labels\nimport operator\nhl = sorted(zip(handles, labels),\n key=operator.itemgetter(1))\nhandles2, labels2 = zip(*hl)\n\n# show legend, but only every fifth entry\nax.legend(handles2[::5], labels2[::5], title='Elevation [m]')",
"As we would expect from the temperature kriging method, we should find higher elevations have colder temperatures. As an exercise you could explore this relationship using a scatter plot.\nNow we're going to create a function that will read initial states from the initial_state_repo. In practice, this is already done by the ConfgiSimulator, but to demonstrate lower level functions, we'll reset the states of our region_model:",
"# create a function to read the states from the state repository\ndef get_init_state_from_repo(initial_state_repo_, region_model_id_=None, timestamp=None):\n state_id = 0\n if hasattr(initial_state_repo_, 'n'): # No stored state, generated on-the-fly\n initial_state_repo_.n = region_model.size()\n else:\n states = initial_state_repo_.find_state(\n region_model_id_criteria=region_model_id_,\n utc_period_criteria=timestamp)\n if len(states) > 0:\n state_id = states[0].state_id # most_recent_state i.e. <= start time\n else:\n raise Exception('No initial state matching criteria.')\n return initial_state_repo_.get_state(state_id)\n \ninit_state = get_init_state_from_repo(state_generator, timestamp=region_model.time_axis.start)\n",
"Don't worry too much about the function for now, but do take note of the init_state object that we created. This is another container, this time it is a class that contains PTGSKStateWithId objects, which are specific to the model stack implemented in the simulation (in this case PTGSK). If we explore an individual state object, we'll see init_state contains, for each cell in our simulation, the state variables for each 'method' of the method stack.\nLet's look more closely:",
"def print_pub_attr(obj):\n #only public attributes\n print(f'{obj.__class__.__name__}:\\t',[attr for attr in dir(obj) if attr[0] is not '_']) \n \nprint(len(init_state))\ninit_state_cell0 = init_state[0] \n# the identifier\nprint_pub_attr(init_state_cell0.id)\n# gam snow states\nprint_pub_attr(init_state_cell0.state.gs)\n\n#init_state_cell0.kirchner states\nprint_pub_attr(init_state_cell0.state.kirchner)\n",
"Summary\nWe have now explored the region_model and looked at how to instantiate a region_model by using a api.ARegionEnvironment, containing a collection of timeseries sources, and passing an api.InterpolationParameter class containing the parameters to use for the data interpolation algorithms. The interpolation step \"populated\" our cells with data from the point sources.\nThe cells each contain all the information related to the simulation (their own timeseries, env_ts; their own model parameters, parameter; and other attributes and methods). In future tutorials we'll work with the cells indivdual \"resource collector\" (.rc) and \"state collector\" (.sc) attributes."
] |
chemelnucfin/tensorflow
|
tensorflow/contrib/autograph/examples/notebooks/rnn_keras_estimator.ipynb
|
apache-2.0
|
[
"!pip install -U tf-nightly\n\nimport os\nimport time\n\nimport tensorflow as tf\nfrom tensorflow.contrib import autograph\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport six\n\nfrom google.colab import widgets",
"Case study: training a custom RNN, using Keras and Estimators\nIn this section, we show how you can use AutoGraph to build RNNColorbot, an RNN that takes as input names of colors and predicts their corresponding RGB tuples. The model will be trained by a custom Estimator.\nTo get started, set up the dataset. The following cells defines methods that download and format the data needed for RNNColorbot; the details aren't important (read them in the privacy of your own home if you so wish), but make sure to run the cells before proceeding.",
"def parse(line):\n \"\"\"Parses a line from the colors dataset.\"\"\"\n items = tf.string_split([line], \",\").values\n rgb = tf.string_to_number(items[1:], out_type=tf.float32) / 255.0\n color_name = items[0]\n chars = tf.one_hot(tf.decode_raw(color_name, tf.uint8), depth=256)\n length = tf.cast(tf.shape(chars)[0], dtype=tf.int64)\n return rgb, chars, length\n\n\ndef set_static_batch_shape(batch_size):\n def apply(rgb, chars, length):\n rgb.set_shape((batch_size, None))\n chars.set_shape((batch_size, None, 256))\n length.set_shape((batch_size,))\n return rgb, chars, length\n return apply\n\n\ndef load_dataset(data_dir, url, batch_size, training=True):\n \"\"\"Loads the colors data at path into a tf.PaddedDataset.\"\"\"\n path = tf.keras.utils.get_file(os.path.basename(url), url, cache_dir=data_dir)\n dataset = tf.data.TextLineDataset(path)\n dataset = dataset.skip(1)\n dataset = dataset.map(parse)\n dataset = dataset.cache()\n dataset = dataset.repeat()\n if training:\n dataset = dataset.shuffle(buffer_size=3000)\n dataset = dataset.padded_batch(\n batch_size, padded_shapes=((None,), (None, 256), ()))\n # To simplify the model code, we statically set as many of the shapes that we\n # know.\n dataset = dataset.map(set_static_batch_shape(batch_size))\n return dataset",
"To show the use of control flow, we write the RNN loop by hand, rather than using a pre-built RNN model.\nNote how we write the model code in Eager style, with regular if and while statements. Then, we annotate the functions with @autograph.convert to have them automatically compiled to run in graph mode.\nWe use Keras to define the model, and we will train it using Estimators.",
"@autograph.convert()\nclass RnnColorbot(tf.keras.Model):\n \"\"\"RNN Colorbot model.\"\"\"\n\n def __init__(self):\n super(RnnColorbot, self).__init__()\n self.lower_cell = tf.contrib.rnn.LSTMBlockCell(256, dtype=tf.float32)\n self.upper_cell = tf.contrib.rnn.LSTMBlockCell(128, dtype=tf.float32)\n self.relu_layer = tf.layers.Dense(3, activation=tf.nn.relu)\n\n def _rnn_layer(self, chars, cell, batch_size, training):\n \"\"\"A single RNN layer.\n\n Args:\n chars: A Tensor of shape (max_sequence_length, batch_size, input_size)\n cell: An object of type tf.contrib.rnn.LSTMBlockCell\n batch_size: Int, the batch size to use\n training: Boolean, whether the layer is used for training\n\n Returns:\n A Tensor of shape (max_sequence_length, batch_size, output_size).\n \"\"\"\n hidden_outputs = tf.TensorArray(tf.float32, 0, True)\n state, output = cell.zero_state(batch_size, tf.float32)\n for ch in chars:\n cell_output, (state, output) = cell.call(ch, (state, output))\n hidden_outputs.append(cell_output)\n hidden_outputs = autograph.stack(hidden_outputs)\n if training:\n hidden_outputs = tf.nn.dropout(hidden_outputs, 0.5)\n return hidden_outputs\n\n def build(self, _):\n \"\"\"Creates the model variables. See keras.Model.build().\"\"\"\n self.lower_cell.build(tf.TensorShape((None, 256)))\n self.upper_cell.build(tf.TensorShape((None, 256)))\n self.relu_layer.build(tf.TensorShape((None, 128))) \n self.built = True\n\n\n def call(self, inputs, training=False):\n \"\"\"The RNN model code. 
Uses Eager.\n\n The model consists of two RNN layers (made by lower_cell and upper_cell),\n followed by a fully connected layer with ReLU activation.\n\n Args:\n inputs: A tuple (chars, length)\n training: Boolean, whether the layer is used for training\n\n Returns:\n A Tensor of shape (batch_size, 3) - the model predictions.\n \"\"\"\n chars, length = inputs\n batch_size = chars.shape[0]\n seq = tf.transpose(chars, (1, 0, 2))\n\n seq = self._rnn_layer(seq, self.lower_cell, batch_size, training)\n seq = self._rnn_layer(seq, self.upper_cell, batch_size, training)\n\n # Grab just the end-of-sequence from each output.\n indices = (length - 1, list(range(batch_size)))\n indices = tf.stack(indices, 1)\n sequence_ends = tf.gather_nd(seq, indices)\n return self.relu_layer(sequence_ends)\n\n@autograph.convert()\ndef loss_fn(labels, predictions):\n return tf.reduce_mean((predictions - labels) ** 2)",
"We will now create the model function for the custom Estimator.\nIn the model function, we simply use the model class we defined above - that's it!",
"def model_fn(features, labels, mode, params):\n \"\"\"Estimator model function.\"\"\"\n chars = features['chars']\n sequence_length = features['sequence_length']\n inputs = (chars, sequence_length)\n\n # Create the model. Simply using the AutoGraph-ed class just works!\n colorbot = RnnColorbot()\n colorbot.build(None)\n\n if mode == tf.estimator.ModeKeys.TRAIN:\n predictions = colorbot(inputs, training=True)\n loss = loss_fn(labels, predictions)\n\n learning_rate = params['learning_rate']\n optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)\n global_step = tf.train.get_global_step()\n train_op = optimizer.minimize(loss, global_step=global_step)\n return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)\n\n elif mode == tf.estimator.ModeKeys.EVAL:\n predictions = colorbot(inputs)\n loss = loss_fn(labels, predictions)\n\n return tf.estimator.EstimatorSpec(mode, loss=loss)\n\n elif mode == tf.estimator.ModeKeys.PREDICT:\n predictions = colorbot(inputs)\n\n predictions = tf.minimum(predictions, 1.0)\n return tf.estimator.EstimatorSpec(mode, predictions=predictions)",
"We'll create an input function that will feed our training and eval data.",
"def input_fn(data_dir, data_url, params, training=True):\n \"\"\"An input function for training\"\"\"\n batch_size = params['batch_size']\n \n # load_dataset defined above\n dataset = load_dataset(data_dir, data_url, batch_size, training=training)\n\n # Package the pipeline end in a format suitable for the estimator.\n labels, chars, sequence_length = dataset.make_one_shot_iterator().get_next()\n features = {\n 'chars': chars,\n 'sequence_length': sequence_length\n }\n\n return features, labels",
"We now have everything in place to build our custom estimator and use it for training and eval!",
"params = {\n 'batch_size': 64,\n 'learning_rate': 0.01,\n}\n\ntrain_url = \"https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/archive/extras/colorbot/data/train.csv\"\ntest_url = \"https://raw.githubusercontent.com/random-forests/tensorflow-workshop/master/archive/extras/colorbot/data/test.csv\"\ndata_dir = \"tmp/rnn/data\"\n\nregressor = tf.estimator.Estimator(\n model_fn=model_fn,\n params=params)\n\nregressor.train(\n input_fn=lambda: input_fn(data_dir, train_url, params),\n steps=100)\neval_results = regressor.evaluate(\n input_fn=lambda: input_fn(data_dir, test_url, params, training=False),\n steps=2\n)\n\nprint('Eval loss at step %d: %s' % (eval_results['global_step'], eval_results['loss']))",
"And here's the same estimator used for inference.",
"def predict_input_fn(color_name):\n \"\"\"An input function for prediction.\"\"\"\n _, chars, sequence_length = parse(color_name)\n\n # We create a batch of a single element.\n features = {\n 'chars': tf.expand_dims(chars, 0),\n 'sequence_length': tf.expand_dims(sequence_length, 0)\n }\n return features, None\n\n\ndef draw_prediction(color_name, pred):\n pred = pred * 255\n pred = pred.astype(np.uint8)\n plt.axis('off')\n plt.imshow(pred)\n plt.title(color_name)\n plt.show()\n\n\ndef predict_with_estimator(color_name, regressor):\n predictions = regressor.predict(\n input_fn=lambda:predict_input_fn(color_name))\n pred = next(predictions)\n predictions.close()\n pred = np.minimum(pred, 1.0)\n pred = np.expand_dims(np.expand_dims(pred, 0), 0)\n\n draw_prediction(color_name, pred)\n\ntb = widgets.TabBar([\"RNN Colorbot\"])\nwhile True:\n with tb.output_to(0):\n try:\n color_name = six.moves.input(\"Give me a color name (or press 'enter' to exit): \")\n except (EOFError, KeyboardInterrupt):\n break\n if not color_name:\n break\n with tb.output_to(0):\n tb.clear_tab()\n predict_with_estimator(color_name, regressor)\n "
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
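The end-of-sequence gather used in the colorbot cells above (stacking `(length - 1, range(batch_size))` and calling `tf.gather_nd`) can be sketched framework-free with NumPy. This is an illustrative standalone check, not part of the original model; the function name and shapes are ours.

```python
import numpy as np

def gather_sequence_ends(seq, lengths):
    """Return the output at the last valid timestep of each sequence.

    seq: time-major array of shape (max_time, batch_size, features)
    lengths: int array of shape (batch_size,) with true sequence lengths

    Mirrors tf.gather_nd(seq, tf.stack((length - 1, range(batch_size)), 1)).
    """
    batch_size = seq.shape[1]
    # Fancy indexing pairs each batch column with its last valid time index.
    return seq[lengths - 1, np.arange(batch_size)]

# Two sequences padded to 4 timesteps; the second one is only 2 steps long.
seq = np.arange(4 * 2 * 3).reshape(4, 2, 3).astype(float)
lengths = np.array([4, 2])
ends = gather_sequence_ends(seq, lengths)  # shape (2, 3)
```

The padded tail of the short sequence is ignored, which is exactly why the model gathers at `length - 1` instead of taking the final timestep for every batch element.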
fw1121/Roary
|
contrib/roary_plots/roary_plots.ipynb
|
gpl-3.0
|
[
"Roary pangenome plots\n<h6><a href=\"javascript:toggle()\" target=\"_self\">Toggle source code</a></h6>",
"# Plotting imports\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nsns.set_style('white')\n\n# Other imports\nimport os\nimport pandas as pd\nimport numpy as np\nfrom Bio import Phylo",
"parSNP tree\nAny other valid newick file is fine, if the tip labels is the same as in the gene_presence_absence matrix from roary.",
"t = Phylo.read('parsnp.tree', 'newick')\n\n# Max distance to create better plots\nmdist = max([t.distance(t.root, x) for x in t.get_terminals()])",
"Roary",
"# Load roary\nroary = pd.read_table('gene_presence_absence.csv',\n sep=',',\n low_memory=False)\n# Set index (group name)\nroary.set_index('Gene', inplace=True)\n# Drop the other info columns\nroary.drop(list(roary.columns[:10]), axis=1, inplace=True)\n\n# Transform it in a presence/absence matrix (1/0)\nroary.replace('.{2,100}', 1, regex=True, inplace=True)\nroary.replace(np.nan, 0, regex=True, inplace=True)\n\n# Sort the matrix by the sum of strains presence\nidx = roary.sum(axis=1).order(ascending=False).index\nroary_sorted = roary.ix[idx]\n\n# Pangenome frequency plot\nplt.figure(figsize=(7, 5))\n\nplt.hist(roary.sum(axis=1), roary.shape[1],\n histtype=\"stepfilled\", alpha=.7)\n\nplt.xlabel('Number of genomes')\nplt.ylabel('Number of genes')\n\nsns.despine(left=True,\n bottom=True)\n\n# Sort the matrix according to tip labels in the tree\nroary_sorted = roary_sorted[[x.name for x in t.get_terminals()]]\n\n# PLot presence/absence matrix against the tree\nwith sns.axes_style('whitegrid'):\n fig = plt.figure(figsize=(17, 10))\n\n ax1=plt.subplot2grid((1,40), (0, 10), colspan=30)\n a=ax1.imshow(roary_sorted.T, cmap=plt.cm.Blues,\n vmin=0, vmax=1,\n aspect='auto',\n interpolation='none',\n )\n ax1.set_yticks([])\n ax1.set_xticks([])\n ax1.axis('off')\n\n ax = fig.add_subplot(1,2,1)\n ax=plt.subplot2grid((1,40), (0, 0), colspan=10, axisbg='white')\n\n fig.subplots_adjust(wspace=0, hspace=0)\n\n ax1.set_title('Roary matrix\\n(%d gene clusters)'%roary.shape[0])\n\n Phylo.draw(t, axes=ax, \n show_confidence=False,\n label_func=lambda x: None,\n xticks=([],), yticks=([],),\n ylabel=('',), xlabel=('',),\n xlim=(-0.01,mdist+0.01),\n axis=('off',),\n title=('parSNP tree\\n(%d strains)'%roary.shape[1],), \n )\n\n# Plot the pangenome pie chart\nplt.figure(figsize=(10, 10))\n\ncore = roary[roary.sum(axis=1) == roary.shape[1]].shape[0]\nsoftcore = roary[(roary.sum(axis=1) < roary.shape[1]) &\n (roary.sum(axis=1) >= roary.shape[1]*0.95)].shape[0]\nshell = 
roary[(roary.sum(axis=1) < roary.shape[1]*0.95) &\n (roary.sum(axis=1) >= roary.shape[1]*0.15)].shape[0]\ncloud = roary[roary.sum(axis=1) < roary.shape[1]*0.15].shape[0]\n\ntotal = roary.shape[0]\n\ndef my_autopct(pct):\n val=int(pct*total/100.0)\n return '{v:d}'.format(v=val)\n\na=plt.pie([core, softcore, shell, cloud],\n labels=['core\\n(%d strains)'%roary.shape[1],\n 'soft-core\\n(%d <= strains < %d)'%(roary.shape[1]*.95,\n roary.shape[1]),\n 'shell\\n(%d <= strains < %d)'%(roary.shape[1]*.15,\n roary.shape[1]*.95),\n 'cloud\\n(strains < %d)'%(roary.shape[1]*.15)],\n explode=[0.1, 0.05, 0.02, 0], radius=0.9,\n colors=[(0, 0, 1, float(x)/total) for x in (core, softcore, shell, cloud)],\n autopct=my_autopct)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
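The core/soft-core/shell/cloud split used in the pie-chart cell above can be sanity-checked on a toy presence/absence matrix. The 95% and 15% cut-offs follow that cell; the matrix itself is made up for illustration.

```python
import pandas as pd

# Made-up presence/absence matrix: 6 gene clusters x 4 strains.
pa = pd.DataFrame(
    [[1, 1, 1, 1],
     [1, 1, 1, 1],
     [1, 1, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 0]],
    columns=list('ABCD'))

n_strains = pa.shape[1]
counts = pa.sum(axis=1)  # strains carrying each gene cluster

# Same thresholds as the pie chart: core = all strains,
# soft-core >= 95%, shell >= 15%, cloud < 15%.
core = int((counts == n_strains).sum())
softcore = int(((counts < n_strains) & (counts >= n_strains * 0.95)).sum())
shell = int(((counts < n_strains * 0.95) & (counts >= n_strains * 0.15)).sum())
cloud = int((counts < n_strains * 0.15).sum())
```

Because the four intervals partition the possible counts, the four categories always sum to the total number of gene clusters, which is a cheap invariant to assert before plotting.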
phoebe-project/phoebe2-docs
|
2.1/tutorials/20_21_meshes.ipynb
|
gpl-3.0
|
[
"2.0 - 2.1 Migration: Meshes\nLet's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).",
"!pip install -I \"phoebe>=2.1,<2.2\"",
"In this tutorial we will review the changes in the PHOEBE mesh structures. We will first explain the changes and then demonstrate them in code. As usual, let us import phoebe and create a default binary bundle:",
"import phoebe\nb = phoebe.default_binary()",
"PHOEBE 2.0 had a mesh dataset along with pbmesh and protomesh options you could send to b.run_compute(). These options were quite convenient, but had a few inherit problems:\n\nThe protomesh was exposed at only t0 and was in Roche coordinates, despite using the same qualifiers 'x', 'y', 'z'.\nPassband-dependent parameters were exposed in the mesh if pbmesh=True, but only if the times matched exactly with the passband (lc, rv, etc) dataset.\nStoring more than a few meshes become very memory intensive due to their large size and the large number of columns.\n\nAddressing these shortcomings required a complete redesign of the mesh dataset. The most important changes are:\n\npbmesh and protomesh are no longer valid options to b.run_compute(). Everything is done through the mesh dataset itself, i.e. b.add_dataset('mesh').\nThe default columns that are computed for each mesh include the elements in both Roche and plane-of-sky coordinate systems. These columns cannot be disabled.\nThe columns parameter in the mesh dataset lists additional columns to be exposed in the model mesh when calling b.run_compute(). See the section on columns below for more details.\nYou can choose whether to expose coordinates in the Roche coordinate system ('xs', 'ys', 'zs') or the plane-of-sky coordinate system ('us', 'vs', 'ws').\nWhen plotting, the default is the plane-of-sky coordinate system, and the axes will be correctly labeled as uvw, whereas in PHOEBE 2.0.x these were still labeled xyz. Note that this also applies to velocities ('vxs', 'vys', 'vzs' vs 'vus', 'vvs', 'vws').\nThe include_times parameter allows for importing timestamps from other datasets. It also provides support for important orbital times: 't0' (zero-point), 't0_perpass' (periastron passage), 't0_supconj' (superior conjunction) and 't0_ref' (zero-phase reference point).\nBy default, the times parameter is empty. 
If you do not set times or include_times before calling b.run_compute(), your model will be empty.\n\nThe 'columns' parameter\nThis parameter is a SelectParameter (a new type of Parameter introduced in PHOEBE 2.1). Its value is a list of selections drawn from a set of allowed options. You can list the options by calling param.get_choices() (just as you would for a ChoiceParameter). The value also accepts wildcards, as long as each expression matches at least one of the choices. This allows you to easily select, say, rvs from all datasets by passing rvs@*. To see the full list of matched options, use param.expand_value().\nTo demonstrate, let us add a few datasets and look at the available choices for the columns parameter.",
"b.add_dataset('mesh')\nprint b.get_parameter('columns').get_choices()\n\nb.add_dataset('lc')\nprint b.get_parameter('columns').get_choices()\n\nb['columns'] = ['*@lc01', 'teffs']\nb.get_parameter('columns').get_value()\n\nb.get_parameter('columns').expand_value()",
"The 'include_times' parameter\nSimilarly, the include_times parameter is a SelectParameter, with the choices being the existing datasets, as well as the t0s mentioned above.",
"print b.get_parameter('include_times').get_value()\n\nprint b.get_parameter('include_times').get_choices()\n\nb['include_times'] = ['lc01', 't0@system']\n\nprint b.get_parameter('include_times').get_value()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
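The wildcard behaviour described for SelectParameter above (e.g. `rvs@*` matching every rvs dataset) works like shell-style globbing over the choice list. A minimal standalone imitation of `expand_value()` using the standard-library `fnmatch` module, with illustrative choice strings:

```python
from fnmatch import fnmatch

def expand_value(selected, choices):
    """Expand a list of (possibly wildcarded) selections into the
    concrete matching choices, preserving the order of the choice list."""
    return [c for c in choices if any(fnmatch(c, pat) for pat in selected)]

# Hypothetical choices, shaped like PHOEBE's qualifier@dataset twigs.
choices = ['times@lc01', 'fluxes@lc01', 'rvs@rv01', 'rvs@rv02', 'teffs']
expand_value(['rvs@*', 'teffs'], choices)
```

A pattern that matches nothing simply contributes no entries here; PHOEBE itself additionally validates that each expression matches at least one choice.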
ES-DOC/esdoc-jupyterhub
|
notebooks/thu/cmip6/models/sandbox-3/landice.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: THU\nSource ID: SANDBOX-3\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:40\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'thu', 'sandbox-3', 'landice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --> Mass Balance\n7. Ice --> Mass Balance --> Basal\n8. Ice --> Mass Balance --> Frontal\n9. Ice --> Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Ice Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify how ice albedo is modelled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Atmospheric Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Oceanic Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the ocean and ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs an adative grid being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Base Resolution\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThe base resolution (in metres), before any adaption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Resolution Limit\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Projection\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of glaciers in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of glaciers, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Dynamic Areal Extent\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes the model include a dynamic glacial extent?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Grounding Line Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.3. Ice Sheet\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice sheets simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.4. Ice Shelf\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice shelves simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Ice --> Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Ice --> Mass Balance --> Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Ocean\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Ice --> Mass Balance --> Frontal\nDescription of claving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Melting\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Ice --> Dynamics\n**\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description if ice sheet and ice shelf dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Approximation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nApproximation type used in modelling ice dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Adaptive Timestep\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.4. Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ocean-color-ac-challenge/evaluate-pearson
|
evaluation.ipynb
|
apache-2.0
|
[
"E-CEO Challenge #3 Evaluation\nWeights\nDefine the weight of each wavelength",
"w_412 = 0.56\nw_443 = 0.73\nw_490 = 0.71\nw_510 = 0.36\nw_560 = 0.01",
"Run\nProvide the run information:\n* run id\n* run metalink containing the 3 by 3 kernel extractions\n* participant",
"run_id = '0000021-150601000007545-oozie-oozi-W'\nrun_meta = 'http://sb-10-16-10-53.dev.terradue.int:50075/streamFile/ciop/run/participant-a/0000021-150601000007545-oozie-oozi-W/results.metalink'\nparticipant = 'participant-a'",
"Define all imports in a single cell",
"import glob\nimport pandas as pd\nfrom scipy.stats.stats import pearsonr\nimport numpy\nimport math",
"Manage run results\nDownload the results and aggregate them in a single Pandas dataframe",
"!curl http://sb-10-16-10-53.dev.terradue.int:50075/streamFile/ciop/run/participant-a/0000021-150601000007545-oozie-oozi-W/results.metalink | aria2c -d participant-a -M -\n\npath = participant # use your path\n\nallFiles = glob.glob(path + \"/*.txt\")\nframe = pd.DataFrame()\nlist_ = []\nfor file_ in allFiles:\n df = pd.read_csv(file_,index_col=None, header=0)\n list_.append(df)\n frame = pd.concat(list_)",
"Number of points extracted from MERIS level 2 products",
"len(frame.index)",
"Calculate Pearson\nFor all three sites, AAOT, BOUSSOLE and MOBY, calculate the Pearson factor for each band.\n\nNote AAOT does not have measurements for band @510\n\nAAOT site",
"insitu_path = './insitu/AAOT.csv'\ninsitu = pd.read_csv(insitu_path)\nframe_full = pd.DataFrame.merge(frame.query('Name == \"AAOT\"'), insitu, how='inner', on = ['Date', 'ORBIT'])\n\nframe_xxx= frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()\nr_aaot_412 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @412\")\n\nframe_xxx= frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()\nr_aaot_443 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @443\")\n\nframe_xxx= frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()\nr_aaot_490 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @490\")\n\nr_aaot_510 = 0\nprint(\"0 observations for band @510\")\n\nframe_xxx= frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()\nr_aaot_560 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nprint(str(len(frame_xxx.index)) + \" observations for band @560\")\n\ninsitu_path = './insitu/BOUSS.csv'\ninsitu = pd.read_csv(insitu_path)\nframe_full = pd.DataFrame.merge(frame.query('Name == \"BOUS\"'), insitu, how='inner', on = ['Date', 'ORBIT'])\n\nframe_xxx= frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()\nr_bous_412 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nframe_xxx= frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()\nr_bous_443 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nframe_xxx= frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()\nr_bous_490 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nframe_xxx= frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna()\nr_bous_510 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n\nframe_xxx= frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()\nr_bous_560 = pearsonr(frame_xxx.ix[:,0], frame_xxx.ix[:,1])[0] \n \n\ninsitu_path = './insitu/MOBY.csv'\ninsitu = 
pd.read_csv(insitu_path)\nframe_full = pd.DataFrame.merge(frame.query('Name == \"MOBY\"'), insitu, how='inner', on = ['Date', 'ORBIT'])\n\nframe_xxx= frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()\nr_moby_412 = pearsonr(frame_xxx.iloc[:,0], frame_xxx.iloc[:,1])[0] \n\nframe_xxx= frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()\nr_moby_443 = pearsonr(frame_xxx.iloc[:,0], frame_xxx.iloc[:,1])[0] \n\nframe_xxx= frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()\nr_moby_490 = pearsonr(frame_xxx.iloc[:,0], frame_xxx.iloc[:,1])[0] \n\nframe_xxx= frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna()\nr_moby_510 = pearsonr(frame_xxx.iloc[:,0], frame_xxx.iloc[:,1])[0] \n\nframe_xxx= frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()\nr_moby_560 = pearsonr(frame_xxx.iloc[:,0], frame_xxx.iloc[:,1])[0] \n\n[r_aaot_412, r_aaot_443, r_aaot_490, r_aaot_510, r_aaot_560]\n\n[r_bous_412, r_bous_443, r_bous_490, r_bous_510, r_bous_560]\n\n[r_moby_412, r_moby_443, r_moby_490, r_moby_510, r_moby_560]\n\nr_final = (numpy.mean([r_bous_412, r_moby_412, r_aaot_412]) * w_412 \\\n + numpy.mean([r_bous_443, r_moby_443, r_aaot_443]) * w_443 \\\n + numpy.mean([r_bous_490, r_moby_490, r_aaot_490]) * w_490 \\\n + numpy.mean([r_bous_510, r_moby_510, r_aaot_510]) * w_510 \\\n + numpy.mean([r_bous_560, r_moby_560, r_aaot_560]) * w_560) \\\n / (w_412 + w_443 + w_490 + w_510 + w_560)\n\nr_final"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
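The final score formula in the evaluation cells above (a weight-normalised sum of per-band site means) can be isolated into a small helper for sanity checks. The function name and dict layout are ours; the per-band weights are the challenge's.

```python
import numpy as np

# Per-wavelength weights from the evaluation notebook.
WEIGHTS = {412: 0.56, 443: 0.73, 490: 0.71, 510: 0.36, 560: 0.01}

def weighted_band_score(per_band_r, weights=WEIGHTS):
    """Average the per-site Pearson r values for each band, then take the
    weighted mean over bands, normalised by the sum of the weights --
    the same arithmetic as the r_final expression above."""
    num = sum(np.mean(per_band_r[band]) * w for band, w in weights.items())
    return num / sum(weights.values())

# Perfect correlation at every site and band must give a score of exactly 1.
perfect = {band: [1.0, 1.0, 1.0] for band in WEIGHTS}
weighted_band_score(perfect)
```

Because the result is normalised by the weight sum, a uniform r across sites and bands is returned unchanged, which makes degenerate inputs easy to test.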
keras-team/keras-io
|
examples/vision/ipynb/video_classification.ipynb
|
apache-2.0
|
[
"Video Classification with a CNN-RNN Architecture\nAuthor: Sayak Paul<br>\nDate created: 2021/05/28<br>\nLast modified: 2021/06/05<br>\nDescription: Training a video classifier with transfer learning and a recurrent model on the UCF101 dataset.\nThis example demonstrates video classification, an important use-case with\napplications in recommendations, security, and so on.\nWe will be using the UCF101 dataset\nto build our video classifier. The dataset consists of videos categorized into different\nactions, like cricket shot, punching, biking, etc. This dataset is commonly used to\nbuild action recognizers, which are an application of video classification.\nA video consists of an ordered sequence of frames. Each frame contains spatial\ninformation, and the sequence of those frames contains temporal information. To model\nboth of these aspects, we use a hybrid architecture that consists of convolutions\n(for spatial processing) as well as recurrent layers (for temporal processing).\nSpecifically, we'll use a Convolutional Neural Network (CNN) and a Recurrent Neural\nNetwork (RNN) consisting of GRU layers.\nThis kind of hybrid architecture is popularly known as a CNN-RNN.\nThis example requires TensorFlow 2.5 or higher, as well as TensorFlow Docs, which can be\ninstalled using the following command:",
"!pip install -q git+https://github.com/tensorflow/docs",
"Data collection\nIn order to keep the runtime of this example relatively short, we will be using a\nsubsampled version of the original UCF101 dataset. You can refer to\nthis notebook\nto know how the subsampling was done.",
"!wget -q https://git.io/JGc31 -O ucf101_top5.tar.gz\n!tar xf ucf101_top5.tar.gz",
"Setup",
"from tensorflow_docs.vis import embed\nfrom tensorflow import keras\nfrom imutils import paths\n\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport pandas as pd\nimport numpy as np\nimport imageio\nimport cv2\nimport os",
"Define hyperparameters",
"IMG_SIZE = 224\nBATCH_SIZE = 64\nEPOCHS = 10\n\nMAX_SEQ_LENGTH = 20\nNUM_FEATURES = 2048",
"Data preparation",
"train_df = pd.read_csv(\"train.csv\")\ntest_df = pd.read_csv(\"test.csv\")\n\nprint(f\"Total videos for training: {len(train_df)}\")\nprint(f\"Total videos for testing: {len(test_df)}\")\n\ntrain_df.sample(10)",
"One of the many challenges of training video classifiers is figuring out a way to feed\nthe videos to a network. This blog post\ndiscusses five such methods. Since a video is an ordered sequence of frames, we could\njust extract the frames and put them in a 3D tensor. But the number of frames may differ\nfrom video to video which would prevent us from stacking them into batches\n(unless we use padding). As an alternative, we can save video frames at a fixed\ninterval until a maximum frame count is reached. In this example we will do\nthe following:\n\nCapture the frames of a video.\nExtract frames from the videos until a maximum frame count is reached.\nIn the case, where a video's frame count is lesser than the maximum frame count we\nwill pad the video with zeros.\n\nNote that this workflow is identical to problems involving texts sequences. Videos of the UCF101 dataset is known\nto not contain extreme variations in objects and actions across frames. Because of this,\nit may be okay to only consider a few frames for the learning task. But this approach may\nnot generalize well to other video classification problems. We will be using\nOpenCV's VideoCapture() method\nto read frames from videos.",
"# The following two methods are taken from this tutorial:\n# https://www.tensorflow.org/hub/tutorials/action_recognition_with_tf_hub\n\n\ndef crop_center_square(frame):\n y, x = frame.shape[0:2]\n min_dim = min(y, x)\n start_x = (x // 2) - (min_dim // 2)\n start_y = (y // 2) - (min_dim // 2)\n return frame[start_y : start_y + min_dim, start_x : start_x + min_dim]\n\n\ndef load_video(path, max_frames=0, resize=(IMG_SIZE, IMG_SIZE)):\n cap = cv2.VideoCapture(path)\n frames = []\n try:\n while True:\n ret, frame = cap.read()\n if not ret:\n break\n frame = crop_center_square(frame)\n frame = cv2.resize(frame, resize)\n frame = frame[:, :, [2, 1, 0]]\n frames.append(frame)\n\n if len(frames) == max_frames:\n break\n finally:\n cap.release()\n return np.array(frames)\n",
"We can use a pre-trained network to extract meaningful features from the extracted\nframes. The Keras Applications module provides\na number of state-of-the-art models pre-trained on the ImageNet-1k dataset.\nWe will be using the InceptionV3 model for this purpose.",
"\ndef build_feature_extractor():\n feature_extractor = keras.applications.InceptionV3(\n weights=\"imagenet\",\n include_top=False,\n pooling=\"avg\",\n input_shape=(IMG_SIZE, IMG_SIZE, 3),\n )\n preprocess_input = keras.applications.inception_v3.preprocess_input\n\n inputs = keras.Input((IMG_SIZE, IMG_SIZE, 3))\n preprocessed = preprocess_input(inputs)\n\n outputs = feature_extractor(preprocessed)\n return keras.Model(inputs, outputs, name=\"feature_extractor\")\n\n\nfeature_extractor = build_feature_extractor()",
"The labels of the videos are strings. Neural networks do not understand string values,\nso they must be converted to some numerical form before they are fed to the model. Here\nwe will use the StringLookup\nlayer to encode the class labels as integers.",
"label_processor = keras.layers.StringLookup(\n num_oov_indices=0, vocabulary=np.unique(train_df[\"tag\"])\n)\nprint(label_processor.get_vocabulary())",
"Finally, we can put all the pieces together to create our data processing utility.",
"\ndef prepare_all_videos(df, root_dir):\n num_samples = len(df)\n video_paths = df[\"video_name\"].values.tolist()\n labels = df[\"tag\"].values\n labels = label_processor(labels[..., None]).numpy()\n\n # `frame_masks` and `frame_features` are what we will feed to our sequence model.\n # `frame_masks` will contain a bunch of booleans denoting if a timestep is\n # masked with padding or not.\n frame_masks = np.zeros(shape=(num_samples, MAX_SEQ_LENGTH), dtype=\"bool\")\n frame_features = np.zeros(\n shape=(num_samples, MAX_SEQ_LENGTH, NUM_FEATURES), dtype=\"float32\"\n )\n\n # For each video.\n for idx, path in enumerate(video_paths):\n # Gather all its frames and add a batch dimension.\n frames = load_video(os.path.join(root_dir, path))\n frames = frames[None, ...]\n\n # Initialize placeholders to store the masks and features of the current video.\n temp_frame_mask = np.zeros(shape=(1, MAX_SEQ_LENGTH,), dtype=\"bool\")\n temp_frame_features = np.zeros(\n shape=(1, MAX_SEQ_LENGTH, NUM_FEATURES), dtype=\"float32\"\n )\n\n # Extract features from the frames of the current video.\n for i, batch in enumerate(frames):\n video_length = batch.shape[0]\n length = min(MAX_SEQ_LENGTH, video_length)\n for j in range(length):\n temp_frame_features[i, j, :] = feature_extractor.predict(\n batch[None, j, :]\n )\n temp_frame_mask[i, :length] = 1 # 1 = not masked, 0 = masked\n\n frame_features[idx,] = temp_frame_features.squeeze()\n frame_masks[idx,] = temp_frame_mask.squeeze()\n\n return (frame_features, frame_masks), labels\n\n\ntrain_data, train_labels = prepare_all_videos(train_df, \"train\")\ntest_data, test_labels = prepare_all_videos(test_df, \"test\")\n\nprint(f\"Frame features in train set: {train_data[0].shape}\")\nprint(f\"Frame masks in train set: {train_data[1].shape}\")",
"The above code block will take ~20 minutes to execute depending on the machine it's being\nexecuted on.\nThe sequence model\nNow, we can feed this data to a sequence model consisting of recurrent layers like GRU.",
"# Utility for our sequence model.\ndef get_sequence_model():\n class_vocab = label_processor.get_vocabulary()\n\n frame_features_input = keras.Input((MAX_SEQ_LENGTH, NUM_FEATURES))\n mask_input = keras.Input((MAX_SEQ_LENGTH,), dtype=\"bool\")\n\n # Refer to the following tutorial to understand the significance of using `mask`:\n # https://keras.io/api/layers/recurrent_layers/gru/\n x = keras.layers.GRU(16, return_sequences=True)(\n frame_features_input, mask=mask_input\n )\n x = keras.layers.GRU(8)(x)\n x = keras.layers.Dropout(0.4)(x)\n x = keras.layers.Dense(8, activation=\"relu\")(x)\n output = keras.layers.Dense(len(class_vocab), activation=\"softmax\")(x)\n\n rnn_model = keras.Model([frame_features_input, mask_input], output)\n\n rnn_model.compile(\n loss=\"sparse_categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"]\n )\n return rnn_model\n\n\n# Utility for running experiments.\ndef run_experiment():\n filepath = \"/tmp/video_classifier\"\n checkpoint = keras.callbacks.ModelCheckpoint(\n filepath, save_weights_only=True, save_best_only=True, verbose=1\n )\n\n seq_model = get_sequence_model()\n history = seq_model.fit(\n [train_data[0], train_data[1]],\n train_labels,\n validation_split=0.3,\n epochs=EPOCHS,\n callbacks=[checkpoint],\n )\n\n seq_model.load_weights(filepath)\n _, accuracy = seq_model.evaluate([test_data[0], test_data[1]], test_labels)\n print(f\"Test accuracy: {round(accuracy * 100, 2)}%\")\n\n return history, seq_model\n\n\n_, sequence_model = run_experiment()",
"Note: To keep the runtime of this example relatively short, we just used a few\ntraining examples. This number of training examples is low with respect to the sequence\nmodel being used, which has 99,909 trainable parameters. You are encouraged to sample more\ndata from the UCF101 dataset using the notebook mentioned above and train the same model.\nInference",
"\ndef prepare_single_video(frames):\n frames = frames[None, ...]\n frame_mask = np.zeros(shape=(1, MAX_SEQ_LENGTH,), dtype=\"bool\")\n frame_features = np.zeros(shape=(1, MAX_SEQ_LENGTH, NUM_FEATURES), dtype=\"float32\")\n\n for i, batch in enumerate(frames):\n video_length = batch.shape[0]\n length = min(MAX_SEQ_LENGTH, video_length)\n for j in range(length):\n frame_features[i, j, :] = feature_extractor.predict(batch[None, j, :])\n frame_mask[i, :length] = 1 # 1 = not masked, 0 = masked\n\n return frame_features, frame_mask\n\n\ndef sequence_prediction(path):\n class_vocab = label_processor.get_vocabulary()\n\n frames = load_video(os.path.join(\"test\", path))\n frame_features, frame_mask = prepare_single_video(frames)\n probabilities = sequence_model.predict([frame_features, frame_mask])[0]\n\n for i in np.argsort(probabilities)[::-1]:\n print(f\" {class_vocab[i]}: {probabilities[i] * 100:5.2f}%\")\n return frames\n\n\n# This utility is for visualization.\n# Referenced from:\n# https://www.tensorflow.org/hub/tutorials/action_recognition_with_tf_hub\ndef to_gif(images):\n converted_images = images.astype(np.uint8)\n imageio.mimsave(\"animation.gif\", converted_images, fps=10)\n return embed.embed_file(\"animation.gif\")\n\n\ntest_video = np.random.choice(test_df[\"video_name\"].values.tolist())\nprint(f\"Test video path: {test_video}\")\ntest_frames = sequence_prediction(test_video)\nto_gif(test_frames[:MAX_SEQ_LENGTH])",
"Next steps\n\nIn this example, we made use of transfer learning for extracting meaningful features\nfrom video frames. You could also fine-tune the pre-trained network to see how that\naffects the end results.\nFor speed-accuracy trade-offs, you can try out other models present inside\ntf.keras.applications.\nTry different values of MAX_SEQ_LENGTH to observe how that affects the\nperformance.\nTrain on a higher number of classes and see if you are able to get good performance.\nFollowing this tutorial, try a\npre-trained action recognition model from DeepMind.\nRolling-averaging can be a useful technique for video classification, and it can be\ncombined with a standard image classification model to infer on videos.\nThis tutorial\nwill help you understand how to use rolling-averaging with an image classifier.\nWhen there are variations between the frames of a video, not all frames may be\nequally important for deciding its category. In those situations, putting a\nself-attention layer in the\nsequence model will likely yield better results.\nFollowing this book chapter,\nyou can implement Transformer-based models for processing videos."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
piyueh/SEM-Toolbox
|
Huynh2007/Fig9.1-9.2.ipynb
|
mit
|
[
"Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2016 Pi-Yueh Chuang",
"import numpy\nfrom matplotlib import pyplot\nfrom matplotlib import colors\n%matplotlib inline\n\nimport functools\n\nimport os, sys\nsys.path.append(os.path.split(os.getcwd())[0])\n\nimport utils.poly as poly\nimport utils.quadrature as quad",
"A. Problem description\nA-1. PDE\n$$\n\\frac{\\partial u}{\\partial t} + \\frac{\\partial u}{\\partial x} = 0\n$$\nA-2. Initial condition\n$$\nu_{init} = e^{-40(x-0.5)^2}\n$$",
"def u_init(x):\n \"\"\"initial condition of u\"\"\"\n temp = x - 0.5\n return numpy.exp(-40 * temp * temp)",
"A-3. Exact solution\n$$\nu_{exact} = e^{\\left[-40(x-0.5-t)^2\\right]}\n$$",
"def u_exact(x, t):\n    \"\"\"exact solution of u\"\"\"\n    temp = x - 0.5 - t\n    return numpy.exp(-40 * temp * temp)",
"A-4. Flux function\nThe PDE is already in conservative form.\nAnd thus the flux of the conservation law in this problem is\n$$\nf(x) = f(u(x)) = u(x)\n$$",
"def flux(u):\n \"\"\"flux in PDE\"\"\"\n return u",
"A-5. Wave propagation speed, a\n$$\na(x) = a(u) = \\frac{df}{du} = 1\n$$\nB. Distribution of solution points on single element\nB-1. Equal distribution\nSolution points are equally distributed on a single element, excluding the two end nodes.",
"def equalDistrib(N):\n    \"\"\"equally spaced solution points, excluding the two end nodes\"\"\"\n    return numpy.linspace(-1., 1., N, endpoint=False) + 1. / N",
"B-2. Legendre-Lobatto\nSolution points are the quadrature nodes of Gauss-Lobatto-Legendre quadrature.",
"def LegendreLobatto(N):\n    \"\"\"Gauss-Lobatto-Legendre quadrature nodes\"\"\"\n    return quad.GaussLobattoJacobi(N).nodes",
"B-3. Gauss (or Legendre-Gauss)\nSolution points are the quadrature nodes of Gauss-Legendre quadrature.",
"def LegendreGauss(N):\n    \"\"\"Gauss-Legendre quadrature nodes\"\"\"\n    return quad.GaussJacobi(N).nodes",
"B-4. Chebyshev-Lobatto\nSolution points are the quadrature nodes of Gauss-Lobatto-Chebyshev quadrature.",
"def ChebyshevLobatto(N):\n    \"\"\"Gauss-Lobatto-Chebyshev quadrature nodes\"\"\"\n    return - numpy.cos(numpy.arange(N, dtype=numpy.float64) * numpy.pi / (N-1))",
"Finally, let's define a dictionary collecting these distribution functions for later use.",
"solnPointDistrib = {\n \"equal\": equalDistrib,\n \"LegendreLobatto\": LegendreLobatto,\n \"LegendreGauss\": LegendreGauss,\n \"ChebyshevLobatto\": ChebyshevLobatto\n}",
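As a quick sanity check, the distributions can be compared for a small N without the `quad` utility, using `numpy.polynomial` for the Gauss nodes (Legendre-Lobatto is omitted here since it needs the quadrature module):

```python
import numpy as np
from numpy.polynomial import legendre

N = 4

equal = np.linspace(-1., 1., N, endpoint=False) + 1. / N
gauss = legendre.leggauss(N)[0]  # roots of L_N, i.e. Legendre-Gauss nodes
cheb_lobatto = -np.cos(np.arange(N) * np.pi / (N - 1))

print(equal)         # [-0.75 -0.25  0.25  0.75]
print(gauss)         # symmetric interior nodes, no endpoints
print(cheb_lobatto)  # includes both endpoints -1 and 1
```

Even for N = 4 the node placements differ noticeably, which is what drives the stability and accuracy differences explored later in this notebook.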
"C. Flux correction function ($g_{LB}$ and $g_{RB}$)\nThough what we'll define mathematically are flux correction functions (i.e. $g_{LB}$ and $g_{RB}$), \nwe only need the derivatives of these functions at solution points in numerical solvers, that is, $\\frac{dg_{LB}}{d\\xi}(\\xi_k)$ and $\\frac{dg_{RB}}{d\\xi}(\\xi_k)$, where $\\xi_k,\\ k=1,\\dots,N$ are solution points on each element.\nSo the Python functions defined below only return a numpy.ndarray of these derivatives.\n$N$ in this section always represents the number of solution points on each element.\nC-1. Discontinuous Galerkin, $g_{DG}$ (a.k.a. $g_1$)\n$g_{DG}$ is defined as a Radau polynomial (p.s. the roots of Radau polynomials are the quadrature nodes of Gauss-Radau-Legendre quadrature).\n$$\n\\left{\n\\begin{align}\ng_{DG,LB}(\\xi) & = R_{R,N}(\\xi) = (-1)^N\\frac{1}{2}(L_N(\\xi) - L_{N-1}(\\xi)) \\\ng_{DG,RB}(\\xi) & = R_{L,N}(\\xi) = \\frac{1}{2}(L_N(\\xi) + L_{N-1}(\\xi)) \\\n\\end{align}\n\\right.\n$$\nwhere the subscripts $LB$ and $RB$ denote whether the functions apply to the left or right interface of each element; $R_{R,N}$ and $R_{L,N}$ are the right- and left-Radau polynomials of order $N$; $L_N$ represents the Legendre polynomial of order $N$.",
"def dgDG(xi, side):\n    \"\"\"flux correction scheme: discontinuous Galerkin, aka. g1\n    \n    This function returns the derivatives of g_{DG} at solution points\n    \n    Args:\n        xi: numpy.ndarray, local coordinates of solution points\n        side: string, either 'left' or 'right', indicating which boundary of \n            element will be corrected\n    \"\"\"\n    \n    # number of solution points, also the order of correction polynomial\n    N = xi.size\n    \n    # dg/dx\n    if side == 'left':\n        dg = poly.Radau(N, end=1).derive()\n    elif side == 'right':\n        dg = poly.Radau(N, end=-1).derive()\n    else:\n        raise ValueError(\"illegal value for argument 'side'\")\n    return dg(xi)",
"C-2. Correction function based on staggered-grid spectral difference, $g_{SG}$\n$g_{SG}$ is a Lagrange interpolation polynomial whose basis polynomials are defined by Chebyshev-Lobatto points; the values at these points are zero except at the targeted interface point. The value at the targeted interface point (i.e. $\\xi=-1$ or $\\xi=1$) is one.\n$$\ng_{SG}(\\xi) = \\sum_{i=0}^{N} g_i \\prod_{\\begin{smallmatrix}j=0 \\ j\\ne i\\end{smallmatrix}}^{N} \\frac{\\xi-\\xi_j}{\\xi_i-\\xi_j}\n$$\nFor $g_{SG,LB}$, $g_0=1$ and $g_i=0$ for $i=1,\\dots,N$. While for $g_{SG,RB}$, $g_i=0$ for $i=0,\\dots,N-1$ and $g_N=1$. $\\xi_i,\\ i=0,\\dots,N$ denotes the $N+1$ Chebyshev-Lobatto points.\nGiven that the values at most basis points (i.e. $g_i$) are zero, we can further simplify it.\n$$\n\\left{\n\\begin{align}\ng_{SG,LB}(\\xi) & = \\prod_{\\begin{smallmatrix}j=0 \\ j\\ne 0\\end{smallmatrix}}^{N} \\frac{\\xi-\\xi_j}{\\xi_0-\\xi_j} \\\ng_{SG,RB}(\\xi) & = \\prod_{\\begin{smallmatrix}j=0 \\ j\\ne N\\end{smallmatrix}}^{N} \\frac{\\xi-\\xi_j}{\\xi_N-\\xi_j}\n\\end{align}\n\\right.\n$$",
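The simplified product form is easy to check with `numpy.polynomial` (a standalone sketch of the polynomial that `dgSG` below differentiates):

```python
import numpy as np
from numpy.polynomial import Polynomial

N = 4
# N+1 Chebyshev-Lobatto points define the Lagrange basis
nodes = -np.cos(np.arange(N + 1) * np.pi / N)

# Simplified product form of g_{SG,LB}: zero at nodes[1:], one at nodes[0] = -1
g_LB = Polynomial.fromroots(nodes[1:])
g_LB = g_LB / g_LB(nodes[0])

print(np.isclose(g_LB(-1.0), 1.0))        # True
print(np.allclose(g_LB(nodes[1:]), 0.0))  # True
```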
"def dgSG(xi, side):\n    \"\"\"flux correction scheme: staggered-grid spectral difference\n    \n    This function returns the derivatives of g_{SG} at solution points\n    \n    Args:\n        xi: numpy.ndarray, local coordinates of solution points\n        side: string, either 'left' or 'right', indicating which boundary of \n            element will be corrected\n    \"\"\"\n    \n    # number of solution points, also the order of correction polynomial\n    N = xi.size\n    nodes = ChebyshevLobatto(N+1)\n    \n    # dg/dx\n    if side == 'left':\n        g = poly.Polynomial(roots=nodes[1:])\n        g /= g(nodes[0])\n    elif side == 'right':\n        g = poly.Polynomial(roots=nodes[:-1])\n        g /= g(nodes[-1])\n    else:\n        raise ValueError(\"illegal value for argument 'side'\")\n    \n    return g.derive()(xi)",
"C-3. Lumping for Lobatto points, $g_{Lump,Lo}$ (a.k.a. $g_2$)\n$g_{Lump,Lo}$ is defined through $N$ Legendre-Lobatto points, $\\xi_i,\\ i=1,\\dots,N$.\nFor correction at the left interface ($g_{Lump,Lo,LB}$), we want the following properties:\n\n$g_{Lump,Lo,LB}(\\xi_1)$=$g_{Lump,Lo,LB}(-1)=1$\n$g_{Lump,Lo,LB}(\\xi_N)$=$g_{Lump,Lo,LB}(1)=0$\n$g'_{Lump,Lo,LB}(\\xi_i)=0,\\ for\\ i=2,\\dots,N$\n\nBased on these properties, we can first define the derivative of the function:\n$$\ng'_{Lump,Lo,LB}(\\xi) = C_1\\prod_{i=2}^{N}(\\xi-\\xi_i)\n$$\nCarrying out indefinite integration, we get\n$$\ng_{Lump,Lo,LB}(\\xi) = C_1\\int\\prod_{i=2}^{N}(\\xi-\\xi_i)d\\xi + C_2 = C_1 p(\\xi) + C_2\n$$\nwhere $p(\\xi) \\equiv \\int\\prod_{i=2}^{N}(\\xi-\\xi_i)d\\xi$. After applying the first and second properties mentioned above, we obtain\n$$\n\\left{\n\\begin{align}\nC_1 = & 1 \\mathbin{/} (p(-1) - p(1)) \\\nC_2 = & - p(1) \\mathbin{/} (p(-1) - p(1))\n\\end{align}\n\\right.\n$$\nFor the right-interface correction function, similar properties are imposed:\n1. $g_{Lump,Lo,RB}(\\xi_N)$=$g_{Lump,Lo,RB}(1)=1$\n2. $g_{Lump,Lo,RB}(\\xi_1)$=$g_{Lump,Lo,RB}(-1)=0$\n3. $g'_{Lump,Lo,RB}(\\xi_i)=0,\\ for\\ i=1,\\dots,N-1$\nAnd $g_{Lump,Lo,RB}(\\xi)$ can be obtained following the same workflow.\nIn fact, given that $\\xi_i$ are Legendre-Lobatto points, we can get Radau polynomial representations of $g_{Lump,Lo}$:\n$$\n\\left{\n\\begin{align}\ng_{Lump,Lo,LB}(\\xi) &= \\frac{1}{2N-1}((N-1)R_{R,N}(\\xi) + NR_{R,N-1}(\\xi)) \\\ng_{Lump,Lo,RB}(\\xi) &= \\frac{1}{2N-1}((N-1)R_{L,N}(\\xi) + NR_{L,N-1}(\\xi))\n\\end{align}\n\\right.\n$$\nHowever, if $\\xi_i$ are not Legendre-Lobatto points, we can still go through the workflow to obtain $g_{Lump,XX}$ (for example, see the next correction function), but we may not be able to get the Radau representations.\nAdditional note\nIf the solution points are chosen to be the same as the points defining $g_{Lump,XX}$, then we don't even need to define the correction function or its derivative explicitly. This is because the derivatives of $g_{Lump,XX}$ at interior solution points are zero by definition, and the derivative at the targeted interface point can be obtained analytically, while that at the other interface point is zero.\nFor example, if the solution points are Legendre-Lobatto points and the correction function is $g_{Lump,Lo}$, then the derivatives of $g_{Lump,Lo}$ at these solution points are\n$$\n\\left{\n\\begin{align}\n&g'_{Lump,Lo,LB}(\\xi_1) = -N(N-1)/2;\\ g'_{Lump,Lo,LB}(\\xi_i) = 0\\ for\\ i=2,\\dots,N\\\n&g'_{Lump,Lo,RB}(\\xi_N) = N(N-1)/2;\\ g'_{Lump,Lo,RB}(\\xi_i) = 0\\ for\\ i=1,\\dots,N-1\n\\end{align}\n\\right.\n$$",
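The differentiate-integrate-rescale workflow can be checked with `numpy.polynomial` alone (a standalone sketch for N = 4; the solver's own `dgLumpLo` below uses the `poly`/`quad` utilities instead):

```python
import numpy as np
from numpy.polynomial import Polynomial, legendre

N = 4
# Legendre-Lobatto nodes: the endpoints plus the roots of L'_{N-1}
interior = legendre.Legendre.basis(N - 1).deriv().roots()
nodes = np.concatenate([[-1.0], np.sort(interior), [1.0]])

# g'_{Lump,Lo,LB} vanishes at nodes[1:], so start from the monic product polynomial
dg = Polynomial.fromroots(nodes[1:])
p = dg.integ()                 # indefinite integral p(xi)
C1 = 1.0 / (p(-1.0) - p(1.0))
C2 = -p(1.0) * C1
g_LB = C1 * p + C2

print(g_LB(-1.0), g_LB(1.0))   # ~1.0 and ~0.0, as required
print(np.allclose(g_LB.deriv()(nodes[1:]), 0.0))  # True: flat at the other nodes
```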
"def dgLumpLo(xi, side):\n    \"\"\"flux correction scheme: lumped Legendre-Lobatto, aka. g2\n    \n    This function returns the derivatives of g_{Lump,Lo} at solution points\n    \n    Args:\n        xi: numpy.ndarray, local coordinates of solution points\n        side: string, either 'left' or 'right', indicating which boundary of \n            element will be corrected\n    \"\"\"\n    \n    # number of solution points, also the order of correction polynomial\n    N = xi.size\n    nodes = quad.GaussLobattoJacobi(N).nodes\n    \n    # dg/dx\n    if side == 'left':\n        dg = poly.Polynomial(roots=nodes[1:])\n        \n        g = dg.integral()  # note: this is indefinite integral!\n        # correct the scaling, so that final g(-1) = 1\n        leading = 1. / (g(-1) - g(1))\n    elif side == 'right':\n        dg = poly.Polynomial(roots=nodes[:-1])\n        \n        g = dg.integral()  # note: this is indefinite integral!\n        # correct the scaling, so that final g(1) = 1\n        leading = 1. / (g(1) - g(-1))\n    else:\n        raise ValueError(\"illegal value for argument 'side'\")\n    \n    return dg(xi) * leading",
"C-4. Lumping for Chebyshev-Lobatto points, $g_{Lump,Ch-Lo}$\n$g_{Lump,Ch-Lo}$ follows the same definition as $g_{Lump,Lo}$, except that the points defining the function (i.e. $\\xi_i$) are now Chebyshev-Lobatto points. Therefore, we can obtain $g_{Lump,Ch-Lo}$ through the same workflow as $g_{Lump,Lo}$.",
"def dgLumpChLo(xi, side):\n    \"\"\"flux correction scheme: lumped Chebyshev-Lobatto\n    \n    This function returns the derivatives of g_{Lump,Ch-Lo} at solution points\n    \n    Args:\n        xi: numpy.ndarray, local coordinates of solution points\n        side: string, either 'left' or 'right', indicating which boundary of \n            element will be corrected\n    \"\"\"\n    \n    # number of solution points, also the order of correction polynomial\n    N = xi.size\n    nodes = ChebyshevLobatto(N)\n    \n    # dg/dx\n    if side == 'left':\n        dg = poly.Polynomial(roots=nodes[1:])\n        \n        g = dg.integral()  # note: this is indefinite integral!\n        # correct the scaling, so that final g(-1) = 1\n        leading = 1. / (g(-1) - g(1))\n    elif side == 'right':\n        dg = poly.Polynomial(roots=nodes[:-1])\n        \n        g = dg.integral()  # note: this is indefinite integral!\n        # correct the scaling, so that final g(1) = 1\n        leading = 1. / (g(1) - g(-1))\n    else:\n        raise ValueError(\"illegal value for argument 'side'\")\n    \n    return dg(xi) * leading",
"C-5. Staggered-grid scheme with Gauss points, $g_{Ga}$\nThe definition of $g_{Ga}$ follows the same concept as $g_{SG}$, except that the points defining the Lagrange basis polynomial are now $N-1$ Gauss points plus $\\xi=-1$ and $\\xi=1$ at the ends. That is,\n$$\n\\left{\n\\begin{align}\n&\\xi_0 = -1 \\\n&\\xi_N = 1 \\\n&\\xi_i = the\\ i_{th}\\ root\\ of\\ L_{N-1}(\\xi),\\ for\\ i=1,\\dots,N-1\n\\end{align}\n\\right.\n$$\nThe remaining part of the definition follows that of $g_{SG}$.",
"def dgGa(xi, side):\n    \"\"\"flux correction scheme: staggered-grid spectral difference, but with Gauss\n    quadrature points as interior zeros\n    \n    This function returns the derivatives of g_{Ga} at solution points\n    \n    Args:\n        xi: numpy.ndarray, local coordinates of solution points\n        side: string, either 'left' or 'right', indicating which boundary of \n            element will be corrected\n    \"\"\"\n    \n    # number of solution points, also the order of correction polynomial\n    N = xi.size\n    nodes = numpy.pad(quad.GaussJacobi(N-1).nodes, \n                      (1, 1), 'constant', constant_values=(-1., 1.))\n    \n    # dg/dx\n    if side == 'left':\n        g = poly.Polynomial(roots=nodes[1:])\n        g /= g(nodes[0])\n    elif side == 'right':\n        g = poly.Polynomial(roots=nodes[:-1])\n        g /= g(nodes[-1])\n    else:\n        raise ValueError(\"illegal value for argument 'side'\")\n    \n    return g.derive()(xi)",
"Again, let's define a dictionary to collect these correction functions.",
"dCorrectionFunc = {\n \"DG\": dgDG,\n \"SG\": dgSG,\n \"LumpLo\": dgLumpLo,\n \"LumpChLo\": dgLumpChLo,\n \"Ga\": dgGa\n}",
"D. Solver and related functions\nD-1. 4th order explicit Runge-Kutta",
"def RK4(u, dt, rhs):\n \"\"\"4th order explicit Runge-Kutta\"\"\"\n k1 = rhs(u)\n k2 = rhs(u+0.5*dt*k1)\n k3 = rhs(u+0.5*dt*k2)\n k4 = rhs(u+dt*k3)\n u += dt * (k1 + 2. * k2 + 2. * k3 + k4) / 6.\n return u",
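As a quick check of the scheme's order (a standalone sketch with the RK4 step restated in a non-mutating scalar form), integrating du/dt = -u and halving dt should shrink the error by roughly 2^4 = 16:

```python
import numpy as np

def rk4_step(u, dt, rhs):
    """one classic RK4 step, returning the new value"""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt * (k1 + 2. * k2 + 2. * k3 + k4) / 6.

def err(nsteps, T=1.0):
    """global error at t = T for du/dt = -u, u(0) = 1 (exact: e^{-T})"""
    u, dt = 1.0, T / nsteps
    for _ in range(nsteps):
        u = rk4_step(u, dt, lambda v: -v)
    return abs(u - np.exp(-T))

ratio = err(10) / err(20)
print(ratio)  # close to 2**4 = 16, consistent with 4th-order convergence
```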
"D-2. Flux reconstruction",
"def FR1D(u, basis, dMatrix, intID, dgL, dgR, invJ):\n    \"\"\"flux reconstruction\n    \n    Args:\n        u: Ne by K numpy ndarray (Ne: number of elements; K: number of solution\n            points per element); primary variable in PDE\n        basis: Lagrange basis polynomial based on solution points on each \n            element; given that the domain is discretized by the same type of \n            standard element, only one set of Lagrange basis functions is \n            required\n        dMatrix: derivative matrix associated with the given Lagrange basis\n        intID: Ne+1 by 2 numpy integer ndarray; array indicating the id of left\n            and right elements at each interface\n        dgL, dgR: numpy ndarray with length K; correction values of locally \n            derived flux due to left and right interfaces; given that the \n            distributions and orders of solution points on all elements are the\n            same, all elements share the same dgL and dgR (because they only \n            depend on local coordinate)\n        invJ: float; inverse Jacobian; since all elements in this problem have \n            the same size, they share a single invJ\n    \"\"\"\n    \n    # obtain number of elements and number of solution points per element\n    Ne, K = u.shape\n    \n    # upwind flux at each interface\n    # note: in this problem, a = df/du = 1, so upwind values always come from\n    # the left elements of interfaces\n    fupw = numpy.array(\n        [numpy.dot(basis(1), u[intID[i, 0]]) for i in range(Ne+1)])\n\n    # dF/dxi, locally derived and corrected flux\n    RHS = numpy.zeros_like(u)\n    for i in range(Ne):\n        f = u[i]  # for this problem, flux is u itself\n        df = numpy.dot(dMatrix, f)\n        fL = numpy.dot(basis(-1), f)\n        fR = numpy.dot(basis(1), f)\n        RHS[i] = df + (fupw[i] - fL) * dgL + (fupw[i+1] - fR) * dgR\n    \n    # dF/dx = invJ * (dF/dxi)\n    RHS *= invJ\n    \n    # in explicit time-marching schemes, RHS = - (dF/dx)\n    return -RHS\n\ndef solve(DistribType, CorrectionType, dt, Nt):\n    \"\"\"\n    solve the PDE with given parameters\n    \"\"\"\n    \n    xLB = 0.  # coordinate of left boundary\n    xRB = 1.  # coordinate of right boundary\n    L = 1.  # length of the domain (= xRB - xLB)\n    Ne = 10  # number of elements\n    K = 4  # number of solution points per element\n    NSP = K * Ne  # total number of solution points\n    \n    h = L / Ne  # size of each element\n    J = 0.5 * h  # Jacobian of each element\n    Jinv = 2. / h  # inverse Jacobian of each element\n    \n    # coordinates of centers of elements\n    xc = numpy.linspace(xLB, xRB, Ne, endpoint=False) + 0.5 * h\n    \n    # local coordinates of chosen distribution\n    xSPLocal = solnPointDistrib[DistribType](K)\n    \n    # Lagrange basis polynomials\n    basisLocal = poly.LagrangeBasis(xSPLocal)\n    \n    # derivative matrix\n    D = basisLocal.derivative(xSPLocal)\n    \n    # global coordinates of all solution points\n    xSPGlobal = numpy.array([xc[i] + J * xSPLocal for i in range(Ne)])\n    \n    # id of the two elements composing each interface\n    interfaceID = numpy.column_stack(\n        (numpy.arange(-1, Ne), numpy.arange(0, Ne+1)))\n    interfaceID[0, 0] = Ne - 1  # apply periodic BC at left boundary\n    interfaceID[-1, 1] = 0  # apply periodic BC at right boundary\n    \n    # IC at solution points\n    u = u_init(xSPGlobal)\n    \n    # get values of dgLB and dgRB at solution points (local)\n    dgLeft = dCorrectionFunc[CorrectionType](xSPLocal, \"left\")\n    dgRight = dCorrectionFunc[CorrectionType](xSPLocal, \"right\")\n    \n    # define RHS function using these local variables\n    rhs = functools.partial(FR1D, basis=basisLocal, dMatrix=D, \n                            intID=interfaceID, dgL=dgLeft, dgR=dgRight, invJ=Jinv)\n\n    # time marching\n    t = 0\n    for i in range(Nt):\n        u = RK4(u, dt, rhs)\n        t += dt\n\n    return xSPGlobal.flatten(), u.flatten(), t",
"D-3. Fromm's scheme",
"def FrommScheme(u, dx):\n    \"\"\"RHS of the advection equation using Fromm's scheme (periodic BC)\"\"\"\n    \n    Np = u.size\n    flux = numpy.zeros_like(u)\n    \n    # leftmost point\n    flux[0] = - 0.25 * (u[-2] - 5 * u[-1] + 3 * u[0] + u[1]) / dx\n    \n    # interior points\n    for i in range(1, Np-1):\n        flux[i] = - 0.25 * (u[i-2] - 5 * u[i-1] + 3 * u[i] + u[i+1]) / dx\n    \n    # rightmost point\n    flux[-1] = - 0.25 * (u[-3] - 5 * u[-2] + 3 * u[-1] + u[0]) / dx\n    \n    return flux\n\ndef solveFromm(dt, Nt):\n    \"\"\"a wrapper for solving the advection equation with Fromm's scheme\"\"\"\n    \n    xBC_L = 0.\n    xBC_R = 1.\n    L = 1.\n    Np = 40\n\n    dx = L / Np\n    \n    x = numpy.linspace(xBC_L, xBC_R, Np, endpoint=False) + 0.5 * dx\n    u = u_init(x)\n    \n    # define RHS function using local variables\n    rhs = functools.partial(FrommScheme, dx=dx)\n\n    # time marching\n    t = 0\n    for i in range(Nt):\n        u = RK4(u, dt, rhs)\n        t += dt\n    \n    return x, u, t",
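A sanity check on the Fromm stencil (interior points only, so no periodic wrap is needed): since the stencil differentiates polynomials up to degree 2 exactly, the RHS for u = x should be exactly -1 and for u = x^2 it should be -2x:

```python
import numpy as np

def fromm_rhs_interior(u, dx):
    """Fromm RHS at interior points i = 2 .. N-2, vectorized, no wrap-around."""
    return -0.25 * (u[:-3] - 5 * u[1:-2] + 3 * u[2:-1] + u[3:]) / dx

dx = 0.1
x = np.arange(0.0, 2.0, dx)

# u = x  -> du/dx = 1, so the advection RHS -du/dx is exactly -1
print(np.allclose(fromm_rhs_interior(x, dx), -1.0))             # True

# u = x^2 -> RHS is -2x at the corresponding interior points
print(np.allclose(fromm_rhs_interior(x**2, dx), -2 * x[2:-1]))  # True
```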
"D-4. Plotting function",
"def plotResults(x, u, title, ax):\n \"\"\"plot numerical results and exact solutions\"\"\"\n ax.plot(x, u_exact(x, 0), 'bx-', label=\"Exact solution\")\n ax.plot(x, u, 'r.-', label=\"Numerical results\")\n ax.set_title(title, fontsize=20)\n ax.set_xlim(0, 1)\n ax.set_ylim(-0.1, 1.05)\n ax.grid()",
"E. Fig 9.1",
"fig, ax = pyplot.subplots(2, 3, sharex=True, sharey=True)\nfig.suptitle(\"Equally spaced solution points, t=10\", fontsize=25)\nfig.set_figheight(8)\nfig.set_figwidth(18)\npyplot.close()\n\n# (a) piecewise-linear upwind (Van Leer's MUSCL)\nNt = 533; dt = 10. / Nt\nx, u, t = solveFromm(dt, Nt)\nplotResults(x, u, r\"$Fromm's scheme$\", ax[0, 0])\n\n# (b) DG\nNt = 764; dt = 10. / Nt\nx, u, t = solve(\"equal\", \"DG\", dt, Nt)\nplotResults(x, u, r\"$DG$\", ax[0, 1])\n\n# (c) staggered-grid\nNt = 432; dt = 10. / Nt\nx, u, t = solve(\"equal\", \"SG\", dt, Nt)\nplotResults(x, u, r\"$Staggered-grid$\", ax[0, 2])\n\n# (d) Lump, Lo\nNt = 385; dt = 10. / Nt\nx, u, t = solve(\"equal\", \"LumpLo\", dt, Nt)\nplotResults(x, u, r\"$g_{Lump,Lo}$\", ax[1, 0])\n\n# (e) Ga\nNt = 490; dt = 10. / Nt\nx, u, t = solve(\"equal\", \"Ga\", dt, Nt)\nplotResults(x, u, r\"$g_{Ga}$\", ax[1, 1])\n\n# (f) Lump, Ch-Lo\nNt = 562; dt = 10. / Nt\nx, u, t = solve(\"equal\", \"LumpChLo\", dt, Nt)\nplotResults(x, u, r\"$g_{Lump,Ch-Lo}$\", ax[1, 2])\n\nfig",
"F. Fig 9.2",
"fig, ax = pyplot.subplots(2, 3, sharex=True, sharey=True)\nfig.suptitle(\"Gauss solution points, t=50\", fontsize=25)\nfig.set_figheight(8)\nfig.set_figwidth(18)\npyplot.close()\n\n# (a) piecewise-linear upwind (Van Leer's MUSCL)\nNt = 2667; dt = 50. / Nt\nx, u, t = solveFromm(dt, Nt)\nplotResults(x, u, r\"$Fromm's scheme$\", ax[0, 0])\n\n# (b) DG\nNt = 764 * 5; dt = 50. / Nt\nx, u, t = solve(\"LegendreGauss\", \"DG\", dt, Nt)\nplotResults(x, u, r\"$DG$\", ax[0, 1])\n\n# (c) staggered-grid\nNt = 432 * 5; dt = 50. / Nt\nx, u, t = solve(\"LegendreGauss\", \"SG\", dt, Nt)\nplotResults(x, u, r\"$Staggered-grid$\", ax[0, 2])\n\n# (d) Lump, Lo\nNt = 385 * 5; dt = 50. / Nt\nx, u, t = solve(\"LegendreGauss\", \"LumpLo\", dt, Nt)\nplotResults(x, u, r\"$g_{Lump,Lo}$\", ax[1, 0])\n\n# (e) Ga\nNt = 490 * 5; dt = 50. / Nt\nx, u, t = solve(\"LegendreGauss\", \"Ga\", dt, Nt)\nplotResults(x, u, r\"$g_{Ga}$\", ax[1, 1])\n\n# (f) Lump, Ch-Lo\nNt = 562 * 5; dt = 50. / Nt\nx, u, t = solve(\"LegendreGauss\", \"LumpChLo\", dt, Nt)\nplotResults(x, u, r\"$g_{Lump,Ch-Lo}$\", ax[1, 2])\n\nfig"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
muneebalam/scrapenhl2
|
examples/5v5 TOI for teams and players/5v5 TOI for teams and players.ipynb
|
mit
|
[
"import pandas as pd\n\nfrom scrapenhl2.scrape import autoupdate, schedules, team_info, players\nfrom scrapenhl2.manipulate import manipulate as manip",
"The purpose of this script is to get game-by-game 5v5 TOI counts by player and by team for every game since 2012-13. We can get this information easily from the 5v5 player log.",
"# Update data\n# autoupdate.autoupdate() # Comment in if needed, and loop if needed\n# manip.get_5v5_player_log(2017, force_create) # Comment in if needed, and loop if needed\nlog = pd.concat([manip.get_5v5_player_log(season).assign(Season=season) for season in range(2012, 2018)])\nsch = pd.concat([schedules.get_season_schedule(season).assign(Season=season) for season in range(2012, 2018)])\nlog.head()",
"All we need to do is:\n- Sum TOION and TOIOFF, and take distinct values to get team counts\n- Take TOION for individual counts",
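On a toy frame, the team-level step (sum TOION and TOIOFF, then keep one value per game) looks like this, with made-up TOI values in hours:

```python
import pandas as pd

toy = pd.DataFrame({
    "Season": [2017, 2017, 2017],
    "Game":   [1, 1, 2],
    "TOION":  [0.25, 0.5, 0.375],   # hours on ice (made-up values)
    "TOIOFF": [0.5, 0.25, 0.375],
})

team = (toy.assign(TOI=toy.TOION + toy.TOIOFF)
           [["Season", "Game", "TOI"]]
           .groupby(["Season", "Game"], as_index=False)
           .max())  # max instead of drop_duplicates to dodge float noise

print(team)  # one row per (Season, Game), each with the shared team TOI of 0.75
```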
"# Teams\nteamtoi = log.assign(TOI=log.TOION + log.TOIOFF) \\\n    [['Season', 'Game', 'TOI']] \\\n    .groupby(['Season', 'Game'], as_index=False) \\\n    .max()  # take max to avoid floating point errors that may foil drop_duplicates\nteamtoi = sch[['Season', 'Game', 'Home', 'Road']] \\\n    .melt(id_vars=['Season', 'Game'], var_name='HR', value_name='TeamID') \\\n    .merge(teamtoi, how='inner', on=['Season', 'Game']) \\\n    .drop_duplicates()\n    \n# Make names into str, and convert TOI from hours to minutes\nteamtoi.loc[:, 'Team'] = teamtoi.TeamID.apply(lambda x: team_info.team_as_str(x))\nteamtoi.loc[:, 'TOI(min)'] = teamtoi.TOI * 60\nteamtoi = teamtoi.drop(['TeamID', 'TOI'], axis=1)\nteamtoi.head()\n\n# Individuals\nindivtoi = log[['Season', 'Game', 'PlayerID', 'TOION', 'TeamID']].copy()\n\n# IDs to names and TOI from hours to minutes\nindivtoi.loc[:, 'Player'] = players.playerlst_as_str(indivtoi.PlayerID.values)\nindivtoi.loc[:, 'Team'] = indivtoi.TeamID.apply(lambda x: team_info.team_as_str(x))\nindivtoi.loc[:, 'TOI(min)'] = indivtoi.TOION * 60\n\nindivtoi = indivtoi.drop(['TeamID', 'TOION', 'PlayerID'], axis=1)\nindivtoi.head()\n\n# Write to file\nteamtoi.to_csv('/Users/muneebalam/Desktop/teamtoi.csv')\nindivtoi.to_csv('/Users/muneebalam/Desktop/indivtoi.csv')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
great-expectations/great_expectations
|
tests/test_fixtures/rule_based_profiler/example_notebooks/DataAssistants_Instantiation_And_Running.ipynb
|
apache-2.0
|
[
"How to use DataAssistants\n\nA DataAssistant enables you to quickly profile your data by providing a thin API over a pre-constructed RuleBasedProfiler configuration.\n\nAs a result of the profiling, you get back a result object consisting of \n\nMetrics that describe the current state of the data\nExpectations that are able to alert you if the data deviates from the expected state in the future. \n\n\n\nDataAssistant results can also be plotted to help you understand your data visually.\n\nThere are multiple DataAssistants, each centered around a theme (volume, nullity, etc.), and this notebook walks you through an example VolumeDataAssistant to show the capabilities and potential of this new interface.\n\nWhat is a VolumeDataAssistant?\n\nThe VolumeDataAssistant allows you to automatically build a set of Expectations that alerts you if the volume of records significantly deviates from the norm. \n\nMore specifically, the VolumeDataAssistant profiles the data and outputs an ExpectationSuite containing 2 Expectation types \n\nexpect_table_row_count_to_be_between\nexpect_column_unique_value_count_to_be_between\n\nwith automatically selected values for the upper and lower bounds. The ranges are selected using a bootstrapping step on the sample Batches. This allows the DataAssistant to account for outliers and obtain a more accurate estimate of the true ranges by taking the underlying distribution into account.",
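The bootstrapping idea behind the range selection can be sketched with NumPy alone. The row counts below are made up, and the resampled-mean statistic is an illustration of the general technique, not the DataAssistant's actual internals:

```python
import numpy as np

rng = np.random.default_rng(42)
row_counts = np.array([10000, 10210, 9905, 10120, 9980, 10340,
                       10050, 9870, 10190, 10005])  # made-up monthly row counts

# Resample the observed counts many times and collect the resampled means
boot_means = np.array([
    rng.choice(row_counts, size=row_counts.size, replace=True).mean()
    for _ in range(2000)
])

# A 95% bootstrap interval for the typical per-batch row count; values like
# these could feed min_value/max_value of expect_table_row_count_to_be_between
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(lower, upper)
```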
"import great_expectations as ge\nfrom great_expectations.core.yaml_handler import YAMLHandler\nfrom great_expectations.core.batch import BatchRequest\nfrom great_expectations.core import ExpectationSuite\nfrom great_expectations.core.expectation_configuration import ExpectationConfiguration\nfrom great_expectations.validator.validator import Validator\nfrom great_expectations.rule_based_profiler.data_assistant import (\n DataAssistant,\n VolumeDataAssistant,\n)\nfrom great_expectations.rule_based_profiler.types.data_assistant_result import (\n VolumeDataAssistantResult,\n)\nfrom typing import List\nyaml = YAMLHandler()",
"Set-up: Adding taxi_data Datasource\n\nAdd taxi_data as a new Datasource\nWe are using an InferredAssetFilesystemDataConnector to connect to data in the test_sets/taxi_yellow_tripdata_samples folder and get one DataAsset (yellow_tripdata_sample) that has 36 Batches, corresponding to one batch per month from 2018-2020.",
"data_context: ge.DataContext = ge.get_context()\n\ndata_path: str = \"../../../../test_sets/taxi_yellow_tripdata_samples\"\n\ndatasource_config: dict = {\n \"name\": \"taxi_data_all_years\",\n \"class_name\": \"Datasource\",\n \"module_name\": \"great_expectations.datasource\",\n \"execution_engine\": {\n \"module_name\": \"great_expectations.execution_engine\",\n \"class_name\": \"PandasExecutionEngine\",\n },\n \"data_connectors\": {\n \"inferred_data_connector_all_years\": {\n \"class_name\": \"InferredAssetFilesystemDataConnector\",\n \"base_directory\": data_path,\n \"default_regex\": {\n \"group_names\": [\"data_asset_name\", \"year\", \"month\"],\n \"pattern\": \"(yellow_tripdata_sample)_(2018|2019|2020)-(\\\\d.*)\\\\.csv\",\n },\n },\n },\n}\n\ndata_context.test_yaml_config(yaml.dump(datasource_config))\n\n# add_datasource only if it doesn't already exist in our configuration\ntry:\n data_context.get_datasource(datasource_config[\"name\"])\nexcept ValueError:\n data_context.add_datasource(**datasource_config)",
"Configure BatchRequest\nIn this example, we will be using a BatchRequest that will return all 36 batches of data from the taxi_data dataset. We will refer to the Datasource and DataConnector configured in the previous step.",
"multi_batch_all_years_batch_request: BatchRequest = BatchRequest(\n datasource_name=\"taxi_data_all_years\",\n data_connector_name=\"inferred_data_connector_all_years\",\n data_asset_name=\"yellow_tripdata_sample\",\n)\n\nbatch_request: BatchRequest = multi_batch_all_years_batch_request",
"Run the VolumeDataAssistant\n\nThe VolumeDataAssistant can be run directly from the DataContext by specifying assistants and volume, and passing in the BatchRequest from the previous step.",
"result: VolumeDataAssistantResult = data_context.assistants.volume.run(batch_request=batch_request)",
"Explore DataAssistantResult by plotting\nThe resulting DataAssistantResult can be best explored by plotting. For each Domain considered (Table and Column in our case), the plots will display the value for each Batch (36 in total).",
"result.plot_metrics()",
"An additional layer of information that can be retrieved from the DataAssistantResult is the prescriptive information, which corresponds to the range values of the Expectations that result from the DataAssistant run. \nFor example the vendor_id plot will show that the range of distinct vendor_id values ranged from 2-3 across all of our Batches, as indicated by the blue band around the plotted values. These values correspond to the max_value and min_value for the resulting Expectation, expect_column_unique_value_count_to_be_between.",
"result.plot_expectations_and_metrics()",
"Save ExpectationSuite\nFinally, we can save the ExpectationConfiguration objext resulting from the DataAssistant in our ExpectationSuite and then use the DataContext's save_expectation_suite() method to pass in our ExpectationSuite, updated with the DataAssistant.",
"suite: ExpectationSuite = ExpectationSuite(expectation_suite_name=\"taxi_data_suite\")\n\nresulting_configurations: List[ExpectationConfiguration] = suite.add_expectation_configurations(expectation_configurations=result.expectation_configurations)\n\ndata_context.save_expectation_suite(expectation_suite=suite)",
"Optional: Clean-up Directory\nAs part of running this notebook, the DataAssistant will create a number of ExpectationSuite configurations in the great_expectations/expectations/tmp directory. Optionally run the following cell to clean up the directory.",
"#import shutil, os\n#shutil.rmtree(\"great_expectations/expectations/tmp\")\n#os.remove(\"great_expectations/expectations/.ge_store_backend_id\")\n#os.remove(\"great_expectations/expectations/taxi_data_suite.json\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
abulbasar/machine-learning
|
SparkML - 02 Credit Default.ipynb
|
apache-2.0
|
[
"Launch spark session behind the jupyter notebook",
"!ls -l $SPARK_HOME\n\n# Note: set SPARK_HOME to Spark binaries before launching the Jupyter session.\nimport os, sys\nSPARK_HOME = os.environ['SPARK_HOME']\nsys.path.insert(0, os.path.join(SPARK_HOME, \"python\", \"lib\", \"py4j-0.10.4-src.zip\"))\nsys.path.insert(0, os.path.join(SPARK_HOME, \"python\"))\n\nfrom pyspark.sql import SparkSession\nspark = SparkSession.builder.getOrCreate()\nprint(\"Spark version: \", spark.version)\n\nspark.sparkContext.uiWebUrl",
"Import libararies",
"from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler\nfrom pyspark.ml.pipeline import Pipeline\n\nfrom pyspark.ml.classification import RandomForestClassifier\nfrom pyspark.ml import evaluation\nfrom pyspark.sql.functions import * \n\nimport pandas as pd\nimport pyspark\nimport numpy as np\n\npd.__version__, np.__version__,pyspark.__version__",
"Check version of the libraries. For this notebook, I am using Spark 2.2.0\nLoad Dataset\nYou can download the dataset from here",
"credit = spark.read.options(header = True, inferSchema = True).csv(\"/data/credit-default.csv\").cache()\nprint(\"Total number of records: \", credit.count())\ncredit.limit(10).toPandas().head() \n# Taking 10 samples records from spark dtaframe into a Pandas dataframe to display the values\n# I prefer the pandas dataframe display to that by spark dataframe show function.",
"View the schema",
"credit.printSchema()",
"As I can see, there are number of columns of string type - checking_balance, credit_history etc.\nLet me define a function that take a catgorical column and pass it through StringIndexer and OneHotEncoder it gives back a dataframe with same column name as the original categorical column. It reurns a new dataframe that contains categorical column replaced by OneHotEncoded vector. \nFind all columns of String datatype\nTransform each string column type into OneHotEncoded value and collect distinct values for each categorical column in list as shown below.",
"cols = credit.columns\ncols.remove(\"default\")\ncols\n\nfrom pyspark.ml import Model, Estimator \n\nclass DFOneHotEncoderModel(Model):\n \n def get_col_labels(self):\n \n cols = []\n feature_columns = [c for c in self.columns if not c == self.label_column]\n \n for col in feature_columns:\n if col in self.categorical_fields:\n string_indexer, _ = self.categorical_fields[col]\n values = string_indexer.labels\n values = values[:-1] if self.drop_last else values\n values = [col + \"_\" + v for v in values]\n cols.extend(values)\n else:\n cols.append(col) \n \n return cols\n \n def transform(self, df, params= None):\n \n for colname in self.categorical_fields:\n string_indexer, one_hot_encoder = self.categorical_fields[colname]\n \n df = string_indexer.transform(df)\n df = df.drop(colname)\n df = df.withColumnRenamed(colname + \"_idx\", colname)\n\n if one_hot_encoder:\n df = one_hot_encoder.transform(df)\n df = df.drop(colname)\n df = df.withColumnRenamed(colname + \"_ohe\", colname)\n \n return df\n \nclass DFOneHotEncoder(Estimator):\n \n def __init__(self, label_column, categorical_fields= None, one_hot = True, drop_last = True):\n self.categorical_fields = None\n self.one_hot = one_hot\n self.drop_last = drop_last\n self.label_column = label_column \n \n if not categorical_fields is None:\n self.categorical_fields = dict([(c, None) for c in categorical_fields]) \n\n def fit(self, df):\n cols = df.columns\n if self.categorical_fields is None:\n self.categorical_fields = dict([(col, None) for col, dtype in df.dtypes if dtype == \"string\"])\n \n \n for colname in self.categorical_fields:\n string_indexer = StringIndexer(inputCol=colname, outputCol= colname + \"_idx\").fit(df)\n \n one_hot_encoder = None\n if self.one_hot:\n one_hot_encoder = OneHotEncoder(inputCol=colname\n , outputCol=colname + \"_ohe\" , dropLast = self.drop_last)\n\n self.categorical_fields[colname] = (string_indexer, one_hot_encoder)\n \n\n model = DFOneHotEncoderModel()\n 
model.categorical_fields = self.categorical_fields\n model.one_hot = self.one_hot\n model.drop_last = self.drop_last\n model.columns = cols\n model.label_column = self.label_column\n \n return model\n \n\nmodel = DFOneHotEncoder(label_column = \"default\").fit(credit)\ndf = model.transform(credit)\nprint(df.dtypes)\nprint(\"\\n\")\nprint(model.get_col_labels())",
"Verify that all columns in df is either of numeric or numeric vector type",
"df.printSchema()",
"Create a list of columns except the label column\nUse a vector assembler to transform all features into a single feature column",
"df_vect = VectorAssembler(inputCols = cols, outputCol=\"features\").transform(df)\ndf_vect.select(\"features\", \"default\").limit(5).toPandas()",
"Let me spot check whether OneHotEncode worked ok.",
"credit.first()\n\npd.DataFrame({\"feature\": model.get_col_labels(), \"value\": df_vect.select(\"features\").first().features})\n\ndf_train, df_test = df_vect.randomSplit(weights=[0.7, 0.3], seed=1)\ndf_train.count(), df_test.count()",
"Build a RandomForest Classifier",
"forest = RandomForestClassifier(labelCol=\"default\", featuresCol=\"features\", seed = 123)\nforest_model = forest.fit(df_train)",
"Run prediction on the whole dataset",
"df_test_pred = forest_model.transform(df_test)\ndf_test_pred.show(5)",
"Confusion Metrics",
"df_test_pred.groupBy(\"default\").pivot(\"prediction\").count().show()",
"Evaluate",
"evaluator = evaluation.MulticlassClassificationEvaluator(labelCol=\"default\", \n metricName=\"accuracy\", predictionCol=\"prediction\")\nevaluator.evaluate(df_test_pred)\n\nprint(\"Total number of features: \", forest_model.numFeatures, \"\\nOrder of feature importance: \\n\")\npd.DataFrame({\"importance\": forest_model.featureImportances.toArray(), \n \"feature\": model.get_col_labels()\n }).sort_values(\"importance\", ascending = False)",
"Building a pipeline",
"from pyspark.ml.pipeline import Pipeline, PipelineModel\n\ncredit = spark.read.options(header = True, inferSchema = True).csv(\"/data/credit-default.csv\").cache()\n\nlabel_col = \"default\"\nfeature_cols = credit.columns\nfeature_cols.remove(label_col)\n\ndf_train, df_test = credit.randomSplit(weights=[0.7, 0.3], seed=1)\n\n\npipeline = Pipeline()\nprint(pipeline.explainParams())\nencoder = DFOneHotEncoder(label_column = label_col)\nvectorizer = VectorAssembler(inputCols = feature_cols, outputCol=\"features\")\nforest = RandomForestClassifier(labelCol=\"default\", featuresCol=\"features\", seed = 123)\n\npipeline.setStages([encoder, vectorizer, forest])\npipelineModel = pipeline.fit(df_train)\ndf_test_pred = pipelineModel.transform(df_test)\nevaluator = evaluation.MulticlassClassificationEvaluator(labelCol=\"default\", \n metricName=\"accuracy\", predictionCol=\"prediction\")\n\naccuracy = evaluator.evaluate(df_test_pred)\nprint(\"Accuracy\", accuracy)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
napsternxg/DataMiningPython
|
Check installs.ipynb
|
gpl-3.0
|
[
"import sys\n\nprint(\"Following are your python version details:\\n%s\" % sys.version)\n\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nsns.set_context(\"poster\")\nsns.set_style(\"ticks\")\n\nprint \"Numpy version: \", np.__version__\nprint \"Pandas version: \", pd.__version__\nprint \"Matplotlib version: \", plt.matplotlib.__version__\nprint \"Seaborn version: \", sns.__version__\n\nx = np.arange(-10,10,0.14)\ny = x**2\nprint \"x.shape: \", x.shape\nprint \"y.shape: \", y.shape",
"Matplotlib checks\nMore details at: http://matplotlib.org/users/pyplot_tutorial.html",
"plt.plot(x,y, marker=\"o\", color=\"r\", label=\"demo\")\nplt.xlabel(\"X axis\")\nplt.ylabel(\"Y axis\")\nplt.title(\"Demo plot\")\nplt.legend()",
"Pandas checks\nMore details at: http://pandas.pydata.org/pandas-docs/stable/tutorials.html",
"df = pd.DataFrame()\ndf[\"X\"] = x\ndf[\"Y\"] = y\ndf[\"G\"] = np.random.randint(1,10,size=x.shape)\ndf[\"E\"] = np.random.randint(1,5,size=x.shape)\ndf.shape\n\ndf.head()\n\ndf.describe()\n\ndf.G = df.G.astype(\"category\")\ndf.E = df.E.astype(\"category\")",
"Seaborn checks\nMore details at: https://stanford.edu/~mwaskom/software/seaborn/index.html",
"sns.barplot(x=\"G\", y=\"Y\", data=df, estimator=np.mean, color=\"dodgerblue\")\n\ng = sns.jointplot(\"X\", \"Y\", data=df, kind=\"reg\",\n color=\"r\", size=7)\n\nsns.pairplot(df, hue=\"E\")\n\n# Initialize a grid of plots with an Axes for each walk\ngrid = sns.FacetGrid(df, col=\"G\", hue=\"E\", col_wrap=4, size=3, legend_out=True)\n\n# Draw a horizontal line to show the starting point\ngrid.map(plt.axhline, y=30, ls=\":\", c=\".5\")\n\n# Draw a line plot to show the trajectory of each random walk\nt = grid.map(plt.plot, \"X\", \"Y\", marker=\"o\", ms=4).add_legend(title=\"E values\")\n#grid.fig.tight_layout(w_pad=1)",
"Sklearn checks\nMore details at: http://scikit-learn.org/stable/index.html",
"from sklearn.linear_model import LinearRegression, LogisticRegression\nfrom sklearn.metrics import classification_report",
"Linear regreession",
"X = df[[\"X\"]].copy()\ny = df[\"Y\"].copy()\nprint \"X.shape: \", X.shape\nprint \"Y.shape: \", y.shape\n\nmodel_linear = LinearRegression()\nmodel_linear.fit(X, y)\n\ny_pred = model_linear.predict(X)\nprint \"Y_pred.shape: \", y_pred.shape\n\nX[\"X^2\"] = X[\"X\"]**2\n\nX.columns\n\nmodel_sqr = LinearRegression()\nmodel_sqr.fit(X, y)\ny_pred_sqr = model_sqr.predict(X)\nprint \"Y_pred_sqr.shape: \", y_pred_sqr.shape\n\nplt.scatter(X[\"X\"], y, marker=\"o\", label=\"data\", alpha=0.5, s=30)\nplt.plot(X[\"X\"], y_pred, linestyle=\"--\", linewidth=1.5, color=\"k\", label=\"fit [linear]\")\nplt.plot(X[\"X\"], y_pred_sqr, linestyle=\"--\", linewidth=1.5, color=\"r\", label=\"fit [square]\")\nplt.xlabel(\"X\")\nplt.ylabel(\"Y\")\nplt.legend()\n\nmodel_linear.coef_\n\nmodel_sqr.coef_",
"Statsmodels\nMore details at: http://statsmodels.sourceforge.net/",
"import statsmodels.api as sm\n\nmodel = sm.OLS(y, X)\nres = model.fit()\nres.summary2()\n\nmodel = sm.OLS.from_formula(\"Y ~ X + I(X**2)\", data=df)\nres = model.fit()\nres.summary2()",
"Logistic regression",
"X = df[[\"X\", \"Y\"]]\ny = df[\"E\"]\n\nmodel = LogisticRegression(multi_class=\"multinomial\", solver=\"lbfgs\")\nmodel.fit(X, y)\ny_pred = model.predict(X)\nprint classification_report(y, y_pred)\n\ny_pred_p = model.predict_proba(X)\n\ny_pred_p[:10]\n\nmodel = sm.MNLogit.from_formula(\"E ~ Y + X\", data=df)\nres = model.fit()\n#res.summary2()\n\nres.summary()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hich28/mytesttxx
|
tests/python/automata.ipynb
|
gpl-3.0
|
[
"from IPython.display import display\nimport spot\nspot.setup()",
"To build an automaton, simply call translate() with a formula, and a list of options to characterize the automaton you want (those options have the same name as the long options name of the ltl2tgba tool, and they can be abbreviated).",
"a = spot.translate('(a U b) & GFc & GFd', 'BA', 'complete'); a",
"The call the spot.setup() in the first cells has installed a default style for the graphviz output. If you want to change this style temporarily, you can call the show(style) method explicitely. For instance here is a vertical layout with the default font of GraphViz.",
"a.show(\"v\")",
"If you want to add some style options to the existing one, pass a dot to the show() function in addition to your own style options:",
"a.show(\".ast\")",
"The translate() function can also be called with a formula object. Either as a function, or as a method.",
"f = spot.formula('a U b'); f\n\nspot.translate(f)\n\nf.translate()",
"When used as a method, all the arguments are translation options. Here is a monitor:",
"f.translate('mon')",
"The following three cells show a formulas for which it makes a difference to select 'small' or 'deterministic'.",
"f = spot.formula('Ga | Gb | Gc'); f\n\nf.translate('ba', 'small').show('.v')\n\nf.translate('ba', 'det').show('v.')",
"Here is how to build an unambiguous automaton:",
"spot.translate('GFa -> GFb', 'unambig')",
"Compare with the standard translation:",
"spot.translate('GFa -> GFb')",
"And here is the automaton above with state-based acceptance:",
"spot.translate('GFa -> GFb', 'sbacc')",
"Some example of running the self-loopization algorithm on an automaton:",
"a = spot.translate('F(a & X(!a &Xb))', \"any\"); a\n\nspot.sl(a)\n\na.is_empty()",
"Reading from file (see automaton-io.ipynb for more examples).",
"%%file example1.aut\nHOA: v1\nStates: 3\nStart: 0\nAP: 2 \"a\" \"b\"\nacc-name: Buchi\nAcceptance: 4 Inf(0)&Fin(1)&Fin(3) | Inf(2)&Inf(3) | Inf(1)\n--BODY--\nState: 0 {3}\n[t] 0\n[0] 1 {1}\n[!0] 2 {0}\nState: 1 {3}\n[1] 0\n[0&1] 1 {0}\n[!0&1] 2 {2}\nState: 2\n[!1] 0\n[0&!1] 1 {0}\n[!0&!1] 2 {0}\n--END--\n\na = spot.automaton('example1.aut')\ndisplay(a.show('.a'))\ndisplay(spot.remove_fin(a).show('.a'))\ndisplay(a.postprocess('TGBA', 'complete').show('.a'))\ndisplay(a.postprocess('BA'))\n\n!rm example1.aut\n\nspot.complete(a)\n\nspot.complete(spot.translate('Ga'))\n\n# Using +1 in the display options is a convient way to shift the \n# set numbers in the output, as an aid in reading the product.\na1 = spot.translate('a W c'); display(a1.show('.bat'))\na2 = spot.translate('a U b'); display(a2.show('.bat+1'))\n# the product should display pairs of states, unless asked not to (using 1).\np = spot.product(a1, a2); display(p.show('.bat')); display(p.show('.bat1'))",
"Explicit determinization after translation:",
"a = spot.translate('FGa')\ndisplay(a)\ndisplay(a.is_deterministic())\n\nspot.tgba_determinize(a).show('.ba')",
"Determinization by translate(). The generic option allows any acceptance condition to be used instead of the default generalized Büchi.",
"aut = spot.translate('FGa', 'generic', 'deterministic'); aut",
"Adding an automatic proposition to all edges",
"import buddy\nb = buddy.bdd_ithvar(aut.register_ap('b'))\nfor e in aut.edges():\n e.cond &= b\naut",
"Adding an atomic proposition to the edge between 0 and 1:",
"c = buddy.bdd_ithvar(aut.register_ap('c'))\nfor e in aut.out(0):\n if e.dst == 1:\n e.cond &= c\naut"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/inm/cmip6/models/sandbox-2/aerosol.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: INM\nSource ID: SANDBOX-2\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:05\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'inm', 'sandbox-2', 'aerosol')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Meteorological Forcings\n5. Key Properties --> Resolution\n6. Key Properties --> Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --> Absorption\n12. Optical Radiative Properties --> Mixtures\n13. Optical Radiative Properties --> Impact Of H2o\n14. Optical Radiative Properties --> Radiative Scheme\n15. Optical Radiative Properties --> Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of aerosol model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of aerosol model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrognostic variables in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of tracers in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre aerosol calculations generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for aerosol advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for aerosol physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the aerosol model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE Type: STRING Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Variables 2D\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Frequency\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Resolution\nResolution in the aersosol model grid\n5.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Transport\nAerosol transport\n7.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of transport in atmosperic aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for aerosol transport modeling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n",
"7.3. Mass Conservation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod used to ensure mass conservation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.4. Convention\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTransport by convention",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prescribed Climatology\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nSpecify the climatology type for aerosol emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n",
"8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and specified via an "other method"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Other Method Characteristics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCharacteristics of the "other method" used for aerosol emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Prescribed Fields Mmr\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed as mass mixing ratios.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Prescribed Fields Mmr\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of optical and radiative properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Optical Radiative Properties --> Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.2. Dust\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Organics\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12. Optical Radiative Properties --> Mixtures\n**\n12.1. External\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there external mixing with respect to chemical composition?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Internal\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.3. Mixing Rule\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixinrg rule",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Optical Radiative Properties --> Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes H2O impact size?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.2. Internal Mixture\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes H2O impact internal mixture?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Optical Radiative Properties --> Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Shortwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of shortwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Longwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of longwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Optical Radiative Properties --> Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of aerosol-cloud interactions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Twomey\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the Twomey effect included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.3. Twomey Minimum Ccn\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Drizzle\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the scheme affect drizzle?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.5. Cloud Lifetime\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the scheme affect cloud lifetime?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Longwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of longwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmosperic aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the Aerosol model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n",
"16.3. Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther model components coupled to the Aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.4. Gas Phase Precursors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of gas phase aerosol precursors.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.5. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.6. Bulk Scheme Species\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of species covered by the bulk scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kriete/cie5703_notebooks
|
week_6_Charlotte.ipynb
|
mit
|
[
"Assignment CIE 5703 - week 6\nImport Libraries",
"import matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n%matplotlib inline\nplt.style.use('ggplot')",
"Charlotte rain gauge dataset 15 min data from 2003 - 2014\nShow the data source locations. Red dots are available gauges, blue cross denotes the selected station. No specific criteria was chosen to select that specific station.\nPlot the positions of gauges. Optional code! Requires additional sources. Not easily copy-paste-able!",
"from mpl_toolkits.basemap import Basemap\n\ndef get_basemap(_resolution):\n return Basemap(projection='merc', llcrnrlat=25, urcrnrlat=38, llcrnrlon=275, urcrnrlon=285, lat_ts=35.,\n resolution=_resolution)\n\npositions = pd.read_csv('./Raw_RG_Data/RG_lat_lon.csv', header=None)\npositions.columns=['lat', 'lon']\n\nplt.figure(figsize=(24,12))\nm = get_basemap('h')\nm.drawcoastlines()\nm.drawcountries()\nm.fillcontinents(color = 'gray')\nm.drawmapboundary()\nfor index, row in positions.iterrows():\n x,y = m(row['lon']+360, row['lat'])\n m.plot(x, y, 'ro', markersize=6)\nx,y = m(positions['lon'][0]+360, positions['lat'][0])\nm.plot(x, y, 'bx', markersize=6)\nm.drawstates()\nm.drawrivers()\nplt.show()",
"Read in data",
"charlotte_rainfall = pd.read_csv('./charlotte_rg_2003-2014.csv', header = None)\n\n#charlotte_rainfall = pd.read_csv('./Raw_RG_Data/Charlotte_CRN_gage_2003.csv', header = None)\n#for i in range(2004,2014):\n# cur_rainfall = pd.read_csv('./Raw_RG_Data/Charlotte_CRN_gage_%d.csv' % i, header = None)\n# charlotte_rainfall = charlotte_rainfall.append(cur_rainfall, ignore_index=True)",
"Format data to year, month, day, hour, min and rainfall & select only ONE rain gauge",
"#charlotte_rainfall = charlotte_rainfall.iloc[:,:6]\ncharlotte_rainfall.columns = [\"year\",\"month\",\"day\", \"hour\", \"min\", \"Rainfall\"]\ncharlotte_rainfall.loc[:,'dt'] = pd.to_datetime(dict(year=charlotte_rainfall['year'], month=charlotte_rainfall['month'], day=charlotte_rainfall['day'], hour=charlotte_rainfall['hour'], minute=charlotte_rainfall['min']))\ncharlotte_rainfall.index=charlotte_rainfall['dt']\n\ncharlotte_rainfall.head()",
"Plot rain data as read",
"plt.plot(charlotte_rainfall['dt'], charlotte_rainfall[\"Rainfall\"])\nplt.ylabel('mm/15min')\nplt.gcf().autofmt_xdate()",
"Replace invalid data with NaNs and plot again",
"charlotte_rainfall[\"Rainfall\"] = charlotte_rainfall[\"Rainfall\"].replace(-99, np.nan)\n\nplt.plot(charlotte_rainfall['dt'], charlotte_rainfall[\"Rainfall\"])\nplt.ylabel('mm/15min')\nplt.gcf().autofmt_xdate()\n\ncharlotte_rainfall.head()",
"Resample the 10-min dataset to 24h accumulated rainfall data",
"charlotte_24h_rainfall = pd.DataFrame()\ncharlotte_24h_rainfall['mean_rain'] = charlotte_rainfall.Rainfall.resample('D').mean()\ncharlotte_24h_rainfall['accum_rain'] = charlotte_rainfall.Rainfall.resample('D').sum()\n\ncharlotte_24h_rainfall.head()\n\nplt.plot(charlotte_24h_rainfall[\"accum_rain\"])\nplt.ylabel('mm/24h')\nplt.gcf().autofmt_xdate()\n\nplt.plot(charlotte_24h_rainfall[\"mean_rain\"])\nplt.ylabel(r'mm/15min ($\\varnothing$ of 24h)')\nplt.gcf().autofmt_xdate()",
"Resample 15 min data to 1h accumulated dataset",
"charlotte_1h_rainfall = pd.DataFrame()\ncharlotte_1h_rainfall['mean_rain'] = charlotte_rainfall.Rainfall.resample('H').mean()\ncharlotte_1h_rainfall['accum_rain'] = charlotte_rainfall.Rainfall.resample('H').sum()\n\ncharlotte_1h_rainfall.head()\n\nplt.plot(charlotte_1h_rainfall[\"accum_rain\"])\nplt.ylabel('mm/h')\nplt.gcf().autofmt_xdate()\n\nplt.plot(charlotte_1h_rainfall[\"mean_rain\"])\nplt.ylabel(r'mm/15min ($\\varnothing$ of 1h)')\nplt.gcf().autofmt_xdate()",
"Select only summer months (April - Sept)",
"charlotte_summer_1h_rainfall = charlotte_1h_rainfall.loc[(charlotte_1h_rainfall.index.month>=4) & (charlotte_1h_rainfall.index.month<=9)]\n\nplt.plot(charlotte_summer_1h_rainfall[\"accum_rain\"])\nplt.ylabel('mm/h')\nplt.gcf().autofmt_xdate()",
"Select only winter months (Oct - Mar)",
"mask_start = (charlotte_1h_rainfall.index.month >= 1) & (charlotte_1h_rainfall.index.month <= 3)\nmask_end = (charlotte_1h_rainfall.index.month >= 10) & (charlotte_1h_rainfall.index.month <= 12)\nmask = mask_start | mask_end\n\ncharlotte_winter_1h_rainfall = charlotte_1h_rainfall.loc[mask]\n\nplt.plot(charlotte_winter_1h_rainfall[\"accum_rain\"])\nplt.ylabel('mm/h')\nplt.gcf().autofmt_xdate()\n\ncharlotte_winter_1h_rainfall.head()",
"Resample 15 min dataset to monthly accumulated dataset",
"charlotte_monthly_rainfall = pd.DataFrame()\ncharlotte_monthly_rainfall['mean_rain'] = charlotte_rainfall.Rainfall.resample('M').mean()\ncharlotte_monthly_rainfall['accum_rain'] = charlotte_rainfall.Rainfall.resample('M').sum()\n\nplt.plot(charlotte_monthly_rainfall[\"accum_rain\"])\nplt.ylabel('mm/month')\nplt.gcf().autofmt_xdate()\n\nplt.plot(charlotte_monthly_rainfall[\"mean_rain\"])\nplt.ylabel(r'mm/15min ($\\varnothing$ per month)')\nplt.gcf().autofmt_xdate()",
"Answering the assignments\n1. General statistics for 24-hour and 15-min datasets: compute mean, standard deviation, skewness; plot histograms\n15 min dataset\nMean, standard deviation and skewness of the 15 min dataset",
"print('Mean: %s' % str(charlotte_rainfall.Rainfall.mean()))\nprint('Std: %s' % str(charlotte_rainfall.Rainfall.std()))\nprint('Skew: %s' % str(charlotte_rainfall.Rainfall.skew()))",
"Histogram of the data",
"charlotte_rainfall.Rainfall.hist(bins = 100)\nplt.xlabel('mm/15min')\nplt.gca().set_yscale(\"log\")",
"Histogram of the data without zeros",
"cur_data = charlotte_rainfall.Rainfall.loc[charlotte_rainfall.Rainfall>0]\nhist_d = plt.hist(cur_data, bins=100)\nplt.xlabel('mm/15min')\nplt.gca().set_yscale(\"log\")",
"24h accumulated dataset\nMean, standard deviation and skewness of 24h accumulated dataset",
"print('Mean: %s' % str(charlotte_24h_rainfall.accum_rain.mean()))\nprint('Std: %s' % str(charlotte_24h_rainfall.accum_rain.std()))\nprint('Skew: %s' % str(charlotte_24h_rainfall.accum_rain.skew()))",
"Histogram of the dataset",
"charlotte_24h_rainfall.accum_rain.hist(bins = 100)\nplt.xlabel('mm/24h')\nplt.gca().set_yscale(\"log\")\n\ncharlotte_24h_rainfall.mean_rain.hist(bins = 100)\nplt.xlabel(r'mm/15min ($\\varnothing$ per 24h)')\nplt.gca().set_yscale(\"log\")",
"Histogram without zeros",
"cur_data = charlotte_24h_rainfall.accum_rain.loc[charlotte_24h_rainfall.accum_rain>0]\nhist_d = plt.hist(cur_data, bins=100)\nplt.xlabel('mm/24h')\nplt.gca().set_yscale(\"log\")",
"2. a. Analysis of seasonal cycles: create boxplots for monthly totals across all years\nBoxplot of monthly totals",
"charlotte_monthly_rainfall['mon'] = charlotte_monthly_rainfall.index.month\ncharlotte_monthly_rainfall['year'] = charlotte_monthly_rainfall.index.year\ncharlotte_monthly_rainfall.boxplot(column=['accum_rain'], by='mon', sym='+')\nplt.ylabel('mm/month')",
"Or on a yearly scale:",
"charlotte_monthly_rainfall.dropna().boxplot(column=['accum_rain'], by='year', sym='+')\nplt.ylabel('mm/month')\nplt.gcf().autofmt_xdate()",
"2. b. Analysis of diurnal cycles: create boxplots for hourly totals for entire dataseries",
"charlotte_1h_rainfall['hour'] = charlotte_1h_rainfall.index.hour\ncharlotte_1h_rainfall.boxplot(column=['accum_rain'], by='hour', sym='+')\nplt.ylabel('mm/h')",
"Neglecting events < 1mm/h",
"cur_df = charlotte_1h_rainfall.copy()\ncur_df.loc[cur_df.accum_rain<1, 'accum_rain'] = np.nan\ncur_df.boxplot(column=['accum_rain'], by='hour', sym='+')\nplt.ylabel('mm/h')",
"Neglecting events < 3mm/h",
"cur_df = charlotte_1h_rainfall.copy()\ncur_df.loc[cur_df.accum_rain<3, 'accum_rain'] = np.nan\ncur_df.boxplot(column=['accum_rain'], by='hour', sym='+')\nplt.ylabel('mm/h')",
"2. c. Variation of diurnal cycles with seasons: create boxplots for hourly totals for summer season (April – September) and for winter season (October-March)\nMerge summer hourly data",
"pd.options.mode.chained_assignment = None # default='warn'\ncharlotte_summer_1h_rainfall['hour'] = charlotte_summer_1h_rainfall.index.hour\ncharlotte_summer_1h_rainfall.boxplot(column=['accum_rain'], by='hour', sym='+')",
"Neglecting events <1mm/hour",
"cur_df = charlotte_summer_1h_rainfall.copy()\ncur_df.loc[cur_df.accum_rain<1, 'accum_rain'] = np.nan\ncur_df.boxplot(column=['accum_rain'], by='hour', sym='+')\nplt.ylabel('mm/h')",
"Neglecting events <3mm/hour",
"cur_df = charlotte_summer_1h_rainfall.copy()\ncur_df.loc[cur_df.accum_rain<3, 'accum_rain'] = np.nan\ncur_df.boxplot(column=['accum_rain'], by='hour', sym='+')\nplt.ylabel('mm/h')",
"Merge hourly winter data",
"charlotte_winter_1h_rainfall['hour'] = charlotte_winter_1h_rainfall.index.hour\ncharlotte_winter_1h_rainfall.boxplot(column=['accum_rain'], by='hour', sym='+')\nplt.ylabel('mm/h')",
"Neglecting events <1mm/h",
"cur_df = charlotte_winter_1h_rainfall.copy()\ncur_df.loc[cur_df.accum_rain<1, 'accum_rain'] = np.nan\ncur_df.boxplot(column=['accum_rain'], by='hour', sym='+')\nplt.ylabel('mm/h')",
"Neglecting events <3mm/h",
"cur_df = charlotte_winter_1h_rainfall.copy()\ncur_df.loc[cur_df.accum_rain<3, 'accum_rain'] = np.nan\ncur_df.boxplot(column=['accum_rain'], by='hour', sym='+')\nplt.ylabel('mm/h')",
"2. d. Diurnal cycles of intense storm events: Count nr of exceedances above 10 mm/h threshold for each hour of the day, for entire data series and for summer months only\nShow rainfall events > 10mm /h over entire 1h accumulated dataset",
"charlotte_1h_exceeds = charlotte_1h_rainfall.accum_rain[charlotte_1h_rainfall.accum_rain>10]",
"Amount of hourly events",
"print(len(charlotte_1h_exceeds))\n\ny = np.array(charlotte_1h_exceeds)\nN = len(y)\nx = range(N)\nwidth = 1\nplt.bar(x, y, width)\nplt.ylabel('mm/h')",
"10 mm/h events in summer periods",
"charlotte_1h_exceeds_summer = charlotte_summer_1h_rainfall.accum_rain[charlotte_summer_1h_rainfall.accum_rain>10]\n\ny = np.array(charlotte_1h_exceeds_summer)\nN = len(y)\nx = range(N)\nwidth = 1\nplt.bar(x, y, width)\nplt.ylabel('mm/h')",
"Amount of hourly events",
"print(len(charlotte_1h_exceeds_summer))",
"3. Fit GEV-distribution for POT values in the time series\n3. a. Create plots: histogram and GEV fit and interpret",
"plt.plot(charlotte_1h_exceeds)\nplt.gcf().autofmt_xdate()\n\ncharlotte_1h_exceeds.hist(bins=100)\n\nfrom scipy.stats import genextreme\n\nx = np.linspace(0, 80, 1000)\ny = np.array(charlotte_1h_exceeds[:])\n\nnp.seterr(divide='ignore', invalid='ignore')\ngenextreme.fit(y)\n\npdf = plt.plot(x, genextreme.pdf(x, *genextreme.fit(y)))\npdf_hist = plt.hist(y, bins=50, normed=True, histtype='stepfilled', alpha=0.8)",
"3. c. Compute rainfall amounts associated with return periods of 1 year, 10 years and 100 years",
"genextreme.ppf((1-1/1), *genextreme.fit(y))\n\ngenextreme.ppf((1-1/10), *genextreme.fit(y))\n\ngenextreme.ppf((1-1/100), *genextreme.fit(y))",
"Update 10.10.2017\nBlock maxima & GEV",
"from scipy.stats import genpareto\n\ntemp_monthly = charlotte_1h_rainfall.groupby(pd.TimeGrouper(freq='M'))\nblock_max_y = np.array(temp_monthly.accum_rain.max())\nprint(block_max_y)\n\nprint(len(block_max_y))\n\nx = np.linspace(0, 100, 1000)\n\npdf = plt.plot(x, genextreme.pdf(x, *genextreme.fit(block_max_y)))\npdf_hist = plt.hist(block_max_y, bins=50, normed=True, histtype='stepfilled', alpha=0.8)",
"GEV and block maxima of monthly maxima of 1h data",
"genextreme.fit(block_max_y)\n\ngenextreme.ppf((1-1/10), *genextreme.fit(block_max_y))",
"POT & GPD",
"pdf_bm = plt.plot(x, genpareto.pdf(x, *genpareto.fit(y)))\npdf_hist_bm = plt.hist(y, bins=100, normed=True, histtype='stepfilled', alpha=0.8)",
"GPD and POT of data>10mm/h",
"genpareto.fit(y)\n\ngenpareto.ppf((1-1/10), *genpareto.fit(y))",
"Boxplot of POT values",
"event_occurences = pd.DataFrame(charlotte_1h_exceeds)\nevent_occurences['hour'] = event_occurences.index.hour\nevent_occurences.boxplot(column=['accum_rain'], by='hour', sym='+')",
"Number of occurences per hour",
"event_occurences.hour.value_counts(sort=False)\n\n# plt.plot(asd.hour.value_counts(sort=False))\n\ncur_hist = plt.hist(event_occurences.hour, bins=24, histtype='stepfilled')\nplt.xticks(range(24))\nplt.xlabel('hour')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rain1024/underthesea
|
labs/lab_unicode.ipynb
|
gpl-3.0
|
[
"Vietnamese Unicode\nThis lab intends to demonstrate Vietnamese Unicode normalization problems.",
"def analyze_characters(s):\n \"\"\" core function: analyze characters\n print utf8 number and unicode number of each characters in text\n\n :param unicode s: input string\n :type s: unicode \n \"\"\"\n print u\" utf8 unicode\"\n for i in s:\n unicode_number = hex(ord(i))[2:].zfill(4)\n utf8_number = i.encode(\"utf-8\").encode(\"hex\")\n if utf8_number in [\"cc80\", \"cc82\", \"cc83\", \"cca3\"]:\n format_string = u\"{:3s} -> {:>6s} -> {:>7s}\"\n else:\n format_string = u\"{:2s} -> {:>6s} -> {:>7s}\"\n print format_string.format(i, utf8_number, unicode_number)",
"Combining (tổ hợp) and precomposed (dựng sẵn) Unicode",
"# combining (decomposed) unicode\ns = u\"cộng hòa xã hội\"\nanalyze_characters(s)\n\n# precomposed unicode\ns = u\"cộng hòa xã hội\"\nanalyze_characters(s)",
"After normalization",
"import unicodedata\n\n# combining (decomposed) unicode\ns = u\"cộng hòa xã hội\"\n\nanalyze_characters(s)\n\nanalyze_characters(unicodedata.normalize(\"NFC\", s))\n\nanalyze_characters(unicodedata.normalize(\"NFD\", s))\n\nanalyze_characters(unicodedata.normalize(\"NFKC\", s))\n\nanalyze_characters(unicodedata.normalize(\"NFKD\", s))",
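For completeness, the same check in Python 3 (the cells above are Python 2) needs only the stdlib unicodedata module. The precomposed character 'ộ' is a single code point under NFC but three code points under NFD:

```python
import unicodedata

s = "ộ"  # U+1ED9, precomposed (dựng sẵn)

nfc = unicodedata.normalize("NFC", s)
nfd = unicodedata.normalize("NFD", s)

print(len(nfc))  # 1 code point
print(len(nfd))  # 3 code points: 'o' followed by two combining marks
print([hex(ord(c)) for c in nfd])
```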
"One symbol, many characters\nLetter <b>Đ</b> has many characters in unicode\n<table class=\"table\">\n<tr>\n<td><b>Character</b></td>\n<td><b>UTF-Code</b></td>\n<td><b>Unicode</b></td>\n</tr>\n<tr>\n<td>Ð</td>\n<td>C3 90</td>\n<td><a href=\"https://unicode-table.com/en/00D0/\">U+00D0</a></td>\n</tr>\n<tr>\n<td>Đ</td>\n<td>C4 90</td>\n<td><a href=\"https://unicode-table.com/en/0110/\">U+0110</a></td>\n</tr>\n<tr>\n<td>Ɖ</td>\n<td>C6 89</td>\n<td><a href=\"https://unicode-table.com/en/0189/\">U+0189</a></td>\n</tr>\n<tr>\n<td>ᴆ</td>\n<td>E1 B4 86</td>\n<td><a href=\"https://unicode-table.com/en/1D06/\">U+1D06</a></td>\n</tr>\n</table>",
"text = u\"ÐĐƉᴆ\"\nanalyze_characters(text)\n\ndef map_character_to_tcvn(c):\n inverse_mapping_table = {\n # c390\n \"Ð\": [\n \"Đ\" # c490\n ]\n }\n mapping_table = {}\n for key, characters in inverse_mapping_table.iteritems():\n for character in characters:\n mapping_table[character] = key\n analyze_characters(c)\n print mapping_table\n print c in mapping_table\n if c in mapping_table:\n return mapping_table[c]\n else:\n return c\n\ndef map_text_to_tcvn(text):\n \"\"\"\n @param unicode text: converted to normalize nfc form\n \"\"\"\n return [map_character_to_tcvn(c) for c in text]\n\ndef convert_to_tcvn(text):\n \"\"\"\n @param text: unicode\n \"\"\"\n text = unicodedata.normalize(\"NFC\", text)\n text = map_text_to_tcvn(text)\n return text\n\nanalyze_characters(convert_to_tcvn(text))",
"Convert to TCVN 6609",
"from locale import LC_ALL, setlocale\nprint setlocale(LC_ALL,\"Vietnamese\")\n\nfrom string import letters\nprint letters"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mbakker7/timml
|
notebooks/timml_notebook1_sol.ipynb
|
mit
|
[
"TimML Notebook 1\nA well in uniform flow\nConsider a well in the middle aquifer of a three aquifer system. Aquifer properties are given in Table 1. The well is located at $(x,y)=(0,0)$, the discharge is $Q=10,000$ m$^3$/d and the radius is 0.2 m. There is a uniform flow from West to East with a gradient of 0.002. The head is fixed to 20 m at a distance of 10,000 m downstream of the well. Here is the cookbook recipe to build this model:\n\nImport pylab to use numpy and plotting: from pylab import *\nSet figures to be in the notebook with %matplotlib notebook\nImport everything from TimML: from timml import *\nCreate the model and give it a name, for example ml with the command ml = ModelMaq(kaq, z, c) (substitute the correct lists for kaq, z, and c).\nEnter the well with the command w = Well(ml, xw, yw, Qw, rw, layers), where the well is called w.\nEnter uniform flow with the command Uflow(ml, slope, angle).\nEnter the reference head with Constant(ml, xr, yr, head, layer).\nSolve the model ml.solve()\n\nTable 1: Aquifer data for exercise 1\n|Layer |$k$ (m/d)|$z_b$ (m)|$z_t$|$c$ (days)|\n|-------------|--------:|--------:|----:|---------:|\n|Aquifer 0 | 10 | -20 | 0 | - |\n|Leaky Layer 1| - | -40 | -20 | 4000 | \n|Aquifer 1 | 20 | -80 | -40 | - |\n|Leaky Layer 2| - | -90 | -80 | 10000 | \n|Aquifer 2 | 5 | -140 | -90 | - ||",
"%matplotlib inline\nfrom pylab import *\nfrom timml import *\nfigsize=(8, 8)\n\nml = ModelMaq(kaq=[10, 20, 5],\n z=[0, -20, -40, -80, -90, -140], \n c=[4000, 10000])\nw = Well(ml, xw=0, yw=0, Qw=10000, rw=0.2, layers=1)\nConstant(ml, xr=10000, yr=0, hr=20, layer=0)\nUflow(ml, slope=0.002, angle=0)\nml.solve()",
"Questions:\nExercise 1a\nWhat are the leakage factors of the aquifer system?",
"print('The leakage factors of the aquifers are:')\nprint(ml.aq.lab)",
"Exercise 1b\nWhat is the head at the well?",
"print('The head at the well is:')\nprint(w.headinside())",
"Exercise 1c\nCreate a contour plot of the head in the three aquifers. Use a window with lower left hand corner $(x,y)=(−3000,−3000)$ and upper right hand corner $(x,y)=(3000,3000)$. Notice that the heads in the three aquifers are almost equal at three times the largest leakage factor.",
"ml.contour(win=[-3000, 3000, -3000, 3000], ngr=50, layers=[0, 1, 2], levels=10, \n legend=True, figsize=figsize)",
"Exercise 1d\nCreate a contour plot of the head in aquifer 1 with labels along the contours. Labels are added when the labels keyword argument is set to True. The number of decimal places can be set with the decimals keyword argument, which is zero by default.",
"ml.contour(win=[-3000, 3000, -3000, 3000], ngr=50, layers=[1], levels=np.arange(30, 45, 1), \n labels=True, legend=['layer 1'], figsize=figsize)",
"Exercise 1e\nCreate a contour plot with a vertical cross-section below it. Start three pathlines from $(x,y)=(-2000,-1000)$ at levels $z=-120$, $z=-60$, and $z=-10$. Try a few other starting locations.",
"win=[-3000, 3000, -3000, 3000]\nml.plot(win=win, orientation='both', figsize=figsize)\nml.tracelines(-2000 * ones(3), -1000 * ones(3), [-120, -60, -10], hstepmax=50, \n win=win, orientation='both')\nml.tracelines(0 * ones(3), 1000 * ones(3), [-120, -50, -10], hstepmax=50, \n win=win, orientation='both')",
"Exercise 1f\nAdd an abandoned well that is screened in both aquifer 0 and aquifer 1, located at $(x, y) = (100, 100)$ and create contour plot of all aquifers near the well (from (-200,-200) till (200,200)). What are the discharge and the head at the abandoned well? Note that you have to solve the model again!",
"ml = ModelMaq(kaq=[10, 20, 5],\n z=[0, -20, -40, -80, -90, -140], \n c=[4000, 10000])\nw = Well(ml, xw=0, yw=0, Qw=10000, rw=0.2, layers=1)\nConstant(ml, xr=10000, yr=0, hr=20, layer=0)\nUflow(ml, slope=0.002, angle=0)\nwabandoned = Well(ml, xw=100, yw=100, Qw=0, rw=0.2, layers=[0, 1])\nml.solve()\nml.contour(win=[-200, 200, -200, 200], ngr=50, layers=[0, 2], \n levels=20, color=['C0', 'C1', 'C2'], legend=True, figsize=figsize)\n\nprint('The head at the abandoned well is:')\nprint(wabandoned.headinside())\nprint('The discharge at the abandoned well is:')\nprint(wabandoned.discharge())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fluffy-hamster/A-Beginners-Guide-to-Python
|
A Beginners Guide to Python/10. Strings.ipynb
|
mit
|
[
"Strings\nIn previous lectures we have seen strings being used numerous times. Today we are going to go into a bit more detail. First, some terminology:\n\n'single-quote character' refers to unicode character 39, --> { ' }\n'double-quote character' refers to unicode character 34, --> { \" }\nand if I say 'quote-character' then I refer to both/either of the above.\n\nOkay, let's begin!\nWhat is a string?\nA string is basically a bunch of unicode characters, this makes them the ideal data type for storing written text. The Syntax:\n{quote-character} unicode characters {MATCHING quote-character}\n\nExamples:\n\n\"hello\" # Valid syntax\n'hello' # also valid syntax\n\"hello' # invalid; the opening and closing quote characters don't match.\n\nAnd just as with numbers, we can also convert other data-types to strings using the str function.\nWhy do both single and double quote characters work?\nThe reason Python accepts the use of single OR double quote characters is to make it easier to deal with text that actually contains quote-characters. Suppose for instance we wanted to store the following sentence as a string:\n\n\"Ahhh!!!! spiders!\", cried the monster. \"Do not worry\" said our hero, \"I have a sharp spoon\".\n\nwow, I'm hooked; with epic character development like that maybe I should be writing novels instead of programming tutorials?\nAnyway, I digress. The point is if I try to save this sentence with double-quotes, problems occur. But I can save the string as is if I wrap my string with single-quote characters. As demonstrated by the next two code snippets.",
"# wrapping text with double quotes...\ncool_story_bro = \"\"Ahhh!!!! spiders!\", cried the monster. \"Do not worry\" said our hero, \"I have a sharp spoon\".\"\nprint(cool_story_bro)\n\n# wrapping text with single quotes...\ncool_story_bro = '\"Ahhh!!!! spiders!\", cried the monster.\"Do not worry\" said our hero, \"I have a sharp spoon\".'\n\nprint(cool_story_bro)",
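The code points mentioned in the terminology above are easy to confirm with the built-in ord and chr functions:

```python
# double-quote is code point 34, single-quote is 39
print(ord('"'))  # 34
print(ord("'"))  # 39
print(chr(34), chr(39))
```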
"Because I messed up, you have homework.\nWhen I first wrote the example it took me about 10 minutes to actually get it working, I just couldn't figure out what the problem was! \nIt turns out that in the original draft my spider-maiming hero said the phrase:\n“don’t worry”\n\nThe ' character in don't was messing up my attempt to enclose the whole string within single quotes. Here, let me show you:",
"cool_story_bro = '\"Ahhh!!!! spiders!\", cried the monster.\"Don't worry\" said our hero, \"I have a sharp spoon\".'\n\nprint(cool_story_bro)",
"So what was my genius solution? Well obviously we cheat and change the text!\n“don’t worry” --> “do not worry”\n\nProblem...err….solved? \nAnyway, Python does have ways of handling such inputs, your homework for this week is to figure out how to make my intended string work – if it takes you less than 10 minutes then congratulations, you figured it out faster than I did. :)\nThe str function\nJust like the int() and float() functions, the str function is a good way to convert one data-type to another. If I have an integer and I want to store it as a string I can simply call the str() function, and Python will do the rest. The code snippet below will take any float/integer and return a string representation of that number.",
"def num_to_string(number):\n \"\"\"takes a number of type float/int, returns string of that number\"\"\"\n return str(number)\n\n# For an explanation of the next three lines of code, please see the 'calling functions' lecture. \na = num_to_string(4555549099511) # large integer\nb = num_to_string(-0.0044352334) # negative float\nc = num_to_string(4.3e10) # scientific notation\n\nprint(a, type(a))\nprint(b, type(b))\nprint(c, type(c)) \n\n# and notice that we can use the float/int methods to convert the strings back to numbers just as easily...\nprint( float(c), type(float(c)) )",
"Why might you want to do this?\nOne reason you might want to store a number as a string is that converting a number to a string gives you access to more 'methods', which may make some processes easier.\nFor example, let's suppose I want to find out what the first two digits of the number are. Converting a number to a string makes this process easy since strings are iterable and can be indexed into, whereas numbers cannot. That's a lot of technical jargon right now, but don't worry, we shall cover indexing later.",
"def first_two_digits(number):\n n = str(number) # < -- convert number to string\n n = n[:2] # < -- get the first two characters via slicing (more on slicing later).\n n = int(n) # < -- converting n back to a number.\n \n return n\n\nprint(first_two_digits(100000))\nprint(first_two_digits(933323))\nprint(first_two_digits(11))",
"Escape Characters\nText frequently has ‘meta-data’ attached to it, by meta-data in this context I’m mainly talking about things like HTML tags; font colour, size, stylings (e.g. bold, italic), and so on. \nThe normal process for handling this is to have the code embedded into the text itself. In other words, the text itself contains characters that have to be parsed as commands. \nBut for some applications you might want to have the ability to literally print every character passed in. For example, directly below we have two lines of text, a pink heading and some text with tags. Crucially these two pieces of text are the same; the difference in what we see is the difference between literally printing the HTML tags versus executing them.\n\n<h1 style=\"color:pink;\">This is a heading</h1>\n<h1 style="color:pink;">This is a heading</h1>\n\nSo, how does the computer know to interpret text in one way and not the other? Well, the solution is something called “escape characters”. \nJust for completeness, to show you the tags to get pink text I had to use several HTML escape characters, I typed the following monstrosity:\n\n&lt;h1 style=&quot;color:pink;&quot;&gt;This is a heading&lt;/h1&gt;\n\nThat's a complex line of jargon I couldn't have done without the help of this tool. So yeah, escaping in HTML can be a bit tricky but fortunately for us escape characters in Python are a bit easier to work with.\nConsider the following lines of code.",
"a = \"\\\\\"\nb = \"\\\"",
"At first glance this code seems perfectly fine, right? The variable 'A' should be the string \\\\ right? And variable 'B' should just be a single backslash. But we don't get that, Python throws an error!\nWhat’s going on here? Well, the reason is that the backslash character (\\) is an escape character in Python. To actually get Python to literally print \"\\\\\" or \"\\\" we would have to type out:",
"a = \"\\\\\\\\\" # double \\\\\nb = \"\\\\\" # single \\\n\nprint(a, b)\n# Note that I didn't have to do any escaping in the comments, thats because Python just ignores comments!",
"It is important to be aware of these Python features because if you don't know this stuff you can easily be 'caught out' the moment you start trying to parse complex strings. In what follows I have a (hopefully humorous) example of why you should care about this stuff. Let’s talk pathing.",
"directory = \"C:\\Documents\\pictures\\selfies\"\nprint(directory)",
"So let's imagine we are building some sort of code that saves a directory as a string for use later on. If we print this particular directory we get no surprises, it just works as we would expect.\nBut hold up, what if I wanted to send my girlfriend a naughty photo! Inside of my 'selfies' folder I have a 'nudes' folder. And inside the 'nudes' folder I have a plethora of Jpegs; my little sausage pictured from a variety of different angles wearing an assortment of novelty hats.\n<img src=\"http://i.imgur.com/wVrnjgc.png\" style=\"width:300px;height:170px;\" ALIGN=\"right\">\n\n“Wait, did he just say little?” \n\nOn this occasion however, let's pretend I'm not a total weirdo (debatable), I want to send her something arty, something classy.\n[scurries through folder...]\n[finds ... 'tasteful.jpeg' ]\n\nAlright, let's code that up and see what happens...",
"directory2 = \"C:\\Documents\\pictures\\selfies\\nudes\\tasteful.jpeg\" \nprint(directory2)",
"Oh dear! It seems like Python doesn't want me to send dick-pics over the internet after all! That's a pity, a big pity (wink wink). \nWhat has gone wrong? Well, basically every time Python sees a backslash character it looks to see what the next character is. In the case of directory above, we have the following: \\D, \\p, \\s, \\n, \\t\nThe first time we ran the code we didn't get any errors because \\D \\p were not special 'commands'. However, both \\n and \\t are special commands in Python. These commands get executed and we get a different result.\nNew line...\nAs an aside, \\n is a very useful command to use within strings. It starts a new line, and splitting data up into separate lines frequently comes in useful. \n\"{some text}\\n{more text}\"\n\nSimple example:",
"greeting = \"hello\\nworld\"\nprint(greeting)\n\n# using \\t (which is tab)\ngreeting = \"hello\\tworld\"\nprint(greeting)\n\n# There are other commands of course, but I feel that most of them are not useful enough to be worth teaching. ",
"In short, \\n is a newline, and \\t is tab. Thus, if we are trying to save/open files/folders on Windows systems that start with t or n we can end up having some difficulties. \nThere are a few solutions to this problem. If you are dealing with directories specifically then the best choice is to use the os module. This module will fix a number of these issues for you (the os module works on Linux and Windows machines).\nFor example:",
"import os\n\ndirectory = \"C:\\Documents\\pictures\\selfies\"\nphoto_name = \"santa_hat2.jpeg\" \n\n## the bad way\npath_to_photo_1 = directory + \"\\\\\" + photo_name\n\n## the good way\npath_to_photo_2 = os.path.join(directory, photo_name)\n\nprint(path_to_photo_1)\nprint(path_to_photo_2)",
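Another stdlib option (not covered in this lecture) is pathlib: PureWindowsPath manipulates Windows-style paths on any operating system, and a raw string stops Python from interpreting \n or \t inside the literal. The folder and file names below just reuse the example above:

```python
from pathlib import PureWindowsPath

directory = r"C:\Documents\pictures\selfies"  # raw string: backslashes are kept literally
photo = PureWindowsPath(directory) / "santa_hat2.jpeg"

print(photo)  # C:\Documents\pictures\selfies\santa_hat2.jpeg
```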
"However, the above method only works for file systems, how can we solve this problem in a more general way?\nRaw strings...\nSo what can we do if we want Python to ignore these commands? Well, the simplest solution is to put an 'r' before the string starts. The 'r' here tells Python we want a raw string.",
"string1 = r\"\\nevery\\nword\\nis\\non\\na\\nnew\\nline\" # notice the 'r' BEFORE the double-quote mark?\nstring2 = \"\\nevery\\nword\\nis\\non\\na\\nnew\\nline\" # without the 'r', for comparison. \n\nprint(\"The raw string version looks like this:\\n\", string1)\nprint(\"\\n\") \nprint(\"The normal version of string looks like this:\\n\", string2)",
"A Few More Operations...\nStrings are a huge topic in Python, and we are going to have to come back to them later. But for now, let me leave you with a few basic operations you can perform on strings...",
"# Repeating strings\n \n# {string} * {integer}\n\n# Examples:\n\nprint(\"a\" * 10)\nprint(\"abc\" * 3) \n\n# Concatenation \n\n# {string} + {string}\n\n# Examples:\n\nprint(\"ab\" + \"c\")\nprint(\"a\" + \"b\" + \"c\")\n\n# Membership\n\n# {string} in {string} \n\n# Examples:\n\nprint(\"a\" in \"ab\")\nprint(\"a\" in \"cb\")\nprint(\"abc\" in \"aabbcc\") # must be an exact match. ",
"HOMEWORK ASSIGNMENT\nCreate a variable named \"cool_story_bro\" and assign the following text to it as a string:\n\n\"Ahhh!!!! spiders!\", cried the monster.\"Don't worry\" said our hero, \"I have a sharp spoon\".\n\nOnce complete, print it.",
"# Your answer here…"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gtrichards/PHYS_T480
|
NonlinearDimensionReduction.ipynb
|
mit
|
[
"Nonlinear Dimensionality Reduction\nG. Richards (2016), based on materials from Ivezic, Connolly, Miller, Leighly, and VanderPlas.\nToday we will talk about the concepts of \n* manifold learning\n* nonlinear dimensionality reduction\nSpecifically using the following algorithms\n* local linear embedding (LLE)\n* isometric mapping (IsoMap)\n* t-distributed Stochastic Neighbor Embedding (t-SNE)\nLet's start by echoing the brief note of caution given in Adam Miller's notebook: \"astronomers will often try to derive physical insight from PCA eigenspectra or eigentimeseries, but this is not advisable as there is no physical reason for the data to be linearly and orthogonally separable\". Moreover, physical components are (generally) positive definite. So, PCA is great for dimensional reduction, but for doing physics there are generally better choices.\nWhile NMF \"solves\" the issue of negative components, it is still a linear process. For data with non-linear correlations, an entire field, known as Manifold Learning and nonlinear dimensionality reduction, has been developed, with several algorithms available via the sklearn.manifold module. \nFor example, if your data set looks like this:\n\nThen PCA is going to give you something like this. \n\nClearly not very helpful!\nWhat you really want is something more like the results below. For more examples see\nVanderplas & Connolly 2009\n\nLocal Linear Embedding\nLocal Linear Embedding attempts to embed high-$D$ data in a lower-$D$ space. Crucially it also seeks to preserve the geometry of the local \"neighborhoods\" around each point. In the case of the \"S\" curve, it seeks to unroll the data.\nThe steps are\nStep 1: define local geometry\n- local neighborhoods determined from $k$ nearest neighbors.\n- for each point calculate weights that reconstruct a point from its $k$ nearest\nneighbors via\n$$\n\\begin{equation}\n \\mathcal{E}_1(W) = \\left|X - WX\\right|^2,\n\\end{equation}\n$$\nwhere $X$ is an $N\\times K$ matrix and $W$ is an $N\\times N$ matrix that minimizes the reconstruction error.\nEssentially this is finding the hyperplane that describes the local surface at each point within the data set. So, imagine that you have a bunch of square tiles and you are trying to tile the surface with them.\nStep 2: embed within a lower dimensional space\n- set all $W_{ij}=0$ except when point $j$ is one of the $k$ nearest neighbors of point $i$.\n- $W$ becomes very sparse for $k \\ll N$ (only $Nk$ entries in $W$ are non-zero). \n- minimize\n$$\n\\begin{equation}\n \\mathcal{E}_2(Y) = \\left|Y - W Y\\right|^2,\n\\end{equation}\n$$\nwith $W$ fixed to find an $N$ by $d$ matrix ($d$ is the new dimensionality).\nStep 1 requires a nearest-neighbor search.\nStep 2 requires an\neigenvalue decomposition of the matrix $C_W \\equiv (I-W)^T(I-W)$.\nLLE has been applied to data as diverse as galaxy spectra, stellar spectra, and photometric light curves. It was introduced by Roweis & Saul (2000).\nScikit-Learn's call to LLE is as follows, with a more detailed example already being given above.",
"import numpy as np\nfrom sklearn.manifold import LocallyLinearEmbedding\nX = np.random.normal(size=(1000,2)) # 1000 points in 2D\nR = np.random.random((2,10)) # projection matrix\nX = np.dot(X,R) # now a 2D linear manifold in 10D space\nk = 5 # Number of neighbors to use in fit\nn = 2 # Number of dimensions to fit\nlle = LocallyLinearEmbedding(k,n)\nlle.fit(X)\nproj = lle.transform(X) # 1000x2 projection of the data",
"See what LLE does for the digits data, using the 7 nearest neighbors and 2 components.",
"# Execute this cell to load the digits sample\n%matplotlib inline\nimport numpy as np\nfrom sklearn.datasets import load_digits\nfrom matplotlib import pyplot as plt\ndigits = load_digits()\ngrid_data = np.reshape(digits.data[0], (8,8)) #reshape to 8x8\nplt.imshow(grid_data, interpolation = \"nearest\", cmap = \"bone_r\")\nprint grid_data\nX = digits.data\ny = digits.target\n\n#LLE\nfrom sklearn.manifold import LocallyLinearEmbedding\n# Complete",
"Isometric Mapping\nIsoMap is based on the multi-dimensional scaling (MDS) framework. It was introduced in the same volume of Science as the article above; see Tenenbaum, de Silva, & Langford (2000).\nGeodesic curves are used to recover non-linear structure.\nIn Scikit-Learn IsoMap is implemented as follows:",
"# Execute this cell\nimport numpy as np\nfrom sklearn.manifold import Isomap\nXX = np.random.normal(size=(1000,2)) # 1000 points in 2D\nR = np.random.random((2,10)) # projection matrix\nXX = np.dot(XX,R) # XX is now a 2D linear manifold in 10D space\nk = 5 # number of neighbors\nn = 2 # number of dimensions\niso = Isomap(k,n)\niso.fit(XX)\nproj = iso.transform(XX) # 1000x2 projection of the data",
"Try 7 neighbors and 2 dimensions on the digits data.",
"# IsoMap\nfrom sklearn.manifold import Isomap\n# Complete",
"t-SNE\nAlthough t-distributed Stochastic Neighbor Embedding (t-SNE) is not discussed in the book, Scikit-Learn does have a t-SNE implementation and it is well worth mentioning this manifold learning algorithm too. SNE itself was developed by Hinton & Roweis, with the \"$t$\" part being added by van der Maaten & Hinton. It works like the other manifold learning algorithms. Try it on the digits data.",
"# t-SNE\nfrom sklearn.manifold import TSNE\n# Complete",
"You'll know if you have done it right if you understand Adam Miller's comment \"Holy freakin' smokes. That is magic. (It's possible we just solved science).\"\nPersonally, I think that some exclamation points may be needed in there!\nWhat's even more illuminating is to make the plot using the actual digits to plot the points. Then you can see why certain digits are alike or split into multiple regions. Can you explain the patterns you see here?",
"# Execute this cell\nfrom matplotlib import offsetbox\n\n#----------------------------------------------------------------------\n# Scale and visualize the embedding vectors\ndef plot_embedding(X):\n x_min, x_max = np.min(X, 0), np.max(X, 0)\n X = (X - x_min) / (x_max - x_min)\n\n plt.figure()\n ax = plt.subplot(111)\n for i in range(X.shape[0]):\n #plt.text(X[i, 0], X[i, 1], str(digits.target[i]), color=plt.cm.Set1(y[i] / 10.), fontdict={'weight': 'bold', 'size': 9})\n plt.text(X[i, 0], X[i, 1], str(digits.target[i]), color=plt.cm.nipy_spectral(y[i]/9.))\n\n\n shown_images = np.array([[1., 1.]]) # just something big\n for i in range(digits.data.shape[0]):\n dist = np.sum((X[i] - shown_images) ** 2, 1)\n if np.min(dist) < 4e-3:\n # don't show points that are too close\n continue\n shown_images = np.r_[shown_images, [X[i]]]\n imagebox = offsetbox.AnnotationBbox(offsetbox.OffsetImage(digits.images[i], cmap=plt.cm.gray_r), X[i])\n ax.add_artist(imagebox)\n plt.xticks([]), plt.yticks([])\n \nplot_embedding(X_reduced)\nplt.show()",
"With the remainder of time in class today, play with the arguments of the algorithms that we have discussed this week and/or try running them on a different data set. For example the iris data set or one of the other samples of data that are included with Scikit-Learn. Or maybe have a look through some of these public data repositories:\n\nhttps://github.com/caesar0301/awesome-public-datasets?utm_content=buffer4245d&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer\nhttp://www.datasciencecentral.com/m/blogpost?id=6448529%3ABlogPost%3A318739\nhttp://www.kdnuggets.com/2015/04/awesome-public-datasets-github.html"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Yu-Group/scikit-learn-sandbox
|
jupyter/backup_deprecated_nbs/19A_check_weighted_RF.ipynb
|
mit
|
[
"Check the output of weighted random forest",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfrom sklearn.datasets import load_breast_cancer\nimport numpy as np\nfrom functools import reduce\n\n# Import our custom utilities\nfrom imp import reload\nfrom utils import irf_jupyter_utils\nfrom utils import irf_utils\nreload(irf_jupyter_utils)\nreload(irf_utils)\n\n# Import RF related functions\nfrom sklearn.ensemble import RandomForestClassifier",
"When feature_weight = None, the output should match a standard Random Forest.\nThe original RF feature importances are stored in correct_feature_importance below.",
"feature_weight0 = None\n\nX_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(n_estimators=1000, \n feature_weight=feature_weight0)\n\nall_rf_tree_data = irf_utils.get_rf_tree_data(rf=rf, X_train=X_train, X_test=X_test, y_test=y_test)\n#all_rf_tree_data\n\n# Print the feature importance\nfeature_importances_rank_idx0 = all_rf_tree_data['feature_importances_rank_idx']\nfeature_importances0 = all_rf_tree_data['feature_importances']\n\nprint(feature_importances0)\n\ncorrect_feature_importance =[ 0.04153319, 0.0136872, 0.05287382, 0.05537257, 0.00571718, 0.01101297,\n 0.04525511, 0.08925701, 0.00407582, 0.00337926, 0.01301454, 0.00396505,\n 0.01022279, 0.03255195, 0.00498767, 0.00438016, 0.00771317, 0.00459407,\n 0.0037973, 0.00448982, 0.10938616, 0.01690837, 0.14415417, 0.1204331,\n 0.01276175, 0.01472586, 0.03019196, 0.12449026, 0.00858072, 0.00648698]",
"When feature_weight is uniform, it should give the same feature importance.",
"feature_weight1 = [1]*30\n\nX_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(n_estimators=1000, \n feature_weight=feature_weight1)\n\nall_rf_tree_data = irf_utils.get_rf_tree_data(rf=rf, X_train=X_train, X_test=X_test, y_test=y_test)\n#all_rf_tree_data\n\n#feature importance \nfeature_importances_rank_idx1 = all_rf_tree_data['feature_importances_rank_idx']\nfeature_importances1 = all_rf_tree_data['feature_importances']\n\nprint(feature_importances1)",
"When feature_weight is weighted by the RF feature importances, it should give roughly the same feature ranking.",
"feature_weight2 = correct_feature_importance\n\nX_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(n_estimators=1000, \n feature_weight=feature_weight2)\n\nall_rf_tree_data = irf_utils.get_rf_tree_data(rf=rf, X_train=X_train, X_test=X_test, y_test=y_test)\n#all_rf_tree_data\n\n#feature importance \nfeature_importances_rank_idx2 = all_rf_tree_data['feature_importances_rank_idx']\nfeature_importances2 = all_rf_tree_data['feature_importances']\nfor f in range(X_train.shape[1]):\n print(\"%2d. feature %2d (%10.9f) and feature %2d (%10.9f)\" % (f + 1\n , feature_importances_rank_idx1[f]\n , feature_importances1[feature_importances_rank_idx1[f]]\n , feature_importances_rank_idx2[f]\n , feature_importances2[feature_importances_rank_idx2[f]]))\n\ndef test_iRF_weight1():\n #Check when label is random, whether the feature importance of every feature is the same.\n n_samples = 1000\n n_features = 10\n random_state_classifier = 2018\n np.random.seed(random_state_classifier)\n X_train = np.random.uniform(low=0, high=1, size=(n_samples, n_features))\n y_train = np.random.choice([0, 1], size=(n_samples,), p=[.5, .5])\n X_test = np.random.uniform(low=0, high=1, size=(n_samples, n_features))\n y_test = np.random.choice([0, 1], size=(n_samples,), p=[.5, .5])\n all_rf_weights, all_K_iter_rf_data, \\\n all_rf_bootstrap_output, all_rit_bootstrap_output, \\\n stability_score = irf_utils.run_iRF(X_train=X_train,\n X_test=X_test,\n y_train=y_train,\n y_test=y_test,\n K=5,\n n_estimators=20,\n B=30,\n random_state_classifier=2018,\n propn_n_samples=.2,\n bin_class_type=1,\n M=20,\n max_depth=5,\n noisy_split=False,\n num_splits=2,\n n_estimators_bootstrap=5)\n assert np.max(all_rf_weights['rf_weight5'])<.135\ntest_iRF_weight1()\n\ndef test_iRF_weight2():\n #Check when feature 1 fully predicts the label, its importance should be 1.\n n_samples = 1000\n n_features = 10\n random_state_classifier = 2018\n np.random.seed(random_state_classifier)\n X_train = np.random.uniform(low=0, high=1, size=(n_samples, n_features))\n y_train = np.random.choice([0, 1], size=(n_samples,), p=[.5, .5])\n X_test = np.random.uniform(low=0, high=1, size=(n_samples, n_features))\n y_test = np.random.choice([0, 1], size=(n_samples,), p=[.5, .5])\n # first feature is very important\n X_train[:,1] = X_train[:,1] + y_train\n X_test[:,1] = X_test[:,1] + y_test\n all_rf_weights, all_K_iter_rf_data, \\\n all_rf_bootstrap_output, all_rit_bootstrap_output, \\\n stability_score = irf_utils.run_iRF(X_train=X_train,\n X_test=X_test,\n y_train=y_train,\n y_test=y_test,\n K=5,\n n_estimators=20,\n B=30,\n random_state_classifier=2018,\n propn_n_samples=.2,\n bin_class_type=1,\n M=20,\n max_depth=5,\n noisy_split=False,\n num_splits=2,\n n_estimators_bootstrap=5)\n print(all_rf_weights['rf_weight5'])\n assert all_rf_weights['rf_weight5'][1] == 1\ntest_iRF_weight2()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/cccma/cmip6/models/sandbox-3/toplevel.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: CCCMA\nSource ID: SANDBOX-3\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:47\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cccma', 'sandbox-3', 'toplevel')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Flux Correction\n3. Key Properties --> Genealogy\n4. Key Properties --> Software Properties\n5. Key Properties --> Coupling\n6. Key Properties --> Tuning Applied\n7. Key Properties --> Conservation --> Heat\n8. Key Properties --> Conservation --> Fresh Water\n9. Key Properties --> Conservation --> Salt\n10. Key Properties --> Conservation --> Momentum\n11. Radiative Forcings\n12. Radiative Forcings --> Greenhouse Gases --> CO2\n13. Radiative Forcings --> Greenhouse Gases --> CH4\n14. Radiative Forcings --> Greenhouse Gases --> N2O\n15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\n16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\n17. Radiative Forcings --> Greenhouse Gases --> CFC\n18. Radiative Forcings --> Aerosols --> SO4\n19. Radiative Forcings --> Aerosols --> Black Carbon\n20. Radiative Forcings --> Aerosols --> Organic Carbon\n21. Radiative Forcings --> Aerosols --> Nitrate\n22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\n23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\n24. Radiative Forcings --> Aerosols --> Dust\n25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\n26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\n27. Radiative Forcings --> Aerosols --> Sea Salt\n28. Radiative Forcings --> Other --> Land Use\n29. Radiative Forcings --> Other --> Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop level overview of coupled model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of coupled model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Genealogy\nGenealogy and history of the model\n3.1. Year Released\nIs Required: TRUE Type: STRING Cardinality: 1.1\nYear the model was released",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. CMIP3 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP3 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. CMIP5 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP5 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Previous Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPreviously known as",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.4. Components Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.5. Coupler\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nOverarching coupling framework for model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5. Key Properties --> Coupling\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of coupling in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Atmosphere Double Flux\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nWhere are the air-sea fluxes calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.4. Atmosphere Relative Winds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.5. Energy Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.6. Fresh Water Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Conservation --> Heat\nGlobal heat conservation properties of the model\n7.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.6. Land Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation --> Fresh Water\nGlobal fresh water conservation properties of the model\n8.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh water is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Runoff\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how runoff is distributed and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Iceberg Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Endoreic Basins\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Snow Accumulation\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Key Properties --> Conservation --> Salt\nGlobal salt conservation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Key Properties --> Conservation --> Momentum\nGlobal momentum conservation properties of the model\n10.1. Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how momentum is conserved in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Radiative Forcings --> Greenhouse Gases --> CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Radiative Forcings --> Greenhouse Gases --> CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14. Radiative Forcings --> Greenhouse Gases --> N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\nTropospheric ozone forcing\n15.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Radiative Forcings --> Greenhouse Gases --> CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Equivalence Concentration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDetails of any equivalence concentrations used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Radiative Forcings --> Aerosols --> SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Radiative Forcings --> Aerosols --> Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Radiative Forcings --> Aerosols --> Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Radiative Forcings --> Aerosols --> Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre radiative effects of aerosols on ice clouds represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre radiative effects of aerosols on ice clouds represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.3. RFaci From Sulfate Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs radiative forcing from aerosol-cloud interactions from sulfate aerosol only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"24. Radiative Forcings --> Aerosols --> Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Radiative Forcings --> Aerosols --> Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Radiative Forcings --> Other --> Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28.2. Crop Change Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nLand use change represented via crop change only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Radiative Forcings --> Other --> Solar\nSolar forcing\n29.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow solar forcing is provided",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
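The ES-DOC cells in the notebook above all repeat one pattern: DOC.set_id() selects a property, then DOC.set_value() records one or more values for it (one value for BOOLEAN properties with cardinality 1.1, possibly several for ENUM properties with cardinality 1.N). A minimal stand-in for the DOC helper (hypothetical; the real pyesdoc document API is richer) sketches that behaviour:

```python
class MinimalDoc:
    """Toy stand-in for the ES-DOC `DOC` helper used in the cells above.

    set_id() selects the property that subsequent set_value() calls
    target; set_value() appends a value to that property.
    """

    def __init__(self):
        self.properties = {}
        self._current_id = None

    def set_id(self, property_id):
        # Select the property; later set_value() calls write to it.
        self._current_id = property_id
        self.properties.setdefault(property_id, [])

    def set_value(self, value):
        # ENUM properties (cardinality 1.N) may receive several values,
        # BOOLEAN properties (cardinality 1.1) exactly one.
        self.properties[self._current_id].append(value)


DOC = MinimalDoc()

# ENUM property: provided via emissions ("E" is one of the valid choices).
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
DOC.set_value("E")

# BOOLEAN property: exactly one True/False value.
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
DOC.set_value(False)
```

The real tooling additionally validates values against the per-property choice lists shown in the comments of each cell.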
subutai/nupic
|
src/nupic/frameworks/viz/examples/Demo.ipynb
|
agpl-3.0
|
[
"Visualizing Networks\nThe following demonstrates basic use of nupic.frameworks.viz.NetworkVisualizer to visualize a network.\nBefore you begin, you will need to install the otherwise optional dependencies. From the root of nupic repository:\npip install --user .[viz]\nSetup a simple network so we have something to work with:",
"from nupic.engine import Network, Dimensions\n\n# Create Network instance\nnetwork = Network()\n\n# Add three TestNode regions to network\nnetwork.addRegion(\"region1\", \"TestNode\", \"\")\nnetwork.addRegion(\"region2\", \"TestNode\", \"\")\nnetwork.addRegion(\"region3\", \"TestNode\", \"\")\n\n# Set dimensions on first region\nregion1 = network.getRegions().getByName(\"region1\")\nregion1.setDimensions(Dimensions([1, 1]))\n\n# Link regions\nnetwork.link(\"region1\", \"region2\", \"UniformLink\", \"\")\nnetwork.link(\"region2\", \"region1\", \"UniformLink\", \"\")\nnetwork.link(\"region1\", \"region3\", \"UniformLink\", \"\")\nnetwork.link(\"region2\", \"region3\", \"UniformLink\", \"\")\n\n# Initialize network\nnetwork.initialize()",
"Render with nupic.frameworks.viz.NetworkVisualizer, which takes as input any nupic.engine.Network instance:",
"from nupic.frameworks.viz import NetworkVisualizer\n\n# Initialize Network Visualizer\nviz = NetworkVisualizer(network)\n\n# Render to dot (stdout)\nviz.render()",
"That's interesting, but not necessarily useful if you don't understand dot. Let's capture that output and do something else:",
"from nupic.frameworks.viz import DotRenderer\nfrom io import StringIO\n\noutp = StringIO()\nviz.render(renderer=lambda: DotRenderer(outp))",
"outp now contains the rendered output, render to an image with graphviz:",
"# Render dot to image\nfrom graphviz import Source\nfrom IPython.display import Image\n\nImage(Source(outp.getvalue()).pipe(\"png\"))",
"In the example above, each three-columned rectangle is a discrete region, the user-defined name for which is in the middle column. The left-hand and right-hand columns are respective inputs and outputs, the names for which, e.g. \"bottomUpIn\" and \"bottomUpOut\", are specific to the region type. The arrows indicate links from the outputs of one region to the inputs of another.\nI know what you're thinking. That's a cool trick, but nobody cares about your contrived example. I want to see something real!\nContinuing below, I'll instantiate an HTMPredictionModel and visualize it. In this case, I'll use one of the \"hotgym\" examples.",
"from nupic.frameworks.opf.model_factory import ModelFactory\n\n# Note: parameters copied from examples/opf/clients/hotgym/simple/model_params.py\nmodel = ModelFactory.create({'aggregationInfo': {'hours': 1, 'microseconds': 0, 'seconds': 0, 'fields': [('consumption', 'sum')], 'weeks': 0, 'months': 0, 'minutes': 0, 'days': 0, 'milliseconds': 0, 'years': 0}, 'model': 'HTMPrediction', 'version': 1, 'predictAheadTime': None, 'modelParams': {'sensorParams': {'verbosity': 0, 'encoders': {'timestamp_timeOfDay': {'type': 'DateEncoder', 'timeOfDay': (21, 1), 'fieldname': u'timestamp', 'name': u'timestamp_timeOfDay'}, u'consumption': {'resolution': 0.88, 'seed': 1, 'fieldname': u'consumption', 'name': u'consumption', 'type': 'RandomDistributedScalarEncoder'}, 'timestamp_weekend': {'type': 'DateEncoder', 'fieldname': u'timestamp', 'name': u'timestamp_weekend', 'weekend': 21}}, 'sensorAutoReset': None}, 'spParams': {'columnCount': 2048, 'spVerbosity': 0, 'spatialImp': 'cpp', 'synPermConnected': 0.1, 'seed': 1956, 'numActiveColumnsPerInhArea': 40, 'globalInhibition': 1, 'inputWidth': 0, 'synPermInactiveDec': 0.005, 'synPermActiveInc': 0.04, 'potentialPct': 0.85, 'boostStrength': 3.0}, 'spEnable': True, 'clParams': {'implementation': 'cpp', 'alpha': 0.1, 'verbosity': 0, 'steps': '1,5', 'regionName': 'SDRClassifierRegion'}, 'inferenceType': 'TemporalMultiStep', 'tmEnable': True, 'tmParams': {'columnCount': 2048, 'activationThreshold': 16, 'pamLength': 1, 'cellsPerColumn': 32, 'permanenceInc': 0.1, 'minThreshold': 12, 'verbosity': 0, 'maxSynapsesPerSegment': 32, 'outputType': 'normal', 'initialPerm': 0.21, 'globalDecay': 0.0, 'maxAge': 0, 'permanenceDec': 0.1, 'seed': 1960, 'newSynapseCount': 20, 'maxSegmentsPerCell': 128, 'temporalImp': 'cpp', 'inputWidth': 2048}, 'trainSPNetOnlyIfRequested': False}})",
"Same deal as before, create a NetworkVisualizer instance, render to a buffer, then to an image, and finally display it inline.",
"# New network, new NetworkVisualizer instance\nviz = NetworkVisualizer(model._netInfo.net)\n\n# Render to Dot output to buffer\noutp = StringIO()\nviz.render(renderer=lambda: DotRenderer(outp))\n\n# Render Dot to image, display inline\nImage(Source(outp.getvalue()).pipe(\"png\"))",
"In these examples, I'm using graphviz to render an image from the dot document in Python, but you may want to do something else. dot is a generic and flexible graph description language and there are many tools for working with dot files."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
|
dcgan-svhn/DCGAN_Exercises.ipynb
|
mit
|
[
"Deep Convolutional GANs\nIn this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2015 and has seen impressive results in generating new images; you can read the original paper here.\nYou'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.\n\nSo, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator; otherwise the rest of the implementation is the same.",
"%matplotlib inline\n\nimport pickle as pkl\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.io import loadmat\nimport tensorflow as tf\n\n!mkdir data",
"Getting the data\nHere you can download the SVHN dataset. Run the cell above and it'll download to your machine.",
"from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\ndata_dir = 'data/'\n\nif not isdir(data_dir):\n raise Exception(\"Data directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(data_dir + \"train_32x32.mat\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:\n urlretrieve(\n 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',\n data_dir + 'train_32x32.mat',\n pbar.hook)\n\nif not isfile(data_dir + \"test_32x32.mat\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:\n urlretrieve(\n 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',\n data_dir + 'test_32x32.mat',\n pbar.hook)",
"These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.",
"trainset = loadmat(data_dir + 'train_32x32.mat')\ntestset = loadmat(data_dir + 'test_32x32.mat')",
"Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.",
"idx = np.random.randint(0, trainset['X'].shape[3], size=36)\nfig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)\nfor ii, ax in zip(idx, axes.flatten()):\n ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\nplt.subplots_adjust(wspace=0, hspace=0)",
"Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.",
"def scale(x, feature_range=(-1, 1)):\n    # scale to (0, 1)\n    x = ((x - x.min())/(255 - x.min()))\n    \n    # scale to feature_range\n    min, max = feature_range\n    x = x * (max - min) + min\n    return x\n\nclass Dataset:\n    def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):\n        split_idx = int(len(test['y'])*(1 - val_frac))\n        self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]\n        self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]\n        self.train_x, self.train_y = train['X'], train['y']\n        \n        self.train_x = np.rollaxis(self.train_x, 3)\n        self.valid_x = np.rollaxis(self.valid_x, 3)\n        self.test_x = np.rollaxis(self.test_x, 3)\n        \n        if scale_func is None:\n            self.scaler = scale\n        else:\n            self.scaler = scale_func\n        self.shuffle = shuffle\n        \n    def batches(self, batch_size):\n        if self.shuffle:\n            idx = np.arange(len(self.train_x))\n            np.random.shuffle(idx)\n            self.train_x = self.train_x[idx]\n            self.train_y = self.train_y[idx]\n        \n        n_batches = len(self.train_y)//batch_size\n        for ii in range(0, len(self.train_y), batch_size):\n            x = self.train_x[ii:ii+batch_size]\n            y = self.train_y[ii:ii+batch_size]\n            \n            yield self.scaler(x), y",
"Network Inputs\nHere, just creating some placeholders like normal.",
"def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')\n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z",
"Generator\nHere you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.\nWhat's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.\nYou keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper:\n\nNote that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3. \n\nExercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.",
"def generator(z, output_dim, reuse=False, alpha=0.2, training=True):\n    with tf.variable_scope('generator', reuse=reuse):\n        # First fully connected layer\n        x = None  # TODO: dense layer reshaped to a deep, narrow layer (e.g. 4x4x1024), then batch norm, leaky ReLU, and transposed convolutions\n        \n        # Output layer, 32x32x3\n        logits = None  # TODO: final transposed convolution producing 32x32x3 (no batch norm)\n        \n        out = tf.tanh(logits)\n        \n        return out",
"Discriminator\nHere you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.\nYou'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.\nNote: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.\n\nExercise: Build the convolutional network for the discriminator. The input is a 32x32x3 images, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.",
"def discriminator(x, reuse=False, alpha=0.2):\n    with tf.variable_scope('discriminator', reuse=reuse):\n        # Input layer is 32x32x3\n        x = None  # TODO: strided convolutional layers with batch norm (except the first) and leaky ReLU\n        \n        logits = None  # TODO: flatten and apply a dense layer with a single output unit\n        out = None  # TODO: tf.sigmoid(logits)\n        \n        return out, logits",
"Model Loss\nCalculating the loss like before, nothing new here.",
"def model_loss(input_real, input_z, output_dim, alpha=0.2):\n \"\"\"\n Get the loss for the discriminator and generator\n :param input_real: Images from the real dataset\n :param input_z: Z input\n :param out_channel_dim: The number of channels in the output image\n :return: A tuple of (discriminator loss, generator loss)\n \"\"\"\n g_model = generator(input_z, output_dim, alpha=alpha)\n d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)\n d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)\n\n d_loss_real = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))\n d_loss_fake = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))\n g_loss = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))\n\n d_loss = d_loss_real + d_loss_fake\n\n return d_loss, g_loss",
"Optimizers\nNot much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.",
"def model_opt(d_loss, g_loss, learning_rate, beta1):\n \"\"\"\n Get optimization operations\n :param d_loss: Discriminator loss Tensor\n :param g_loss: Generator loss Tensor\n :param learning_rate: Learning Rate Placeholder\n :param beta1: The exponential decay rate for the 1st moment in the optimizer\n :return: A tuple of (discriminator training operation, generator training operation)\n \"\"\"\n # Get weights and bias to update\n t_vars = tf.trainable_variables()\n d_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n g_vars = [var for var in t_vars if var.name.startswith('generator')]\n\n # Optimize\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)\n g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)\n\n return d_train_opt, g_train_opt",
"Building the model\nHere we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.",
"class GAN:\n    def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):\n        tf.reset_default_graph()\n        \n        self.input_real, self.input_z = model_inputs(real_size, z_size)\n        \n        self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,\n                                              real_size[2], alpha=alpha)\n        \n        self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)",
"Here is a function for displaying generated images.",
"def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):\n fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, \n sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.axis('off')\n img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)\n ax.set_adjustable('box-forced')\n im = ax.imshow(img, aspect='equal')\n \n plt.subplots_adjust(wspace=0, hspace=0)\n return fig, axes",
"And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.",
"def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):\n saver = tf.train.Saver()\n sample_z = np.random.uniform(-1, 1, size=(72, z_size))\n\n samples, losses = [], []\n steps = 0\n\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for x, y in dataset.batches(batch_size):\n steps += 1\n\n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n\n # Run optimizers\n _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})\n _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})\n\n if steps % print_every == 0:\n # At the end of each epoch, get the losses and print them out\n train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})\n train_loss_g = net.g_loss.eval({net.input_z: batch_z})\n\n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g))\n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n\n if steps % show_every == 0:\n gen_samples = sess.run(\n generator(net.input_z, 3, reuse=True, training=False),\n feed_dict={net.input_z: sample_z})\n samples.append(gen_samples)\n _ = view_samples(-1, samples, 6, 12, figsize=figsize)\n plt.show()\n\n saver.save(sess, './checkpoints/generator.ckpt')\n\n with open('samples.pkl', 'wb') as f:\n pkl.dump(samples, f)\n \n return losses, samples",
"Hyperparameters\nGANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.\n\nExercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3, this means it is correctly classifying images as fake or real about 50% of the time.",
"real_size = (32,32,3)\nz_size = 100\nlearning_rate = 0.001\nbatch_size = 64\nepochs = 1\nalpha = 0.01\nbeta1 = 0.9\n\n# Create the network\nnet = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)\n\n# Load the data and train the network here\ndataset = Dataset(trainset, testset)\nlosses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))\n\nfig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator', alpha=0.5)\nplt.plot(losses.T[1], label='Generator', alpha=0.5)\nplt.title(\"Training Losses\")\nplt.legend()\n\n_ = view_samples(-1, samples, 6, 12, figsize=(10,5))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
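The notebook's scale helper maps raw SVHN pixel values into the [-1, 1] range matched by the generator's tanh output. A self-contained sketch of the same rescaling:

```python
import numpy as np

def scale(x, feature_range=(-1, 1)):
    """Rescale pixel data into feature_range, mirroring the notebook's helper.

    First squashes x into [0, 1] using its own minimum and the 255 pixel
    maximum, then linearly maps that interval onto (lo, hi).
    """
    x = (x - x.min()) / (255 - x.min())
    lo, hi = feature_range
    return x * (hi - lo) + lo

pixels = np.array([0.0, 127.5, 255.0])
scaled = scale(pixels)  # 0 maps to -1, 255 maps to 1
```

Matching the generator's output range matters: if the real images fed to the discriminator lived in [0, 255] while fakes lived in [-1, 1], the discriminator could separate them trivially by magnitude alone.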
hpparvi/PyTransit
|
notebooks/A2_Supersampling.ipynb
|
gpl-2.0
|
[
"Supersampling\nThe photometric datapoints we're dealing with are always integrations over time. If the exposure time is long, such as with Kepler long cadence data, the light curve features are smeared, and we need to include the effect from the long exposure time into the light curve model. This can be done easily by supersampling the model (that is, calculating the model for several time samples inside each exposure and averaging the results). \nPyTransit implements a basic supersampler in pytransit.supersampler.SuperSampler to facilitate transit model supersampling. You don't generally need to initialize or call the supersampler manually, since the transit model uses it automatically, but knowing how to use it may come handy in some more advanced situations.\nSuperSampler is initialized as SuperSampler(nsamples, exptime), where nsamples is the number of subsamples to create per exposure, and exptime is the exposure duration.",
"%pylab inline\nfrom pytransit.supersampler import SuperSampler\n\nsampler = SuperSampler(nsamples=5, exptime=1)",
"The subsample positions are calculated as\n$$s_i = t + \\mathbf{s} \\times t_e$$\nwhere $\\mathbf{s}$ are the subsample positions normalized to [-0.5, 0.5], and can be accessed from the sampler",
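The formula above can be reproduced with plain NumPy for illustration. This is a sketch assuming evenly spaced subsamples centred on the exposure; PyTransit's exact spacing may differ.

```python
import numpy as np

def subsample_positions(nsamples):
    # evenly spaced offsets normalized to [-0.5, 0.5], centred on the exposure
    return (np.arange(nsamples) + 0.5) / nsamples - 0.5

def subsample_times(t, nsamples, exptime):
    # s_i = t + s * t_e for each exposure centre time t
    s = subsample_positions(nsamples)
    return (np.asarray(t, dtype=float)[:, None] + s * exptime).ravel()

print(subsample_positions(5))            # offsets -0.4, -0.2, 0.0, 0.2, 0.4
print(subsample_times([0.0, 1.0], 5, 1.0))
```

With nsamples=5 and a unit exposure time, each exposure centre expands into five evenly spaced sample times whose mean equals the centre time.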
"sampler.sample_positions",
"After the initialization, the SuperSampler offers two methods \n\nsample(times[npt]) $\\rightarrow$ array[nsamples*npt]\naverage(flux[nsamples*npt]) $\\rightarrow$ array[npt]\n\nSample is used to create a set of supersampled time stamps, where times is a 1D array storing the exposure center times. After the model has been evaluated for the supersampled time stamps, average is used to compute the per-exposure averaged model.",
"time_o = arange(0,3).astype('d')\ntime_s = sampler.sample(time_o)\n\nplot(time_o, full_like(time_o, 0.99), 'o')\nplot(time_s, full_like(time_s, 1.01), '.')\nylim(0.98,1.03);\n\ndef lcfun(time):\n return (time > 1.1).astype('d')\n\nplot(time_o, lcfun(time_o), 'o')\nplot(time_s, lcfun(time_s), '.')\nplot(time_o, sampler.average(lcfun(time_s)), 'ko', ms=10)",
"<center> © 2017 Hannu Parviainen </center>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs-l10n
|
site/ko/tutorials/keras/overfit_and_underfit.ipynb
|
apache-2.0
|
[
"Copyright 2018 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.",
"Overfit and underfit\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/keras/overfit_and_underfit\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">View on TensorFlow.org</a> </td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/overfit_and_underfit.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/overfit_and_underfit.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View source on GitHub</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/keras/overfit_and_underfit.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">Download notebook</a></td>\n</table>\n\nAs always, the code in this example uses the tf.keras API; you can learn more about it in the TensorFlow Keras guide.\nIn the two previous examples (classifying movie reviews and predicting house prices) we saw that the model's performance on the validation set peaked after training for a number of epochs and then started decreasing.\nIn other words, the model overfit the training set. Learning how to deal with overfitting is essential. Although it is often possible to achieve high performance on the training set, what we really want is a model that generalizes well to a test set (or to data it has never seen before).\nThe opposite of overfitting is underfitting. Underfitting occurs when there is still room for improvement on the test data. It can happen for several reasons: the model is too simple, is over-regularized, or has simply not been trained long enough. It means the network has not learned the relevant patterns in the training data.\nIf you train the model too long, it will start to overfit and learn patterns from the training set that do not generalize to the test set. You need to strike a balance between overfitting and underfitting. To do so, we will learn how to train for an appropriate number of epochs.\nThe best way to prevent overfitting is to use more training data. A model trained on more data naturally generalizes better. When more data is not available, the next best solution is to use techniques such as regularization, which place constraints on the quantity and type of information the model can store. If a network can only memorize a small number of patterns, the optimization process will force it to focus on the most prominent patterns, which have a better chance of generalizing well.\nIn this notebook we will explore two widely used regularization techniques, weight regularization and dropout, and use them to improve a classification model.\nSetup\nBefore getting started, import the necessary packages:",
"import tensorflow as tf\nfrom tensorflow import keras\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nprint(tf.__version__)\n\n!pip install git+https://github.com/tensorflow/docs\n\nimport tensorflow_docs as tfdocs\nimport tensorflow_docs.modeling\nimport tensorflow_docs.plots\n\nfrom IPython import display\nfrom matplotlib import pyplot as plt\n\nimport numpy as np\n\nimport pathlib\nimport shutil\nimport tempfile\n\n\nlogdir = pathlib.Path(tempfile.mkdtemp())/\"tensorboard_logs\"\nshutil.rmtree(logdir, ignore_errors=True)",
"The Higgs dataset\nThe goal of this tutorial is not to do particle physics, so don't dwell on the details of the dataset. It contains 11,000,000 examples, each with 28 features and a binary class label.",
"gz = tf.keras.utils.get_file('HIGGS.csv.gz', 'http://mlphysics.ics.uci.edu/data/higgs/HIGGS.csv.gz')\n\nFEATURES = 28",
"The tf.data.experimental.CsvDataset class can be used to read csv records directly from a gzip file, with no intermediate decompression step.",
"ds = tf.data.experimental.CsvDataset(gz,[float(),]*(FEATURES+1), compression_type=\"GZIP\")",
"That csv reader class returns a list of scalars for each record. The following function repacks that list of scalars into a (feature_vector, label) pair.",
"def pack_row(*row):\n label = row[0]\n features = tf.stack(row[1:],1)\n return features, label",
"TensorFlow is most efficient when operating on large batches of data.\nSo, instead of repacking each row individually, make a new Dataset that takes batches of 10000 examples, applies the pack_row function to each batch, and then splits the batches back up into individual records.",
"packed_ds = ds.batch(10000).map(pack_row).unbatch()",
"Have a look at some of the records from this new packed_ds.\nThe features are not perfectly normalized, but this is sufficient for this tutorial.",
"for features,label in packed_ds.batch(1000).take(1):\n print(features[0])\n plt.hist(features.numpy().flatten(), bins = 101)",
"To keep this tutorial relatively short, use just the first 1,000 samples for validation, and the next 10,000 for training.",
"N_VALIDATION = int(1e3)\nN_TRAIN = int(1e4)\nBUFFER_SIZE = int(1e4)\nBATCH_SIZE = 500\nSTEPS_PER_EPOCH = N_TRAIN//BATCH_SIZE",
"The Dataset.skip and Dataset.take methods make this easy.\nAt the same time, use the Dataset.cache method to ensure that the loader doesn't need to re-read the data from the file on each epoch.",
"validate_ds = packed_ds.take(N_VALIDATION).cache()\ntrain_ds = packed_ds.skip(N_VALIDATION).take(N_TRAIN).cache()\n\ntrain_ds",
"These datasets return individual examples. Use the .batch method to create batches of an appropriate size for training. Before batching, also remember to .shuffle and .repeat the training set.",
"validate_ds = validate_ds.batch(BATCH_SIZE)\ntrain_ds = train_ds.shuffle(BUFFER_SIZE).repeat().batch(BATCH_SIZE)",
"Demonstrate overfitting\nThe simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's \"capacity\". Intuitively, a model with more parameters has more \"memorization capacity\" and can easily learn a perfect dictionary-like mapping between training samples and their targets, a mapping without any generalization power. Such a mapping is useless when making predictions on previously unseen data.\nAlways keep this in mind: deep learning models tend to be good at fitting the training data, but the real challenge is generalization, not fitting.\nOn the other hand, if the network has limited memorization resources, it will not be able to learn such a mapping easily. To minimize its loss, it will have to learn compressed representations with more predictive power. At the same time, if you make the model too small, it will have difficulty fitting the training data. There is a balance between \"too much capacity\" and \"not enough capacity\".\nUnfortunately, there is no magical formula to determine the right size or architecture of a model (in terms of the number of layers, or the size of each layer). You will have to experiment with a series of different architectures.\nTo find an appropriate model size, it is best to start with relatively few layers and parameters, then increase the size of the layers or add new layers until you see diminishing returns on the validation loss. Start with a simple model using only layers.Dense as a baseline, then create larger versions and compare them.\nCreate a baseline model\nMany models train better if you gradually reduce the learning rate during training. Use optimizers.schedules to reduce the learning rate over time:",
"lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(\n 0.001,\n decay_steps=STEPS_PER_EPOCH*1000,\n decay_rate=1,\n staircase=False)\n\ndef get_optimizer():\n return tf.keras.optimizers.Adam(lr_schedule)",
"The code above sets a schedules.InverseTimeDecay to hyperbolically decrease the learning rate to 1/2 of the base rate at 1,000 epochs, 1/3 at 2,000 epochs, and so on.",
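The decay rule can be sketched in plain Python. This assumes the Keras InverseTimeDecay formula with staircase=False, and uses the notebook's values (base rate 0.001, STEPS_PER_EPOCH = 20, decay_steps = 20 * 1000).

```python
def inverse_time_decay(step, base_lr=0.001, decay_steps=20_000, decay_rate=1.0):
    # lr = base_lr / (1 + decay_rate * step / decay_steps)
    return base_lr / (1 + decay_rate * step / decay_steps)

# With STEPS_PER_EPOCH = 20, step 20_000 corresponds to epoch 1000
print(inverse_time_decay(0))        # base rate
print(inverse_time_decay(20_000))   # 1/2 of the base rate at epoch 1000
print(inverse_time_decay(40_000))   # 1/3 of the base rate at epoch 2000
```

This makes the "1/2 at 1,000 epochs, 1/3 at 2,000 epochs" schedule explicit.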
"step = np.linspace(0,100000)\nlr = lr_schedule(step)\nplt.figure(figsize = (8,6))\nplt.plot(step/STEPS_PER_EPOCH, lr)\nplt.ylim([0,max(plt.ylim())])\nplt.xlabel('Epoch')\n_ = plt.ylabel('Learning Rate')\n",
"Each model in this tutorial will use the same training configuration. So set these up in a reusable way, starting with the list of callbacks.\nThe training for this tutorial runs for many short epochs. To reduce the logging noise, use tfdocs.EpochDots, which simply prints a . for each epoch and a full set of metrics every 100 epochs.\nNext include callbacks.EarlyStopping to avoid long and unnecessary training times. Note that this callback is set to monitor the val_binary_crossentropy, not the val_loss. This difference will be important later.\nUse callbacks.TensorBoard to generate TensorBoard logs for the training.",
"def get_callbacks(name):\n return [\n tfdocs.modeling.EpochDots(),\n tf.keras.callbacks.EarlyStopping(monitor='val_binary_crossentropy', patience=200),\n tf.keras.callbacks.TensorBoard(logdir/name),\n ]",
"Similarly, each model will use the same Model.compile and Model.fit settings:",
"def compile_and_fit(model, name, optimizer=None, max_epochs=10000):\n if optimizer is None:\n optimizer = get_optimizer()\n model.compile(optimizer=optimizer,\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[\n tf.keras.losses.BinaryCrossentropy(\n from_logits=True, name='binary_crossentropy'),\n 'accuracy'])\n\n model.summary()\n\n history = model.fit(\n train_ds,\n steps_per_epoch = STEPS_PER_EPOCH,\n epochs=max_epochs,\n validation_data=validate_ds,\n callbacks=get_callbacks(name),\n verbose=0)\n return history",
"Tiny model\nStart by training a model:",
"tiny_model = tf.keras.Sequential([\n layers.Dense(16, activation='elu', input_shape=(FEATURES,)),\n layers.Dense(1)\n])\n\nsize_histories = {}\n\nsize_histories['Tiny'] = compile_and_fit(tiny_model, 'sizes/Tiny')",
"Now check how the model did:",
"plotter = tfdocs.plots.HistoryPlotter(metric = 'binary_crossentropy', smoothing_std=10)\nplotter.plot(size_histories)\nplt.ylim([0.5, 0.7])",
"Small model\nTo check whether you can beat the performance of the small model, progressively train some larger models.\nTry two hidden layers with 16 units each:",
"baseline_model = keras.Sequential([\n # `.summary` 메서드 때문에 `input_shape`가 필요합니다\n keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),\n keras.layers.Dense(16, activation='relu'),\n keras.layers.Dense(1, activation='sigmoid')\n])\n\nbaseline_model.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy', 'binary_crossentropy'])\n\nbaseline_model.summary()\n\nbaseline_history = baseline_model.fit(train_data,\n train_labels,\n epochs=20,\n batch_size=512,\n validation_data=(test_data, test_labels),\n verbose=2)",
"Create a smaller model\nTo compare against the baseline model we just created, let's build a model with fewer hidden units:",
"smaller_model = keras.Sequential([\n keras.layers.Dense(4, activation='relu', input_shape=(NUM_WORDS,)),\n keras.layers.Dense(4, activation='relu'),\n keras.layers.Dense(1, activation='sigmoid')\n])\n\nsmaller_model.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy', 'binary_crossentropy'])\n\nsmaller_model.summary()",
"Train this model using the same data:",
"smaller_history = smaller_model.fit(train_data,\n train_labels,\n epochs=20,\n batch_size=512,\n validation_data=(test_data, test_labels),\n verbose=2)",
"Create a bigger model\nAs an exercise, you can create an even larger model and see how quickly it begins overfitting. Let's add a network with much more capacity than the problem warrants and compare:",
"bigger_model = keras.models.Sequential([\n keras.layers.Dense(512, activation='relu', input_shape=(NUM_WORDS,)),\n keras.layers.Dense(512, activation='relu'),\n keras.layers.Dense(1, activation='sigmoid')\n])\n\nbigger_model.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy','binary_crossentropy'])\n\nbigger_model.summary()",
"And, again, train the model using the same data:",
"bigger_history = bigger_model.fit(train_data, train_labels,\n epochs=20,\n batch_size=512,\n validation_data=(test_data, test_labels),\n verbose=2)",
"Plot the training and validation loss\nThe solid lines show the training loss, and the dashed lines show the validation loss (remember: a lower validation loss indicates a better model). Here the smaller network begins overfitting later than the baseline model (after epoch 6 rather than 4), and its performance degrades much more slowly once it starts overfitting.\nWhile building a larger model gives it more power, if this power is not constrained somehow it can easily overfit to the training set.\nIn this example, typically only the \"Tiny\" model manages to avoid overfitting altogether, and each of the larger models overfits the data more quickly. This becomes so severe for the \"large\" model that you need to switch the plot to a log scale to really see what's happening.\nThis is apparent if you plot and compare the validation metrics to the training metrics:\n\nIt's normal for there to be a small difference.\nIf both metrics are moving in the same direction, everything is fine.\nIf the validation metric begins to stagnate while the training metric continues to improve, you are probably close to overfitting.\nIf the validation metric is going in the wrong direction, the model is clearly overfitting.",
"def plot_history(histories, key='binary_crossentropy'):\n plt.figure(figsize=(16,10))\n\n for name, history in histories:\n val = plt.plot(history.epoch, history.history['val_'+key],\n '--', label=name.title()+' Val')\n plt.plot(history.epoch, history.history[key], color=val[0].get_color(),\n label=name.title()+' Train')\n\n plt.xlabel('Epochs')\n plt.ylabel(key.replace('_',' ').title())\n plt.legend()\n\n plt.xlim([0,max(history.epoch)])\n\n\nplot_history([('baseline', baseline_history),\n ('smaller', smaller_history),\n ('bigger', bigger_history)])",
"Note: All the above training runs used callbacks.EarlyStopping to end the training once it was clear the model was not making progress.\nView in TensorBoard\nThese models all wrote TensorBoard logs during training.\nOpen an embedded TensorBoard viewer inside the notebook:",
"#docs_infra: no_execute\n\n# Load the TensorBoard notebook extension\n%load_ext tensorboard\n\n# Open an embedded TensorBoard viewer\n%tensorboard --logdir {logdir}/sizes",
"You can view the results of a previous run of this notebook on TensorBoard.dev.\nTensorBoard.dev is a managed experience for hosting, tracking, and sharing ML experiments with everyone.\nFor convenience, it is also included in an <iframe>:",
"display.IFrame(\n src=\"https://tensorboard.dev/experiment/vW7jmmF9TmKmy3rbheMQpw/#scalars&_smoothingWeight=0.97\",\n width=\"100%\", height=\"800px\")",
"To share TensorBoard results, you can upload the logs to TensorBoard.dev by copying the following into a code cell.\nNote: This step requires a Google account.\n!tensorboard dev upload --logdir {logdir}/sizes\nCaution: This command does not terminate. It is designed to continuously upload the results of long-running experiments. Once your data is uploaded, you need to stop it using the \"interrupt execution\" option of your notebook tool.\nStrategies to prevent overfitting\nBefore getting into the content of this section, copy the training logs from the \"Tiny\" model above to use as a baseline for comparison.",
"shutil.rmtree(logdir/'regularizers/Tiny', ignore_errors=True)\nshutil.copytree(logdir/'sizes/Tiny', logdir/'regularizers/Tiny')\n\nregularizer_histories = {}\nregularizer_histories['Tiny'] = size_histories['Tiny']",
"Add weight regularization\nYou may be familiar with Occam's Razor principle: given two explanations for something, the explanation most likely to be correct is the \"simplest\" one, the one that makes the fewest assumptions. This also applies to the models learned by neural networks: given some training data and a network architecture, there are multiple sets of weight values (multiple models) that could explain the data, and simpler models are less likely to overfit than complex ones.\nA \"simple model\" in this context is a model whose distribution of parameter values has less entropy (or a model with fewer parameters altogether, as seen in the section above). Thus a common way to mitigate overfitting is to put constraints on the complexity of a network by forcing its weights to take only small values, which makes the distribution of weight values more \"regular\". This is called \"weight regularization\", and it is done by adding to the loss function of the network a cost associated with having large weights. This cost comes in two flavors:\n\n\nL1 regularization, where the cost added is proportional to the absolute value of the weight coefficients (i.e. to the \"L1 norm\" of the weights).\n\n\nL2 regularization, where the cost added is proportional to the square of the value of the weight coefficients (i.e. to the squared \"L2 norm\" of the weights). L2 regularization is also called weight decay in the context of neural networks. Don't let the different name confuse you: weight decay is mathematically identical to L2 regularization.\n\n\nL1 regularization pushes weights towards exactly zero, encouraging a sparse model. L2 regularization penalizes the weight parameters without making them sparse, since the penalty goes to zero for small weights (one reason why L2 is more common).\nIn tf.keras, weight regularization is added by passing weight regularizer instances to layers as keyword arguments. Let's add L2 weight regularization now.",
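As a small illustration with arbitrary toy values (not weights from any trained model), the extra cost that the two penalty flavors contribute for one weight vector can be computed directly:

```python
import numpy as np

weights = np.array([0.5, -0.3, 0.8])   # hypothetical layer weights
reg_lambda = 0.001                     # regularization strength

# L2 penalty added to the loss: lambda * sum(w^2)
l2_penalty = reg_lambda * np.sum(weights ** 2)
print(l2_penalty)  # approx. 0.00098

# L1 penalty for comparison: lambda * sum(|w|)
l1_penalty = reg_lambda * np.sum(np.abs(weights))
print(l1_penalty)  # approx. 0.0016
```

Note how the L2 cost of the weight 0.3 (0.09 before scaling) is far smaller than its L1 cost (0.3), which is why L2 barely penalizes already-small weights.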
"l2_model = keras.models.Sequential([\n keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),\n activation='relu', input_shape=(NUM_WORDS,)),\n keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),\n activation='relu'),\n keras.layers.Dense(1, activation='sigmoid')\n])\n\nl2_model.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy', 'binary_crossentropy'])\n\nl2_model_history = l2_model.fit(train_data, train_labels,\n epochs=20,\n batch_size=512,\n validation_data=(test_data, test_labels),\n verbose=2)",
"l2(0.001) means that every coefficient in the weight matrix of the layer will add 0.001 * weight_coefficient_value**2 to the total loss of the network. This penalty is only applied at training time, so the loss of the network will be much higher during training than during testing.\nLet's check the impact of the L2 regularization penalty:\nAs you can see, the same \"Large\" model performs much better with the L2 regularization penalty.",
"plot_history([('baseline', baseline_history),\n ('l2', l2_model_history)])",
"As the results show, the L2-regularized model has become much more resistant to overfitting than the baseline model, even though both models have the same number of parameters.\nMore info\nThere are two important things to note about this sort of regularization.\nFirst: if you are writing your own training loop, you need to ask the model for its regularization losses.",
"result = l2_model(features)\nregularization_loss=tf.add_n(l2_model.losses)",
"Second: This implementation works by adding the weight penalties to the model's loss and then applying a standard optimization procedure after that.\nThere is a second approach that instead runs the optimizer only on the raw loss, and then, while applying the calculated step, the optimizer also applies some weight decay. This \"decoupled weight decay\" is seen in optimizers like optimizers.FTRL and optimizers.AdamW.\nAdd dropout\nDropout is one of the most effective and most commonly used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. Applied to a layer, dropout randomly \"drops out\" (i.e. sets to zero) a number of output features of the layer during training. Say a given layer would normally return the vector [0.2, 0.5, 1.3, 0.8, 1.1] for some input sample during training; after applying dropout, a few entries of this vector are randomly set to zero, e.g. [0, 0.5, 1.3, 0, 1.1].\nThe \"dropout rate\" is the fraction of the features that are being zeroed out; it is usually set between 0.2 and 0.5. At test time, no units are dropped out; instead the layer's output values are scaled down by a factor equal to the dropout rate, to balance for the fact that more units are active than at training time.\nIn tf.keras you can introduce dropout in a network via the Dropout layer, which is applied to the output of the layer right before it.\nLet's add two Dropout layers to our network and see how well they reduce overfitting:",
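The training-time behaviour of dropout can be sketched in a few lines of NumPy. This is a toy illustration only; Keras' actual Dropout layer uses inverted dropout, rescaling the surviving activations at training time instead of scaling at test time.

```python
import numpy as np

def dropout(x, rate, rng):
    # zero out roughly a fraction `rate` of the activations at training time
    mask = rng.random(x.shape) >= rate
    return x * mask

v = np.array([0.2, 0.5, 1.3, 0.8, 1.1])
rng = np.random.default_rng(0)
print(dropout(v, 0.5, rng))   # some entries randomly zeroed

# edge cases: rate 0 keeps everything, rate 1 drops everything
print(dropout(v, 0.0, rng))   # unchanged
print(dropout(v, 1.0, rng))   # all zeros
```

Each forward pass draws a fresh mask, so different units are dropped on different training steps.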
"dpt_model = keras.models.Sequential([\n keras.layers.Dense(16, activation='relu', input_shape=(NUM_WORDS,)),\n keras.layers.Dropout(0.5),\n keras.layers.Dense(16, activation='relu'),\n keras.layers.Dropout(0.5),\n keras.layers.Dense(1, activation='sigmoid')\n])\n\ndpt_model.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy','binary_crossentropy'])\n\ndpt_model_history = dpt_model.fit(train_data, train_labels,\n epochs=20,\n batch_size=512,\n validation_data=(test_data, test_labels),\n verbose=2)\n\nplot_history([('baseline', baseline_history),\n ('dropout', dpt_model_history)])",
"It is clear from this plot that both of these regularization approaches improve the behavior of the \"Large\" model. But this still doesn't beat even the \"Tiny\" baseline.\nNext, try them both together and see if that does better.\nCombined L2 + dropout",
"combined_model = tf.keras.Sequential([\n layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),\n activation='elu', input_shape=(FEATURES,)),\n layers.Dropout(0.5),\n layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),\n activation='elu'),\n layers.Dropout(0.5),\n layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),\n activation='elu'),\n layers.Dropout(0.5),\n layers.Dense(512, kernel_regularizer=regularizers.l2(0.0001),\n activation='elu'),\n layers.Dropout(0.5),\n layers.Dense(1)\n])\n\nregularizer_histories['combined'] = compile_and_fit(combined_model, \"regularizers/combined\")\n\nplotter.plot(regularizer_histories)\nplt.ylim([0.5, 0.7])",
"This model with the \"Combined\" regularization is obviously the best one so far.\nView in TensorBoard\nThese models also recorded TensorBoard logs.\nTo open an embedded TensorBoard viewer inside a notebook, copy the following into a code cell:\n%tensorboard --logdir {logdir}/regularizers\nYou can view the results of a previous run of this notebook on TensorBoard.dev.\nFor convenience, it is also included in an <iframe>:",
"display.IFrame(\n src=\"https://tensorboard.dev/experiment/fGInKDo8TXes1z7HQku9mw/#scalars&_smoothingWeight=0.97\",\n width = \"100%\",\n height=\"800px\")\n",
"It was uploaded with:\n!tensorboard dev upload --logdir {logdir}/regularizers\nConclusions\nTo recap, here are the most common ways to prevent overfitting in neural networks:\n\nGet more training data\nReduce the capacity of the network\nAdd weight regularization\nAdd dropout\n\nTwo important approaches not covered in this guide are:\n\ndata augmentation\nbatch normalization\n\nRemember that each method can help on its own, but combining them can often be even more effective."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
remenska/iSDM
|
notebooks/old/LinearRegression_warmup.ipynb
|
apache-2.0
|
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n# this allows plots to appear directly in notebook\n%matplotlib inline\n\ndata = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)\n\ndata.head()\n\ndata.shape\n\n# visualize the relationship between the features and the response, using scatterplots\n\nfig, axs = plt.subplots(1,3, sharey=True)\ndata.plot(kind='scatter', x='TV', y='Sales', ax=axs[0], figsize=(16,8))\ndata.plot(kind='scatter', x='Radio', y='Sales', ax=axs[1])\ndata.plot(kind='scatter', x='Newspaper', y='Sales', ax=axs[2])",
"Is there a relationship between ads and sales?\nSimple Linear Regression is an approach for predicting a quantitative response using a single feature (or \"predictor\" or \"input variable\"). Generally speaking, coefficients are estimated using the least squares criterion, which means we find the line that (mathematically) minimizes the sum of squared residuals (or \"sum of squared errors\").",
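A minimal illustration of the least squares criterion with synthetic numbers (not the advertising data): the closed-form estimates below are exactly the slope and intercept that minimize the residual sum of squares.

```python
import numpy as np

# toy data: y is roughly 2x plus a little noise
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 4.2, 5.9, 8.1])

# closed-form least squares for a single feature
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()
print(slope, intercept)  # approx. 1.97 and 0.15

# the residual sum of squares that these estimates minimize
rss = np.sum((y - (intercept + slope * x)) ** 2)
print(rss)
```

Any other slope/intercept pair would produce a larger rss on this data, which is what "least squares" means.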
"# standard import if you're using \"formula notation\"\nimport statsmodels.formula.api as smf\n\nlm = smf.ols(formula='Sales ~ TV', data=data).fit()\n\nlm.params\n\n# let's make a prediction for TV advertising spending of $50,000\n# the Statsmodels formula interface expects a DataFrame\nX_new = pd.DataFrame({'TV':[50]})\n\n\nX_new\n\nlm.predict(X_new)",
"Plotting the Least Squares Line",
"# create a dataframe with the minimum and maximum values of TV\nX_new = pd.DataFrame({'TV':[data.TV.min(), data.TV.max()]})\n\nX_new\n\npreds = lm.predict(X_new)\n\npreds\n\n# first plot the observed data, then plot the least squares line\ndata.plot(kind='scatter', x='TV', y='Sales')\nplt.plot(X_new, preds, c='red', linewidth=2)\n\n# confidence intervals\nlm.conf_int()",
"Null hypothesis: there is no relationship between TV ads and sales.\nAlternative hypothesis: there is a relationship between TV ads and sales.\nTypically we reject the null (and thus believe the alternative) if the 95% confidence interval does not include zero. The p-value is the probability that the coefficient is actually zero:",
"lm.pvalues",
"The most common way to evaluate the overall fit of a linear model is by the R-squared value. R-squared is the proportion of variance explained, meaning the proportion of variance in the observed data that is explained by the model, or the reduction in error over the null model.",
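R-squared can be computed by hand from its definition, using toy numbers for illustration: one minus the ratio of the model's residual error to the error of the null (mean-only) model.

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.1, 7.2, 8.9])

ss_res = np.sum((y_true - y_pred) ** 2)         # error left over after the model
ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # error of the null (mean-only) model
r_squared = 1 - ss_res / ss_tot
print(r_squared)  # approx. 0.995
```

A value near 1 means the model explains almost all the variance; a value near 0 means it does no better than predicting the mean.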
"lm.rsquared",
"Multiple Linear Regression",
"# create a fitted model with all three features\nlm = smf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data).fit()\n\nlm.params",
"Interpretation: For a given amount of Radio and Newspaper ad spending, an increase of $1000 in TV ad spending is associated with an increase in Sales of 45.765 widgets.",
"lm.summary()",
"Interpretation: TV and Radio have significant p-values whereas Newspaper does not. Thus we reject the null hypothesis for TV and Radio, and fail to reject the null hypothesis for Newspaper. TV and Radio are both positively associated with Sales. The model has a slightly higher R-squared (0.897) than the previous model, which means that it provides a better fit to the data than a model that only includes TV.\nRule of thumb: only keep predictors in the model if they have small p-values; check whether the R-squared value goes up as you add new predictors. But keep in mind that R-squared is susceptible to overfitting, so there is no guarantee that a higher value is better.\nR-squared will always increase as you add more features to the model, even if they are unrelated to the response. Thus, selecting the model with the highest R-squared is not a reliable approach for choosing the best linear model.\nThere is an alternative to R-squared called adjusted R-squared that penalizes model complexity (to control for overfitting), but it generally under-penalizes complexity.\nA better approach to feature selection is cross-validation. Cross-validation can be applied to any model, not only linear ones. Example with scikit-learn:",
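As a sketch of what cross-validation actually computes, here is a minimal k-fold CV for ordinary least squares in plain NumPy (kfold_mse is a hypothetical helper written for this illustration, not part of the notebook or scikit-learn):

```python
import numpy as np

def kfold_mse(X, y, k=5):
    """Average held-out mean squared error of OLS over k folds."""
    n = len(y)
    idx = np.arange(n)
    folds = np.array_split(idx, k)
    errs = []
    for test in folds:
        train = np.setdiff1d(idx, test)
        # fit OLS with an intercept column on the training fold
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        Xte = np.column_stack([np.ones(len(test)), X[test]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        errs.append(np.mean((y[test] - Xte @ beta) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 4.0 + 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)
print(kfold_mse(X, y))  # small, since the relationship really is linear
```

Because every observation is held out exactly once, the score estimates performance on unseen data, which is what R-squared on the training set cannot do.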
"# redo above examples with scikit-learn\nfeature_cols = ['TV', 'Radio', 'Newspaper']\nX = data[feature_cols]\ny = data.Sales\n\n# usual scikit-learn pattern: import, instantiate, fit\n\nfrom sklearn.linear_model import LinearRegression\nlm = LinearRegression()\nlm.fit(X,y)\n\nlm.intercept_\n\nlm.coef_\n\n# pair the feature names with the coefficients\nlist(zip(feature_cols, lm.coef_))\n\n# predict for a new observation: TV=100, Radio=25, Newspaper=25\nlm.predict([[100, 25, 25]])\n\n# calculate the R-squared\nlm.score(X, y)",
"What if one of our predictors was categorical, rather than numeric?",
"# set a seed for reproducibility\nnp.random.seed(12345)\n\nnums = np.random.rand(len(data))\nmask_large = nums > 0.5 # random categorical data: small/large\n\n# initially set Size to small, then change roughly half to be large\n\ndata['Size'] = 'small'\ndata.loc[mask_large,'Size'] = 'large' # apply mask\ndata.head()\n\n\n# for scikit-learn, we need to represent all data numerically\n\ndata['IsLarge'] = data.Size.map({'small':0, 'large':1})\n\ndata.head()\n\n# redo multiple linear regression and include the IsLarge predictor\n\nfeature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge']\n\nX = data[feature_cols]\ny = data.Sales\n\n# instantiate, fit\nlm = LinearRegression()\nlm.fit(X,y)\n\nlist(zip(feature_cols, lm.coef_))",
"How do we interpret the IsLarge coefficient? For a given amount of TV/Radio/Newspaper ad spending, being a large market is associated with an average increase in sales of 57.42 widgets (compared to small market which is called the baseline level).\nNow what if we had categorical predictors with more than two categories? Say Area: rural, suburban, urban?",
"# for reproducibility\nnp.random.seed(123456)\n\n# assign roughly one third of observations to each category\n\nnums = np.random.rand(len(data))\nmask_suburban = (nums > 0.33) & (nums < 0.66)\nmask_urban = (nums > 0.66)\ndata['Area'] = 'rural'\ndata.loc[mask_suburban, 'Area'] = 'suburban'\ndata.loc[mask_urban, 'Area'] = 'urban'\ndata.head()",
"Again, we have to represent Area numerically, but we cannot simply encode it as 0=rural, 1=suburban, 2=urban because it would imply an ordered relationship between suburban and urban. Instead, another dummy:",
"# create three dummy variables using get_dummies, then exclude the first dummy column\narea_dummies = pd.get_dummies(data.Area, prefix='Area').iloc[:,1:]\n\narea_dummies.head()\n\ndata = pd.concat([data, area_dummies], axis=1)\ndata.head()",
"In general, if you have a categorical feature with k levels, you create k-1 dummy variables. Because the other dummies capture all the information about the feature. The \"left out\" will be the baseline. \nLet's include the new dummy variables in the model",
"feature_cols = feature_cols + ['Area_suburban', 'Area_urban']\n\nfeature_cols\n\nX = data[feature_cols]\ny = data.Sales\n\nlm = LinearRegression()\nlm.fit(X,y)\n\nlist(zip(feature_cols, lm.coef_))",
"How do we interpret this, again? Holding all other variables fixed, being a suburban area is associated with an average decrease in sales of 106.56 widgets (compared to the rural baseline), and being an urban area is associated with an average increase in sales of 268 widgets.\nAll of the above is limited by the fact that it can only make good predictions if there is a linear relationship between the features and the response.",
"lm.predict([[100, 46, 45, 1, 1, 0]])"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dereneaton/RADmissing
|
emp_nb_Ohomopterus.ipynb
|
mit
|
[
"Notebook 9:\nThis is an IPython notebook. Most of the code is composed of bash scripts, indicated by %%bash at the top of the cell, otherwise it is IPython code. This notebook includes code to download, assemble and analyze a published RADseq data set.",
"### Notebook 9\n### Data set 9 (Ohomopterus)\n### Authors: Takahashi et al. 2014\n### Data Location: DRP001067",
"Download the sequence data\nSequence data for this study are archived on the NCBI sequence read archive (SRA). Below I read in SraRunTable.txt for this project which contains all of the information we need to download the data. \n\nProject DRA: DRA001025\nStudy: DRP001067\nSRA link: http://trace.ddbj.nig.ac.jp/DRASearch/study?acc=DRP001067",
"%%bash\n## make a new directory for this analysis\nmkdir -p empirical_9/fastq/",
"For each ERS (individuals) get all of the ERR (sequence file accessions).",
"import os\n\ndef wget_download_ddbj(SRR, outdir):\n \"\"\" Python function to get sra data from ncbi and write to\n outdir with a new name using bash call wget \"\"\"\n \n ## create a call string \n call = \"wget -q -r -nH --cut-dirs=9 -P \"+outdir+\" \"+\\\n \"ftp://ftp.ddbj.nig.ac.jp/ddbj_database/dra/sra/ByExp/\"+\\\n \"sra/DRX/DRX011/DRX011{:03d}\".format(SRR)\n \n ## run wget call \n ! $call\n\nfor ID in range(602,634):\n wget_download_ddbj(ID, \"empirical_9/fastq/\")",
"Here we pass the numeric part of each DRX accession and the output directory to the wget_download_ddbj function so that the sequence files are downloaded into empirical_9/fastq/.",
"%%bash\n## convert sra files to fastq using fastq-dump tool\n## output as gzipped into the fastq directory\nfastq-dump --gzip -O empirical_9/fastq/ empirical_9/fastq/*.sra\n\n## remove .sra files\nrm empirical_9/fastq/*.sra\n\n%%bash\nls -lh empirical_9/fastq/",
"Make a params file",
"%%bash\npyrad --version\n\n%%bash\n## remove old params file if it exists\nrm params.txt \n\n## create a new default params file\npyrad -n ",
"Note:\nThe data here are from Illumina Casava <1.8, so the phred scores are offset by 64 instead of 33, and we account for that in the params file below.",
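The offset determines how an ASCII quality character maps to a phred score. A quick illustration (a toy helper written here, not part of pyrad):

```python
def phred_score(ch, offset=33):
    # phred quality = ASCII code of the character minus the encoding offset
    return ord(ch) - offset

# '@' is ASCII 64: quality 0 under phred+64, so reading it as phred+33 would
# wrongly report a quality of 31
print(phred_score('@', offset=64))  # 0
print(phred_score('h', offset=64))  # 40
print(phred_score('I', offset=33))  # 40
```

Using the wrong offset silently inflates or deflates every quality score by 31, which is why the params file must match the Casava version.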
"%%bash\n## substitute new parameters into file\nsed -i '/## 1. /c\\empirical_9/ ## 1. working directory ' params.txt\nsed -i '/## 6. /c\\TGCAG ## 6. cutters ' params.txt\nsed -i '/## 7. /c\\20 ## 7. N processors ' params.txt\nsed -i '/## 9. /c\\6 ## 9. NQual ' params.txt\nsed -i '/## 10./c\\.85 ## 10. clust threshold ' params.txt\nsed -i '/## 12./c\\4 ## 12. MinCov ' params.txt\nsed -i '/## 13./c\\10 ## 13. maxSH ' params.txt\nsed -i '/## 14./c\\empirical_9_m4 ## 14. output name ' params.txt\nsed -i '/## 18./c\\empirical_9/fastq/*.gz ## 18. data location ' params.txt\nsed -i '/## 29./c\\2,2 ## 29. trim overhang ' params.txt\nsed -i '/## 30./c\\p,n,s ## 30. output formats ' params.txt\n\ncat params.txt",
"Trimming the barcode\nIn this data set the data were uploaded separated by sample, but with the barcode still attached to the sequences. The python code below will remove the 5bp barcode from each sequence.",
"import glob\nimport itertools\nimport gzip\nimport os\n\n\nfor infile in glob.glob(\"empirical_9/fastq/DRR*\"):\n iofile = gzip.open(infile, 'rb')\n dire, fil = os.path.split(infile)\n fastq = os.path.splitext(fil)[0]\n outhandle = os.path.join(dire, \"T_\"+fastq)\n outfile = open(outhandle, 'wb')\n \n data = iter(iofile)\n store = []\n while 1:\n try:\n line = data.next()\n except StopIteration:\n break\n if len(line) < 80:\n store.append(line)\n else:\n store.append(line[5:])\n \n if len(store) == 10000:\n outfile.write(\"\".join(store))\n store = []\n \n iofile.close()\n outfile.close() \n ! gzip $outhandle",
"Assemble in pyrad",
"%%bash\npyrad -p params.txt -s 234567 >> log.txt 2>&1 \n\n%%bash\nsed -i '/## 12./c\\2 ## 12. MinCov ' params.txt\nsed -i '/## 14./c\\empirical_9_m2 ## 14. output name ' params.txt\n\n%%bash\npyrad -p params.txt -s 7 >> log.txt 2>&1 ",
"Results\nWe are interested in the relationship between the amount of input (raw) data for any two samples, the average coverage they recover when clustered together, and the phylogenetic distance separating them. \nRaw data amounts\nThe average number of raw reads per sample is 1.36M.",
"import pandas as pd\nimport numpy as np\n\n## read in the data\ns2dat = pd.read_table(\"empirical_9/stats/s2.rawedit.txt\", header=0, nrows=32)\n\n## print summary stats\nprint s2dat[\"passed.total\"].describe()\n\n## find which sample has the most raw data\nmaxraw = s2dat[\"passed.total\"].max()\nprint \"\\nmost raw data in sample:\"\nprint s2dat['sample '][s2dat['passed.total']==maxraw]",
"Look at distributions of coverage\npyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std of coverages is pretty low in this data set compared to several others.",
"## read in the s3 results\ns9dat = pd.read_table(\"empirical_9/stats/s3.clusters.txt\", header=0, nrows=33)\n\n## print summary stats\nprint \"summary of means\\n==================\"\nprint s9dat['dpt.me'].describe()\n\n## print summary stats\nprint \"\\nsummary of std\\n==================\"\nprint s9dat['dpt.sd'].describe()\n\n## print summary stats\nprint \"\\nsummary of proportion lowdepth\\n==================\"\nprint pd.Series(1-s9dat['d>5.tot']/s9dat[\"total\"]).describe()\n\n## find which sample has the greatest depth of retained loci\nmax_hiprop = (s9dat[\"d>5.tot\"]/s9dat[\"total\"]).max()\nprint \"\\nhighest coverage in sample:\"\nprint s9dat['taxa'][s9dat['d>5.tot']/s9dat[\"total\"]==max_hiprop]\n\nmaxprop =(s9dat['d>5.tot']/s9dat['total']).max()\nprint \"\\nhighest prop coverage in sample:\"\nprint s9dat['taxa'][s9dat['d>5.tot']/s9dat['total']==maxprop]\n\n## print mean and std of coverage for the highest coverage sample\nwith open(\"empirical_9/clust.85/T_DRR021966.depths\", 'rb') as indat:\n depths = np.array(indat.read().strip().split(\",\"), dtype=int)\n \nprint \"Means for sample T_DRR021966\"\nprint depths.mean(), depths.std()\nprint depths[depths>5].mean(), depths[depths>5].std()",
"Plot the coverage for the sample with highest mean coverage\nGreen shows the loci that were discarded and orange the loci that were retained. The majority of data were discarded for having too low coverage.",
"import toyplot\nimport toyplot.svg\nimport numpy as np\n\n## read in the depth information for this sample\nwith open(\"empirical_9/clust.85/T_DRR021966.depths\", 'rb') as indat:\n depths = np.array(indat.read().strip().split(\",\"), dtype=int)\n \n## make a barplot in Toyplot\ncanvas = toyplot.Canvas(width=350, height=300)\naxes = canvas.axes(xlabel=\"Depth of coverage (N reads)\", \n ylabel=\"N loci\", \n label=\"dataset9/sample=T_DRR021966\")\n\n## select the loci with depth > 5 (kept)\nkeeps = depths[depths>5]\n\n## plot kept and discarded loci\nedat = np.histogram(depths, range(30)) # density=True)\nkdat = np.histogram(keeps, range(30)) #, density=True)\naxes.bars(edat)\naxes.bars(kdat)\n\n#toyplot.svg.render(canvas, \"empirical_9_depthplot.svg\")",
"Print final stats table",
"cat empirical_9/stats/empirical_9_m4.stats\n\n\n%%bash\nhead -n 20 empirical_9/stats/empirical_9_m2.stats",
"Infer ML phylogeny in raxml as an unrooted tree",
"%%bash\n## raxml arguments w/ ...\nraxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \\\n    -w /home/deren/Documents/RADmissing/empirical_9/ \\\n    -n empirical_9_m4 -s empirical_9/outfiles/empirical_9_m4.phy\n    \n\n%%bash\n## raxml arguments w/ ...\nraxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \\\n    -w /home/deren/Documents/RADmissing/empirical_9/ \\\n    -n empirical_9_m2 -s empirical_9/outfiles/empirical_9_m2.phy\n    \n\n%%bash \nhead -n 20 empirical_9/RAxML_info.empirical_9_m4\n\n%%bash \nhead -n 20 empirical_9/RAxML_info.empirical_9_m2",
"Plot the tree in R using ape",
"%load_ext rpy2.ipython\n\n%%R -h 800 -w 800\nlibrary(ape)\ntre <- read.tree(\"empirical_9/RAxML_bipartitions.empirical_9\")\nltre <- ladderize(tre)\n\npar(mfrow=c(1,2))\nplot(ltre, use.edge.length=F)\nnodelabels(ltre$node.label)\n\nplot(ltre, type='u')",
"Get phylo distances (GTRgamma dist)",
"%%R\nmean(cophenetic.phylo(ltre))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
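The barcode-trimming cell in the notebook above is written for Python 2 (`data.next()`, a line-length heuristic to spot sequence lines). Below is a minimal Python 3 sketch of the same idea, stripping a fixed-length 5 bp barcode from a FASTQ record; it assumes the standard 4-line FASTQ layout rather than the original length test, and file handling is omitted:

```python
def trim_barcode(lines, barcode_len=5):
    """Strip a fixed-length barcode from a FASTQ record.

    Assumes the standard 4-line layout (@header, sequence, '+', quality)
    and trims both the sequence and quality lines so they stay aligned.
    """
    out = []
    for i, line in enumerate(lines):
        if i % 4 in (1, 3):                 # sequence and quality lines
            out.append(line[barcode_len:])
        else:
            out.append(line)
    return out

# toy record with the 5 bp barcode "TGCAG" still attached
record = ["@read1\n", "TGCAGACGTACGT\n", "+\n", "IIIIIIIIIIIII\n"]
trimmed = trim_barcode(record)
```

Trimming the quality line by the same amount keeps per-base quality scores aligned with the remaining bases, which the original length-based version achieved implicitly because both lines exceed 80 characters.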
GoogleCloudPlatform/gcp-getting-started-lab-jp
|
machine_learning/cloud_ai_building_blocks/conversation_ja.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/gcp-getting-started-lab-jp/blob/master/machine_learning/cloud_ai_building_blocks/conversation_ja.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n\n```\nCopyright 2019 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n```\n\nPrerequisites\n\nCreate a GCP project.\nEnable billing.\nCreate an API Key.\nEnable the Cloud Speech-to-Text API and the Cloud Text-to-Speech API.\n\nEnter your Google Cloud API credentials\nTo call the Google Cloud APIs through their REST interface, we use an API Key. Copy your API Key from the Google Cloud Console.",
"import getpass\nAPIKEY = getpass.getpass()",
"Let's try the Cloud Speech-to-Text API!\nGoogle Cloud Speech-to-Text applies advanced neural network models through an easy-to-use API to convert audio to text.\nWe use the API Discovery Service to discover the Cloud Speech-to-Text API. The Cloud Speech-to-Text REST API specification is documented here.",
"from googleapiclient.discovery import build\n\nspeech_service = build('speech', 'v1p1beta1', developerKey=APIKEY)",
"Preparing the audio data\nLet's define a function record_audio for recording audio.",
"#@title Run this cell to define record_audio\n\n# Install required libraries and packages\n!pip install -qq pydub\n!apt-get -qq update\n!apt-get -qq install -y ffmpeg\n\n# Define record_audio\nimport base64\nimport google.colab\nimport pydub\n\nfrom io import BytesIO\n\ndef record_audio(file_id, framerate=16000, channels=1, file_format='flac'):\n    # Record webm file from Colaboratory.\n    audio = google.colab._message.blocking_request(\n        'user_media',\n        {\n            'audio': True,\n            'video': False,\n            'duration': -1\n        },\n        timeout_sec=600)\n\n    # Convert web file into in_memory file.\n    mfile = BytesIO(base64.b64decode(audio[audio.index(',')+1:]))\n\n    # Store webm file locally.\n    with open('{0}.webm'.format(file_id), 'wb') as f:\n        mfile.seek(0)\n        f.write(mfile.read())\n\n    # Open stored web file and save it as wav with sample_rate=16000\n    output_file = '{0}.{1}'.format(file_id, file_format)\n    _ = pydub.AudioSegment.from_file('{0}.webm'.format(file_id), codec='opus')\n    _ = _.set_channels(channels)\n    _.set_frame_rate(framerate).export(output_file, format=file_format)\n\n    return output_file",
"Run record_audio to record some audio.",
"audio_filename = record_audio('ja-sample', framerate=16000, channels=1)",
"Let's check the recording.",
"from IPython.display import Audio\n\nAudio(audio_filename, rate=16000)",
"Running speech recognition\nDefine the information to pass to the Cloud Speech-to-Text API.",
"from base64 import b64encode\nfrom json import dumps\n\nlanguageCode = 'en-US' #@param [\"en-US\", \"ja-JP\", \"en-IN\"]\nmodel = 'default' #@param [\"command_and_search\", \"phone_call\", \"video\", \"default\"]",
"Define the input audio data.",
"with open(audio_filename, 'rb') as audio_file:\n content = b64encode(audio_file.read()).decode('utf-8')\n\nmy_audio = {\n 'content': content\n}",
"Define the RecognitionConfig.",
"my_recognition_config = {\n 'encoding': 'FLAC',\n 'sampleRateHertz': 16000,\n 'languageCode': languageCode,\n 'model': model\n}",
"Define the body of the request message for the recognize method.",
"my_request_body={\n 'audio': my_audio,\n 'config': my_recognition_config,\n}",
"Call the recognize method.",
"response = speech_service.speech().recognize(body=my_request_body).execute()",
"Check the response of the recognize method.",
"response\n\nfor r in response[\"results\"]:\n    print('Transcript: ', r['alternatives'][0]['transcript'])\n    print('Confidence: ', r['alternatives'][0]['confidence'])",
"Getting word timestamps\nAdd the enableWordTimeOffsets setting to the RecognitionConfig.",
"my_recognition_config = {\n 'encoding': 'FLAC',\n 'sampleRateHertz': 16000,\n 'languageCode': languageCode,\n 'model': model,\n 'enableWordTimeOffsets': True\n}\n\nmy_request_body={\n 'audio': my_audio,\n 'config': my_recognition_config,\n}",
"Call the recognize method.",
"response = speech_service.speech().recognize(body=my_request_body).execute()",
"Check the response of the recognize method.",
"response\n\nfor r in response[\"results\"]:\n    print('Transcript: ', r['alternatives'][0]['transcript'])\n    print('Confidence: ', r['alternatives'][0]['confidence'], \"\\n\")\n\nfor r in response[\"results\"][0]['alternatives'][0][\"words\"]:\n    print(\"word: \", r[\"word\"])\n    print(\"startTime: \", r[\"startTime\"])\n    print(\"endTime: \", r[\"endTime\"], \"\\n\")",
"Exercises\n 1. Referring to the documentation here, look at the word-level confidence scores.\nLet's try Cloud Text-to-Speech!\nWith Cloud Text-to-Speech you can synthesize natural-sounding conversational speech. Thirty voices are available, covering numerous languages and dialects.\nWe use the API Discovery Service to discover the Cloud Text-to-Speech API. The Cloud Text-to-Speech REST API specification is documented here.",
"import textwrap\nfrom googleapiclient.discovery import build\nservice = build('texttospeech', 'v1beta1', developerKey=APIKEY)",
"Listing all supported voices\nList the voices available in the Cloud Text-to-Speech API for text-to-speech synthesis. For languageCode values, see the documentation here.",
"response = service.voices().list(\n languageCode=\"ja_JP\",\n).execute()\n\nfor voice in response['voices']:\n print(voice)",
"Synthesizing speech from text\nUsing the text.synthesize method, you can convert words and sentences into base64-encoded audio data of natural human speech. This method accepts input as either raw text or Speech Synthesis Markup Language (SSML).",
"source_language = \"ja_JP\" #@param {type: \"string\"}\nsource_sentence = \"Google Cloud Text-to-Speech \\u3092\\u4F7F\\u3046\\u3068\\u3001\\u81EA\\u7136\\u306A\\u4F1A\\u8A71\\u97F3\\u58F0\\u3092\\u5408\\u6210\\u3067\\u304D\\u307E\\u3059\\u3002\" #@param {type:\"string\"}\naudio_encoding = 'OGG_OPUS' #@param ['OGG_OPUS', 'LINEAR16', 'MP3']\nvoice_gender = 'FEMALE' #@param ['FEMALE', 'MALE', 'NEUTRAL', 'SSML_VOICE_GENDER_UNSPECIFIED']\ntextwrap.wrap(source_sentence)\nvoice_name = 'ja-JP-Wavenet-A' #@param {type: \"string\"}\n\nresponse = service.text().synthesize(\n body={\n 'input': {\n 'text': source_sentence,\n },\n 'voice': {\n 'languageCode': source_language,\n 'ssmlGender': voice_gender,\n 'name': voice_name,\n },\n 'audioConfig': {\n 'audioEncoding': audio_encoding,\n },\n }\n).execute()",
"Let's listen to the synthesized audio.",
"import base64\nfrom IPython.display import Audio\n\nAudio(base64.b64decode(response['audioContent']))",
"Exercises\n 1. Synthesize Japanese text with a Standard model voice.\n2. Synthesize English text with various voice models."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
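The Speech-to-Text cells in the notebook above assemble the `recognize` request body by hand. A small helper capturing the same pattern, base64-encoding the audio bytes and nesting them with a `RecognitionConfig`, can be sketched as follows; the helper name and defaults are illustrative, not part of the Google API client, though the nested keys follow the `RecognitionAudio` and `RecognitionConfig` messages used in the cells:

```python
import base64
import json

def build_recognize_body(audio_bytes, language_code="ja-JP",
                         sample_rate=16000, encoding="FLAC"):
    """Assemble a speech.recognize request body as in the cells above."""
    return {
        "audio": {"content": base64.b64encode(audio_bytes).decode("utf-8")},
        "config": {
            "encoding": encoding,
            "sampleRateHertz": sample_rate,
            "languageCode": language_code,
        },
    }

# three arbitrary bytes stand in for real FLAC data
body = build_recognize_body(b"\x00\x01\x02")
print(json.dumps(body, indent=2))
```

The body is plain JSON-serializable data, so it can be passed directly to `speech_service.speech().recognize(body=...)` as in the notebook.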
statsmodels/statsmodels.github.io
|
v0.13.1/examples/notebooks/generated/statespace_cycles.ipynb
|
bsd-3-clause
|
[
"Trends and cycles in unemployment\nHere we consider three methods for separating a trend and cycle in economic data. Supposing we have a time series $y_t$, the basic idea is to decompose it into these two components:\n$$\ny_t = \\mu_t + \\eta_t\n$$\nwhere $\\mu_t$ represents the trend or level and $\\eta_t$ represents the cyclical component. In this case, we consider a stochastic trend, so that $\\mu_t$ is a random variable and not a deterministic function of time. Two of the methods fall under the heading of \"unobserved components\" models, and the third is the popular Hodrick-Prescott (HP) filter. Consistent with e.g. Harvey and Jaeger (1993), we find that these models all produce similar decompositions.\nThis notebook demonstrates applying these models to separate trend from cycle in the U.S. unemployment rate.",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\nfrom pandas_datareader.data import DataReader\nendog = DataReader('UNRATE', 'fred', start='1954-01-01')\nendog.index.freq = endog.index.inferred_freq",
"Hodrick-Prescott (HP) filter\nThe first method is the Hodrick-Prescott filter, which can be applied to a data series in a very straightforward method. Here we specify the parameter $\\lambda=129600$ because the unemployment rate is observed monthly.",
"hp_cycle, hp_trend = sm.tsa.filters.hpfilter(endog, lamb=129600)",
"Unobserved components and ARIMA model (UC-ARIMA)\nThe next method is an unobserved components model, where the trend is modeled as a random walk and the cycle is modeled with an ARIMA model - in particular, here we use an AR(4) model. The process for the time series can be written as:\n$$\n\\begin{align}\ny_t & = \\mu_t + \\eta_t \\\n\\mu_{t+1} & = \\mu_t + \\epsilon_{t+1} \\\n\\phi(L) \\eta_t & = \\nu_t\n\\end{align}\n$$\nwhere $\\phi(L)$ is the AR(4) lag polynomial and $\\epsilon_t$ and $\\nu_t$ are white noise.",
"mod_ucarima = sm.tsa.UnobservedComponents(endog, 'rwalk', autoregressive=4)\n# Here the powell method is used, since it achieves a\n# higher loglikelihood than the default L-BFGS method\nres_ucarima = mod_ucarima.fit(method='powell', disp=False)\nprint(res_ucarima.summary())",
"Unobserved components with stochastic cycle (UC)\nThe final method is also an unobserved components model, but where the cycle is modeled explicitly.\n$$\n\\begin{align}\ny_t & = \\mu_t + \\eta_t \\\n\\mu_{t+1} & = \\mu_t + \\epsilon_{t+1} \\\n\\eta_{t+1} & = \\eta_t \\cos \\lambda_\\eta + \\eta_t^* \\sin \\lambda_\\eta + \\tilde \\omega_t \\qquad & \\tilde \\omega_t \\sim N(0, \\sigma_{\\tilde \\omega}^2) \\\n\\eta_{t+1}^* & = -\\eta_t \\sin \\lambda_\\eta + \\eta_t^* \\cos \\lambda_\\eta + \\tilde \\omega_t^* & \\tilde \\omega_t^* \\sim N(0, \\sigma_{\\tilde \\omega}^2)\n\\end{align}\n$$",
"mod_uc = sm.tsa.UnobservedComponents(\n endog, 'rwalk',\n cycle=True, stochastic_cycle=True, damped_cycle=True,\n)\n# Here the powell method gets close to the optimum\nres_uc = mod_uc.fit(method='powell', disp=False)\n# but to get to the highest loglikelihood we do a\n# second round using the L-BFGS method.\nres_uc = mod_uc.fit(res_uc.params, disp=False)\nprint(res_uc.summary())",
"Graphical comparison\nThe output of each of these models is an estimate of the trend component $\\mu_t$ and an estimate of the cyclical component $\\eta_t$. Qualitatively the estimates of trend and cycle are very similar, although the trend component from the HP filter is somewhat more variable than those from the unobserved components models. This means that relatively more of the movement in the unemployment rate is attributed to changes in the underlying trend rather than to temporary cyclical movements.",
"fig, axes = plt.subplots(2, figsize=(13,5));\naxes[0].set(title='Level/trend component')\naxes[0].plot(endog.index, res_uc.level.smoothed, label='UC')\naxes[0].plot(endog.index, res_ucarima.level.smoothed, label='UC-ARIMA(2,0)')\naxes[0].plot(hp_trend, label='HP Filter')\naxes[0].legend(loc='upper left')\naxes[0].grid()\n\naxes[1].set(title='Cycle component')\naxes[1].plot(endog.index, res_uc.cycle.smoothed, label='UC')\naxes[1].plot(endog.index, res_ucarima.autoregressive.smoothed, label='UC-ARIMA(2,0)')\naxes[1].plot(hp_cycle, label='HP Filter')\naxes[1].legend(loc='upper left')\naxes[1].grid()\n\nfig.tight_layout();"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
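The notebook above calls `sm.tsa.filters.hpfilter`; the filter itself has a simple closed form, with the trend solving $(I + \lambda D^\top D)\mu = y$ where $D$ is the second-difference matrix. A dense NumPy sketch of that formula (fine for short series; not the statsmodels implementation, which uses a sparse solver) might look like:

```python
import numpy as np

def hp_filter(y, lamb=1600.0):
    """Minimal Hodrick-Prescott filter: the trend solves
    (I + lamb * D'D) mu = y, with D the second-difference matrix."""
    n = len(y)
    eye = np.eye(n)
    D = np.diff(eye, n=2, axis=0)            # shape (n-2, n)
    trend = np.linalg.solve(eye + lamb * (D.T @ D), y)
    return y - trend, trend                  # (cycle, trend), like hpfilter

# toy series: slow ramp plus an oscillation
y = np.linspace(0.0, 2.0, 50) + np.sin(np.linspace(0.0, 6.0, 50))
cycle, trend = hp_filter(y, lamb=1600.0)
```

By construction cycle + trend reproduces the data exactly, and with lamb = 0 the trend is the series itself, which makes the decomposition easy to sanity-check.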
MadcowD/cs189
|
hw7/hw.ipynb
|
mit
|
[
"%matplotlib inline \nimport scipy.io\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pylab as plt\nfrom IPython import display\nimport random \nimport matplotlib\n\n# LOAD TRAINING DATA\ntrain_mat = scipy.io.loadmat('data/mnist_data/images.mat')\ntrain_images_raw = train_mat['images'].T\ntrain_images_corrected = []\nfor i in range(len(train_images_raw)):\n    train_images_corrected.append(train_images_raw[i].T.ravel())\ntrain_images_corrected = np.array(train_images_corrected)\n# Now let's try k-means clustering:\n# 1. reclassify\n# 2. reset means\n\ndef kmeans(data, k):\n    mu = [np.random.rand(data[0].shape[0])*255/2.0 for i in range(k)]\n    changed = True\n    truth = [True for i in range(k)]\n    while changed:\n        # 1. reclassify: assign each image to its closest mean\n        k_classes = [[] for i in range(k)]\n        for image in data:\n            index, closest, dif = 0, mu[0], np.inf\n            for i, mu_i in enumerate(mu):\n                dist = np.linalg.norm(mu_i - image)\n                if dist < dif:\n                    index, closest, dif = i, mu_i, dist\n            k_classes[index].append(image)\n\n        # 2. reset means; stop once no mean has moved\n        for i in range(len(mu)):\n            if not k_classes[i]:  # empty cluster: keep the old mean\n                truth[i] = False\n                continue\n            temp = np.mean(k_classes[i], axis=0)\n            truth[i] = not np.allclose(temp, mu[i])\n            mu[i] = temp\n        changed = any(truth)\n\n    return mu, k_classes\n\nk = 10\nmu, classes = kmeans(train_images_corrected, k)\nfor i in range(k):\n    plt.imshow(mu[i].reshape(28,28))\n    print(\"Class\", i, \"Count:\", len(classes[i]))\n    fname = \"class\" + str(i)\n    plt.savefig(fname + \".png\")\n    plt.show()",
"Joke Recommender System\nFirst we'll need to load the data.",
"joke_train_mat = scipy.io.loadmat('./data/joke_data/joke_train.mat')\nj_up_train = joke_train_mat['train']\n\n#load validation set.\nfrom numpy import genfromtxt\nvalid_data = genfromtxt('./data/joke_data/validation.txt', delimiter=',')\nvalid_data[valid_data == 0] = -1\nvalid_data = valid_data - np.array([1,1,0])\n\n\ndef loss(user, joke, actual, predictor):\n pred = predictor(user,joke)\n return np.sign(pred) != np.sign(actual)*1\n\ndef validate(predictor):\n net_error = 0\n for example in valid_data:\n net_error += loss(example[0], example[1], example[2], predictor)\n return net_error/len(valid_data)\n\n# simple system. Get the average rating of a joke.\n#copy the up_train\nj_avg_train = np.array(j_up_train[:,:])\nj_avg_train_nan_index = np.isnan(j_avg_train)\nj_avg_train[j_avg_train_nan_index] = 0\nj_score = np.mean(j_avg_train, axis=0)\n#j_score is the joke score.\n\ndef lame_score(user, joke):\n return j_score[joke]\n\nvalidate(lame_score)",
"k-Nearest Neighbors\nWe take a k-nearest-neighbors approach and average the ratings of those nearest neighbors: supervised power in the aggregate that is the crowd.",
"# First lets calculate the distance matrix on all of these dudes. :)\nimport scipy.spatial\nj_dist_matrix = scipy.spatial.distance.pdist(j_avg_train, 'euclidean')\n\ncache = {}\ncondensed_idx = lambda i,j,n: i*n + j - i*(i+1)/2 - i - 1\ndef getid(ii,jj):\n i = min(ii,jj)\n j = max(ii,jj)\n return condensed_idx(i,j,len(j_avg_train))\n\ndef get_k_classifier(k):\n print(k)\n def knn_score(user, joke):\n # get k closest neighboors\n if user not in cache:\n cache[user] = np.argsort([j_dist_matrix[getid(user,j)]*(j != user) for j in range(len(j_avg_train))])\n \n neighbors = cache[user][:k]\n return np.mean(np.array(list(map(lambda i: j_avg_train[i][joke], neighbors))))\n\n return knn_score\n \n\nks = [10, 100, 1000]\nerror = []\nfor k in ks:\n error.append(validate(get_k_classifier(k)))\nplt.plot(ks, error)",
"Latent Factor Model\nWe use the latent factor model on our data",
"# First let's try SVD on the 0 replaced matrix\nU, s, V = np.linalg.svd(j_avg_train, full_matrices=False)\n\n\ndef j_avg_mse(U, S, V):\n pred = U.dot(S.dot(V))\n net = 0\n n= 0\n for i in range(j_up_train.shape[0]):\n for j in range(j_up_train.shape[1]):\n if not np.isnan(j_up_train[i,j]):\n net += (pred[i,j] - j_up_train[i,j])**2\n n += 1\n \n return net/n\n \n \n\nds = [2,3,4,5,6,7,8,9,10,20,100]\nerror = []\nfor d in ds:\n print(d)\n S = np.diag(np.append(s[:d], np.zeros(len(s)-d)))\n error.append(j_avg_mse(U,S,V))\nplt.plot(ds,error)\n\n# Let's see our validation error.\ndef get_latent(d):\n S = np.diag(np.append(s[:d], np.zeros(len(s)-d)))\n Vprime = S.dot(V)\n def latent_classifier(user ,joke):\n return U[user].dot(Vprime.T[joke])\n return latent_classifier\n\nds = [1,2,3,4,5,6,7,8,9,10,20,100]\nerror = []\nfor d in ds:\n error.append(validate(get_latent(d)))\nplt.plot(ds,error)\nerror",
"Latent Minimizers\nWe now use the alternating minimization strategy.",
"S = np.isfinite(j_up_train)\n\nclass latent_factor_model:\n    def __init__(self, lab):\n        self.lab = lab\n        self.u = np.random.random(U.shape)\n        self.v = np.random.random(V.shape)\n\n    def minimize_u(self):\n        v_outer = []\n        for l in range(len(self.v)):\n            v_outer.append(np.outer(self.v[l], self.v[l]))\n\n        for i in range(len(self.u)):\n            inv = sum([v_outer[l] for l in range(len(self.v)) if S[i,l]])\n            inv += self.lab*np.identity(len(inv))\n            other = sum([self.v[l]*j_up_train[i,l] for l in range(len(self.v)) if S[i,l]])\n            self.u[i] = np.linalg.inv(inv).dot(other)\n\n    def minimize_v(self):\n        u_outer = []\n        for k in range(len(self.u)):\n            u_outer.append(np.outer(self.u[k], self.u[k]))\n\n        for j in range(len(self.v)):\n            inv = sum([u_outer[k] for k in range(len(self.u)) if S[k,j]])\n            inv += self.lab*np.identity(len(inv))\n            other = sum([self.u[k]*j_up_train[k,j] for k in range(len(self.u)) if S[k,j]])\n            self.v[j] = np.linalg.inv(inv).dot(other)\n\n    def trainer(self):\n        while True:\n            self.minimize_u()\n            print(\"Minimized u\")\n            self.minimize_v()\n            print(\"Minimized v\")\n            yield self.lfm\n\n    def lfm(self, user, joke):\n        return self.u[user].dot(self.v[joke])\n\n\nmodel = latent_factor_model(10)\ntrainer = model.trainer()\nerrors = []\nfor i in range(10):\n    print(\"Epoch\", i)\n    errors.append(validate(next(trainer)))\n    print(errors[-1])\n\nfor i in range(10):\n    print(\"Epoch\", i)\n    errors.append(validate(next(trainer)))\n    print(errors[-1])\n\nfor i in range(10):\n    print(\"Epoch\", i)\n    errors.append(validate(next(trainer)))\n    print(errors[-1])\n\nmodel.minimize_v()\n\n# Okay, so now let's do our validation\n1 - validate(model.lfm)  # accuracy\n\n\n# load test set\nfrom numpy import genfromtxt\nvalid_data = genfromtxt('./data/joke_data/query.txt', delimiter=',')\nvalid_data = valid_data - np.array([0,1,1])\n\ndef run_test(predictor):\n    predictions = []\n    for row in valid_data:\n        score = predictor(row[1], row[2])\n        predictions.append((score > 0)*1)\n    return np.array(predictions)\n\npreds = run_test(model.lfm)\n\nnp.savetxt('kaggle.csv', preds.astype(int), delimiter=',') \n\nplt.plot(errors)\n\nnp.savetxt('lfm_u.txt', model.u) \n\nnp.savetxt('lfm_v.txt', model.v)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
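The recommender notebook above indexes into a condensed `pdist` vector with an inline `condensed_idx` lambda. That mapping can be written out and sanity-checked against the row-major upper-triangle ordering that `scipy.spatial.distance.pdist` produces (pure-Python sketch, no SciPy needed):

```python
def condensed_index(i, j, n):
    """Position of the pair (i, j), i != j, in the condensed distance
    vector of n points (row-major upper triangle, the layout used by
    scipy.spatial.distance.pdist)."""
    if i > j:                 # the distance matrix is symmetric
        i, j = j, i
    return i * n + j - i * (i + 1) // 2 - i - 1

# brute-force the expected ordering: (0,1), (0,2), ..., (n-2, n-1)
n = 5
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
positions = [condensed_index(i, j, n) for (i, j) in pairs]
```

Swapping i and j up front, as the notebook's `getid` helper does with min/max, makes the lookup symmetric, so either argument order returns the same entry.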
mehmetcanbudak/JupyterWorkflow
|
JupyterWorkflow.ipynb
|
mit
|
[
"JupyterWorkflow\nFrom exploratory analysis to reproducible research\nMehmetcan Budak",
"URL = \"https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD\"\n\nfrom urllib.request import urlretrieve\nurlretrieve(URL, \"Fremont.csv\")\n\n!head Fremont.csv\n\nimport pandas as pd\ndata = pd.read_csv(\"Fremont.csv\")\ndata.head()\n\ndata = pd.read_csv(\"Fremont.csv\", index_col=\"Date\", parse_dates=True)\ndata.head()\n\n%matplotlib inline\ndata.index = pd.to_datetime(data.index)\ndata.plot()\n\ndata.resample('W').sum().plot()\n\nimport matplotlib.pyplot as plt\nplt.style.use(\"seaborn\")\ndata.resample(\"W\").sum().plot()\n\ndata.columns = [\"West\", \"East\"]\ndata.resample(\"W\").sum().plot()",
"Look for an annual trend: growth or decline in ridership\nLet's try a rolling window: a 365-day rolling sum.",
"data.resample(\"D\").sum().rolling(365).sum().plot()",
"The curves don't go all the way down to zero, so let's set the y-axis lower limit to zero and leave the upper limit at None (the current maximum).",
"ax = data.resample(\"D\").sum().rolling(365).sum().plot()\nax.set_ylim(0, None)",
"There seems to be an offset between the two sidewalks. Let's add the total and plot the trends together.",
"data[\"Total\"] = data[\"West\"] + data[\"East\"]\n\nax = data.resample(\"D\").sum().rolling(365).sum().plot()\nax.set_ylim(0, None)",
"Interestingly, the east and west side trends move in opposite directions, so the total rides across the bridge hover around 1 million per year and have been fairly constant over the last couple of years, to within a couple of percent.\nLet's group by time of day, take the mean, and plot it.",
"data.groupby(data.index.time).mean().plot()",
"Let's look at the whole data set in this way, not just this average. We will build a pivot table.",
"pivoted = data.pivot_table(\"Total\", index=data.index.time, columns=data.index.date)\npivoted.iloc[:5, :5]\n",
"We now have a 2D data set: each column is a day and each row is an hour of that day.\nLet's turn the legend off and plot it.",
"pivoted.plot(legend=False)",
"Let's make the lines more transparent to see the structure better.",
"pivoted.plot(legend=False,alpha=0.01)",
"Let's do a quick \"Restart & Run All\" to check that the analysis reproduces the same way.\nLet's upload this to GitHub.\nCreate a new repository, initialize it with a README, add a Python .gitignore, and add the MIT license. Create the repository.\nCopy the HTTPS URL of the repository.\nOpen a terminal in the same directory we are writing this in.\ngit clone {whatever-the-copied-thing-is}\ncd JupyterWorkflow to see that the README and license are there.\nmv JupyterWorkflow.ipynb JupyterWorkflow\ncd JupyterWorkflow\ngit status to see that the newly moved notebook isn't tracked yet.\ngit add JupyterWorkflow.ipynb\ngit commit -m \"Add initial analysis notebook\"\ngit push origin master\nEnter your password.\ngit status\nThe notebook should be on GitHub now.\nWe have the data file Fremont.csv in our working directory, and we are okay for now because this data is small, but with a larger data set we want to make sure we don't accidentally commit the data into the repository.\nOpen up the .gitignore file that we created while setting up the GitHub repository.\nAdd these lines at the bottom so git ignores them:\ndata\nFremont.csv\ngit add .gitignore\ngit commit -m \"Add data to git ignore\"\ngit status\ngit push origin master"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
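The `pivot_table` call in the notebook above reshapes a long time series into an hours-by-dates matrix. A self-contained toy version of that reshaping, on synthetic hourly counts (the column name and date range are illustrative), is sketched below:

```python
import numpy as np
import pandas as pd

# Two days of synthetic hourly counts, mirroring the notebook's
# pivot_table("Total", index=time, columns=date) reshaping.
idx = pd.date_range("2020-01-01", periods=48, freq="h")
df = pd.DataFrame({"Total": np.arange(48)}, index=idx)

# rows = hour of day, columns = calendar date
pivoted = df.pivot_table("Total", index=df.index.time, columns=df.index.date)
```

Each (hour, date) cell holds exactly one observation here, so the default mean aggregation just passes the values through; on real data with duplicates the mean would smooth them.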
ssunkara1/bqplot
|
examples/Marks/Pyplot/HeatMap.ipynb
|
apache-2.0
|
[
"Heatmap\nThe HeatMap mark represents a 2d matrix of values as a color image. It can be used to visualize a 2d function, or a grayscale image for instance.\nHeatMap is very similar to the GridHeatMap, but should be preferred for a greater number of points (starting at around 100x100), to avoid overloading the browser. GridHeatMap offers more control (interactions, selections), and is better suited for a smaller number of points.",
"import numpy as np\nfrom ipywidgets import Layout\nimport bqplot.pyplot as plt\nfrom bqplot import *",
"Data Input\n\nx is a 1d array, corresponding to the abscissas of the points (size N)\ny is a 1d array, corresponding to the ordinates of the points (size M)\ncolor is a 2d array, $\\text{color}_{ij}$ is the intensity of the point $(x_i, y_j)$ (size (N, M))\n\nScales must be defined for each attribute:\n- a LinearScale, LogScale or OrdinalScale for x and y\n- a ColorScale for color",
"x = np.linspace(-5, 5, 200)\ny = np.linspace(-5, 5, 200)\nX, Y = np.meshgrid(x, y)\ncolor = np.cos(X**2 + Y**2)",
"Plotting a 2-dimensional function\nThis is a visualization of the function $f(x, y) = \\text{cos}(x^2+y^2)$",
"fig = plt.figure(title='Cosine',\n layout=Layout(width='650px', height='650px'),\n min_aspect_ratio=1, max_aspect_ratio=1, padding_y=0)\nheatmap = plt.heatmap(color, x=x, y=y)\nfig",
"Displaying an image\nThe HeatMap can be used as is to display a 2d grayscale image, by feeding the matrix of pixel intensities to the color attribute",
"from scipy.misc import ascent\nZ = ascent()\nZ = Z[::-1, :]\naspect_ratio = Z.shape[1] / Z.shape[0]\n\nimg = plt.figure(title='Ascent', layout=Layout(width='650px', height='650px'),\n min_aspect_ratio=aspect_ratio, \n max_aspect_ratio=aspect_ratio, padding_y=0)\nplt.scales(scales={'color': ColorScale(scheme='Greys', reverse=True)})\naxes_options = {'x': {'visible': False}, 'y': {'visible': False}, 'color': {'visible': False}}\nascent = plt.heatmap(Z, axes_options=axes_options)\nimg"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
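One subtlety in the heatmap notebook above: with NumPy's default `meshgrid` indexing (`'xy'`), the resulting `color` matrix has shape `(len(y), len(x))`, i.e. rows follow `y` and columns follow `x`. A tiny sketch (grid sizes chosen small just for illustration) makes the convention explicit:

```python
import numpy as np

# With meshgrid's default 'xy' indexing, the grid (and hence `color`)
# has shape (len(y), len(x)): rows follow y, columns follow x,
# so color[j, i] == f(x[i], y[j]).
x = np.linspace(-5.0, 5.0, 4)   # 4 abscissas
y = np.linspace(-5.0, 5.0, 3)   # 3 ordinates
X, Y = np.meshgrid(x, y)
color = np.cos(X**2 + Y**2)
```

Keeping this row/column convention in mind avoids transposed images when the grid is not square.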
metpy/MetPy
|
v0.8/_downloads/Hodograph_Inset.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Hodograph Inset\nLayout a Skew-T plot with a hodograph inset into the plot.",
"import matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1.inset_locator import inset_axes\nimport numpy as np\nimport pandas as pd\n\nimport metpy.calc as mpcalc\nfrom metpy.cbook import get_test_data\nfrom metpy.plots import add_metpy_logo, Hodograph, SkewT\nfrom metpy.units import units",
"Upper air data can be obtained using the siphon package, but for this example we will use\nsome of MetPy's sample data.",
"col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']\n\ndf = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False),\n skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)\n\ndf['u_wind'], df['v_wind'] = mpcalc.get_wind_components(df['speed'],\n np.deg2rad(df['direction']))\n\n# Drop any rows with all NaN values for T, Td, winds\ndf = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed',\n 'u_wind', 'v_wind'), how='all').reset_index(drop=True)",
"We will pull the data out of the example dataset into individual variables and\nassign units.",
"p = df['pressure'].values * units.hPa\nT = df['temperature'].values * units.degC\nTd = df['dewpoint'].values * units.degC\nwind_speed = df['speed'].values * units.knots\nwind_dir = df['direction'].values * units.degrees\nu, v = mpcalc.get_wind_components(wind_speed, wind_dir)\n\n# Create a new figure. The dimensions here give a good aspect ratio\nfig = plt.figure(figsize=(9, 9))\nadd_metpy_logo(fig, 115, 100)\n\n# Grid for plots\nskew = SkewT(fig, rotation=45)\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\nskew.plot_barbs(p, u, v)\nskew.ax.set_ylim(1000, 100)\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\n\n# Good bounds for aspect ratio\nskew.ax.set_xlim(-50, 60)\n\n# Create a hodograph\nax_hod = inset_axes(skew.ax, '40%', '40%', loc=1)\nh = Hodograph(ax_hod, component_range=80.)\nh.add_grid(increment=20)\nh.plot_colormapped(u, v, np.hypot(u, v))\n\n# Show the plot\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
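The MetPy example above relies on `mpcalc.get_wind_components`. The underlying meteorological convention (wind direction is the bearing the wind blows from, measured clockwise from north) reduces to two lines of trigonometry; here is a NumPy sketch of that convention, not the MetPy implementation, and without unit handling:

```python
import numpy as np

def wind_components(speed, direction_deg):
    """Meteorological u/v components: direction is the bearing the wind
    blows FROM, measured clockwise from north."""
    direction = np.deg2rad(direction_deg)
    u = -speed * np.sin(direction)  # eastward (zonal) component
    v = -speed * np.cos(direction)  # northward (meridional) component
    return u, v

# a 10 kt wind from due north blows toward the south: u = 0, v = -10
u, v = wind_components(np.array([10.0]), np.array([0.0]))
```

The minus signs encode the from-direction convention: a westerly wind (direction 270 degrees) yields a positive, eastward u.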
jhillairet/documents
|
notebooks/TP Master Fusion/LH-Hands-on-mode-converter.ipynb
|
mit
|
[
"Hands-on LH1: the $\\mathrm{TE}_{10}$-$\\mathrm{TE}_{30}$ Mode Converter\nIntroduction\nThe Tore Supra Lower Hybrid launchers are equipped with $\\mathrm{TE}_{10}$-$\\mathrm{TE}_{30}$ mode converters, a waveguide structure which converts the RF power from one propagation mode to another, in order to split the power three ways in the poloidal direction. The electric field topology in this device is illustrated in the following figure.\n<img src=\"./LH1_Mode-Converter_data/Efield.png\">\nDuring this hands-on, students are introduced to RF measurements and analysis. Before measuring the RF performance of a mode converter, the students calibrate the RF measurement apparatus, a (Vector) Network Analyser. This device measures the scattering parameters (or S-parameters) between two ports, generally referred to as ports 1 and 2. The scattering parameters are defined in terms of incident and reflected waves at the ports, and are widely used at microwave frequencies, where it becomes difficult to measure voltages and currents directly. \nBefore starting, let's import the necessary Python libraries:",
"# This line configures matplotlib to show figures embedded in the notebook, \n# and also import the numpy library \n%pylab\n%matplotlib inline",
"Scattering parameters measurement\nThe following figure illustrates the measurement setup and the adopted port indexing convention.\n<img src=\"./LH1_Mode-Converter_data/setup2.png\">\nBelow we import the measurement data generated by the network analyser. These data consist of an ASCII (tab-separated) file whose columns are, in order:\n\nthe frequency in Hertz,\nthe amplitude of the $S_{11}$ parameter in decibels (dB),\nthe phase of the $S_{11}$ parameter in degrees,\nthe amplitude of the $S_{21}$ parameter in decibels (dB),\nthe phase of the $S_{21}$ parameter in degrees,\n\nand similarly for $S_{12}$ and $S_{22}$.\nBelow we import the first measurement, performed between the port indexed \"0\" of the mode converter (the input, corresponding to port 1 of the network analyser) and the port indexed \"1\" of the mode converter (corresponding to port 2 of the network analyser).",
"%cd C:\\Users\\JH218595\\Documents\\Notebooks\\TP Master Fusion\n\nCDM1 = np.loadtxt('./LH1_Mode-Converter_data/CDM01', skiprows=4)\nf_1 = CDM1[:,0]",
"The measurement contains $N$ frequency points, where $N$ is:",
"len(f_1)",
"Let's have a look at these data, for example by plotting the amplitude of the $S_{10}$ parameter, that is, the power transmitted from port 0 to port 1 (which corresponds to $S_{21}$ in the network analyser file).",
"S10_dB = CDM1[:,3]\nplot(f_1/1e9,S10_dB, lw=2)\nxlabel('f [GHz]')\nylabel('Amplitude [dB]')\ngrid('on')\ntitle('$S_{10}$ amplitude vs frequency')",
"OK. Let's do the same for the second and third measurements performed, that is for the power transferred from port 0 to ports 2 and 3.",
"CDM2 = loadtxt('LH1_Mode-Converter_data/CDM02', skiprows=4)\nCDM3 = loadtxt('LH1_Mode-Converter_data/CDM03', skiprows=4)\n\nf_2 = CDM2[:,0]\nf_3 = CDM3[:,0]\n\nS20_dB = CDM2[:,3]\nS30_dB = CDM3[:,3]\nS00_dB = CDM1[:,1]\n\nplot(f_1/1e9, S10_dB, f_2/1e9, S20_dB, f_3/1e9, S30_dB, lw=2)\nxlabel('f [GHz]')\nylabel('Amplitude [dB]')\ngrid('on')\ntitle('$S_{i0}$, $i=1,2,3$: amplitude vs frequency')\nlegend(('$S_{10}$','$S_{20}$', '$S_{30}$'),loc='best')\n\nidx = np.argmin(np.abs(f_1/1e9 - 3.7))\nprint(f'Measured values at 3.7 GHz: {S10_dB[idx]}, {S20_dB[idx]} and {S30_dB[idx]} dB')",
"Nice. Now, let's stop and think. The purpose of the mode converter is to transfer the power from the fundamental mode of rectangular waveguides, namely the $\\mathrm{TE}_{10}$, into a higher-order mode, the $\\mathrm{TE}_{30}$. Once this mode conversion is achieved, thin metallic septa located in the zero E-field regions split the power into three independent waveguides.\nDividing the power by 3 is equivalent, in decibels, to:",
"10*log10(1.0/3.0)",
"Thus, ideally, the three transmission scattering parameters should be equal to -4.77 dB at the operational frequency, 3.7 GHz in our case. Clearly, the previous figure shows that this is not the case. The power splitting is unbalanced, and more power is directed to port 2 than to ports 1 and 3 at 3.7 GHz. The conclusion of this first series of measurements: this mode converter is not working properly. The big question is: \"why?\"\nBefore continuing, it is useful to define a function that converts a (dB, degree) pair into a complex number in natural units:",
"def dBdegree_2_natural(ampl_dB, phase_deg):\n amp = 10**(ampl_dB/20)\n phase_rad = np.pi/180*phase_deg\n return amp*np.exp(1j*phase_rad)",
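As a quick sanity check of this convention (the helper is repeated here so the snippet is self-contained): a -6.02 dB reading corresponds to a voltage-wave ratio of about one half, i.e. a quarter of the power, and the phase simply rotates the complex number.

```python
import numpy as np

def dBdegree_2_natural(ampl_dB, phase_deg):
    # 20*log10 convention: the dB values describe voltage-wave amplitudes
    amp = 10**(ampl_dB / 20)
    phase_rad = np.pi / 180 * phase_deg
    return amp * np.exp(1j * phase_rad)

s = dBdegree_2_natural(-6.0206, 90.0)
print(round(abs(s), 3))     # voltage ratio, ~0.5
print(round(abs(s)**2, 3))  # power ratio, ~0.25
```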
"Thus, we can convert the (dB,degree) data into natural (real,imaginary) numbers",
"S00 = dBdegree_2_natural(CDM1[:,1],CDM1[:,2])\nS10 = dBdegree_2_natural(CDM1[:,3],CDM1[:,4])\nS20 = dBdegree_2_natural(CDM2[:,3],CDM2[:,4])\nS30 = dBdegree_2_natural(CDM3[:,3],CDM3[:,4])",
"Let's check the power conservation. If the calibration has been performed correctly and if the conduction losses are negligible, one has: $$\\sum_{i=0}^{3} \\left| S_{i0} \\right|^2 = 1$$\nLet's try:",
"# Check the power conservation in dB : reflected+incident = all the power\nplot(f_1/1e9, 10**(S00_dB/10)+10**(S10_dB/10)+10**(S20_dB/10)+10**(S30_dB/10))\nylim([0, 1])\n\nplot(f_1/1e9, sum([abs(S00)**2,abs(S10)**2,abs(S20)**2,abs(S30)**2],axis=0))\nxlabel('f [GHz]')\ngrid('on')\nylim([0, 1])",
"We are close to 1. The difference comes from conduction losses, but also from the intrinsic measurement error. Let's see how much power is lost in the device, as a percentage:",
"plot(f_1/1e9, 100*(1-sum([abs(S00)**2,abs(S10)**2,abs(S20)**2,abs(S30)**2],axis=0)))\nxlabel('f [GHz]')\nylabel('Fraction of RF power lost in the device [%]')\ngrid('on')",
"Electric field measurements\nIn the previous section we found that the mode converter was not working properly as a 3-way splitter; indeed, the power splitting is unbalanced. It's now time to understand why, since the person who performed the RF modelling of the structure is very good and is sure that the design is correct. In order to dig into this problem, we propose to probe the electric field after the mode converter but before the thin-septum splitter: the objective is to \"see\" the electric field topology after the mode conversion (is it as we expect it should be?). But, by the way, how should it be?\nIf the mode converter has performed well, the totality of the electromagnetic field should behave as a $\\mathrm{TE}_{30}$ mode.",
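For reference, the ideal $\mathrm{TE}_{30}$ transverse profile can be sketched directly (a minimal illustration using the $a = 192$ mm broad-wall width defined further down; not part of the measurement chain itself):

```python
import numpy as np

a = 192e-3  # broad-wall width of the oversized waveguide [m]
x = np.linspace(0, a, 201)

# Pure TE30 profile: three half-sine lobes across the broad wall
E_TE30 = np.sin(3 * np.pi * x / a)

# The field vanishes on both walls and at the two interior nulls
# (x = a/3 and x = 2a/3), which is where the thin septa of the splitter sit.
for xi in (0.0, a / 3, 2 * a / 3, a):
    assert abs(np.sin(3 * np.pi * xi / a)) < 1e-9
print("interior nulls at x =", round(a / 3, 4), "and", round(2 * a / 3, 4), "m")
```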
"# Electric field measurement\n# Measurement Hands-On Group #1\n# Feb 2013\n# columns correspond to hole rows 1,2,3\nampl_dB = -1.0*np.array([ # use a numpy array in order to avoid the caveat to multiply a list by a float\n [31.3, 32.0, 31.2],\n [30.6, 31.2, 30.2],\n [33.2, 33.3, 32.5],\n [42.8, 42.0, 41.7],\n [40.1, 41.3, 40.6],\n [32.7, 33.5, 32.7],\n [31.2, 31.4, 30.9],\n [32.5, 33.0, 32.8],\n [39.1, 39.9, 40.9],\n [46.6, 44.0, 42.0],\n [34.8, 33.2, 33.2],\n [32.4, 30.6, 30.6],\n [33.3, 30.9, 32.0]])\n\nphase_deg = np.array(\n [[-111.7, 64.9, -119.2],\n [-111.9, 67.2, -119.9],\n [-110.7, 67.1, -119.8],\n [-99.1, 75.7, -119.7],\n [55.90, -127.4, 60.1],\n [60.4, -123.3, 59.2],\n [64.3, -117.4, 62.8],\n [61.3, -119.7, 63.3],\n [50.9, -118.3, 68.1],\n [-95.8, 70.5, -120.9],\n [-107.9, 74.1, -116.0],\n [-112.0, 71.2, -112.6],\n [-116.6, 72.4, -116.0]])",
"Let's check if we have the same number of points for amplitude and phase:",
"shape(ampl_dB) == shape(phase_deg)",
"What does it look like?",
"f = 3.7e9 # measurement frequency\na = 192e-3 # broad-wall width of the oversized waveguide\nb = 40e-3 # narrow-wall height\n\n## We only take N measurements, at the N measurement abscissae\n# locations of the N measurement abscissae:\nx_mes = 24e-3 + arange(0, 13*12e-3, step=12e-3) # 13 probe positions; model @3.7GHz after S. Berio\nx = linspace(0, a, 100) # theoretical values\n\n## Measurements taken from S. Berio's thesis, p. 30\n# locations of the 3 rows of \"measurements\"\nz_mes = [0, 28*1e-3, 100*1e-3]\n# updated row positions (overrides the line above)\nz_mes = [0, 53*1e-3, 106*1e-3]\n\n\nfig, (ax1,ax2) = plt.subplots(2,1,sharex=True)\nax1.plot(x_mes*1e3, ampl_dB, '-o')\nax1.set_ylabel('amplitude [dB]')\nax1.grid(True)\nax1.set_title('Amplitude and phase measurements (3 rows)')\n\nax2.plot(x_mes*1e3, phase_deg, '--.')\nax2.set_xlabel('Measurement location x [mm]')\nax2.set_ylabel('(relative) phase [deg]')\nax2.grid(True)\nax2.set_ylim(-180, 180)\n\nfig.savefig('mode-converter_measurements_dB.png', dpi=150)",
"In natural values",
"measures = dBdegree_2_natural(ampl_dB, phase_deg)\n\nfig, (ax1,ax2) = plt.subplots(2,1,sharex=True)\nax1.plot(x_mes*1e3, real(measures),'-x')\nax1.set_ylabel('amplitude [a.u.]')\nax1.grid(True)\nax1.set_title('Amplitude and phase measurements (3 rows)')\n\nax2.plot(x_mes*1e3, phase_deg, '--.')\nax2.set_xlabel('Measurement location x [mm] #')\nax2.set_ylabel('(relative) phase [deg]')\nax2.grid(True)\nax2.set_ylim(-180, 180)\n\nfig.savefig('mode-converter_measurements.png', dpi=150)\n",
"From the previous amplitude figures, one can remark that the measured amplitude is not ideal: the maxima and the minima do not seem to have exactly the same values. Thus it seems that the mode after the mode converter is not a pure $\\mathrm{TE}_{30}$, but probably a mixture of various modes. The question is: what is that mixture of modes? Our objective is to deduce it from the E-field probe data.\nElectromagnetic model\nLet's first define some useful functions.\nThe following function calculates the electric field at a point $(x,z)$ of the waveguide of shape $(a,b)$ for $N$ modes:\n$$\nE_y (x,z) = \\sum_{n=1}^N a_n \\sin\\left(\\frac{n\\pi}{a}x \\right) e^{-j \\beta_n z}\n$$\nwhere\n$$\n\\beta_n = \\sqrt{k_0^2 - \\left(\\frac{n\\pi}{a}\\right)^2}\n$$",
"from scipy.constants import c\n\ndef Ey(a_n, x, z, wg_a=192e-3, f=3.7e9):\n    '''\n    Evaluates the electric field at the (x,z) location.\n    x and z should be scalars.\n    '''\n    k0 = 2*pi*f/c\n\n    sin_n = np.zeros_like(a_n, dtype='complex')\n    beta_n = np.zeros_like(a_n, dtype='complex')\n    exp_n = np.zeros_like(a_n, dtype='complex')\n\n    N_modes = len(a_n)\n\n    for n in np.arange(N_modes):\n        # Guided wavenumber.\n        # Use a negative imaginary part for the square root\n        # in order to ensure the convergence of the exponential.\n        if k0**2 - ((n+1)*pi/wg_a)**2 >= 0:\n            beta_n[n] = np.sqrt(k0**2 - ((n+1)*pi/wg_a)**2)\n        else:\n            beta_n[n] = -1j*np.sqrt(((n+1)*pi/wg_a)**2 - k0**2)\n\n        exp_n[n] = np.exp(-1j*beta_n[n]*z)\n\n        sin_n[n] = np.sin((n+1)*pi/wg_a*x)\n\n    # sum of the modes\n    Ey = np.sum(a_n*sin_n*exp_n)\n\n    return Ey\n\nu_n = np.array([0.5, 0, 1,])\nx_test = linspace(0, 192e-3, 201)\nz_test = zeros_like(x_test)\nE_test = zeros_like(x_test, dtype='complex')\nfor idx in range(len(x_test)):\n    E_test[idx] = Ey(u_n, x_test[idx], z_test[idx])\n\nplot(x_test, real(E_test), lw=2)\nxlabel('x [m]')\nylabel('|$E_y$| [a.u.]')\ngrid()",
"Least Square solving with Python scipy routines",
"print(x_mes)\n\nprint(z_mes)\n\nEmeas = dBdegree_2_natural(ampl_dB, phase_deg)",
"We prescribe additional information concerning the field at the edges: we know the field must be zero at x=0 and x=a.",
"x_mes = hstack((0, x_mes, a))\nEmeas = vstack((array([0,0,0]), Emeas, array([0,0,0]) ))\n\n# Let's reshape x_mes and z_mes vectors \n# in order to get position vectors with the same length\n# x -> [x1 ... x13 x1 ... x13 x1 ... x13]\n# z -> [z1 ... z1 z2 ... z2 z3 ... z3]\nXX = tile(x_mes, len(z_mes))\nZZ = repeat(z_mes, len(x_mes))\n\n# and the same for the measurements :\n# Emes -> [E(x1,z1) ... E(x13,z1) E(x1,z2) ... E(x13,z2) E(x1,z3) ... E(x13,z3)]\nEEmeas = reshape(Emeas, size(Emeas), order='F') # order='F' is important to get the correct ordering\n\ndef optim_fun(a, x, z):\n Emodel = zeros_like(x, dtype='complex')\n \n for idx in range(len(x)):\n Emodel[idx] = Ey(a, x[idx], z[idx])\n \n y = EEmeas - Emodel\n return y.real**2 + y.imag**2\n\nfrom scipy.optimize import leastsq\na0 = np.array([1,0,1,0,0])\nsol=leastsq(optim_fun, a0, args=(XX, ZZ))\na_sol = sol[0]\nprint(abs(a_sol)/norm(sol[0],1)*100)",
"We deduce that there is 76% of the TE30 mode, 13% of the TE10 mode, and 3% of the TE40 and TE50 modes.",
"x_test = linspace(0, a, 201)\n\ndef subplote(x_test, z, a, Emes, ax):\n E_test = zeros_like(x_test, dtype='complex')\n \n for idx in range(len(x_test)):\n E_test[idx] = Ey(a, x_test[idx], z)\n\n ax.plot(x_test*1e3, real(E_test), lw=2)\n ax.plot(x_mes*1e3, real(Emes), 'x', ms=16, lw=2)\n ax.set_ylabel('|$E_y$| [a.u.]')\n ax.grid(True)\n\nfig, axes = plt.subplots(3,1, sharex=True)\nsubplote(x_test, z_mes[0], a_sol, Emeas[:,0], ax=axes[0])\nsubplote(x_test, z_mes[1], a_sol, Emeas[:,1], ax=axes[1])\nsubplote(x_test, z_mes[2], a_sol, Emeas[:,2], ax=axes[2])\n\naxes[-1].set_xlabel('x [mm]')\naxes[1].legend(('Reconstruction', 'Measurement'), loc='best')\nfig.tight_layout()\n\nfig.savefig('moce_converter_reconstruction.png', dpi=150)",
"Least Square Equation Solving (Manually)\nWe can also directly solve the problem, by defining\n$$\n\\phi_{ij} = \\sin\\left(\\frac{j \\pi}{a} x_i \\right) e^{-j \\beta_j z_i}\n$$\nThen\n$$\n\\vec{a} = \\left( \\phi^T \\phi \\right)^{-1} \\phi^T \\vec{E}_{meas}\n$$",
"def phi(n, x, z, wg_a=192e-3, f=3.7e9):\n k0 = 2*pi*f/c\n \n if k0**2 - (n*pi/wg_a)**2 >= 0:\n beta_n = np.sqrt(k0**2 - (n*pi/wg_a)**2)\n else:\n beta_n = -1j*np.sqrt((n*pi/wg_a)**2 - k0**2)\n \n return np.sin(n*pi/wg_a*x) * np.exp(-1j*beta_n*z)\n\nMAT = np.array([phi(1, XX, ZZ), phi(2, XX, ZZ), phi(3, XX, ZZ), phi(4, XX, ZZ), phi(5, XX, ZZ)]).T\n\na_sol = np.linalg.inv(np.dot(MAT.T, MAT)).dot(MAT.T).dot(EEmeas) \nprint(abs(a_sol)/norm(a_sol,1)*100)",
"This evaluation gives 71% for the TE30 mode and 9% for the TE10.",
"x_test = linspace(0, a, 201)\n\ndef subplote(x_test, z, a, Emes):\n E_test = zeros_like(x_test, dtype='complex')\n \n for idx in range(len(x_test)):\n E_test[idx] = Ey(a, x_test[idx], z)\n\n plot(x_test, real(E_test), lw=2)\n plot(x_mes, real(Emes), 'x', ms=15, lw=2)\n xlabel('x [m]')\n ylabel('|$E_y$| [a.u.]')\n grid()\n \nsubplot(311)\nsubplote(x_test, z_mes[0], a_sol, Emeas[:,0])\ntitle('z1')\nsubplot(312)\nsubplote(x_test, z_mes[1], a_sol, Emeas[:,1])\ntitle('z2')\nsubplot(313)\nsubplote(x_test, z_mes[2], a_sol, Emeas[:,2])\ntitle('z3')",
"Least Square Equation Solving (with linalg.lstsq)\nSame as in the previous section, but using the NumPy routine that does the job for you.",
"from numpy.linalg import lstsq\n\nC, res, _, _ = lstsq(MAT, EEmeas, rcond=None)\nprint(abs(C)/norm(C,1)*100)",
"This gives 86% for the TE30 mode.",
"x_test = linspace(0, a, 201)\n\ndef subplote(x_test, z, a, Emes):\n E_test = zeros_like(x_test, dtype='complex')\n \n for idx in range(len(x_test)):\n E_test[idx] = Ey(a, x_test[idx], z)\n\n plot(x_test, real(E_test), lw=2)\n plot(x_mes, real(Emes), 'x', ms=15, lw=2)\n xlabel('x [m]')\n ylabel('|$E_y$| [a.u.]')\n grid()\n \nsubplot(311)\nsubplote(x_test, z_mes[0], C, Emeas[:,0])\ntitle('z1')\nsubplot(312)\nsubplote(x_test, z_mes[1], C, Emeas[:,1])\ntitle('z2')\nsubplot(313)\nsubplote(x_test, z_mes[2], C, Emeas[:,2])\ntitle('z3')",
"2D plot view",
"x = linspace(0, a, 201)\nz = linspace(-0.1e-2, 15e-2, 301)\n\nXX, ZZ = meshgrid(x,z)\nXX2 = reshape(XX, len(x)*len(z))\nZZ2 = reshape(ZZ, len(x)*len(z))\nE_2D = zeros_like(XX2, dtype='complex')\nfor idx in range(len(XX2)):\n E_2D[idx] = Ey(a_sol, XX2[idx], ZZ2[idx])\n\nE_2D = reshape(E_2D, (len(z), len(x))) \n\npcolormesh(ZZ, XX, real(E_2D))\nxlabel('z [m]', size=14)\nylabel('x [m]', size=14)\nfor idx in range(3):\n axvline(z_mes[idx], color='k', ls='--', lw=2)\nxlim(z[0], z[-1])\nylim(x[0], x[-1])",
"Finding the mode content using the Fast Fourier Transform\nAnother solution could be to deduce the mode content from a Fourier analysis of the electric field. \nWe recall that the total electric field measured on a row $\\ell \\in \\{1,2,3\\}$ is:\n$$\nE_{tot}(x,z_\\ell) = \\sum_{m=1}^M E_m \\sin\\left( \\frac{m\\pi}{a} x\\right) e^{-j\\beta_m z} \\;\\;, x \\in [0,a]\n$$\nwhere $E_m$ is complex valued, i.e. $E_m=A_m e^{j\\phi_m}$. Our goal is to deduce these coefficients $E_m$ from the measurements $E_{tot}$. \nThe Fourier transform of the field is defined by:\n$$\n\\tilde{E}_{tot} =\n\\frac{1}{2\\pi}\n\\iint E_{tot}(x,z) e^{j k_x x + j k_z z} \\,dx \\, dz\n$$\nwhich leads to:\n$$\n\\tilde{E}_{tot}(k_x,k_z) =\n\\frac{1}{2\\pi}\n\\sum_{m=1}^M E_m \n\\int_z \\int_{x=0}^a\n\\sin\\left( \\frac{m\\pi}{a} x\\right) e^{j k_x x + j (k_z-\\beta_m) z} \\,dx \\, dz\n$$\n$$\n\\tilde{E}_{tot}(k_x,k_z) =\n\\frac{1}{2\\pi}\n\\sum_{m=1}^M E_m \n\\int_z e^{j (k_z-\\beta_m) z} \\, dz\n\\cdot\n\\int_{x=0}^a\\sin\\left( \\frac{m\\pi}{a} x\\right) e^{j k_x x} \\,dx \n$$\nSo,\n$$\n\\tilde{E}_{tot}(k_x,k_z) =\n\\sum_{m=1}^M E_m \n\\delta(k_z-\\beta_m)\n\\frac{\\frac{m\\pi}{a}}{k_x^2 - \\left(\\frac{m\\pi}{a}\\right)^2 }\n\\left(\n(-1)^m e^{j k_x a} -1\n\\right)\n$$\nwhere we used the following results:\n$$\n\\int_{x=0}^a\\sin\\left( \\frac{m\\pi}{a} x\\right) e^{j k_x x} \\,dx\n=\n\\frac{\\frac{m\\pi}{a}}{k_x^2 - \\left(\\frac{m\\pi}{a}\\right)^2 }\n\\left(\n(-1)^m e^{j k_x a} -1\n\\right)\n$$\nand:\n$$\n\\int_z e^{j (k_z-\\beta_m) z} \\, dz\n=\n2\\pi \\delta(k_z-\\beta_m)\n$$\nNeed a proof? Look at the end of this notebook. \nThe latter formula can be simplified for even or odd $m$ values, but it can also be implemented directly:",
"def Etilde_analytic(kx, Em, a):\n Etilde = np.zeros_like(kx, dtype='complex')\n \n for m, Ei in enumerate(Em, start=1):\n Etilde += Ei \\\n * ((-1)**(m) * np.exp(1j*kx*a) - 1) \\\n * (m*pi/a)/ (kx**2 - (m*pi/a)**2)\n \n return Etilde\n\n# Ideal spectrum\nky_ana = np.linspace(0, 7*pi/a, num=101)\nEtilde_ideal = Etilde_analytic(ky_ana, array([0.2, 0, 0.5, 0, 0.1]), a)\n\nfig, ax = plt.subplots()\nax.plot(ky_ana, np.abs(Etilde_ideal))\n\n# shows where the modes 1,2,... are\nfor mode_index in range(8):\n axvline(mode_index*pi/a, color='#888888', linestyle='--')\nxticks(arange(0,8)*pi/a, ('0', '$\\pi/a$', '$2\\pi/a$', '$3\\pi/a$', '$4\\pi/a$', '$5\\pi/a$', '$6\\pi/a$', '$7\\pi/a$'), size=16)\nax.set_xlabel('$k_y$', size=16)",
"Clearly, there is something strange with the analytical spectrum! This is in fact normal, since the spatial extent of the initial field ($x\\in [0,a]$) is not large enough, which leads to a reduced resolution in the spectral dimension. So the solution would be to consider that the field is not terminated at the boundaries $x=0$ and $x=a$, and instead to treat it as infinite (indefinite Fourier integral):\n$$\n\\tilde{E}_{tot}(k_x,k_z) =\n\\frac{1}{2\\pi}\n\\sum_{m=1}^M E_m \n\\int_z e^{j (k_z-\\beta_m) z} \\, dz\n\\cdot\n\\int_{x=-\\infty}^{+\\infty} \\sin\\left( \\frac{m\\pi}{a} x\\right) e^{j k_x x} \\,dx \n$$\nNow we calculate the spectrum of the field from the spatial fields, using the Fast Fourier Transform:",
"# index of the measurement row (0, 1 or 2)\n# TODO: find a way to use all the rows?\nindex_row = 0\n\nx_mes2 = x_mes\nE_mes2 = Emeas[:,index_row]\n\n# interpolating the measurements in order to have smoother initial values\nfrom scipy.interpolate import InterpolatedUnivariateSpline\nius_re = InterpolatedUnivariateSpline(x_mes2, real(E_mes2))\nius_im = InterpolatedUnivariateSpline(x_mes2, imag(E_mes2))\n\nx_mes3 = linspace(x_mes2[0], x_mes2[-1], 101)\nE_mes3 = ius_re(x_mes3) + 1j*ius_im(x_mes3)\n\n# Padding the measurements by replicating the data,\n# in order to have a better Fourier-domain precision\n# (the larger the spatial extent, the better the precision).\n# The trick is to replicate correctly, taking into account the symmetry of the field\nx_mes4 = np.pad(x_mes3, (2**13,), 'reflect', reflect_type='odd')\nE_mes4 = np.pad(E_mes3, (2**13,), 'reflect', reflect_type='odd')\n\nfig, ax = plt.subplots(2,1)\nax[0].plot(x_mes4, real(E_mes4), lw=2) # replicated interpolated data\nax[0].plot(x_mes3, real(E_mes3), lw=2) # interpolated data\nax[0].plot(x_mes, real(Emeas[:,index_row]), 'x', ms=8) # initial data\nax[0].set_xlim(-3*a, 4*a)\nax[0].axvline(0, color='#999999')\nax[0].axvline(a, color='#999999')\nax[0].set_ylabel('real E')\n\nax[1].plot(x_mes4, imag(E_mes4), lw=2) # replicated interpolated data\nax[1].plot(x_mes3, imag(E_mes3), lw=2) # interpolated data\nax[1].plot(x_mes, imag(Emeas[:,index_row]), 'x', ms=8)\nax[1].set_xlim(-3*a, 4*a)\nax[1].axvline(0, color='#999999')\nax[1].axvline(a, color='#999999')\nax[1].set_xlabel('x [m]')\nax[1].set_ylabel('imag E')\n\nfrom numpy.fft import fft, fftshift, fftfreq\n\n# Compute the numerical spectrum\nky_num = 2*pi*fftshift(fftfreq(len(x_mes4), d=x_mes4[1]-x_mes4[0]))\nEtilde_num = fftshift(fft(E_mes4))\n\nfig,ax=subplots()\nax.plot(ky_num, abs(Etilde_num)/max(abs(Etilde_num)), lw=2, color='r') \nax.set_xlim(0, 1.5)\n\n# shows where the modes 1,2,... are\nfor mode_index in range(8):\n    ax.axvline(mode_index*pi/a, color='#888888', linestyle='--')\nax.set_xticks(np.arange(0,8)*pi/a)\nax.set_xticklabels([0] + [f'${m}\\pi/a$' for m in range(1,8)])\nax.set_xlabel('$k_y$', size=16)\n\n# TODO: calculate the relative height of the peaks to deduce the mode % content",
"Using this technique, we clearly see that the TE30 mode is the dominant one.\nUsing orthonormalization properties\nIn this section we use the fact that the waveguide modes form a complete spectrum. The mode basis is orthogonal:\n$$\n\\int_0^a \n\\sin\\left( \\frac{m \\pi}{a} x \\right)\n\\sin\\left( \\frac{n \\pi}{a} x \\right)\ndx =\n\\left\\{\n\\begin{array}{lr}\na/2 & \\mathrm{if} \\; m=n \\\\\n0 & \\mathrm{if} \\; m\\neq n\n\\end{array}\n\\right.\n$$\nFor a given $z$, we thus multiply the measured data by $\\sin(m \\pi x/a)$ and integrate over $[0,a]$:",
"def interpolate_measurements(row_index, num_points=501):\n    # x_mes already includes the points x=0 and x=a:\n    # they are the edges of the waveguide, where the field is zero.\n    x = x_mes\n    E = Emeas[:,row_index]\n\n    # interpolating the measurements in order to have smoother initial values\n    from scipy.interpolate import InterpolatedUnivariateSpline\n    ius_re = InterpolatedUnivariateSpline(x, real(E))\n    ius_im = InterpolatedUnivariateSpline(x, imag(E))\n\n    x2 = linspace(x[0], x[-1], num_points)\n    E2 = ius_re(x2) + 1j*ius_im(x2)\n\n    return x2, E2\n\nx2, E2 = interpolate_measurements(1)\nplot(x2, real(E2), x2, imag(E2))\nxlabel('x [m]')\ntitle('interpolated measurements')\n\ndef orthonorm(x, E, number_of_modes=5):\n    Im = []\n    for m in arange(1, number_of_modes+1):\n        integrande = E * sin(m*pi/a*x)\n        # np.trapz expects the integrand first, then the abscissae\n        Im.append(2/a*trapz(integrande, x))\n    return asarray(Im)\n\ncolors = ['b','g','r']\nfor idx_row in [0,1,2]:\n    x2, E2 = interpolate_measurements(idx_row)\n    Im = orthonorm(x2, E2)\n    scatter(arange(1,len(Im)+1), abs(Im), color=colors[idx_row])\n    print(abs(Im)/norm(Im,1)*100)\ngrid(True)\nxlabel('m', size=16)\nylim(0,0.04)\n",
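The projection above relies on the orthogonality of the sine basis; the relation can be verified numerically with a self-contained check (same $a$ as in the notebook, trapezoidal quadrature only):

```python
import numpy as np

a = 192e-3  # broad-wall width [m]
x = np.linspace(0, a, 2001)

def overlap(m, n):
    """Trapezoidal estimate of the overlap integral of modes m and n on [0, a]."""
    return np.trapz(np.sin(m * np.pi * x / a) * np.sin(n * np.pi * x / a), x)

print(overlap(3, 3))  # equal modes: close to a/2 = 0.096
print(overlap(1, 3))  # different modes: close to 0
```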
"We can see in the figure above that the TE30 mode is dominant, but that some TE10 and other modes are also present: between 81 and 85% of the TE30 mode, and between 5 and 8% for the TE10 mode.\nSolving a linear system\nThis approach is derived from the paper by A. G. Bailey et al., Experimental Determination of Higher Order Mode Conversion in a Multimode Waveguide. Here, the electric field in the waveguide is supposed to be:\n$$\nE_y(x,z) = \n\\sum_{m=1}^4\na_m \\sin\\left(\\frac{m\\pi}{a}x\\right) e^{- j\\beta_m z}\n+\nb_m \\sin\\left(\\frac{m\\pi}{a}x\\right) e^{+ j\\beta_m z}\n$$\nSince measurements give both real and imaginary parts of the previous expression, in the form of magnitude $A(x,z)$ and phase $\\gamma(x,z)$, writing $a_m = |a_m|e^{j\\theta_m}$ and $b_m = |b_m|e^{j\\psi_m}$ one has:\n$$\nA(x,z) \\cos\\gamma(x,z) = \n\\sum_{m=1}^4\n|a_m| \n\\sin\\left(\\frac{m\\pi}{a}x\\right) \n\\cos\\left(\\theta_m - \\beta_m z\\right)\n+\n|b_m| \n\\sin\\left(\\frac{m\\pi}{a}x\\right) \n\\cos\\left(\\psi_m + \\beta_m z\\right)\n$$\nand\n$$\nA(x,z) \\sin\\gamma(x,z) = \n\\sum_{m=1}^4\n|a_m| \\sin\\left(\\frac{m\\pi}{a}x\\right) \\sin\\left(\\theta_m - \\beta_m z\\right)\n+\n|b_m| \\sin\\left(\\frac{m\\pi}{a}x\\right) \\sin\\left(\\psi_m + \\beta_m z\\right)\n$$\nAppendix\nFourier transform integral calculation\nHere, using SymPy, we want to calculate the integral:\n$$\n\\int_{x=0}^a\\sin\\left( \\frac{m\\pi}{a} x\\right) e^{j k_x x} \\,dx\n$$",
"import sympy as s\ns.init_printing() # render formula nicely\n\nx_, k_x_ = s.symbols('x k_x', real=True) \na_ = s.symbols('a', positive=True) \nm_ = s.symbols('m', integer=True, positive=True)\nI = s.integrate( s.sin(m_*s.pi/a_*x_) * s.exp(s.I*k_x_*x_), (x_, 0, a_))\nI.simplify()",
"The latter can be expressed as:\n$$\n\\frac{\\frac{m\\pi}{a}}{k_x^2 - \\left(\\frac{m\\pi}{a}\\right)^2 }\n\\left(\n(-1)^m e^{j k_x a} -1\n\\right)\n$$\nIn the case of an indefinite integral ($x\\in\\mathbb{R}$):",
"# Unfortunately, SymPy does not know how to compute the FT of a sine:\ns.fourier_transform(s.sin(m_*pi/a_*x_),x_, k_x_, noconds=True)",
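As an additional cross-check on the closed form above, the integral can be evaluated numerically (a NumPy-only sketch; the wavenumber $k_x = 80$ rad/m is an arbitrary value chosen away from the pole at $m\pi/a$):

```python
import numpy as np

a = 192e-3
m = 3
kx = 80.0  # arbitrary wavenumber, away from the pole at m*pi/a ~ 49.1 rad/m

# Dense trapezoidal estimate of  int_0^a sin(m*pi*x/a) * exp(1j*kx*x) dx
xg = np.linspace(0, a, 20001)
numeric = np.trapz(np.sin(m * np.pi / a * xg) * np.exp(1j * kx * xg), xg)

# Closed-form expression derived above
analytic = (m * np.pi / a) / (kx**2 - (m * np.pi / a) ** 2) \
    * ((-1) ** m * np.exp(1j * kx * a) - 1)

print(np.isclose(numeric, analytic, rtol=1e-4))
```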
"And we recall the definition of the Dirac Delta function:\n$$\n\\int_z e^{j (k_z-\\beta_m) z} \\, dz\n=\n2\\pi \\delta(k_z-\\beta_m)\n$$\nNumerical Fourier Transform - option 2",
"# This function calculates the FFT of the field \n# and the corresponding wavenumber axis. \n# It is not used in this notebook, and is just given here as an example.\n# (The wavenumber axis can be constructed instead with the fftfreq function.)\ndef calculate_spectrum(x, E, f=3.7e9):\n    k0 = 2*pi*f/c\n    lambda0 = c/f\n    # Fourier-domain points\n    B = 2**18\n    Efft = np.fft.fftshift(np.fft.fft(E,B))\n\n    # Fourier-domain bins\n    dx = x[1] - x[0] # assumes the spatial step is constant\n    df = 1/(B*dx)\n    K = arange(-B/2,+B/2)\n    # spatial frequency bins\n    Fz = K*df\n    # spatial wavenumber kx\n    kx = (2*pi)*Fz\n    # \"power density\" spectrum\n    p = (dx)**2/lambda0 * Efft\n\n    return(kx,p)",
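To see this helper in action, it can be applied to an ideal $\mathrm{TE}_{30}$ profile (the function is repeated so the snippet is self-contained; the spectral peak lands near $3\pi/a$, broadened and slightly shifted by the finite window):

```python
import numpy as np

c = 299792458.0  # speed of light [m/s]

def calculate_spectrum(x, E, f=3.7e9):
    lambda0 = c / f
    B = 2**18                                  # number of Fourier-domain points
    Efft = np.fft.fftshift(np.fft.fft(E, B))   # zero-padded FFT
    dx = x[1] - x[0]                           # assumes a constant spatial step
    df = 1 / (B * dx)
    K = np.arange(-B / 2, B / 2)
    kx = 2 * np.pi * K * df                    # spatial wavenumbers [rad/m]
    p = dx**2 / lambda0 * Efft                 # "power density" spectrum
    return kx, p

a = 192e-3
x = np.linspace(0, a, 513)
E = np.sin(3 * np.pi * x / a)   # pure TE30 transverse profile

kx, p = calculate_spectrum(x, E)
k_peak = abs(kx[np.argmax(np.abs(p))])
print(k_peak, 3 * np.pi / a)    # the peak sits near, but not exactly at, 3*pi/a
```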
"CSS Styling",
"from IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bryanwweber/thermostate
|
docs/Tutorial.ipynb
|
bsd-3-clause
|
[
"Tutorial\nThis tutorial will guide you through the basic use of ThermoState. The ThermoState\npackage is designed to ease the evaluation of thermodynamic properties for\ncommon substances used in Mechanical Engineering courses. Rather than looking up\nthe information in a table and interpolating, we can input properties for the\nstates directly, and all unknown values are automatically calculated.\nThermoState uses CoolProp and Pint to enable easy property evaluation in any unit system. The first thing we need to do is import the parts of ThermoState that we will use. This adds them to the set of local variables.",
"from thermostate import State, Q_, units, set_default_units",
"Pint and Units\nNow that the interface has been imported, we can create some properties. For instance, let's say we're given the pressure and temperature properties for water, and asked to determine the specific volume. First, let's create variables that set the pressure and temperature. We will use the Pint Quantity function, which we have called Q_. The syntax for the Q_ function is Q_(value, 'units').",
"p_1 = Q_(101325, 'Pa')",
"We can use whatever units we'd like, Pint supports a wide variety of units.",
"p_1 = Q_(1.01325, 'bar')\np_1 = Q_(14.7, 'psi')\np_1 = Q_(1.0, 'atm')",
"Another way to specify the units is to use the units class that we imported. This class has a number of attributes (text following a period) that can be used to create a quantity with units by multiplying a number with the unit. \n```python\nunits.degR\n^^^^\nThis is the attribute\n```\nLet's set the temperature now. The available units of temperature are degF (fahrenheit), degR (rankine), degC (celsius), and K (kelvin).",
"T_1 = 460*units.degR\nT_1 = 25*units.degC\nT_1 = 75*units.degF\nT_1 = 400*units.K",
"The two ways of creating the units are equivalent. The following cell should print True to demonstrate this.",
"Q_(101325, 'Pa') == 1.0*units.atm",
"Note the convention we are using here: the variables are named with the property, followed by an underscore, then the number of the state. In this case, we are setting properties for state 1, hence T_1 and p_1.\nThermoState\nNow that we have defined two properties with units, let's define a state. First, we create a variable to hold the State and tell ThermoState what substance we want to use with that state. The available substances are:\n\nwater\nair\nR134a\nR22\npropane\nammonia\nisobutane\ncarbondioxide\noxygen\nnitrogen\n\nNote that the name of the substance is case-insensitive (it doesn't matter whether you use lower case or upper case). It is often easiest to set the name of the substance in a variable, like:",
"substance = 'water'",
"Now we need to create the State and assign values for the properties. Properties of the state are set as arguments to the State class, and they must always be set in pairs; we cannot set a single property at a time. The syntax is\nst = State(substance, property_1=value, property_2=value)\n\n<div class=\"alert alert-warning\">\n\n**Warning**\n\nRemember that two independent and intensive properties are required to set the state!\n\n</div>\n\nTo demonstrate, we will set the T and p properties of the state and set them equal to the temperature and pressure we defined above. Note that the capitalization of the properties is important! The p is lower case while the T is upper case (lower case t means time).",
"print('T = {}, p = {}'.format(T_1, p_1))\n\nst_1 = State(substance, T=T_1, p=p_1)",
"Note again the convention we are using here: The state is labeled by st, then an underscore, then the number of the state.\nThe variables that we use on the right side of the equal sign in the State function can be named anything we want. For instance, the following code is exactly equivalent to what we did before.",
"luke = Q_(1.0, 'atm')\nleia = Q_(400.0, 'K')\nprint('Does luke equal p_1?', luke == p_1)\nprint('Does leia equal T_1?', leia == T_1)\nst_starwars = State(substance, T=leia, p=luke)\nprint('Does st_starwars equal st_1?', st_starwars == st_1)",
"<div class=\"alert alert-warning\">\n\n**Warning**\n\nTo avoid confusing yourself, name your variables to something useful. For instance, use the property symbol, then an underscore, then the state number, as in `p_1 = Q_(1.0, 'atm')` to indicate the pressure at state 1. In my notes and solutions, this is the convention that I will follow, and I will use `st_#` to indicate a `State` (e.g., `st_1` is state 1, `st_2` is state 2, and so forth).\n\n</div>\n\nIn theory, any two pairs of independent properties can be used to set the state. In reality, the pairs of properties available to set the state is slightly limited because of the way the equation of state is written. The available pairs of properties are\n\nTp\nTs\nTv\nTx\npu\nps\npv\nph\npx\nuv\nsv\nhs\nhv\n\nThe reverse of any of these pairs is also possible and totally equivalent.\nBy setting two properties in this way, the State class will calculate all the other properties we might be interested in. We can access the value of any property by getting the attribute for that property. The available properties are T (temperature), p (pressure), v (mass-specific volume), u (mass-specific internal energy), h (mass-specific enthalpy), s (mass-specific entropy), x (quality), cp (specific heat at constant pressure), cv (specific heat at constant volume), and phase (the phase of this state). The syntax is\nState.property\n\nor\nst_1.T # Gets the temperature\nst_1.p # Gets the pressure\nst_1.v # Gets the specific volume\nst_1.u # Gets the internal energy\nst_1.h # Gets the enthalpy\nst_1.s # Gets the entropy\nst_1.x # Gets the quality\nst_1.cp # Gets the specific heat at constant pressure\nst_1.cv # Gets the specific heat at constant volume\nst_1.phase # Gets the phase at this state\n\n<div class=\"alert alert-info\">\n\n**Note**\n\nCapitalization is important! The temperature has upper case `T`, while the other properties are lower case to indicate that they are mass-specific quantities.\n\n</div>",
"print('T_1 = {}'.format(st_1.T))\nprint('p_1 = {}'.format(st_1.p))\nprint('v_1 = {}'.format(st_1.v))\nprint('u_1 = {}'.format(st_1.u))\nprint('h_1 = {}'.format(st_1.h))\nprint('s_1 = {}'.format(st_1.s))\nprint('x_1 = {}'.format(st_1.x))\nprint('cp_1 = {}'.format(st_1.cp))\nprint('cv_1 = {}'.format(st_1.cv))\nprint('phase_1 = {}'.format(st_1.phase))",
"In this case, the value for the quality is the special Python value None. This is because at 400 K and 101325 Pa, the state of water is a superheated vapor and the quality is undefined except in the vapor dome. To access states in the vapor dome, we cannot use T and p as independent properties, because they are not independent inside the vapor dome. Instead, we have to use the pairs involving the other properties (possibly including the quality) to set the state. When we define the quality, the units are dimensionless or percent. For instance:",
"T_2 = Q_(100.0, 'degC')\nx_2 = Q_(0.1, 'dimensionless')\nst_2 = State('water', T=T_2, x=x_2)\nprint('T_2 = {}'.format(st_2.T))\nprint('p_2 = {}'.format(st_2.p))\nprint('v_2 = {}'.format(st_2.v))\nprint('u_2 = {}'.format(st_2.u))\nprint('h_2 = {}'.format(st_2.h))\nprint('s_2 = {}'.format(st_2.s))\nprint('x_2 = {}'.format(st_2.x))",
"In addition, whether you use the 'dimensionless' \"units\" for the quality as above, or use the 'percent' \"units\", the result is exactly equivalent. The next cell should print True to the screen to demonstrate this.",
"x_2 == Q_(10.0, 'percent')",
"From these results, we can see that the units of the properties stored in the State are always SI units - Kelvin, Pascal, m<sup>3</sup>/kg, J/kg, and J/(kg-Kelvin). We can use the to function to convert the units to anything we want, provided the dimensions are compatible. The syntax is State.property.to('units').",
"print(st_2.T.to('degF'))\nprint(st_2.s.to('BTU/(lb*degR)'))",
"<div class=\"alert alert-info\">\n\n**Note**\n\nThe values are always converted in the `State` to SI units, no matter what the input units are. Therefore, if you want EE units as an output, you have to convert.\n\n</div>\n\nIf we try to convert to a unit with incompatible dimensions, Pint will raise a DimensionalityError exception.\n<div class=\"alert alert-warning\">\n\n**Warning**\n\nIf you get a `DimensionalityError`, examine your conversion very closely. The error message will tell you why the dimensions are incompatible!\n\n</div>",
"print(st_2.T.to('joule'))",
"Here we have tried to convert from 'kelvin' to 'joule' and the error message, which is the last line, says\nDimensionalityError: Cannot convert from 'kelvin' ([temperature]) to 'joule' ([length] ** 2 * [mass] / [time] ** 2)\n\nThe dimensions of a temperature are, well, temperature. The formula for energy (Joule) is $mad$ (mass times acceleration times distance), and in terms of dimensions, $M \\cdot L/T^2 \\cdot L = L^2M/T^2$ (where in dimensions, capital $T$ is time). Clearly, these dimensions are incompatible. A more subtle case might be trying to convert energy to power (again, not allowed):",
"Q_(1000.0, 'joule').to('watt')",
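The dimensional reasoning above can be sketched numerically by storing each unit's dimensions as exponents of (length, mass, time, temperature). This toy `convertible` check only illustrates the idea and is not how Pint actually works:

```python
# Toy model of dimensional compatibility: each unit maps to its exponents
# of (length, mass, time, temperature); conversion requires equal exponents.
DIMENSIONS = {
    "kelvin": (0, 0, 0, 1),
    "degF":   (0, 0, 0, 1),
    "joule":  (2, 1, -2, 0),  # energy: M * L**2 / T**2
    "watt":   (2, 1, -3, 0),  # power: energy per unit time
}

def convertible(unit_a, unit_b):
    """Units can be converted only when their dimension exponents match."""
    return DIMENSIONS[unit_a] == DIMENSIONS[unit_b]

print(convertible("kelvin", "degF"))   # True: both are temperatures
print(convertible("kelvin", "joule"))  # False: the error shown above
print(convertible("joule", "watt"))    # False: they differ in the time exponent
```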
"Default Units\nDefault units can be set in three ways: globally through the set_default_units(\"units\") function, when creating a State, or after a State has been set (which changes the units reported by that State). In each case the units argument is either \"SI\" or \"EE\" for the corresponding set of units.",
"set_default_units(\"EE\")\nst_3 = State(\"water\", T = Q_(100, 'degC'), p = Q_(1.0, 'atm'))\nprint(st_3.s)\n\nst_4 = State(\"water\", T = Q_(100, 'degC'), p = Q_(1.0, 'atm'), units = \"SI\")\nprint(st_4.s)\n\nst_4.units = None\nprint(st_4.s)\n\nset_default_units(None)",
"Other Common Errors\nOther common errors generated from using ThermoState will raise StateErrors. These errors may be due to\n\nNot specifying enough properties to fix the state, or specifying too many properties to fix the state\nSpecifying a pair of properties that are not independent at the desired conditions\nEntering an unsupported pair of property inputs (the currently unsupported pairs are Tu, Th, and us, due to limitations in CoolProp)\nSpecifying a Quantity with incorrect dimensions for the property input\n\nAn example demonstrating the last case (a Quantity with incorrect dimensions):",
"State('water', v=Q_(1000.0, 'degC'), p=Q_(1.0, 'bar'))",
"Summary\nIn summary, we need to use two (2) independent and intensive properties to fix the state of any simple compressible system. We need to define these quantities with units using Pint, and then use them to set the conditions of a State. Then, we can access the other properties of the State by using the attributes.",
"h_5 = Q_(2000.0, 'kJ/kg')\ns_5 = Q_(3.10, 'kJ/(kg*K)')\nst_5 = State('water', h=h_5, s=s_5)\nprint('T_5 = {}'.format(st_5.T))\nprint('p_5 = {}'.format(st_5.p))\nprint('v_5 = {}'.format(st_5.v))\nprint('u_5 = {}'.format(st_5.u))\nprint('h_5 = {}'.format(st_5.h))\nprint('s_5 = {}'.format(st_5.s))\nprint('x_5 = {}'.format(st_5.x))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
susantabiswas/Natural-Language-Processing
|
Notebooks/Word_Prediction_using_Quadgrams_Memory_Efficient_Encoded_keys.ipynb
|
mit
|
[
"Word prediction based on Quadgram\nThis program reads the corpus line by line, so it is slower than the version that reads the whole corpus in one go, but it only loads one line into memory at a time. It also uses encoded dictionary keys, making it even more memory efficient.\nImport modules",
"#import the modules necessary\nfrom nltk.util import ngrams\nfrom collections import defaultdict\nfrom collections import OrderedDict\nimport nltk\nimport string\nimport time\n\nstart_time = time.time()",
"Do preprocessing:\nEncode keys for dictionary storage",
"#return: string\n#arg:list,list,dict\n#for encoding keys for the dictionary\n#for encoding keys ,index has been used for each unique word \n#for mapping keys with their index\ndef encodeKey(s,index,vocab_dict):\n key = ''\n #print (s)\n for t in s:\n #print (t)\n if t not in vocab_dict:\n vocab_dict[t] = index[0]\n index[0] = index[0] + 1\n\n key = key + str(vocab_dict[t]) + '#' \n #print(key)\n return key",
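To make the key scheme concrete: each unique word gets the next integer index, and a token sequence becomes those indices joined by '#' (with a trailing '#'). The standalone `encode_key` sketch below mirrors the idea, simplified to derive the index from `len(vocab)` rather than the shared `index` list used above:

```python
# Standalone sketch of the encoding scheme: assign each new word the next
# integer index and build the dictionary key from the indices, which is
# cheaper to store than repeating full words across many n-gram keys.
def encode_key(tokens, vocab):
    key_parts = []
    for tok in tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)   # assign the next free index
        key_parts.append(str(vocab[tok]))
    return "#".join(key_parts) + "#"

vocab = {}
print(encode_key(["the", "cat", "sat"], vocab))  # '0#1#2#'
print(encode_key(["the", "cat", "ran"], vocab))  # '0#1#3#'
```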
"Remove the punctuations and lowercase the tokens",
"#returns: string\n#arg: string\n#remove punctuations and make the string lowercase\ndef removePunctuations(sen):\n\n #split the string into word tokens\n temp_l = sen.split()\n i = 0\n\n #changes the word to lowercase and removes punctuations from it\n for word in temp_l :\n for l in word :\n if l in string.punctuation:\n word = word.replace(l,\" \")\n temp_l[i] = word.lower()\n i=i+1 \n\n #splitting is done here because in a sentence like here---so, after punctuation removal it should \n #become \"here so\" \n content = \" \".join(temp_l)\n\n return content",
"Tokenize the corpus data",
"#returns : void\n#arg: string,dict,dict,dict,list\n#loads the corpus for the dataset and makes the frequency count of quadgram and trigram strings\ndef loadCorupus(filename,tri_dict,quad_dict,vocab_dict,index):\n w1 = '' #for storing the 3rd last word to be used for next token set\n w2 = '' #for storing the 2nd last word to be used for next token set\n w3 = '' #for storing the last word to be used for next token set\n i = 0\n sen = ''\n token = []\n\n with open(filename,'r') as file:\n #read the data line by line\n for line in file:\n token = line.split()\n i = 0\n for word in token :\n for l in word :\n if l in string.punctuation:\n word = word.replace(l,\" \")\n token[i] = word.lower()\n i=i+1 \n\n content = \" \".join(token)\n token = content.split()\n\n if not token:\n continue\n \n #first add the previous words\n if w2!= '':\n token.insert(0,w2)\n if w3!= '':\n token.insert(1,w3)\n \n \n #tokens for trigrams\n temp1 = list(ngrams(token,3))\n\n if w1!= '':\n token.insert(0,w1)\n\n #tokens for quadgrams\n temp2 = list(ngrams(token,4))\n \n #count the frequency of the trigram sentences\n for t in temp1:\n sen = encodeKey(t,index,vocab_dict)\n tri_dict[sen] += 1\n\n #count the frequency of the quadgram sentences\n for t in temp2:\n sen = encodeKey(t,index,vocab_dict)\n quad_dict[sen] += 1\n\n\n #then take out the last 3 words\n n = len(token)\n\n w1 = token[n -3]\n w2 = token[n -2]\n w3 = token[n -1]",
"Find the probability",
"#returns : float\n#arg : string sentence,string word,dict,dict\ndef findprobability(s,w,tri_dict,quad_dict):\n c1 = 0 # for count of sentence 's' with word 'w'\n c2 = 0 # for count of sentence 's'\n s1 = s + w\n \n if s1 in quad_dict:\n c1 = quad_dict[s1]\n if s in tri_dict:\n c2 = tri_dict[s]\n \n if c2 == 0:\n return 0\n return c1/c2",
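The function above is the maximum-likelihood estimate P(w | s) = count(quadgram s+w) / count(trigram s) over encoded keys. A tiny standalone illustration with hypothetical counts (the `find_probability` helper mirrors `findprobability` but uses plain dicts):

```python
# Hypothetical encoded-key counts: the trigram "the cat sat" occurs 4 times,
# followed by word 5 three times and word 7 once.
tri_counts = {"the#cat#sat#": 4}
quad_counts = {"the#cat#sat#5#": 3, "the#cat#sat#7#": 1}

def find_probability(s, w, tri_counts, quad_counts):
    """Maximum-likelihood P(w | s) = count(s + w) / count(s)."""
    c1 = quad_counts.get(s + w, 0)
    c2 = tri_counts.get(s, 0)
    return c1 / c2 if c2 else 0.0

print(find_probability("the#cat#sat#", "5#", tri_counts, quad_counts))  # 0.75
```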
"Decode key",
"#arg: list\n#return: string,dict\n#for decoding keys \ndef decodeKey(s,vocab_dict):\n key = ''\n l = []\n item = list(vocab_dict.items())\n \n temp_l = s.split('#')\n del temp_l[len(temp_l)-1]\n \n index = 0\n for c in temp_l:\n if c != ' ':\n index = int(c)\n l.append(item[index][0])\n\n key = ' '.join(l) \n return key",
"Driver function for doing the prediction",
"#returns : void\n#arg: string,dict,dict,dict,list\ndef doPrediction(sen,tri_dict,quad_dict,vocab_dict,index):\n \n #remove punctuations and make it lowercase\n temp_l = sen.split()\n i = 0\n \n for word in temp_l :\n for l in word :\n if l in string.punctuation:\n word = word.replace(l,\" \")\n temp_l[i] = word.lower()\n i=i+1 \n \n content = \" \".join(temp_l)\n temp_l = content.split() \n \n #encode the sentence before checking\n sen = encodeKey(temp_l,index,vocab_dict)\n \n max_prob = 0\n #when there is no probable word available\n #now for guessing the word which should exist we use quadgram\n right_word = 'apple' \n \n for word in vocab_dict:\n #print(word)\n #encode the word before checking\n dict_l = []\n dict_l.append(word)\n word = encodeKey(dict_l,index,vocab_dict)\n \n prob = findprobability(sen,word,tri_dict,quad_dict)\n \n if prob > max_prob:\n max_prob = prob\n right_word = word\n \n #decode the right word \n right_word = decodeKey(right_word,vocab_dict)\n \n print('Word Prediction is :',right_word)\n\ndef main():\n\n tri_dict = defaultdict(int)\n quad_dict = defaultdict(int)\n vocab_dict = OrderedDict() #for mapping of words with their index ==> key:word value:index of key in dict\n index = [0] #list for assigning index value to keys\n\n loadCorupus('mycorpus.txt',tri_dict,quad_dict,vocab_dict,index)\n\n cond = False\n #take input\n while(cond == False):\n sen = input('Enter the string\\n')\n sen = removePunctuations(sen)\n temp = sen.split()\n if len(temp) < 3:\n print(\"Please enter at least 3 words!\")\n else:\n cond = True\n temp = temp[-3:]\n sen = \" \".join(temp)\n \n doPrediction(sen,tri_dict,quad_dict,vocab_dict,index)\n\n\nif __name__ == '__main__':\n main()\n "
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
MSRDL/TSA
|
sls_demo.ipynb
|
bsd-3-clause
|
[
"Time-series Anomaly Detection\nThis demo shows how to use a novel SLS (Streaming Least Squares) anomaly detection algorithm and how it performs",
"%pylab inline\nrcParams['figure.figsize'] = [16, 3]\nimport pandas as pd\n\nfrom AnomalyDetection import detect_anomalies",
"Example 1: Detect spikes\nFirst, let's take a quick look at our sample time-series data file, which consists of 2 columns: datetime and counter value, aggregated every 15 minutes.",
"df = pd.read_csv('data/sample1.csv', parse_dates=True, index_col=0)\ndata = df['Counter']\ndata.head(10)\n\nanomalies, thresholds = detect_anomalies(data, lag=20, num_anomalies=10)",
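The `detect_anomalies` function comes from the accompanying `AnomalyDetection` module. As a rough, self-contained illustration of the general idea (not the actual SLS implementation), one can fit a least-squares line to each trailing `lag`-point window and score each point by its absolute residual:

```python
# Rough illustration only (not the real SLS algorithm): fit a least-squares
# line to each trailing window of `lag` points, extrapolate one step, and
# score each point by the absolute residual from that prediction.
def residual_scores(values, lag):
    scores = [0.0] * len(values)
    xs = list(range(lag))
    x_mean = sum(xs) / lag
    x_var = sum((x - x_mean) ** 2 for x in xs)
    for t in range(lag, len(values)):
        window = values[t - lag:t]
        y_mean = sum(window) / lag
        slope = sum((x - x_mean) * (y - y_mean)
                    for x, y in zip(xs, window)) / x_var
        predicted = y_mean + slope * (lag - x_mean)  # extrapolate one step
        scores[t] = abs(values[t] - predicted)
    return scores

series = [float(i) for i in range(30)]
series[20] = 100.0  # inject a spike into an otherwise linear series
scores = residual_scores(series, lag=5)
print(scores.index(max(scores)))  # 20: the spike gets the largest score
```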
"Example 2: Detect dips\nIn the above, SLS was able to detect spikes in the series. The same algorithm can be used for detecting dips as well, e.g. outages (a sudden drop in sessions on a server).",
"df = pd.read_csv('data/sample2.csv', parse_dates=True, index_col=0)\ndata = df['Value']\n\nanomalies, thresholds = detect_anomalies(data, lag=5, num_anomalies=1)",
"Model deployment for real-time anomaly detection\nThe demo here is for evaluating the effectiveness of the algorithm on historical data. To deploy the model for real-time detection on streaming data, follow this C++ API.\n<pre> <span style=' color: Blue;'>typedef</span> <span style=' color: Blue;'>std</span>::vector<<span style=' color: Blue;'>float</span>> FloatVec; \n\n <span style=' color: Blue;'>class</span> AnomalyDetector { \n <span style=' color: Blue;'>public</span>: \n <span style=' color: Green;'>/// Alert Thresholds, which can be updated (if needed) live without breaking the service</span>\n FloatVec Thresholds; \n\n <span style=' color: Green;'>/// Instantiate the detector from window size and Thresholds</span>\n AnomalyDetector(<span style=' color: Blue;'>int</span> windowSize, FloatVec& thresholds); \n\n <span style=' color: Green;'>/// Instantiate the detector from a saved model file</span> \n AnomalyDetector(<span style=' color: Blue;'>const</span> <span style=' color: Blue;'>char</span>* modelFile);\n\n <span style=' color: Green;'>/// Save the detector to a model file</span> \n <span style=' color: Blue;'>void</span> Save(<span style=' color: Blue;'>const</span> <span style=' color: Blue;'>char</span>* modelFile);\n\n <span style=' color: Green;'>/// Predict takes an incoming counter value and returns the alert level (0 means no anomaly).</span> \n <span style=' color: Green;'>/// It also produces auxiliary outputs such as trend and the anomaly score </span>\n <span style=' color: Blue;'>int</span> Predict(<span style=' color: Blue;'>float</span> value, <span style=' color: Blue;'>float</span>& trend, <span style=' color: Blue;'>float</span>& score);\n };</pre>\n\n\nInstantiate the anomaly detector from the model file.\nThen, for each new counter value, call Predict to get the alert level as well as the trend and anomaly score.\n\nBulk Predict\nWe also provide a bulk predict API for handling multiple signals in parallel (via OpenMP).\n<pre> <span 
style=' color: Blue;'>class</span> BulkAnomalyDetector { \n <span style=' color: Blue;'>public</span>: \n <span style=' color: Blue;'>std</span>::vector<FloatVec> Thresholds; \n\n <span style=' color: Green;'>/// Instantiate the detectors from a zipped container of model files</span>\n BulkAnomalyDetector(<span style=' color: Blue;'>const</span> <span style=' color: Blue;'>char</span>* modelsContainerFile);\n\n <span style=' color: Blue;'>std</span>::vector<<span style=' color: Blue;'>int</span>> BulkPredict(<span style=' color: Blue;'>const</span> FloatVec& value, FloatVec& trends, FloatVec& scores);\n }; \n</pre>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
smorton2/think-stats
|
code/chap14soln.ipynb
|
gpl-3.0
|
[
"Examples and Exercises from Think Stats, 2nd Edition\nhttp://thinkstats2.com\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT",
"from __future__ import print_function, division\n\n%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\n\nimport random\n\nimport thinkstats2\nimport thinkplot",
"Analytic methods\nIf we know the parameters of the sampling distribution, we can compute confidence intervals and p-values analytically, which is computationally faster than resampling.",
"import scipy.stats\n\ndef EvalNormalCdfInverse(p, mu=0, sigma=1):\n return scipy.stats.norm.ppf(p, loc=mu, scale=sigma)",
"Here's the confidence interval for the estimated mean.",
"EvalNormalCdfInverse(0.05, mu=90, sigma=2.5)\n\nEvalNormalCdfInverse(0.95, mu=90, sigma=2.5)",
"normal.py provides a Normal class that encapsulates what we know about arithmetic operations on normal distributions.",
"from normal import Normal\n\ndist = Normal(90, 7.5**2)\ndist",
"We can use it to compute the sampling distribution of the mean.",
"dist_xbar = dist.Sum(9) / 9\ndist_xbar.sigma",
"And then compute a confidence interval.",
"dist_xbar.Percentile(5), dist_xbar.Percentile(95)",
"Central Limit Theorem\nIf you add up independent variates from a distribution with finite mean and variance, the sum converges on a normal distribution.\nThe following function generates samples of different sizes from an exponential distribution.",
"def MakeExpoSamples(beta=2.0, iters=1000):\n \"\"\"Generates samples from an exponential distribution.\n\n beta: parameter\n iters: number of samples to generate for each size\n\n returns: list of samples\n \"\"\"\n samples = []\n for n in [1, 10, 100]:\n sample = [np.sum(np.random.exponential(beta, n))\n for _ in range(iters)]\n samples.append((n, sample))\n return samples",
"This function generates normal probability plots for samples with various sizes.",
"def NormalPlotSamples(samples, plot=1, ylabel=''):\n \"\"\"Makes normal probability plots for samples.\n\n samples: list of samples\n label: string\n \"\"\"\n for n, sample in samples:\n thinkplot.SubPlot(plot)\n thinkstats2.NormalProbabilityPlot(sample)\n\n thinkplot.Config(title='n=%d' % n,\n legend=False,\n xticks=[],\n yticks=[],\n xlabel='random normal variate',\n ylabel=ylabel)\n plot += 1",
"The following plot shows how the sum of exponential variates converges to normal as sample size increases.",
"thinkplot.PrePlot(num=3, rows=2, cols=3)\nsamples = MakeExpoSamples()\nNormalPlotSamples(samples, plot=1,\n ylabel='sum of expo values')",
"The lognormal distribution has higher variance, so it requires a larger sample size before it converges to normal.",
"def MakeLognormalSamples(mu=1.0, sigma=1.0, iters=1000):\n \"\"\"Generates samples from a lognormal distribution.\n\n mu: parameter\n sigma: parameter\n iters: number of samples to generate for each size\n\n returns: list of samples\n \"\"\"\n samples = []\n for n in [1, 10, 100]:\n sample = [np.sum(np.random.lognormal(mu, sigma, n))\n for _ in range(iters)]\n samples.append((n, sample))\n return samples\n\nthinkplot.PrePlot(num=3, rows=2, cols=3)\nsamples = MakeLognormalSamples()\nNormalPlotSamples(samples, ylabel='sum of lognormal values')",
"The Pareto distribution has infinite variance, and sometimes infinite mean, depending on the parameters. It violates the requirements of the CLT and does not generally converge to normal.",
"def MakeParetoSamples(alpha=1.0, iters=1000):\n \"\"\"Generates samples from a Pareto distribution.\n\n alpha: parameter\n iters: number of samples to generate for each size\n\n returns: list of samples\n \"\"\"\n samples = []\n\n for n in [1, 10, 100]:\n sample = [np.sum(np.random.pareto(alpha, n))\n for _ in range(iters)]\n samples.append((n, sample))\n return samples\n\nthinkplot.PrePlot(num=3, rows=2, cols=3)\nsamples = MakeParetoSamples()\nNormalPlotSamples(samples, ylabel='sum of Pareto values')",
"If the random variates are correlated, that also violates the CLT, so the sums don't generally converge.\nTo generate correlated values, we generate correlated normal values and then transform to whatever distribution we want.",
"def GenerateCorrelated(rho, n):\n \"\"\"Generates a sequence of correlated values from a standard normal dist.\n \n rho: coefficient of correlation\n n: length of sequence\n\n returns: iterator\n \"\"\"\n x = random.gauss(0, 1)\n yield x\n\n sigma = np.sqrt(1 - rho**2)\n for _ in range(n-1):\n x = random.gauss(x * rho, sigma)\n yield x\n\ndef GenerateExpoCorrelated(rho, n):\n \"\"\"Generates a sequence of correlated values from an exponential dist.\n\n rho: coefficient of correlation\n n: length of sequence\n\n returns: NumPy array\n \"\"\"\n normal = list(GenerateCorrelated(rho, n))\n uniform = scipy.stats.norm.cdf(normal)\n expo = scipy.stats.expon.ppf(uniform)\n return expo\n\ndef MakeCorrelatedSamples(rho=0.9, iters=1000):\n \"\"\"Generates samples from a correlated exponential distribution.\n\n rho: correlation\n iters: number of samples to generate for each size\n\n returns: list of samples\n \"\"\" \n samples = []\n for n in [1, 10, 100]:\n sample = [np.sum(GenerateExpoCorrelated(rho, n))\n for _ in range(iters)]\n samples.append((n, sample))\n return samples\n\n\nthinkplot.PrePlot(num=3, rows=2, cols=3)\nsamples = MakeCorrelatedSamples()\nNormalPlotSamples(samples, ylabel='sum of correlated exponential values')",
"Difference in means\nLet's use analytic methods to compute a CI and p-value for an observed difference in means.\nThe distribution of pregnancy length is not normal, but it has finite mean and variance, so the sum (or mean) of a few thousand samples is very close to normal.",
"import first\n\nlive, firsts, others = first.MakeFrames()\ndelta = firsts.prglngth.mean() - others.prglngth.mean()\ndelta",
"The following function computes the sampling distribution of the mean for a set of values and a given sample size.",
"def SamplingDistMean(data, n):\n \"\"\"Computes the sampling distribution of the mean.\n\n data: sequence of values representing the population\n n: sample size\n\n returns: Normal object\n \"\"\"\n mean, var = data.mean(), data.var()\n dist = Normal(mean, var)\n return dist.Sum(n) / n",
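`SamplingDistMean` relies on the standard result that the mean of n iid draws keeps the population mean while its variance shrinks to var/n. A tiny standalone check of that arithmetic (the `sampling_dist_params` helper is illustrative, not part of `normal.py`):

```python
import math

# Illustrative helper: parameters of the sampling distribution of the mean
# of n iid draws from a population with the given mean and variance.
def sampling_dist_params(mean, var, n):
    return mean, var / n, math.sqrt(var / n)

# With n = 4 draws from a population with variance 4, the standard error
# halves relative to the population standard deviation.
mu, var_n, sigma_n = sampling_dist_params(mean=10.0, var=4.0, n=4)
print(mu, var_n, sigma_n)  # 10.0 1.0 1.0
```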
"Here are the sampling distributions for the means of the two groups under the null hypothesis.",
"dist1 = SamplingDistMean(live.prglngth, len(firsts))\ndist2 = SamplingDistMean(live.prglngth, len(others))",
"And the sampling distribution for the difference in means.",
"dist_diff = dist1 - dist2\ndist_diff",
"Under the null hypothesis, here's the chance of exceeding the observed difference.",
"1 - dist_diff.Prob(delta)",
"And the chance of falling below the negated difference.",
"dist_diff.Prob(-delta)",
"The sum of these probabilities is the two-sided p-value.\nTesting a correlation\nUnder the null hypothesis (that there is no correlation), the sampling distribution of the observed correlation (suitably transformed) is a \"Student t\" distribution.",
"def StudentCdf(n):\n \"\"\"Computes the CDF of correlations generated from uncorrelated variables.\n\n n: sample size\n\n returns: Cdf\n \"\"\"\n ts = np.linspace(-3, 3, 101)\n ps = scipy.stats.t.cdf(ts, df=n-2)\n rs = ts / np.sqrt(n - 2 + ts**2)\n return thinkstats2.Cdf(rs, ps)",
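The transformation `rs = ts / np.sqrt(n - 2 + ts**2)` above and the later p-value computation use inverse mappings between a correlation r and a t statistic. A quick standalone round-trip check (plain `math`, no scipy needed):

```python
import math

# The t statistic maps to a correlation via r = t / sqrt(n - 2 + t**2),
# and back via t = r * sqrt((n - 2) / (1 - r**2)); the two are inverses.
def t_to_r(t, n):
    return t / math.sqrt(n - 2 + t ** 2)

def r_to_t(r, n):
    return r * math.sqrt((n - 2) / (1 - r ** 2))

n = 100
t = 2.5
r = t_to_r(t, n)
print(round(r_to_t(r, n), 10))  # 2.5: the round trip recovers t
```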
"The following is a HypothesisTest that uses permutation to estimate the sampling distribution of a correlation.",
"import hypothesis\n\nclass CorrelationPermute(hypothesis.CorrelationPermute):\n \"\"\"Tests correlations by permutation.\"\"\"\n\n def TestStatistic(self, data):\n \"\"\"Computes the test statistic.\n\n data: tuple of xs and ys\n \"\"\"\n xs, ys = data\n return np.corrcoef(xs, ys)[0][1]",
"Now we can estimate the sampling distribution by permutation and compare it to the Student t distribution.",
"def ResampleCorrelations(live):\n \"\"\"Tests the correlation between birth weight and mother's age.\n\n live: DataFrame for live births\n\n returns: sample size, observed correlation, CDF of resampled correlations\n \"\"\"\n live2 = live.dropna(subset=['agepreg', 'totalwgt_lb'])\n data = live2.agepreg.values, live2.totalwgt_lb.values\n ht = CorrelationPermute(data)\n p_value = ht.PValue()\n return len(live2), ht.actual, ht.test_cdf\n\nn, r, cdf = ResampleCorrelations(live)\n\nmodel = StudentCdf(n)\nthinkplot.Plot(model.xs, model.ps, color='gray',\n alpha=0.5, label='Student t')\nthinkplot.Cdf(cdf, label='sample')\n\nthinkplot.Config(xlabel='correlation',\n ylabel='CDF',\n legend=True, loc='lower right')",
"That confirms the analytic result. Now we can use the CDF of the Student t distribution to compute a p-value.",
"t = r * np.sqrt((n-2) / (1-r**2))\np_value = 1 - scipy.stats.t.cdf(t, df=n-2)\nprint(r, p_value)",
"Chi-squared test\nThe reason the chi-squared statistic is useful is that we can compute its distribution under the null hypothesis analytically.",
"def ChiSquaredCdf(n):\n \"\"\"Discrete approximation of the chi-squared CDF with df=n-1.\n\n n: sample size\n \n returns: Cdf\n \"\"\"\n xs = np.linspace(0, 25, 101)\n ps = scipy.stats.chi2.cdf(xs, df=n-1)\n return thinkstats2.Cdf(xs, ps)",
"Again, we can confirm the analytic result by comparing values generated by simulation with the analytic distribution.",
"data = [8, 9, 19, 5, 8, 11]\ndt = hypothesis.DiceChiTest(data)\np_value = dt.PValue(iters=1000)\nn, chi2, cdf = len(data), dt.actual, dt.test_cdf\n\nmodel = ChiSquaredCdf(n)\nthinkplot.Plot(model.xs, model.ps, color='gray',\n alpha=0.3, label='chi squared')\nthinkplot.Cdf(cdf, label='sample')\n\nthinkplot.Config(xlabel='chi-squared statistic',\n ylabel='CDF',\n loc='lower right')",
"And then we can use the analytic distribution to compute p-values.",
"p_value = 1 - scipy.stats.chi2.cdf(chi2, df=n-1)\nprint(chi2, p_value)",
"Exercises\nExercise: In Section 5.4, we saw that the distribution of adult weights is approximately lognormal. One possible explanation is that the weight a person gains each year is proportional to their current weight. In that case, adult weight is the product of a large number of multiplicative factors:\nw = w0 f1 f2 ... fn \nwhere w is adult weight, w0 is birth weight, and fi is the weight gain factor for year i.\nThe log of a product is the sum of the logs of the factors:\nlogw = logw0 + logf1 + logf2 + ... + logfn \nSo by the Central Limit Theorem, the distribution of logw is approximately normal for large n, which implies that the distribution of w is lognormal.\nTo model this phenomenon, choose a distribution for f that seems reasonable, then generate a sample of adult weights by choosing a random value from the distribution of birth weights, choosing a sequence of factors from the distribution of f, and computing the product. What value of n is needed to converge to a lognormal distribution?",
"# Solution\n\ndef GenerateAdultWeight(birth_weights, n):\n \"\"\"Generate a random adult weight by simulating annual gain.\n\n birth_weights: sequence of birth weights in lbs\n n: number of years to simulate\n\n returns: adult weight in lbs\n \"\"\"\n bw = random.choice(birth_weights)\n factors = np.random.normal(1.09, 0.03, n)\n aw = bw * np.prod(factors)\n return aw\n\n# Solution\n\ndef PlotAdultWeights(live):\n \"\"\"Makes a normal probability plot of log10 adult weight.\n\n live: DataFrame of live births\n\n \n \"\"\"\n birth_weights = live.totalwgt_lb.dropna().values\n aws = [GenerateAdultWeight(birth_weights, 40) for _ in range(1000)]\n log_aws = np.log10(aws)\n thinkstats2.NormalProbabilityPlot(log_aws)\n thinkplot.Config(xlabel='standard normal values',\n ylabel='adult weight (log10 lbs)',\n loc='lower right')\n\n# Solution\n\nPlotAdultWeights(live)\n\n# Solution\n\n# With n=40 the distribution is approximately lognormal except for the lowest weights.\n\n# Actual distribution might deviate from lognormal because it is\n# a mixture of people at different ages, or because annual weight\n# gains are correlated.",
"Exercise: In Section 14.6 we used the Central Limit Theorem to find the sampling distribution of the difference in means, δ, under the null hypothesis that both samples are drawn from the same population.\nWe can also use this distribution to find the standard error of the estimate and confidence intervals, but that would only be approximately correct. To be more precise, we should compute the sampling distribution of δ under the alternate hypothesis that the samples are drawn from different populations.\nCompute this distribution and use it to calculate the standard error and a 90% confidence interval for the difference in means.",
"# Solution\n\n# Here's the observed difference in means\n\ndelta = firsts.prglngth.mean() - others.prglngth.mean()\nprint(delta)\n\n# Solution\n\n# Under the null hypothesis, both sampling distributions are based\n# on all live births.\n\ndist1 = SamplingDistMean(live.prglngth, len(firsts))\ndist2 = SamplingDistMean(live.prglngth, len(others))\ndist_diff_null = dist1 - dist2\nprint('null hypothesis', dist_diff_null)\nprint(dist_diff_null.Prob(-delta), 1 - dist_diff_null.Prob(delta))\n\n# Solution\n\n# Under the alternate hypothesis, each sampling distribution is\n# based on the observed parameters.\n\ndist1 = SamplingDistMean(firsts.prglngth, len(firsts))\ndist2 = SamplingDistMean(others.prglngth, len(others))\ndist_diff_alt = dist1 - dist2\nprint('estimated params', dist_diff_alt)\nprint(dist_diff_alt.Percentile(5), dist_diff_alt.Percentile(95))\n\n# Solution\n\n# The distribution of the difference under the null hypothesis is\n# centered at 0.\n\n# The distribution of the difference using the estimated parameters\n# is centered around the observed difference.\n\nthinkplot.PrePlot(2)\nthinkplot.Plot(dist_diff_null, label='null hypothesis')\n\nthinkplot.Plot(dist_diff_alt, label='estimated params')\nthinkplot.Config(xlabel='Difference in means (weeks)',\n ylabel='CDF', loc='lower right')",
"Exercise: In a recent paper, Stein et al. investigate the effects of an intervention intended to mitigate gender-stereotypical task allocation within student engineering teams.\nBefore and after the intervention, students responded to a survey that asked them to rate their contribution to each aspect of class projects on a 7-point scale.\nBefore the intervention, male students reported higher scores for the programming aspect of the project than female students; on average men reported a score of 3.57 with standard error 0.28. Women reported 1.91, on average, with standard error 0.32.\nCompute the sampling distribution of the gender gap (the difference in means), and test whether it is statistically significant. Because you are given standard errors for the estimated means, you don’t need to know the sample size to figure out the sampling distributions.\nAfter the intervention, the gender gap was smaller: the average score for men was 3.44 (SE 0.16); the average score for women was 3.18 (SE 0.16). Again, compute the sampling distribution of the gender gap and test it.\nFinally, estimate the change in gender gap; what is the sampling distribution of this change, and is it statistically significant?",
"# Solution\n\nmale_before = Normal(3.57, 0.28**2)\nmale_after = Normal(3.44, 0.16**2)\n\nfemale_before = Normal(1.91, 0.32**2)\nfemale_after = Normal(3.18, 0.16**2)\n\n# Solution\n\ndiff_before = female_before - male_before\nprint('mean, p-value', diff_before.mu, 1-diff_before.Prob(0))\nprint('CI', diff_before.Percentile(5), diff_before.Percentile(95))\nprint('stderr', diff_before.sigma)\n\n# Solution\n\ndiff_after = female_after - male_after\nprint('mean, p-value', diff_after.mu, 1-diff_after.Prob(0))\nprint('CI', diff_after.Percentile(5), diff_after.Percentile(95))\nprint('stderr', diff_after.sigma)\n\n# Solution\n\ndiff = diff_after - diff_before\nprint('mean, p-value', diff.mu, diff.Prob(0))\nprint('CI', diff.Percentile(5), diff.Percentile(95))\nprint('stderr', diff.sigma)\n\n# Solution\n\n# Conclusions:\n\n# 1) Gender gap before intervention was 1.66 points (p-value 5e-5)\n\n# 2) Gender gap after was 0.26 points (p-value 0.13, not significant)\n\n# 3) Change in gender gap was 1.4 points (p-value 0.002, significant)."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cochoa0x1/integer-programming-with-python
|
04-packing-and-allocation/bin_packing.ipynb
|
mit
|
[
"Bin Packing\nImagine we have lots of objects of various sizes, costs, shapes, etc., and we wish to pack them away in boxes. We can ask: how many boxes will this take? Or, if we only have a few boxes on hand, which objects should we put in them? These are known as bin packing problems and have practical uses in logistics, finance, and manufacturing. Despite being NP-Hard, they are heavily studied, and we are able to solve them efficiently in practice through really clever heuristics. Here we will focus on applying linear programming to solving them.\nIf we have a bunch of objects of various sizes to pack, minimizing the number of boxes used gives us the classic 1-d bin packing problem. Variants exist for more dimensions but they are much harder to express.",
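As an aside before the exact LP formulation, the clever heuristics mentioned above include first-fit decreasing: sort items largest first and drop each into the first bin with room. A minimal, self-contained sketch (not used in the LP model below):

```python
# Minimal sketch of the classic first-fit-decreasing heuristic: sort items
# largest first, then place each into the first bin that still has room,
# opening a new bin only when none fits.
def first_fit_decreasing(sizes, capacity):
    bins = []  # each bin is a list of item sizes
    for size in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

packed = first_fit_decreasing([5, 5, 4, 3, 3], capacity=10)
print(len(packed))  # 2 bins, which is optimal here (total size 20)
```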
"from pulp import *\nimport numpy as np\nimport seaborn as sn\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"1. First let's make some fake data",
"items=['item_%d'%i for i in range(50)]\n\nitem_sizes = dict( (i,np.random.randint(1,20)) for i in items)",
"The Model\nLet's allow up to N bins, one per item in the worst case, and introduce a binary variable for placing item i in bin b.\n$$x_{i,b} = \\begin{cases}\n 1, & \\text{if item i is in bin b } \\\n 0, & \\text{otherwise}\n\\end{cases}\n$$\nWe need to make sure each item is placed in exactly one bin, i.e., for any given item, summing $x_{i,b}$ along the bins should equal 1.\n$$\\sum_{b} x_{i,b} = 1 \\ \\forall i$$\nWe also need to make sure that if a bin is used, it is not used beyond its capacity. Let $y_{b}$ be a binary variable indicating that bin b is used, and $s_{i}$ the size of item i.\n$$\\sum_{i} s_{i} x_{i,b} \\leq \\text{bin_capacity}*y_{b} \\ \\forall b$$\nFinally, we are trying to minimize the number of needed bins, so our objective is:\n$$\\text{Minimize} \\ \\sum_{b} y_{b}$$",
"bin_size = 40\n\n#average item size\navg_size = np.mean([ item_sizes[k] for k in item_sizes])\nplt.barh([0,1],[avg_size,bin_size],height=.99)\nplt.gca().text(.5,1.5,'Bin Size',verticalalignment='center', fontsize=12)\nplt.gca().text(.5,0.5,'Avg item Size', verticalalignment='center', fontsize=12)\nplt.ylim(0,2)\nplt.gca().axis('off');\nplt.title('Bin size vs Object Size')\n\nbins = ['bin_%d'%i for i in range(len(items))]\n\nx = LpVariable.dicts('x',[(i,b) for i in items for b in bins],0,1,LpBinary)\n\ny = LpVariable.dicts('bin',bins,0,10, LpBinary)\n\n#create the problme\nprob=LpProblem(\"bin_packing\",LpMinimize)\n\n#the objective\ncost = lpSum([ y[b] for b in bins])\nprob+=cost\n\n#every item is placed in exactly one bin\nfor i in items:\n prob+= lpSum([x[i,b] for b in bins]) == 1\n\n#if a bin is used, it has a capacity constraint\nfor b in bins:\n prob+=lpSum([ item_sizes[i]*x[i,b] for i in items]) <= bin_size*y[b]",
"Solve it!",
"%time prob.solve()\nprint(LpStatus[prob.status])",
"And the result:",
"print(value(prob.objective))\n\nfor b in bins:\n if value(y[b]) !=0:\n print(b,':',', '.join([ i for i in items if value(x[i,b]) !=0 ]))"
] |
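The bin packing notebook above notes that these NP-hard problems are often solved "through really clever heuristics". One classic heuristic, first-fit decreasing, makes a useful fast baseline to compare against the MIP answer; this standalone sketch is not part of the original notebook.

```python
def first_fit_decreasing(sizes, bin_size):
    """Pack items greedily: sort sizes descending, put each item into the
    first open bin with enough room, and open a new bin if none fits.
    Returns the bins as lists of item sizes."""
    bins = []       # item sizes placed in each bin
    remaining = []  # free capacity of each bin, kept in step with `bins`
    for size in sorted(sizes, reverse=True):
        for idx, free in enumerate(remaining):
            if size <= free:
                bins[idx].append(size)
                remaining[idx] -= size
                break
        else:  # no existing bin had room, open a new one
            bins.append([size])
            remaining.append(bin_size - size)
    return bins

packed = first_fit_decreasing([9, 8, 7, 6, 5, 4, 3, 2, 1], bin_size=15)
print(len(packed), packed)
```

First-fit decreasing is guaranteed to use at most roughly 11/9 of the optimal number of bins, so comparing its count with the solver's objective is a quick sanity check on the model.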
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
openfisca/openfisca-web-notebook
|
2016-04-01-hackathon-CodeImpot/reforme_revenu_de_base_nouvelles_simulations_rdb_adultes.ipynb
|
agpl-3.0
|
[
"# FONCTIONNE COMPLETEMENT AU 02/04/2016\nfrom datetime import date\n\nfrom openfisca_france import init_country\nfrom openfisca_france.model.base import *\n\n# to debug / trace\nfrom openfisca_core import web_tools",
"Système socio-fiscal",
"TaxBenefitSystem = init_country()\ntax_benefit_system = TaxBenefitSystem()\n\nfrom openfisca_core import reforms\n\nReformeRevenuDeBase = reforms.make_reform(\n key = 'reforme_rdb',\n name = u\"Réforme Revenu de base\",\n reference = tax_benefit_system,\n )",
"Réforme : 1. Revenu de base",
"from numpy import logical_not as not_, minimum as min_, maximum as max_, logical_and as and_, logical_or as or_\n#- Hausse de la CSG déductible au taux de 25%\n#- Mise en place d'un revenu de base adulte égal à RSA socle célibataire - forfait logement célibataire\n#- Intégrer le revenu de base au revenu disponible\n#- Mise en place d'un crédit d'impot familles monoparentales montant ??? (50€)\n#- Supprimer le RSA\n\n\n#-Visualisation graphique en abscisse salaire brut et en ordonnée variation du revenu disponible \n# pour un célibataire sans enfant\n# pour un couple sans enfant\n# une famille monoparentale\n\n\n#(- Nouveau calcul de l'IRPP)\n\n\n#- Hausse de la CSG déductible au taux de 20%\n#montant_csg_crds calcul à partir law_node.taux-plein law_node.taux_réduit et law_node.taux\nimport json\n#- Hausse de la CSG déductible au taux de 20%\n#montant_csg_crds calcul à partir law_node.taux-plein law_node.taux_réduit et law_node.taux\ndef modify_legislation_json(reference_legislation_json_copy):\n for value_json in reference_legislation_json_copy['children']['csg']['children']['activite']['children']['deductible']['children']['taux']['values']:\n value_json['value'] = 0.20\n return reference_legislation_json_copy\n\n\n\n#- Mise en place d'un revenu de base adulte égal à RSA socle célibataire - forfait logement célibataire\n\nclass rdb(ReformeRevenuDeBase.Variable):\n column = FloatCol\n entity_class = Individus\n label = u\"Revenu de base\"\n\n def function(self, simulation, period):\n period = period.start.offset('first-of', 'month').period('month')\n age = simulation.calculate('age') \n rmi = simulation.legislation_at(period.start).minim.rmi\n \n return period, ((age >= 18) * rmi.rmi * ( 1 -rmi.forfait_logement.taux1) + not_(age >= 18) * 0)\n\n\n\n#- Intégrer le revenu de base au revenu disponible\nclass revdisp(ReformeRevenuDeBase.Variable):\n reference = Menages.column_by_name['revdisp']\n\n def function(self, simulation, period):\n '''\n Revenu disponible - 
ménage\n 'men'\n '''\n period = period.start.period('year').offset('first-of')\n rev_trav_holder = simulation.compute('rev_trav', period)\n pen_holder = simulation.compute('pen', period)\n rev_cap_holder = simulation.compute('rev_cap', period)\n psoc_holder = simulation.compute('psoc', period)\n ppe_holder = simulation.compute('ppe', period)\n impo = simulation.calculate('impo', period)\n rdb_holder = simulation.calculate_add('rdb', period)\n credit_impot_monoparentales_holder = simulation.calculate_add('credit_impot_monoparentales', period)\n\n pen = self.sum_by_entity(pen_holder)\n ppe = self.cast_from_entity_to_role(ppe_holder, role = VOUS)\n ppe = self.sum_by_entity(ppe)\n psoc = self.cast_from_entity_to_role(psoc_holder, role = CHEF)\n psoc = self.sum_by_entity(psoc)\n rev_cap = self.sum_by_entity(rev_cap_holder)\n rev_trav = self.sum_by_entity(rev_trav_holder)\n rdb = self.sum_by_entity(rdb_holder)\n #credit_impot_monoparentales = self.sum_by_entity(credit_impot_monoparentales_holder)\n \n return period, rev_trav + pen + rev_cap + psoc + ppe + impo + rdb + credit_impot_monoparentales_holder\n\n#- Mise en place d'un crédit d'impot familles monoparentales montant (150€)\nclass credit_impot_monoparentales(ReformeRevenuDeBase.Variable):\n column = FloatCol\n entity_class = Menages\n label = u\"credit_impot_monoparentales\"\n\n def function(self, simulation, period):\n period = period.start.offset('first-of', 'month').period('month')\n nb_enf_a_charge = simulation.calculate('nombre_enfants_a_charge_menage',period)\n caseT = simulation.calculate('caseT',period) #Egal True si le parent est isolé\n \n \n #return period, or_(and_(age_holder >= 18, nb_enf_a_charge > 0, caseT), or_(age_holder < 18, nb_enf_a_charge <= 0, not_(caseT)) * 0) * 100\n return period, (nb_enf_a_charge > 0) * (caseT) * 150\n #Si le parent est isolé, avec au moins un enfant, et qu'il est majeur il reçoit la pension\n\n#- Supprimer le RSA\nclass rsa_socle(ReformeRevenuDeBase.Variable):\n reference 
= Familles.column_by_name['rsa_socle']\n\n def function(self, simulation, period):\n period = period.this_month\n nb_par = simulation.calculate('nb_par', period)\n eligib = simulation.calculate('rsa_eligibilite', period)\n nb_enfant_rsa = simulation.calculate('nb_enfant_rsa', period)\n rmi = simulation.legislation_at(period.start).minim.rmi\n\n nbp = nb_par + nb_enfant_rsa\n\n taux = (\n 1 +\n (nbp >= 2) * rmi.txp2 +\n (nbp >= 3) * rmi.txp3 +\n (nbp >= 4) * ((nb_par == 1) * rmi.txps + (nb_par != 1) * rmi.txp3) +\n max_(nbp - 4, 0) * rmi.txps\n )\n #on met à zéro\n return period, eligib * rmi.rmi * taux * 0",
"Tests",
"reform = ReformeRevenuDeBase()\nreform.modify_legislation_json(modifier_function = modify_legislation_json)",
"Individu seul",
"parent1_salaire_de_base = 20000\n\nscenario_ref_individu_seul = tax_benefit_system.new_scenario().init_single_entity(\n period = 2014,\n parent1 = dict(\n birth = date(1980, 1, 1),\n salaire_de_base = parent1_salaire_de_base,\n statmarit = u'Célibataire',\n ),\n foyer_fiscal = dict(\n caseT = True,\n ),\n enfants = [],\n )\nsimulation_ref_individu_seul = scenario_ref_individu_seul.new_simulation(debug = True)\n\nscenario_rdb_individu_seul = reform.new_scenario().init_single_entity(\n period = 2014,\n parent1 = dict(\n birth = date(1980, 1, 1),\n salaire_de_base = parent1_salaire_de_base,\n statmarit = u'Célibataire',\n ),\n foyer_fiscal = dict(\n caseT = True,\n ),\n enfants = [],\n )\nsimulation_rdb_individu_seul = scenario_rdb_individu_seul.new_simulation(debug = True)",
"Calculs de référence (revenu disponible, IR, RSA, CSG imposable/deductible, impot total)",
"simulation_ref_individu_seul.calculate('revdisp')\n\nsimulation_ref_individu_seul.calculate('irpp')\n\nsimulation_ref_individu_seul.calculate_add('rsa')\n\nsimulation_ref_individu_seul.calculate('csg_imposable_salaire')\n\nsimulation_ref_individu_seul.calculate('csg_deductible_salaire')\n\nsimulation_ref_individu_seul.calculate('tot_impot')",
"Calculs avec réforme RDB (revenu disponible, IR, RSA, CSG imposable/deductible, impot total)",
"simulation_rdb_individu_seul.calculate('revdisp')\n\nsimulation_rdb_individu_seul.calculate('irpp')\n\nsimulation_rdb_individu_seul.calculate('csg_imposable_salaire')\n\nsimulation_rdb_individu_seul.calculate('csg_deductible_salaire')\n\nsimulation_rdb_individu_seul.calculate('tot_impot')\n\n# trace\nsimulation_rdb_individu_seul.calculate('tot_impot')\n#print web_tools.get_trace_tool_link(scenario, ['tot_impot'])\n\nsimulation_rdb_individu_seul.calculate('rdb')",
"Graphiques: scenario variant selon salaire_de_base (0 => 60k par palier de 5k)",
"min_salaire_de_base = 0\nmax_salaire_de_base = 60000\nnb_palier = max_salaire_de_base / 5000\n\nimport matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# REFERENCE\nyear = 2014\nscenario_ref_individu_seul = tax_benefit_system.new_scenario().init_single_entity(\n period = year,\n parent1 = dict(\n birth = date(1980, 1, 1),\n statmarit = u'Célibataire',\n ),\n foyer_fiscal = dict(\n caseT = True,\n ),\n enfants = [],\n axes = [[\n dict(\n count = nb_palier,\n min = min_salaire_de_base,\n max = max_salaire_de_base,\n name = 'salaire_de_base',\n period = year, \n ),\n dict(\n count = nb_palier,\n min = min_salaire_de_base,\n max = max_salaire_de_base,\n name = 'salaire_de_base',\n period = year-1, \n ),\n dict(\n count = nb_palier,\n min = min_salaire_de_base,\n max = max_salaire_de_base,\n name = 'salaire_de_base',\n period = year-2, \n )\n ]],\n)\nsimulation_ref_individu_seul = scenario_ref_individu_seul.new_simulation(debug = True)\n\n# RDB\nscenario_rdb_individu_seul = reform.new_scenario().init_single_entity(\n period = year,\n parent1 = dict(\n birth = date(1980, 1, 1),\n statmarit = u'Célibataire',\n ),\n foyer_fiscal = dict(\n caseT = True,\n ),\n enfants = [],\n axes = [[\n dict(\n count = nb_palier,\n min = min_salaire_de_base,\n max = max_salaire_de_base,\n name = 'salaire_de_base',\n period = year, \n ),\n dict(\n count = nb_palier,\n min = min_salaire_de_base,\n max = max_salaire_de_base,\n name = 'salaire_de_base',\n period = year-1, \n ),\n dict(\n count = nb_palier,\n min = min_salaire_de_base,\n max = max_salaire_de_base,\n name = 'salaire_de_base',\n period = year-2, \n )\n ]],\n )\nsimulation_rdb_individu_seul = scenario_rdb_individu_seul.new_simulation(debug = True)\n\nsalaire_de_base = simulation_ref_individu_seul.calculate(\"salaire_de_base\")\nprint salaire_de_base\n\n# comparaison du revenu disponible entre Reference et RDB\nrevenu_disponible_ref = simulation_ref_individu_seul.calculate_add(\"revdisp\")\nprint 
revenu_disponible_ref\nrevenu_disponible_rdb = simulation_rdb_individu_seul.calculate_add(\"revdisp\")\nprint revenu_disponible_rdb\n\nplt.plot(salaire_de_base[::1], revenu_disponible_ref, salaire_de_base[::1], revenu_disponible_rdb)",
"Couples sans enfants\nOn fait varier le salaire d'un des deux, l'autre est fixé à 0.",
"def make_two_parents_scenario(nombre_enfants = 0, year = None, tax_benefit_system = tax_benefit_system,\n axes_variable = 'salaire_de_base', ax_variable_max = 60000, count = 13):\n enfant = [dict(\n birth = date(2005, 1, 1),\n )]\n enfants = enfant * nombre_enfants\n scenario = tax_benefit_system.new_scenario().init_single_entity(\n axes = [[\n dict(\n count = nb_palier,\n min = min_salaire_de_base,\n max = max_salaire_de_base,\n name = 'salaire_de_base',\n period = year, \n ),\n dict(\n count = nb_palier,\n min = min_salaire_de_base,\n max = max_salaire_de_base,\n name = 'salaire_de_base',\n period = year-1, \n ),\n dict(\n count = nb_palier,\n min = min_salaire_de_base,\n max = max_salaire_de_base,\n name = 'salaire_de_base',\n period = year-2, \n )\n ]],\n period = year,\n parent1 = dict(\n birth = date(1980, 1, 1),\n ),\n parent2 = dict(\n birth = date(1980, 1, 1),\n ),\n enfants = enfants,\n menage = dict(\n loyer = 1000,\n statut_occupation = 4,\n ),\n )\n return scenario\n\nscenario_alone_ref = make_two_parents_scenario(0, 2014)\nsimulation_alone_ref = scenario_alone_ref.new_simulation()\n\nscenario_alone_reform = make_two_parents_scenario(0, 2014, reform)\nsimulation_alone_reform = scenario_alone_reform.new_simulation()\n\nrevenu_disponible_couple_ref = simulation_alone_ref.calculate_add(\"revdisp\")\nprint revenu_disponible_couple_ref\nrevenu_disponible_couple_rdb = simulation_alone_reform.calculate_add(\"revdisp\")\nsalaire_de_base_couple_ref = simulation_alone_ref.calculate(\"salaire_de_base\")\nsalaire_de_base_couple = simulation_alone_reform.calculate(\"salaire_de_base\")\nprint salaire_de_base_couple\nsalaire_de_base_couple_bis = salaire_de_base_couple[0::2]\nsalaire_de_base_couple_ref_bis = salaire_de_base_couple_ref[0::2]\nplt.plot(salaire_de_base_couple_bis[::1], revenu_disponible_couple_ref, salaire_de_base_couple_bis[::1], salaire_de_base_couple_bis)"
] |
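In the basic-income notebook above, modify_legislation_json rewrites the deductible CSG rate by walking OpenFisca's nested legislation JSON. Stripped of OpenFisca, the pattern is just a deep-copy-then-update on a nested dict of dated values. The miniature tree below is invented for illustration (the real tree nests csg -> activite -> deductible -> taux and has many more nodes); the sketch sets 0.20, as the record's code does.

```python
import copy

# Made-up miniature of the legislation tree; only the path we rewrite is kept.
legislation = {
    "children": {
        "csg": {
            "children": {
                "deductible": {
                    "children": {
                        "taux": {"values": [
                            {"start": "2010-01-01", "value": 0.051},
                            {"start": "2005-01-01", "value": 0.051},
                        ]}
                    }
                }
            }
        }
    }
}

def modify_taux(reference, new_rate):
    """Return a modified copy of the tree with every dated value of the rate
    replaced, leaving the reference legislation untouched (the same contract
    as a reform's modify_legislation_json)."""
    tree = copy.deepcopy(reference)
    node = tree["children"]["csg"]["children"]["deductible"]["children"]["taux"]
    for value_json in node["values"]:
        value_json["value"] = new_rate
    return tree

reformed = modify_taux(legislation, 0.20)
```

Deep-copying first matters: OpenFisca hands reforms a copy for the same reason, so the baseline system can still be simulated side by side with the reform.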
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dnc1994/MachineLearning-UW
|
ml-classification/module-10-online-learning-assignment-solution.ipynb
|
mit
|
[
"Training Logistic Regression via Stochastic Gradient Ascent\nThe goal of this notebook is to implement a logistic regression classifier using stochastic gradient ascent. You will:\n\nExtract features from Amazon product reviews.\nConvert an SFrame into a NumPy array.\nWrite a function to compute the derivative of log likelihood function with respect to a single coefficient.\nImplement stochastic gradient ascent.\nCompare convergence of stochastic gradient ascent with that of batch gradient ascent.\n\nFire up GraphLab Create\nMake sure you have the latest version of GraphLab Create. Upgrade by\npip install graphlab-create --upgrade\nSee this page for detailed instructions on upgrading.",
"from __future__ import division\nimport graphlab",
"Load and process review dataset\nFor this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.",
"products = graphlab.SFrame('amazon_baby_subset.gl/')",
"Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:\n\nRemove punctuation using Python's built-in string manipulation functionality.\nCompute word counts (only for the important_words)\n\nRefer to Module 3 assignment for more details.",
"import json\nwith open('important_words.json', 'r') as f: \n important_words = json.load(f)\nimportant_words = [str(s) for s in important_words]\n\n# Remote punctuation\ndef remove_punctuation(text):\n import string\n return text.translate(None, string.punctuation) \n\nproducts['review_clean'] = products['review'].apply(remove_punctuation)\n\n# Split out the words into individual columns\nfor word in important_words:\n products[word] = products['review_clean'].apply(lambda s : s.split().count(word))",
"The SFrame products now contains one column for each of the 193 important_words.",
"products",
"Split data into training and validation sets\nWe will now split the data into a 90-10 split where 90% is in the training set and 10% is in the validation set. We use seed=1 so that everyone gets the same result.",
"train_data, validation_data = products.random_split(.9, seed=1)\n\nprint 'Training set : %d data points' % len(train_data)\nprint 'Validation set: %d data points' % len(validation_data)",
"Convert SFrame to NumPy array\nJust like in the earlier assignments, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. \nNote: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.",
"import numpy as np\n\ndef get_numpy_data(data_sframe, features, label):\n data_sframe['intercept'] = 1\n features = ['intercept'] + features\n features_sframe = data_sframe[features]\n feature_matrix = features_sframe.to_numpy()\n label_sarray = data_sframe[label]\n label_array = label_sarray.to_numpy()\n return(feature_matrix, label_array)",
"Note that we convert both the training and validation sets into NumPy arrays.\nWarning: This may take a few minutes.",
"feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')\nfeature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment') ",
"Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)\nIt has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:\narrays = np.load('module-10-assignment-numpy-arrays.npz')\nfeature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']\nfeature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']\n Quiz question: In Module 3 assignment, there were 194 features (an intercept + one feature for each of the 193 important words). In this assignment, we will use stochastic gradient ascent to train the classifier using logistic regression. How does the changing the solver to stochastic gradient ascent affect the number of features?\nBuilding on logistic regression\nLet us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:\n$$\nP(y_i = +1 | \\mathbf{x}_i,\\mathbf{w}) = \\frac{1}{1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))},\n$$\nwhere the feature vector $h(\\mathbf{x}_i)$ is given by the word counts of important_words in the review $\\mathbf{x}_i$. \nWe will use the same code as in Module 3 assignment to make probability predictions, since this part is not affected by using stochastic gradient ascent as a solver. Only the way in which the coefficients are learned is affected by using stochastic gradient ascent as a solver.",
"'''\nproduces probablistic estimate for P(y_i = +1 | x_i, w).\nestimate ranges between 0 and 1.\n'''\ndef predict_probability(feature_matrix, coefficients):\n # Take dot product of feature_matrix and coefficients \n score = np.dot(feature_matrix, coefficients)\n \n # Compute P(y_i = +1 | x_i, w) using the link function\n predictions = 1. / (1.+np.exp(-score)) \n return predictions",
"Derivative of log likelihood with respect to a single coefficient\nLet us now work on making minor changes to how the derivative computation is performed for logistic regression.\nRecall from the lectures and Module 3 assignment that for logistic regression, the derivative of log likelihood with respect to a single coefficient is as follows:\n$$\n\\frac{\\partial\\ell}{\\partial w_j} = \\sum_{i=1}^N h_j(\\mathbf{x}_i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right)\n$$\nIn Module 3 assignment, we wrote a function to compute the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts the following two parameters:\n * errors vector containing $(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w}))$ for all $i$\n * feature vector containing $h_j(\\mathbf{x}_i)$ for all $i$\nComplete the following code block:",
"def feature_derivative(errors, feature): \n \n # Compute the dot product of errors and feature\n ## YOUR CODE HERE\n derivative = np.dot(errors, feature)\n\n return derivative",
"Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.\nTo verify the correctness of the gradient computation, we provide a function for computing average log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).\nTo track the performance of stochastic gradient ascent, we provide a function for computing average log likelihood. \n$$\\ell\\ell_A(\\mathbf{w}) = \\color{red}{\\frac{1}{N}} \\sum_{i=1}^N \\Big( (\\mathbf{1}[y_i = +1] - 1)\\mathbf{w}^T h(\\mathbf{x}_i) - \\ln\\left(1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))\\right) \\Big) $$\nNote that we made one tiny modification to the log likelihood function (called compute_log_likelihood) in our earlier assignments. We added a $\\color{red}{1/N}$ term which averages the log likelihood accross all data points. The $\\color{red}{1/N}$ term makes it easier for us to compare stochastic gradient ascent with batch gradient ascent. We will use this function to generate plots that are similar to those you saw in the lecture.",
"def compute_avg_log_likelihood(feature_matrix, sentiment, coefficients):\n \n indicator = (sentiment==+1)\n scores = np.dot(feature_matrix, coefficients)\n logexp = np.log(1. + np.exp(-scores))\n \n # Simple check to prevent overflow\n mask = np.isinf(logexp)\n logexp[mask] = -scores[mask]\n \n lp = np.sum((indicator-1)*scores - logexp)/len(feature_matrix)\n \n return lp",
"Quiz Question: Recall from the lecture and the earlier assignment, the log likelihood (without the averaging term) is given by \n$$\\ell\\ell(\\mathbf{w}) = \\sum_{i=1}^N \\Big( (\\mathbf{1}[y_i = +1] - 1)\\mathbf{w}^T h(\\mathbf{x}_i) - \\ln\\left(1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))\\right) \\Big) $$\nHow are the functions $\\ell\\ell(\\mathbf{w})$ and $\\ell\\ell_A(\\mathbf{w})$ related?\nModifying the derivative for stochastic gradient ascent\nRecall from the lecture that the gradient for a single data point $\\color{red}{\\mathbf{x}_i}$ can be computed using the following formula:\n$$\n\\frac{\\partial\\ell_{\\color{red}{i}}(\\mathbf{w})}{\\partial w_j} = h_j(\\color{red}{\\mathbf{x}i})\\left(\\mathbf{1}[y\\color{red}{i} = +1] - P(y_\\color{red}{i} = +1 | \\color{red}{\\mathbf{x}_i}, \\mathbf{w})\\right)\n$$\n Computing the gradient for a single data point\nDo we really need to re-write all our code to modify $\\partial\\ell(\\mathbf{w})/\\partial w_j$ to $\\partial\\ell_{\\color{red}{i}}(\\mathbf{w})/{\\partial w_j}$? \nThankfully No!. Using NumPy, we access $\\mathbf{x}i$ in the training data using feature_matrix_train[i:i+1,:]\nand $y_i$ in the training data using sentiment_train[i:i+1]. We can compute $\\partial\\ell{\\color{red}{i}}(\\mathbf{w})/\\partial w_j$ by re-using all the code written in feature_derivative and predict_probability.\nWe compute $\\partial\\ell_{\\color{red}{i}}(\\mathbf{w})/\\partial w_j$ using the following steps:\n* First, compute $P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})$ using the predict_probability function with feature_matrix_train[i:i+1,:] as the first parameter.\n* Next, compute $\\mathbf{1}[y_i = +1]$ using sentiment_train[i:i+1].\n* Finally, call the feature_derivative function with feature_matrix_train[i:i+1, j] as one of the parameters. \nLet us follow these steps for j = 1 and i = 10:",
"j = 1 # Feature number\ni = 10 # Data point number\ncoefficients = np.zeros(194) # A point w at which we are computing the gradient.\n\npredictions = predict_probability(feature_matrix_train[i:i+1,:], coefficients)\nindicator = (sentiment_train[i:i+1]==+1)\n\nerrors = indicator - predictions \ngradient_single_data_point = feature_derivative(errors, feature_matrix_train[i:i+1,j])\nprint \"Gradient single data point: %s\" % gradient_single_data_point\nprint \" --> Should print 0.0\"",
"Quiz Question: The code block above computed $\\partial\\ell_{\\color{red}{i}}(\\mathbf{w})/{\\partial w_j}$ for j = 1 and i = 10. Is $\\partial\\ell_{\\color{red}{i}}(\\mathbf{w})/{\\partial w_j}$ a scalar or a 194-dimensional vector?\nModifying the derivative for using a batch of data points\nStochastic gradient estimates the ascent direction using 1 data point, while gradient uses $N$ data points to decide how to update the the parameters. In an optional video, we discussed the details of a simple change that allows us to use a mini-batch of $B \\leq N$ data points to estimate the ascent direction. This simple approach is faster than regular gradient but less noisy than stochastic gradient that uses only 1 data point. Although we encorage you to watch the optional video on the topic to better understand why mini-batches help stochastic gradient, in this assignment, we will simply use this technique, since the approach is very simple and will improve your results.\nGiven a mini-batch (or a set of data points) $\\mathbf{x}{i}, \\mathbf{x}{i+1} \\ldots \\mathbf{x}{i+B}$, the gradient function for this mini-batch of data points is given by:\n$$\n\\color{red}{\\sum{s = i}^{i+B}} \\frac{\\partial\\ell_{s}}{\\partial w_j} = \\color{red}{\\sum_{s = i}^{i + B}} h_j(\\mathbf{x}_s)\\left(\\mathbf{1}[y_s = +1] - P(y_s = +1 | \\mathbf{x}_s, \\mathbf{w})\\right)\n$$\n Computing the gradient for a \"mini-batch\" of data points\nUsing NumPy, we access the points $\\mathbf{x}i, \\mathbf{x}{i+1} \\ldots \\mathbf{x}_{i+B}$ in the training data using feature_matrix_train[i:i+B,:]\nand $y_i$ in the training data using sentiment_train[i:i+B]. \nWe can compute $\\color{red}{\\sum_{s = i}^{i+B}} \\partial\\ell_{s}/\\partial w_j$ easily as follows:",
"j = 1 # Feature number\ni = 10 # Data point start\nB = 10 # Mini-batch size\ncoefficients = np.zeros(194) # A point w at which we are computing the gradient.\n\npredictions = predict_probability(feature_matrix_train[i:i+B,:], coefficients)\nindicator = (sentiment_train[i:i+B]==+1)\n\nerrors = indicator - predictions \ngradient_mini_batch = feature_derivative(errors, feature_matrix_train[i:i+B,j])\nprint \"Gradient mini-batch data points: %s\" % gradient_mini_batch\nprint \" --> Should print 1.0\"",
"Quiz Question: The code block above computed \n$\\color{red}{\\sum_{s = i}^{i+B}}\\partial\\ell_{s}(\\mathbf{w})/{\\partial w_j}$ \nfor j = 10, i = 10, and B = 10. Is this a scalar or a 194-dimensional vector?\n Quiz Question: For what value of B is the term\n$\\color{red}{\\sum_{s = 1}^{B}}\\partial\\ell_{s}(\\mathbf{w})/\\partial w_j$\nthe same as the full gradient\n$\\partial\\ell(\\mathbf{w})/{\\partial w_j}$?\nAveraging the gradient across a batch\nIt is a common practice to normalize the gradient update rule by the batch size B:\n$$\n\\frac{\\partial\\ell_{\\color{red}{A}}(\\mathbf{w})}{\\partial w_j} \\approx \\color{red}{\\frac{1}{B}} {\\sum_{s = i}^{i + B}} h_j(\\mathbf{x}_s)\\left(\\mathbf{1}[y_s = +1] - P(y_s = +1 | \\mathbf{x}_s, \\mathbf{w})\\right)\n$$\nIn other words, we update the coefficients using the average gradient over data points (instead of using a summation). By using the average gradient, we ensure that the magnitude of the gradient is approximately the same for all batch sizes. This way, we can more easily compare various batch sizes of stochastic gradient ascent (including a batch size of all the data points), and study the effect of batch size on the algorithm as well as the choice of step size.\nImplementing stochastic gradient ascent\nNow we are ready to implement our own logistic regression with stochastic gradient ascent. Complete the following function to fit a logistic regression model using gradient ascent:",
"from math import sqrt\ndef logistic_regression_SG(feature_matrix, sentiment, initial_coefficients, step_size, batch_size, max_iter):\n log_likelihood_all = []\n \n # make sure it's a numpy array\n coefficients = np.array(initial_coefficients)\n # set seed=1 to produce consistent results\n np.random.seed(seed=1)\n # Shuffle the data before starting\n permutation = np.random.permutation(len(feature_matrix))\n feature_matrix = feature_matrix[permutation,:]\n sentiment = sentiment[permutation]\n \n i = 0 # index of current batch\n # Do a linear scan over data\n for itr in xrange(max_iter):\n # Predict P(y_i = +1|x_i,w) using your predict_probability() function\n # Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,:]\n ### YOUR CODE HERE\n predictions = predict_probability(feature_matrix[i:i+batch_size,:], coefficients)\n \n # Compute indicator value for (y_i = +1)\n # Make sure to slice the i-th entry with [i:i+batch_size]\n ### YOUR CODE HERE\n indicator = (sentiment[i:i+batch_size] == +1)\n \n # Compute the errors as indicator - predictions\n errors = indicator - predictions\n for j in xrange(len(coefficients)): # loop over each coefficient\n # Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]\n # Compute the derivative for coefficients[j] and save it to derivative.\n # Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,j]\n ### YOUR CODE HERE\n derivative = feature_derivative(errors, feature_matrix[i:i+batch_size,j])\n \n # compute the product of the step size, the derivative, and the **normalization constant** (1./batch_size)\n ### YOUR CODE HERE\n coefficients[j] += step_size * derivative / batch_size\n \n # Checking whether log likelihood is increasing\n # Print the log likelihood over the *current batch*\n lp = compute_avg_log_likelihood(feature_matrix[i:i+batch_size,:], sentiment[i:i+batch_size],\n coefficients)\n log_likelihood_all.append(lp)\n if itr <= 15 or (itr <= 1000 and itr % 
100 == 0) or (itr <= 10000 and itr % 1000 == 0) \\\n or itr % 10000 == 0 or itr == max_iter-1:\n data_size = len(feature_matrix)\n print 'Iteration %*d: Average log likelihood (of data points in batch [%0*d:%0*d]) = %.8f' % \\\n (int(np.ceil(np.log10(max_iter))), itr, \\\n int(np.ceil(np.log10(data_size))), i, \\\n int(np.ceil(np.log10(data_size))), i+batch_size, lp)\n \n # if we made a complete pass over data, shuffle and restart\n i += batch_size\n if i+batch_size > len(feature_matrix):\n permutation = np.random.permutation(len(feature_matrix))\n feature_matrix = feature_matrix[permutation,:]\n sentiment = sentiment[permutation]\n i = 0\n \n # We return the list of log likelihoods for plotting purposes.\n return coefficients, log_likelihood_all",
"Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.\nCheckpoint\nThe following cell tests your stochastic gradient ascent function using a toy dataset consisting of two data points. If the test does not pass, make sure you are normalizing the gradient update rule correctly.",
"sample_feature_matrix = np.array([[1.,2.,-1.], [1.,0.,1.]])\nsample_sentiment = np.array([+1, -1])\n\ncoefficients, log_likelihood = logistic_regression_SG(sample_feature_matrix, sample_sentiment, np.zeros(3),\n step_size=1., batch_size=2, max_iter=2)\nprint '-------------------------------------------------------------------------------------'\nprint 'Coefficients learned :', coefficients\nprint 'Average log likelihood per-iteration :', log_likelihood\nif np.allclose(coefficients, np.array([-0.09755757, 0.68242552, -0.7799831]), atol=1e-3)\\\n and np.allclose(log_likelihood, np.array([-0.33774513108142956, -0.2345530939410341])):\n # pass if elements match within 1e-3\n print '-------------------------------------------------------------------------------------'\n print 'Test passed!'\nelse:\n print '-------------------------------------------------------------------------------------'\n print 'Test failed'",
"Compare convergence behavior of stochastic gradient ascent\nFor the remainder of the assignment, we will compare stochastic gradient ascent against batch gradient ascent. For this, we need a reference implementation of batch gradient ascent. But do we need to implement this from scratch?\nQuiz Question: For what value of batch size B above is the stochastic gradient ascent function logistic_regression_SG act as a standard gradient ascent algorithm?\nRunning gradient ascent using the stochastic gradient ascent implementation\nInstead of implementing batch gradient ascent separately, we save time by re-using the stochastic gradient ascent function we just wrote — to perform gradient ascent, it suffices to set batch_size to the number of data points in the training data. Yes, we did answer above the quiz question for you, but that is an important point to remember in the future :)\nSmall Caveat. The batch gradient ascent implementation here is slightly different than the one in the earlier assignments, as we now normalize the gradient update rule.\nWe now run stochastic gradient ascent over the feature_matrix_train for 10 iterations using:\n* initial_coefficients = np.zeros(194)\n* step_size = 5e-1\n* batch_size = 1\n* max_iter = 10",
"coefficients, log_likelihood = logistic_regression_SG(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-1, batch_size=1, max_iter=10)",
"Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?\n* Increases\n* Decreases\n* Fluctuates \nNow run batch gradient ascent over the feature_matrix_train for 200 iterations using:\n* initial_coefficients = np.zeros(194)\n* step_size = 5e-1\n* batch_size = len(feature_matrix_train)\n* max_iter = 200",
"# YOUR CODE HERE\ncoefficients_batch, log_likelihood_batch = logistic_regression_SG(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-1, batch_size=len(feature_matrix_train), max_iter=200)",
"Quiz Question. When you set batch_size = len(train_data), as each iteration passes, how does the average log likelihood in the batch change?\n* Increases \n* Decreases\n* Fluctuates \nMake \"passes\" over the dataset\nTo make a fair comparison between stochastic gradient ascent and batch gradient ascent, we measure the average log likelihood as a function of the number of passes (defined as follows):\n$$\n[\\text{# of passes}] = \\frac{[\\text{# of data points touched so far}]}{[\\text{size of dataset}]}\n$$\nQuiz Question. Suppose that we run stochastic gradient ascent with a batch size of 100. How many gradient updates are performed at the end of two passes over a dataset consisting of 50000 data points?",
"print 50000 / 100 * 2",
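The arithmetic above generalizes; as a small sketch (the helper name `num_updates` is ours, not part of the assignment), one pass performs `len(data) / batch_size` gradient updates, so two passes over 50000 points in batches of 100 give 1000 updates:

```python
# Hypothetical helper (not part of the assignment code): number of gradient
# updates performed by mini-batch gradient ascent after a given number of passes.
def num_updates(num_points, batch_size, num_passes):
    # one pass touches num_points data points, in batches of batch_size
    return (num_points // batch_size) * num_passes

# Two passes over 50000 points with batches of 100 -> 1000 updates.
print(num_updates(50000, 100, 2))
```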
"Log likelihood plots for stochastic gradient ascent\nWith the terminology in mind, let us run stochastic gradient ascent for 10 passes. We will use\n* step_size=1e-1\n* batch_size=100\n* initial_coefficients to all zeros.",
"step_size = 1e-1\nbatch_size = 100\nnum_passes = 10\nnum_iterations = num_passes * int(len(feature_matrix_train)/batch_size)\n\ncoefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=1e-1, batch_size=100, max_iter=num_iterations)",
"We provide you with a utility function to plot the average log likelihood as a function of the number of passes.",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\ndef make_plot(log_likelihood_all, len_data, batch_size, smoothing_window=1, label=''):\n plt.rcParams.update({'figure.figsize': (9,5)})\n log_likelihood_all_ma = np.convolve(np.array(log_likelihood_all), \\\n np.ones((smoothing_window,))/smoothing_window, mode='valid')\n plt.plot(np.array(range(smoothing_window-1, len(log_likelihood_all)))*float(batch_size)/len_data,\n log_likelihood_all_ma, linewidth=4.0, label=label)\n plt.rcParams.update({'font.size': 16})\n plt.tight_layout()\n plt.xlabel('# of passes over data')\n plt.ylabel('Average log likelihood per data point')\n plt.legend(loc='lower right', prop={'size':14})\n\nmake_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,\n label='stochastic gradient, step_size=1e-1')",
"Smoothing the stochastic gradient ascent curve\nThe plotted line oscillates so much that it is hard to see whether the log likelihood is improving. In our plot, we apply a simple smoothing operation using the parameter smoothing_window. The smoothing is simply a moving average of log likelihood over the last smoothing_window \"iterations\" of stochastic gradient ascent.",
"make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,\n smoothing_window=30, label='stochastic gradient, step_size=1e-1')",
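The smoothing inside make_plot is just a moving average; here is a minimal pure-Python equivalent (the helper name `moving_average` is ours) of what `np.convolve(values, np.ones(window)/window, mode='valid')` computes:

```python
def moving_average(values, window):
    # average of each length-`window` slice, mirroring
    # np.convolve(values, np.ones(window)/window, mode='valid')
    return [sum(values[i:i + window]) / float(window)
            for i in range(len(values) - window + 1)]

print(moving_average([1, 2, 3, 4, 5], 2))  # [1.5, 2.5, 3.5, 4.5]
```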
"Checkpoint: The above plot should look smoother than the previous plot. Play around with smoothing_window. As you increase it, you should see a smoother plot.\nStochastic gradient ascent vs batch gradient ascent\nTo compare convergence rates for stochastic gradient ascent with batch gradient ascent, we call make_plot() multiple times in the same cell.\nWe are comparing:\n* stochastic gradient ascent: step_size = 0.1, batch_size=100\n* batch gradient ascent: step_size = 0.5, batch_size=len(feature_matrix_train)\nWrite code to run stochastic gradient ascent for 200 passes using:\n* step_size=1e-1\n* batch_size=100\n* initial_coefficients to all zeros.",
"step_size = 1e-1\nbatch_size = 100\nnum_passes = 200\nnum_iterations = num_passes * int(len(feature_matrix_train)/batch_size)\n\n## YOUR CODE HERE\ncoefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=1e-1, batch_size=100, max_iter=num_iterations)",
"We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30.",
"make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,\n smoothing_window=30, label='stochastic, step_size=1e-1')\nmake_plot(log_likelihood_batch, len_data=len(feature_matrix_train), batch_size=len(feature_matrix_train),\n smoothing_window=1, label='batch, step_size=5e-1')",
"Quiz Question: In the figure above, how many passes does batch gradient ascent need to achieve a similar log likelihood as stochastic gradient ascent? \n\nIt's always better\n10 passes\n20 passes\n150 passes or more\n\nExplore the effects of step sizes on stochastic gradient ascent\nIn previous sections, we chose step sizes for you. In practice, it helps to know how to choose good step sizes yourself.\nTo start, we explore a wide range of step sizes that are equally spaced in the log space. Run stochastic gradient ascent with step_size set to 1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, and 1e2. Use the following set of parameters:\n* initial_coefficients=np.zeros(194)\n* batch_size=100\n* max_iter initialized so as to run 10 passes over the data.",
"batch_size = 100\nnum_passes = 10\nnum_iterations = num_passes * int(len(feature_matrix_train)/batch_size)\n\ncoefficients_sgd = {}\nlog_likelihood_sgd = {}\nfor step_size in np.logspace(-4, 2, num=7):\n coefficients_sgd[step_size], log_likelihood_sgd[step_size] = logistic_regression_SG(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=step_size, batch_size=batch_size, max_iter=num_iterations)",
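As a sanity check on that grid, here is a pure-Python version of the step-size sweep; it agrees with `np.logspace(-4, 2, num=7)` up to floating-point rounding:

```python
# Seven step sizes equally spaced in log10 between 1e-4 and 1e2,
# matching np.logspace(-4, 2, num=7).
step_sizes = [10.0 ** e for e in range(-4, 3)]
print(step_sizes)
```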
"Plotting the log likelihood as a function of passes for each step size\nNow, we will plot the change in log likelihood using the make_plot for each of the following values of step_size:\n\nstep_size = 1e-4\nstep_size = 1e-3\nstep_size = 1e-2\nstep_size = 1e-1\nstep_size = 1e0\nstep_size = 1e1\nstep_size = 1e2\n\nFor consistency, we again apply smoothing_window=30.",
"for step_size in np.logspace(-4, 2, num=7):\n make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,\n smoothing_window=30, label='step_size=%.1e'%step_size)",
"Now, let us remove the step size step_size = 1e2 and plot the rest of the curves.",
"for step_size in np.logspace(-4, 2, num=7)[0:6]:\n make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,\n smoothing_window=30, label='step_size=%.1e'%step_size)",
"Quiz Question: Which of the following is the worst step size? Pick the step size that results in the lowest log likelihood in the end.\n1. 1e-2\n2. 1e-1\n3. 1e0\n4. 1e1\n5. 1e2\nQuiz Question: Which of the following is the best step size? Pick the step size that results in the highest log likelihood in the end.\n1. 1e-4\n2. 1e-2\n3. 1e0\n4. 1e1\n5. 1e2"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
EmuKit/emukit
|
notebooks/Emukit-tutorial-parallel-eval-of-obj-fun.ipynb
|
apache-2.0
|
[
"Bayesian optimization with parallel evaluation of an external objective function using Emukit\nThis tutorial will show you how to leverage Emukit to do Bayesian optimization on an external objective function that we can evaluate multiple times in parallel.\nOverview\nBy the end of the tutorial, you will be able to:\n\nGenerate batches ${X_t | t \\in 1..}$ of objective function evaluation locations ${x_i | x_i \\in X_t}$\nEvaluate the objective function at these suggested locations in parallel $f(x_i)$\nUse asyncio to implement the concurrency structure supporting this parallel evaluation\n\nThis tutorial requires basic familiarity with Bayesian optimization and concurrency. If you've never run Bayesian optimization using Emukit before, please refer to the introductory tutorial for more information. The concurrency used here is not particularly complicated, so you should be able to follow just fine without much more than an understanding of the active object design pattern.\nThe overview must start with the general imports and plots configuration\nThe overview section must finish with a Navigation that links to the main sections of the notebook",
"### General imports\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import colors as mcolors\n\n### --- Figure config\ncolors = dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS)\nLEGEND_SIZE = 15\nTITLE_SIZE = 25\nAXIS_SIZE = 15\nFIG_SIZE = (12,8)",
"Navigation\n\n\nDefine Objective Function\n\n\nSetup BO & Run BO\n\n\nConclusions\n\n\n1. Define objective function",
"# Specific imports that are used in a section should be loaded at the beginning of that section.\n# It is ok if an import is repeated multiple times over the notebook\n\nimport time\nimport asyncio\nimport GPy\nimport emukit\nimport numpy as np\nfrom math import pi\n\nfrom emukit.test_functions.branin import (\n branin_function as _branin_function,\n)\n\n### Define the cost and objective functions\n\n_branin, _ps = _branin_function()\n\nasync def a_cost(x: np.ndarray):\n # Cost function, defined arbitrarily\n t = max(x.sum()/10, 0.1)\n await asyncio.sleep(t)\n\nasync def a_objective(x: np.ndarray):\n # Objective function\n r = _branin(x)\n await a_cost(x)\n return r\n\nasync def demo_async_obj():\n '''This function demonstrates a simple usage of the async objective function'''\n # Configure\n _x = [7.5, 12.5]\n d = len(_x)\n x = np.array(_x).reshape((1, d))\n assert _ps.check_points_in_domain(x).all(), (\"You configured a point outside the objective\"\n f\"function's domain: {x} is outside {_ps.get_bounds()}\")\n # Execute\n print(f\"Input: x={x}\")\n t0 = time.perf_counter()\n r = await a_objective(x)\n t1 = time.perf_counter()\n print(f\"Output: result={r}\")\n print(f\"Time elapsed: {t1-t0} sec\")\n\nawait demo_async_obj()",
"2. Run BO using parallel evaluation of batched suggestions",
"from GPy.models import GPRegression\nfrom emukit.model_wrappers import GPyModelWrapper\nfrom emukit.core.initial_designs.latin_design import LatinDesign\nfrom emukit.core import ParameterSpace, ContinuousParameter\nfrom emukit.core.loop import UserFunctionWrapper, UserFunctionResult\nfrom emukit.core.loop.stopping_conditions import FixedIterationsStoppingCondition\nfrom emukit.core.optimization import GradientAcquisitionOptimizer\nfrom emukit.bayesian_optimization.loops import BayesianOptimizationLoop\nfrom emukit.bayesian_optimization.acquisitions import NegativeLowerConfidenceBound\n\nimport warnings\nwarnings.filterwarnings('ignore') # to quell the numerical errors in hyperparameter fitting\n\n# Plotting stuff (from constrained optimization tutorial)\nx1b, x2b = _ps.get_bounds()\nplot_granularity = 50\nx_1 = np.linspace(x1b[0], x1b[1], plot_granularity)\nx_2 = np.linspace(x2b[0], x2b[1], plot_granularity)\nx_1_grid, x_2_grid = np.meshgrid(x_1, x_2)\nx_all = np.stack([x_1_grid.flatten(), x_2_grid.flatten()], axis=1)\ny_all = _branin(x_all)\ny_reshape = np.reshape(y_all, x_1_grid.shape)\nx_best = np.array([(-pi,12.275), (pi,2.275), (9.425,2.475)])\n\ndef plot_progress(loop_state, batch_size: int):\n    plt.figure(figsize=FIG_SIZE)\n    plt.contourf(x_1, x_2, y_reshape)\n    plt.plot(loop_state.X[:-batch_size, 0], loop_state.X[:-batch_size, 1], linestyle='', marker='.', markersize=16, color='b')\n    plt.plot(loop_state.X[-batch_size:, 0], loop_state.X[-batch_size:, 1], linestyle='', marker='.', markersize=16, color='r')\n    plt.plot(x_best[:,0], x_best[:,1], linestyle='', marker='x', markersize=18, color='g')\n    plt.legend(['Previously evaluated points', 'Last evaluation', 'True best'])\n    plt.show()\n\nasync def async_run_bo():\n    # Configure\n    max_iter = 50\n    n_init = 6\n    batch_size = 6\n    beta = 0.1 # tradeoff parameter for NCLB acq. opt.\n    update_interval = 1 # how many results before running hyperparam. opt.\n    # Build Bayesian optimization components\n    space = _ps\n    design = LatinDesign(space)\n    X_init = design.get_samples(n_init)\n    input_coroutines = [a_objective(x.reshape((1,space.dimensionality))) for x in X_init]\n    _Y_init = await asyncio.gather(*input_coroutines, return_exceptions=True)\n    Y_init = np.concatenate(_Y_init)\n    model_gpy = GPRegression(X_init, Y_init)\n    model_gpy.optimize()\n    model_emukit = GPyModelWrapper(model_gpy)\n    acquisition_function = NegativeLowerConfidenceBound(model=model_emukit, beta=beta)\n    acquisition_optimizer = GradientAcquisitionOptimizer(space=space)\n    bo_loop = BayesianOptimizationLoop(\n        model = model_emukit,\n        space = space,\n        acquisition = acquisition_function,\n        acquisition_optimizer = acquisition_optimizer,\n        update_interval = update_interval,\n        batch_size = batch_size,\n    )\n    # Run BO loop\n    results = None\n    n = bo_loop.model.X.shape[0]\n    while n < max_iter:\n        print(f\"Optimizing: n={n}\")\n        # TODO use a different acquisition function because currently X_batch is 5 identical sugg.\n        # ^ only on occasion, apparently\n        X_batch = bo_loop.get_next_points(results)\n        coroutines = [a_objective(x.reshape((1, space.dimensionality))) for x in X_batch]\n        # TODO update model as soon as any result is available\n        # ^ as-is, only updates and makes new suggestions when all results come in\n        # TODO make suggestions cost-aware\n        _results = await asyncio.gather(*coroutines, return_exceptions=True)\n        Y_batch = np.concatenate(_results)\n        results = list(map(UserFunctionResult, X_batch, Y_batch))\n        n = n + len(results)\n    plot_progress(bo_loop.loop_state, batch_size)\n    final_result = bo_loop.get_results()\n    true_best = 0.397887\n    # rel_err = (final_result.minimum_value - true_best)/true_best\n    print(\n        \"############################################################\\n\"\n        f\"Minimum found at location: {final_result.minimum_location}\\n\"\n        f\"\\twith score: {final_result.minimum_value}\\n\"\n        f\"True minima at:\\n{x_best}\\n\"\n        f\"\\twith score: {true_best}\\n\"\n        # f\"Relative error (%): {rel_err*100:.2f}\\n\"\n        \"\\tsource: https://www.sfu.ca/~ssurjano/branin.html\\n\"\n        \"############################################################\"\n    )\n    \nawait async_run_bo()",
"3. Conclusions\n\nI generated batches of suggestions using the bo_loop.get_next_points() function having configured the bo_loop with batch_size > 1\nI evaluated these suggestions in parallel using _Y_init = await asyncio.gather(*input_coroutines, return_exceptions=True)\nThe asyncio structure is bare-bones:\nThe coroutines are prepared by mapping the async external objective function over the inputs\nThe coroutines are executed using asyncio.gather"
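The fan-out/fan-in structure summarized in the conclusions can be sketched with nothing but the standard library; `a_work` below is a stand-in for the notebook's `a_objective`, not EmuKit code:

```python
import asyncio

async def a_work(x):
    # stand-in for the external objective: pretend each evaluation takes time
    await asyncio.sleep(0.01)
    return x * x

async def evaluate_batch(xs):
    # map the async function over the batch, then gather all results at once;
    # asyncio.gather preserves the input ordering
    coroutines = [a_work(x) for x in xs]
    return await asyncio.gather(*coroutines, return_exceptions=True)

results = asyncio.run(evaluate_batch([1, 2, 3]))
print(results)  # [1, 4, 9]
```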
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kubeflow/examples
|
demos/recurring/recurring.ipynb
|
apache-2.0
|
[
"Recurring runs with the KFP SDK\nIf you're running on a local cluster, expose the GUI and API, respectively, with\nthe following commands:\nkubectl port-forward -n kubeflow svc/ml-pipeline-ui 8080:80\nkubectl port-forward -n kubeflow svc/ml-pipeline-ui 3000:80\nThe rest of this demo assumes that you're running locally.\nInstantiate the KFP SDK client. Set the host variable to the url and port where\nyou expose the KFP API.",
"import kfp\n\nhost = 'http://localhost:3000'\nclient = kfp.Client(host=host)",
"Create a pipeline component from the provided component file. This component\nretrieves and executes a script from a provided URL.",
"run_script = kfp.components.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/examples/master/demos/recurring/component.yaml'\n)",
"Create a pipeline function.",
"def pipeline(url):\n run_script_task = run_script(url=url)",
"Compile the pipeline function. We will pass the resulting yaml to the pipeline\nexecution invocations.",
"kfp.compiler.Compiler().compile(\n pipeline_func=pipeline,\n package_path='download.yaml',\n)",
"Create a parameters dictionary with the url key.",
"parameters = {\n 'url': 'https://raw.githubusercontent.com/kubeflow/examples/master/demos/recurring/success.sh'\n}",
"We can optionally validate the pipeline with a single run before creating a recurring run.",
"result = client.create_run_from_pipeline_func(\n pipeline_func=pipeline,\n arguments=parameters,\n)",
"We can retrieve the result of the pipeline run through the Kubeflow GUI, which\nis the recommended approach. That being said, we can also interrogate the result\nprogrammatically.",
"result.wait_for_run_completion()\nprint(result.run_info)",
"Now that we've validated a single run, let's create a recurring run.\nWe first need to create an experiment since the create_recurring_run method\nrequires an experiment_id.",
"experiment = client.create_experiment('test')\n\njob = client.create_recurring_run(\n experiment_id=experiment.id,\n job_name='test',\n cron_expression='*/2 * * * *', # Runs once every two minutes.\n pipeline_package_path='download.yaml', # Pass in compiled output.\n params=parameters,\n)",
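The cron expression `*/2 * * * *` fires on every even minute. As an illustration (the helper `minute_matches` is ours, not part of the KFP SDK), the minute field works like this:

```python
# Hypothetical helper (not part of the KFP SDK): check whether a cron minute
# field such as "*/2" matches a given minute value.
def minute_matches(field, minute):
    if field == "*":
        return True
    if field.startswith("*/"):           # step syntax: every N minutes
        return minute % int(field[2:]) == 0
    return minute == int(field)          # plain numeric field

# "*/2 * * * *" fires on minutes 0, 2, 4, ...
fired = [m for m in range(10) if minute_matches("*/2", m)]
print(fired)  # [0, 2, 4, 6, 8]
```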
"The Kubeflow Pipelines GUI provides an excellent interface for interacting with\nrecurring runs, but you can interrogate the job programmatically if you prefer.",
"print(job)",
"In the GUI, you can retrieve the logs of an individual run. They should\nculminate with Success!.\nTo disable the recurring run:",
"client.disable_job(job.id)",
"To list recurring runs:",
"client.list_recurring_runs()",
"To get details about an individual recurring run:",
"client.get_recurring_run(job.id)",
"To delete a recurring run programmatically:",
"result = client.delete_job(job.id)",
"Additional recurring run interactions via the SDK are documented here."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gaufung/Data_Analytics_Learning_Note
|
DesignPattern/InterpretPattern.ipynb
|
mit
|
[
"Interpreter Pattern\n1 Code\nWe want to build a guitar simulator that automatically recognizes musical scores: once a score is entered, it plays accordingly. Apart from the sound device (assumed finished), the most important capabilities are reading and translating the score. Analyzing the requirements, the whole process splits roughly into two parts: translating the content of the score according to the rules, and playing according to the translated content. We use an interpreter model to implement this functionality.",
"class PlayContext():\n play_text = None\n\nclass Expression():\n def interpret(self, context):\n if len(context.play_text) == 0:\n return\n else:\n play_segs=context.play_text.split(\" \")\n for play_seg in play_segs:\n pos=0\n for ele in play_seg:\n if ele.isalpha():\n pos+=1\n continue\n break\n play_chord = play_seg[0:pos]\n play_value = play_seg[pos:]\n self.execute(play_chord,play_value)\n def execute(self,play_key,play_value):\n pass\n\nclass NormGuitar(Expression):\n def execute(self, key, value):\n print (\"Normal Guitar Playing--Chord:%s Play Tune:%s\"%(key,value))",
"The PlayContext class holds the content of the score; it contains only one field and no methods. Expression is the expression class, with just two methods: interpret translates the score and execute performs it. The NormGuitar class overrides execute to play in the style of a guitar.\nThe business scenario is as follows:",
"context = PlayContext()\ncontext.play_text = \"C53231323 Em43231323 F43231323 G63231323\"\nguitar=NormGuitar()\nguitar.interpret(context)",
"2 Description\nThe Interpreter pattern is defined as follows: given a language, define a representation of its grammar together with an interpreter that uses this representation to interpret sentences of the language. A typical interpreter distinguishes terminal symbols from nonterminal symbols, and the grammar, built on these two kinds of symbols, determines the final meaning of a statement. In the example above, the nonterminal symbol is the space and the terminal symbol is the end of the whole sentence.\n3 Advantages\n\nGood extensibility in grammar-parsing scenarios: rules can be modified and defined flexibly.\n\n4 Occasions\n\nConsider the Interpreter pattern when a problem occurs repeatedly. This is common in data processing and log processing: when a data consumer wants to adopt data for its own use, it must \"translate\" the data into its own format; likewise, a log analysis platform needs to translate different log formats into one unified \"language\".\nInterpreters for specific grammars, such as the interpreters of interpreted languages, or grammar-based text analysis of natural language.\n\n5 Disadvantages\n\nDiversifying interpretation rules leads to an explosion of interpreters.\nAn interpreter has a narrow purpose and fairly fixed behavior, so avoid the Interpreter pattern in important modules."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mayank-johri/LearnSeleniumUsingPython
|
Section 3 - Machine Learning/Supervised Learning Algorithm/Classification/3. Gaussian Naive Bayes Classification.ipynb
|
gpl-3.0
|
[
"Gaussian Naive Bayes Classification\nFor most classification problems, it’s nice to have a simple, fast method to provide a quick baseline classification. If the simple and fast method is sufficient, then we don’t have to waste CPU cycles on more complex models. If not, we can use the results of the simple method to give us clues about our data.\nOne good method to keep in mind is Gaussian Naive Bayes (sklearn.naive_bayes.GaussianNB).\nGaussian Naive Bayes fits a Gaussian distribution to each training label independently on each feature, and uses this to quickly give a rough classification. It is generally not sufficiently accurate for real-world data, but can perform surprisingly well, for instance on text data.",
"from sklearn.datasets import load_digits\ndigits = load_digits()\n\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)\n\nprint(len(X_train), len(X_test), y_train, y_test)\n\nclf = GaussianNB()\nclf.fit(X_train, y_train)\n\npredicted = clf.predict(X_test)\nexpected = y_test\nprint(predicted)\nprint(expected)",
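The idea behind the classifier can be sketched in a few lines of pure Python. This is our illustrative sketch, not sklearn's implementation; it omits class priors (fine here because the toy classes are balanced) and adds a tiny variance floor for stability:

```python
import math

# Fit a per-class, per-feature mean and variance (the "naive" Gaussian model).
def fit(X, y):
    params = {}
    for label in set(y):
        rows = [x for x, t in zip(X, y) if t == label]
        n = float(len(rows))
        means = [sum(col) / n for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / n + 1e-9
                     for col, m in zip(zip(*rows), means)]
        params[label] = (means, variances)
    return params

# Score a point with the Gaussian log-density and pick the best class.
def predict(params, x):
    def log_density(point, means, variances):
        return sum(-0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
                   for v, m, var in zip(point, means, variances))
    return max(params, key=lambda label: log_density(x, *params[label]))

params = fit([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [4.9, 5.0]], [0, 0, 1, 1])
print(predict(params, [0.1, 0.0]))  # 0
print(predict(params, [5.0, 5.0]))  # 1
```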
"Quantitative Measurement of Performance",
"matches = (predicted == expected)\nprint(matches)\n\nprint(matches.sum())\nprint(len(matches))\nqmp = matches.sum() / float(len(matches))\nprint(qmp)",
"We see that more than 80% of the 450 predictions match the input. But there are other more sophisticated metrics that can be used to judge the performance of a classifier: several are available in the sklearn.metrics submodule.\nOne of the most useful metrics is the classification_report, which combines several measures and prints a table with the results:",
"from sklearn import metrics\nprint(metrics.classification_report(expected, predicted))",
"Another enlightening metric for this sort of multi-label classification is a confusion matrix: it helps us visualize which labels are being interchanged in the classification errors:",
"print(metrics.confusion_matrix(expected, predicted))",
"The matrix above is indexed 0 1 2 3 4 5 6 7 8 9 along both the X and Y axes.\nYou can see that there is no confusion for 0, which has 41 instances. \nThere is some confusion for 1: it was correctly identified in 31 instances, but confused with 6 (4 times), 8 (7 times) & 9 (1 time), and so on. \nWe see here that the numbers 1, 2, 3, and 9 are often being labeled 8."
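Reading the matrix this way can be checked against a tiny pure-Python confusion matrix (a sketch of the idea, not sklearn's implementation):

```python
def confusion(expected, predicted, n_labels):
    # rows = true label, columns = predicted label, as in sklearn
    m = [[0] * n_labels for _ in range(n_labels)]
    for e, p in zip(expected, predicted):
        m[e][p] += 1
    return m

# A true 1 predicted as 8 shows up as an off-diagonal entry at row 1, column 8.
m = confusion([0, 1, 1], [0, 1, 8], 9)
print(m[1][8])  # 1
```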
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sjqgithub/rquestions
|
Construct Dataset.ipynb
|
mit
|
[
"import numpy as np\nnp.random.seed(2345)\n\nimport pandas as pd\n\nquestions = pd.read_csv(\"./input/Questions.csv\", encoding='latin1')\nanswers = pd.read_csv(\"./input/Answers.csv\", encoding='latin1')\ntags = pd.read_csv(\"./input/Tags.csv\", encoding='latin1')\n\ntags.head()\n\nanswers.head()\n\nquestions.head()\n\nquestions.info() # Id Title Body are used for constructing the dataset\n\nanswers.info() # OwnerUserId ParentId IsAcceptedAnswer are used for constructing the dataset, maybe score can be used in the future\n\ntags.info() # Id and Tag are useful",
"First, process questions: data cleaning",
"# extract all the code part \ntemp_code = questions['Body'].str.extractall(r'(<code>[^<]+</code>)')\n\ntemp_code.head()\n\n# unstack and convert into a single column for cleaning\ntest = temp_code.unstack('match')\n\ntest.columns = test.columns.droplevel()\n# put all columns together\ncode = pd.DataFrame(test.apply(lambda x: x.str.cat(), axis=1,reduce=True))\n# rename \ncode.columns = ['CodeBody']\n# remove the html tags finally\ncode['CodeBody'] = code['CodeBody'].str.replace(r'<[^>]+>|\\n|\\r',' ')\n\n# remove the code part from questions\nbody = questions['Body'].str.replace(r'<code>[^<]+</code>',' ')\n# build up the question part from questions\nquestions['QuestionBody'] = body.str.replace(r\"<[^>]+>|\\n|\\r\", \" \")\n\n# Join the codebody by index\nquestions = questions.join(code)\n# final cleaned dataset\nquestions_final = questions.drop('Body',axis=1)\n\nquestions_final.head()\n\nquestions_final.info()",
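The two regexes above do all the work: one grabs `<code>...</code>` chunks, the other strips the HTML tags. A small standalone illustration on a made-up question body:

```python
import re

# Same patterns as in the pandas pipeline above, applied to one string.
body = "Use <code>lm(y ~ x)</code> first, then <code>summary(fit)</code>."
chunks = re.findall(r'<code>[^<]+</code>', body)
cleaned = [re.sub(r'<[^>]+>', ' ', c).strip() for c in chunks]
print(cleaned)  # ['lm(y ~ x)', 'summary(fit)']
```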
"Next, process tags and append them to questions",
"tags = tags[tags.Tag.notnull()]\n\ntagsByquestion = tags.groupby('Id',as_index=False).agg(lambda x: ' '.join(x))\n\ntagsByquestion.head()\n\ntagsByquestion.info()\n\nquestions_tags = questions_final.merge(tagsByquestion,on='Id',how='left')\n\nquestions_tags.head()\n\nquestions_tags.info()",
"At this point, Questions and tags have been processed and merged.",
"questions_tags = questions_tags.drop(['OwnerUserId','CreationDate','Score'], axis=1)\n\nquestions_tags.head()\n\nquestions_tags.info()",
"Next, process answers and find the questions that have an accepted answer",
"accepted_answers = answers[answers.IsAcceptedAnswer == True]\n\naccepted_answers.head()\n\naccepted_answers.info()\n\n%matplotlib inline\n\n# Let's compute the number of best answers the experts have proposed:\naccepted_answers[\"OwnerUserId\"].value_counts().head(10).plot(kind=\"barh\")\n\naccepted_answers[\"OwnerUserId\"].value_counts().head(10)\n\naccepted_answers = accepted_answers.drop(['Id','CreationDate','Score','IsAcceptedAnswer' ,'Body'], axis=1)\n\ncol_mapping = {'OwnerUserId' : 'ExpertId',\n 'ParentId' : 'Id'}\naccepted_answers = accepted_answers.rename(columns=col_mapping, copy = False)\n\naccepted_answers.head()\n\naccepted_answers.info()\n\naccepted_answers = accepted_answers.dropna()\n\naccepted_answers.info()\n\nunique_expert = accepted_answers.ExpertId.unique()\nunique_expert.shape\n\ncount = accepted_answers['ExpertId'].value_counts()\n\ncount_df = pd.DataFrame(count)\n\ncount_df = count_df.reset_index()\n\ncol_mapping2 = {'ExpertId' : 'Count',\n 'index' : 'ExpertId'}\ncount_df = count_df.rename(columns=col_mapping2, copy = False)\n\ncount_df.head()\n\ncount_df.info()",
"Combine the data",
"questions_answers = questions_tags.merge(accepted_answers,on='Id',how='right')\n\nquestions_answers.head()\n\nquestions_answers.info()\n\nexperts_count = questions_answers.merge(count_df, on='ExpertId', how='left')\n\nexperts_count.head()\n\nexperts_count.info()\n\nexperts_count.columns\n\nexperts_count = experts_count.reindex(columns=[u'Id', u'Title', u'QuestionBody', u'CodeBody', u'Tag', u'ExpertId',\n u'Count', u'Label'])\n\nfrom sklearn.preprocessing import LabelEncoder\nlabel=LabelEncoder()\nexperts_count['Label']=label.fit_transform(experts_count['ExpertId'])\n\nexperts_count.head()\n\nmax_lable = np.max(experts_count.Label)\nmin_lable = np.min(experts_count.Label)\nprint (max_lable)\nprint (min_lable)\n\nexperts_count.info()\n\nimport pickle\npickle.dump(experts_count,open('experts_count.pkl','wb'))",
"Data integration is done; now we construct the experiment dataset\nstart from here",
"import numpy as np\nimport pandas as pd\nexperts_count=pd.read_pickle('experts_count.pkl')\nexperts_count=experts_count.fillna('none')\n\nexperts_count.columns\n\ntrain1 = experts_count[:80000][experts_count.Count>10]\ntest1 = experts_count[80000:]\n\ntrain1_unique_expert = train1.ExpertId.unique()\nprint (\"number of experts in train set: %r \" % train1_unique_expert.shape)\n\ntest1_unique_expert = test1.ExpertId.unique()\nprint (\"number of experts in test set: %r\" % test1_unique_expert.shape)\n\nprint (\"type : %r\" % type(test1_unique_expert))\n\nl = np.intersect1d(train1_unique_expert,test1_unique_expert)\nprint (\"the number of experts both in train set and test set: %r\" % l.shape)\n\ntitle_train1 = train1.drop(['Id','QuestionBody','CodeBody','Tag', \\\n                            'ExpertId','Count',],axis=1)\ntitle_test1 = test1.drop(['Id','QuestionBody','CodeBody','Tag', \\\n                           'ExpertId','Count',],axis=1)\n\ntitle_train1 = title_train1.set_index('Label')\ntitle_test1 = title_test1.set_index('Label')\n\ntitle_train1.info()\n\ntitle_test1.info()\n\n# From fastText's test categories we find that 5176 of the 7404 questions in the test set belong to experts present in the training set\n5176/7404.0\n\ntitle_train1.to_csv('title_train1', encoding='utf-8')\ntitle_test1.to_csv('title_test1', encoding='utf-8')",
"At this point, the dataset title_set1 (Count>10) is complete! used for the classification model \n\nstop words were not ignored\ntext was not tokenized and lowercased\nno stemming was used\n\n\n\nNow, let's construct the dataset for computing the similarity \nstart from here",
"import numpy as np\nimport pandas as pd\nexperts_count=pd.read_pickle('experts_count.pkl')\nexperts_count=experts_count.fillna('none')\n\ntrain = experts_count[:80000]\ntest = experts_count[80000:]\ntitle_train = train.drop(['Id','QuestionBody','CodeBody','Tag', \\\n                          'ExpertId','Count',],axis=1)\ntitle_test = test.drop(['Id','QuestionBody','CodeBody','Tag', \\\n                        'ExpertId','Count',],axis=1)\n\ntitle_train = title_train.groupby('Label',as_index=True).agg(lambda x: ' '.join(x))\ntitle_test = title_test.set_index('Label')\n\ntitle_train.info()\n\ntitle_test.info()\n\ntrain_unique_expert = train.ExpertId.unique()\nprint (\"number of experts in train set: %r \" % train_unique_expert.shape)\n\ntest_unique_expert = test.ExpertId.unique()\nprint (\"number of experts in test set: %r\" % test_unique_expert.shape)\n\nprint (\"type : %r\" % type(test_unique_expert))\n\nl = np.intersect1d(train_unique_expert,test_unique_expert)\nprint (\"the number of experts both in train set and test set: %r\" % l.shape)\n# 1417 - 698 = 719, 719 + 8285 = 9004: 719 experts appear in the test set for the first time.\n\ntitle_train.to_csv('title_train_similarity',encoding='utf-8')\ntitle_test.to_csv('title_test_similarity',encoding='utf-8')",
"At this point, the dataset title_set is complete! used for the similarity model \n\nstop words were not ignored\ntext was not tokenized and lowercased\nno stemming was used\n\nRemove stop words",
"import nltk\nstopset = set(nltk.corpus.stopwords.words('english'))\n\ntexts = list(experts_count.Title)\n# Tokenize the titles\ntexts = [nltk.word_tokenize(text) for text in texts]\n# pos tag the tokens\ntxtpos = [nltk.pos_tag(texts) for texts in texts]\n# for titles we only care about verbs and nouns\ntxtpos = [[w for w in s if (w[1][0] == 'N' or w[1][0] == 'V') and \\\n           w[0].lower() not in stopset] \n          for s in txtpos]\n\n# use experts_count here (the working DataFrame); the original referenced an undefined dfFinal\nqbodys = list(experts_count.QuestionBody)\n#break into sentences\nqsents = [nltk.sent_tokenize(text) for text in qbodys]\n# Tokenize the question body\nqbodys = [nltk.word_tokenize(text) for text in qbodys]\n# attach tags to the body\nqpos = [nltk.pos_tag(texts) for texts in qbodys]\n\nimport cPickle\ncPickle.dump([train1, test1], open('data1.pkl','wb'))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
wonkoderverstaendige/RattusExMachina
|
doc/Playtesting.ipynb
|
mit
|
[
"import re\nimport os\nimport numpy as np\nfrom scipy import stats\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef mean(values):\n return float(sum(values))/len(values)\n\nimport serial\n\ndef transfer_test(data, dev='/dev/ttyACM0'):\n \"\"\"Send numpy array over serial, return bytes written\n # TODO: Time taken to send for quick benchmarking!\"\"\"\n with serial.Serial(dev, writeTimeout=0) as ser:\n return ser.write(data)",
"PJRC's receive test\n(host in C, variable buffer size, receiving in 64 Byte chunks)\nAnything below 64 bytes is not a full USB packet and waits for transmission. Above, full speed is achieved.",
"result_path = '../src/USB_Virtual_Serial_Rcv_Speed_Test/usb_serial_receive/host_software/'\nprint [f for f in os.listdir(result_path) if f.endswith('.txt')]\n\ndef read_result(filename):\n    results = {}\n    current_blocksize = None\n    with open(os.path.join(result_path, filename)) as f:\n        for line in f.readlines():\n            if line.startswith('port'):\n                current_blocksize = int(re.search('(?:size.)(\\d*)', line).groups()[0])\n                results[current_blocksize] = []\n            else:\n                results[current_blocksize].append(int(line[:-4].strip())/1000.)\n    return results\n\n# Example: \nresults = read_result('result_readbytes.txt')\nfor bs in sorted(results.keys()):\n    speeds = results[bs]\n    print \"{bs:4d}B blocks: {avg:4.0f}±{sem:.0f} KB/s\".format(bs=bs, avg=mean(speeds), sem=stats.sem(speeds))\n\n# Standard\nsizes, speeds_standard = zip(*[(k, mean(v)) for k, v in read_result('result_standard.txt').items()])\n\n# ReadBytes\nsizes, speeds_readbytes = zip(*[(k, mean(v)) for k, v in read_result('result_readbytes.txt').items()])\n\n# Readbytes+8us overhead per transferred SPI packet (worst case scenario?)\nsizes, speeds_readbytes_oh = zip(*[(k, mean(v)) for k, v in read_result('result_readbytes_overhead.txt').items()])\n\n# ReadBytes+spi4teensy on 8 channels\nsizes, speeds_readbytes_spi = zip(*[(k, mean(v)) for k, v in read_result('result_readbytes_spi4teensy.txt').items()])\n\nfig, axes = plt.subplots(nrows=1, ncols=1, figsize=(10, 5))\n\naxes.semilogx(sizes, speeds_standard, 'gx', basex=2, label='Standard')\naxes.semilogx(sizes, speeds_readbytes, 'rx', basex=2, label='ReadBytes')\naxes.semilogx(sizes, speeds_readbytes_oh, 'bx', basex=2, label='ReadBytes+OH')\naxes.semilogx(sizes, speeds_readbytes_spi, 'k+', basex=2, label='ReadBytes+spi4teensy@8channels')\naxes.set_xlabel('Block size [B]')\naxes.set_ylabel('Transfer speed [kB/s]')\naxes.legend(loc=2)\naxes.set_xlim((min(sizes)/2., max(sizes)*2))\n\nfig.tight_layout()\n#TODO: use individual values, make stats + error bars\n\nn = int(1e6)\ndata = ''.join([chr(i%256) for i in range(n)])\nt = %timeit -o -q transfer_test(data)\nprint \"{:.1f} KB, {:.2f} s, {:.1f} KB/s\".format(len(data)/1000., mean(t.all_runs), len(data)/1000./mean(t.all_runs))",
"Send arbitrary signals",
"n_val = 4096\nmax_val = 4096\n# cosines\ncosines = ((np.cos(np.linspace(-np.pi, np.pi, num=n_val))+1)*(max_val/2)).astype('uint16')\n\n# noise\nnoise = (np.random.rand(n_val)*max_val).astype('uint16')\n\n# ramps\nramps = np.linspace(0, max_val, n_val).astype('uint16')\n\n# squares\nhi = np.ones(n_val/4, dtype='uint16')*max_val-1\nlo = np.zeros_like(hi)\nsquares = np.tile(np.hstack((hi, lo)), 2)\n\n# all together\narr = np.dstack((cosines, noise, ramps, squares, \\\n cosines, noise, ramps, squares, \\\n cosines, noise, ramps, squares, \\\n cosines, noise, ramps, squares)).flatten()\nfig, axes = plt.subplots(nrows=2, ncols=1, figsize=(13, 8))\naxes[0].set_xlim((0, cosines.size))\naxes[0].plot(cosines, label='cosine');\naxes[0].plot(noise, label='random');\naxes[0].plot(ramps, label='ramp');\naxes[0].plot(squares, label='square');\naxes[0].legend()\n\naxes[1].set_xlim((0, arr.size))\naxes[1].plot(arr);\nfig.tight_layout()\n\nn = 500\ndata = np.tile(arr, n).view(np.uint8)\nt = %timeit -o -q -n 1 -r 1 tx = transfer_test(data)\nprint \"{:.1f} KB, {:.2f} s, {:.1f} KB/s\".format(arr.nbytes/1000.*n, mean(t.all_runs), arr.nbytes/1000.*n/mean(t.all_runs))\n\nt = %timeit -o -q -n 1 -r 1 tx = transfer_test(data)\nprint \"{:.1f} KB, {:.2f} s, {:.1f} KB/s\".format(arr.nbytes/1000.*n, mean(t.all_runs), arr.nbytes/1000.*n/mean(t.all_runs))",
"Send \"neural\" data\nUsing Lynn's data set from the Klusters2 example",
"data_path = \"../data/lynn/lynn.dat\"\ndata_float = np.fromfile(data_path, dtype='(64,)i2').astype(np.float)\n\n# normalize the array to 12bit\ndata_float -= data_float.min()\ndata_float /= data_float.max()\ndata_float *= (2**12-1)\ndata_scaled = data_float.astype(np.uint16)\nprint data_scaled.min(), data_scaled.max()\n\nfig, axes = plt.subplots(nrows=1, ncols=1, figsize=(13, 7))\nfor n in range(0, 64, 4):\n axes.plot(data_scaled[0:20000, n]+n*70, label=\"Channel %d\"%n);\nplt.legend()\nfig.tight_layout()\n\nprint \"first channel :\", data_scaled[0,0:3]\nprint \"second channel:\", data_scaled[8,0:3]\nprint \"interleaved :\", data_scaled[(0, 8), 0:3].transpose().flatten()\n\nn = 5\ndata = np.tile(data_scaled[:, 0:64:4].transpose().flatten(), n).tobytes()\nlen(data)\ntransfer_test(data)\n\nt = %timeit -q -o -n 1 -r 1 transfer_test(data);\nprint \"{:.1f} KB, {:.2f} s, {:.1f} KB/s\".format(data_scaled[:, 0:64:4].nbytes/1000.*n,\n mean(t.all_runs),\n data_scaled[:, 0:64:4].nbytes/1000.*n/mean(t.all_runs))\n\ntype(data)\n\ndata"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
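The per-block-size summary in the record above reports `mean ± stats.sem` of the measured speeds. A dependency-free Python 3 sketch of that same aggregation (the sample speeds below are made up for illustration; the real notebook uses NumPy's `mean` and `scipy.stats.sem`) could look like:

```python
import math

def summarize(speeds):
    """Return (mean, standard error of the mean) for a list of KB/s samples."""
    n = len(speeds)
    avg = sum(speeds) / n
    # sample variance with ddof=1, matching scipy.stats.sem's default
    var = sum((s - avg) ** 2 for s in speeds) / (n - 1)
    return avg, math.sqrt(var) / math.sqrt(n)

# hypothetical throughput samples for one block size, in KB/s
avg, sem = summarize([610.0, 620.0, 630.0])
print("{:4.0f}±{:.0f} KB/s".format(avg, sem))
```

This reproduces the `"{avg:4.0f}±{sem:.0f} KB/s"` style output of the notebook without pulling in SciPy.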
SKA-ScienceDataProcessor/algorithm-reference-library
|
workflows/notebooks/imaging_serial.ipynb
|
apache-2.0
|
[
"Imaging and deconvolution demonstration\nThis script makes a fake data set and then deconvolves it. Finally the full and residual visibility are plotted.",
"%matplotlib inline\n\nimport os\nimport sys\n\nsys.path.append(os.path.join('..', '..'))\n\nfrom data_models.parameters import arl_path\nresults_dir = arl_path('test_results')\n\n\nfrom matplotlib import pylab\n\npylab.rcParams['figure.figsize'] = (8.0, 8.0)\npylab.rcParams['image.cmap'] = 'rainbow'\n\nimport numpy\n\nfrom astropy.coordinates import SkyCoord\nfrom astropy import units as u\nfrom astropy.wcs.utils import pixel_to_skycoord\n\nfrom matplotlib import pyplot as plt\n\nfrom processing_components.image.iterators import image_raster_iter\n\nfrom wrappers.serial.visibility.base import create_visibility\nfrom wrappers.serial.skycomponent.operations import create_skycomponent\nfrom wrappers.serial.image.operations import show_image, export_image_to_fits\nfrom wrappers.serial.image.deconvolution import deconvolve_cube, restore_cube\nfrom wrappers.serial.visibility.iterators import vis_timeslice_iter\nfrom wrappers.serial.simulation.configurations import create_named_configuration\nfrom wrappers.serial.simulation.testing_support import create_test_image\nfrom wrappers.serial.imaging.base import create_image_from_visibility\nfrom wrappers.serial.imaging.base import advise_wide_field\n\nfrom workflows.serial.imaging.imaging_serial import invert_list_serial_workflow, predict_list_serial_workflow\n\nfrom data_models.polarisation import PolarisationFrame\n\nimport logging\n\nlog = logging.getLogger()\nlog.setLevel(logging.DEBUG)\nlog.addHandler(logging.StreamHandler(sys.stdout))\n\nmpl_logger = logging.getLogger(\"matplotlib\") \nmpl_logger.setLevel(logging.WARNING) \n\npylab.rcParams['figure.figsize'] = (12.0, 12.0)\npylab.rcParams['image.cmap'] = 'rainbow'",
"Construct LOW core configuration",
"lowr3 = create_named_configuration('LOWBD2', rmax=750.0)\n\nprint(lowr3.xyz)",
"We create the visibility. This just makes the uvw, time, antenna1, antenna2, weight columns in a table",
"times = numpy.zeros([1])\nfrequency = numpy.array([1e8])\nchannel_bandwidth = numpy.array([1e6])\nphasecentre = SkyCoord(ra=+15.0 * u.deg, dec=-45.0 * u.deg, frame='icrs', equinox='J2000')\nvt = create_visibility(lowr3, times, frequency, channel_bandwidth=channel_bandwidth,\n weight=1.0, phasecentre=phasecentre, polarisation_frame=PolarisationFrame('stokesI'))\n\nadvice = advise_wide_field(vt, guard_band_image=3.0, delA=0.1, facets=1, wprojection_planes=1, \n oversampling_synthesised_beam=4.0)\ncellsize = advice['cellsize']",
"Plot the synthesized uv coverage.",
"plt.clf()\nplt.plot(vt.data['uvw'][:,0], vt.data['uvw'][:,1], '.', color='b')\nplt.plot(-vt.data['uvw'][:,0], -vt.data['uvw'][:,1], '.', color='b')\nplt.xlim([-400.0, 400.0])\nplt.ylim([-400.0, 400.0])\nplt.show()",
"Read the venerable test image, constructing an image",
"m31image = create_test_image(frequency=frequency, cellsize=cellsize)\nnchan, npol, ny, nx = m31image.data.shape\nm31image.wcs.wcs.crval[0] = vt.phasecentre.ra.deg\nm31image.wcs.wcs.crval[1] = vt.phasecentre.dec.deg\nm31image.wcs.wcs.crpix[0] = float(nx // 2)\nm31image.wcs.wcs.crpix[1] = float(ny // 2)\n\nfig=show_image(m31image)\n\nvt = predict_list_serial_workflow([vt], [m31image], context='2d')[0]\n\n# To check that we got the prediction right, plot the amplitude of the visibility.\nuvdist=numpy.sqrt(vt.data['uvw'][:,0]**2+vt.data['uvw'][:,1]**2)\nplt.clf()\nplt.plot(uvdist, numpy.abs(vt.data['vis']), '.')\nplt.xlabel('uvdist')\nplt.ylabel('Amp Visibility')\nplt.show()",
"Make the dirty image and point spread function",
"model = create_image_from_visibility(vt, cellsize=cellsize, npixel=512)\ndirty, sumwt = invert_list_serial_workflow([vt], [model], context='2d')[0]\npsf, sumwt = invert_list_serial_workflow([vt], [model], context='2d', dopsf=True)[0]\n\nshow_image(dirty)\nprint(\"Max, min in dirty image = %.6f, %.6f, sumwt = %f\" % (dirty.data.max(), dirty.data.min(), sumwt))\n\nprint(\"Max, min in PSF = %.6f, %.6f, sumwt = %f\" % (psf.data.max(), psf.data.min(), sumwt))\n\nexport_image_to_fits(dirty, '%s/imaging_dirty.fits'%(results_dir))\nexport_image_to_fits(psf, '%s/imaging_psf.fits'%(results_dir))",
"Deconvolve using clean",
"comp, residual = deconvolve_cube(dirty, psf, niter=10000, threshold=0.001, fractional_threshold=0.001,\n window_shape='quarter', gain=0.7, scales=[0, 3, 10, 30])\n\nrestored = restore_cube(comp, psf, residual)\n\n# Show the results\n\nfig=show_image(comp)\nplt.title('Solution')\nfig=show_image(residual)\nplt.title('Residual')\nfig=show_image(restored)\nplt.title('Restored')",
"Predict the visibility of the model",
"vtmodel = create_visibility(lowr3, times, frequency, channel_bandwidth=channel_bandwidth,\n weight=1.0, phasecentre=phasecentre, \n polarisation_frame=PolarisationFrame('stokesI'))\nvtmodel=predict_list_serial_workflow([vtmodel], [comp], context='2d')[0]",
"Now we will plot the original visibility and the residual visibility.",
"uvdist=numpy.sqrt(vt.data['uvw'][:,0]**2+vt.data['uvw'][:,1]**2)\nplt.clf()\nplt.plot(uvdist, numpy.abs(vt.data['vis'][:]-vtmodel.data['vis'][:]), '.', color='r', \n label='Residual')\nplt.plot(uvdist, numpy.abs(vt.data['vis'][:]), '.', color='b', label='Original')\n\nplt.xlabel('uvdist')\nplt.ylabel('Amp Visibility')\nplt.legend()\nplt.show()\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
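The `invert` step in the notebook above grids visibilities and Fourier-transforms them into a dirty image using the ARL machinery. Purely for intuition, here is a toy direct-transform dirty image in plain NumPy; everything in it (function name, the point-source visibilities, the pixel and cell sizes) is a hypothetical sketch, not the ARL API:

```python
import numpy as np

def dirty_image(u, v, vis, npixel, cellsize):
    """Direct (slow, O(nvis * npixel^2)) Fourier inversion of visibilities onto an l,m grid."""
    l = (np.arange(npixel) - npixel // 2) * cellsize
    ll, mm = np.meshgrid(l, l)
    img = np.zeros((npixel, npixel))
    for ui, vi, vv in zip(u, v, vis):
        # each visibility contributes a fringe across the image
        img += np.real(vv * np.exp(2j * np.pi * (ui * ll + vi * mm)))
    return img / len(vis)

# a single point source at the phase centre gives unit visibilities on every baseline
u = np.array([10.0, -20.0, 35.0])
v = np.array([5.0, 15.0, -25.0])
vis = np.ones(3, dtype=complex)
img = dirty_image(u, v, vis, npixel=32, cellsize=0.01)
# the dirty beam peaks at the image centre with value 1
print(img[16, 16], img.max())
```

Real imaging code (including ARL's `invert_list_serial_workflow`) replaces this direct sum with convolutional gridding plus an FFT, which is what makes large images tractable.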
ibm-cds-labs/pixiedust
|
notebook/PixieDust 1 - Easy Visualizations.ipynb
|
apache-2.0
|
[
"Welcome to PixieDust\nThis notebook features an introduction to PixieDust, the Python library that makes data visualization easy. \nGet started\nThis notebook is pretty simple and self-explanatory, but it wouldn't hurt to load up the PixieDust documentation so you have it. \nNew to notebooks? Don't worry, all you need to know to use this notebook is that to run code cells, put your cursor in the cell and press Shift + Enter.",
"# Make sure you have the latest version of PixieDust installed on your system\n# Only run this cell if you did _not_ install PixieDust from source\n# To confirm you have the latest, uncomment the next line and run this cell\n#!pip install --user --upgrade pixiedust",
"Now that you have PixieDust installed and up-to-date on your system, you need to import it into this notebook. This is the last dependency before you can play with PixieDust.",
"# Run this cell\nimport pixiedust",
"Once you see the success message output from running import pixiedust, you're all set.\nBehold, display()\nIn the next cell, build a very simple dataset and store it in a variable.",
"# Run this cell to\n# a) build a SQL context for a Spark dataframe \nsqlContext=SQLContext(sc) \n# b) create Spark dataframe, and assign it to a variable\ndf = sqlContext.createDataFrame(\n[(\"Green\", 75),\n (\"Blue\", 25)],\n[\"Colors\",\"%\"])",
"The data in the variable we just created is ready to be displayed, without any code other than the call to display().",
"# Run this cell to display the dataframe above as a pie chart\ndisplay(df)",
"After running the cell above, you should have seen a Spark dataframe displayed as a pie chart, along with some controls to tweak the display. All that came from passing the dataframe variable to display().\nIn the next cell, we'll pass more interesting data to display(), which will also offer more advanced controls.",
"# create another dataframe, in a new variable\ndf2 = sqlContext.createDataFrame(\n[(2010, 'Camping Equipment', 3),\n (2010, 'Golf Equipment', 1),\n (2010, 'Mountaineering Equipment', 1),\n (2010, 'Outdoor Protection', 2),\n (2010, 'Personal Accessories', 2),\n (2011, 'Camping Equipment', 4),\n (2011, 'Golf Equipment', 5),\n (2011, 'Mountaineering Equipment',2),\n (2011, 'Outdoor Protection', 4),\n (2011, 'Personal Accessories', 2),\n (2012, 'Camping Equipment', 5),\n (2012, 'Golf Equipment', 5),\n (2012, 'Mountaineering Equipment', 3),\n (2012, 'Outdoor Protection', 5),\n (2012, 'Personal Accessories', 3),\n (2013, 'Camping Equipment', 8),\n (2013, 'Golf Equipment', 5),\n (2013, 'Mountaineering Equipment', 3),\n (2013, 'Outdoor Protection', 8),\n (2013, 'Personal Accessories', 4)],\n[\"year\",\"category\",\"unique_customers\"])\n\n# This time, we've combined the dataframe and display() call in the same cell\n# Run this cell \ndisplay(df2)",
"display() controls\nRenderers\nThis chart like the first one is rendered by matplotlib. With PixieDust, you have other options. To toggle between renderers, use the Renderers control at top right of the display output:\n1. Bokeh is interactive; play with the controls along the top of the chart, e.g., zoom, save\n1. Matplotlib is static; you can save the image as a PNG\nChart options\n\nChart types: At top left, you should see an option to display the dataframe as a table. You should also see a dropdown menu with other chart options, including bar charts, pie charts, scatter plots, and so on.\nOptions: Click the Options button to explore other display configurations; e.g., clustering\n\nTo know more : https://pixiedust.github.io/pixiedust/displayapi.html\nLoading External Data\nSo far, we've worked with data hard-coded into our notebook. Now, let's load external data (CSV) from an addressable URL.",
"# load a CSV with pixiedust.sampledata()\ndf3 = pixiedust.sampleData(\"https://github.com/ibm-watson-data-lab/open-data/raw/master/cars/cars.csv\")\ndisplay(df3)",
"You should see a scatterplot above, rendered again by matplotlib. Look at the Renderer menu at top right. You should see options for Bokeh and now, Seaborn. If you don't see Seaborn, it's not installed on your system. No problem, just install it by running the next cell.",
"# To install Seaborn, uncomment the next line, and then run this cell\n#!pip install --user seaborn",
"If you installed Seaborn, you'll need to also restart your notebook kernel, and run the cell to import pixiedust again. Find Restart in the Kernel menu above."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
opengeostat/pygslib
|
pygslib/Ipython_templates/.ipynb_checkpoints/cova3_raw-checkpoint.ipynb
|
mit
|
[
"PyGSLIB\nCova3 test\nThis is a simple example on how to use raw cova3 to fit variograms",
"#general imports\nimport matplotlib.pyplot as plt \nimport pygslib \nimport numpy as np \n\n\n#make the plots inline\n%matplotlib inline ",
"Getting the data ready for work\nIf the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.",
"#get the data in gslib format into a pandas Dataframe\nmydata= pygslib.gslib.read_gslib_file('../data/cluster.dat') \n\n# This is a 2D file, in this GSLIB version we require 3D data and drillhole name or domain code\n# so, we are adding constant elevation = 0 and a dummy BHID = 1 \nmydata['Zlocation']=0\nmydata['bhid']=1\n\n# printing to verify results\n#print (' \\n **** 5 first rows in my datafile \\n\\n ', mydata.head(n=5))",
"get some experimental variograms",
"# these are the parameters we need. Note that at difference of GSLIB this dictionary also stores \n# the actual data (ex, X, Y, etc.). \n\n#important! python is case sensitive 'bhid' is not equal to 'BHID'\n\nparameters_exp = { \n'x' : mydata['Xlocation'] , # X coordinates, array('f') with bounds (nd), nd is number of data points\n'y' : mydata['Ylocation'], # Y coordinates, array('f') with bounds (nd)\n'z' : mydata['Zlocation'], # Z coordinates, array('f') with bounds (nd)\n'bhid' : mydata['bhid'], # bhid for downhole variogram, array('i') with bounds (nd) \n'vr' : mydata['Primary'], # Variables, array('f') with bounds (nd,nv), nv is number of variables\n'tmin' : -1.0e21, # trimming limits, float\n'tmax' : 1.0e21, # trimming limits, float\n'nlag' : 10, # number of lags, int\n'xlag' : 4, # lag separation distance, float \n'xltol' : 2, # lag tolerance, float\n'azm' : [90], # azimuth, array('f') with bounds (ndir)\n'atol' : [22.5], # azimuth tolerance, array('f') with bounds (ndir)\n'bandwh' : [10000], # bandwith h, array('f') with bounds (ndir)\n'dip' : [0], # dip, array('f') with bounds (ndir)\n'dtol' : [10], # dip tolerance, array('f') with bounds (ndir)\n'bandwd' : [10], # bandwith d, array('f') with bounds (ndir)\n'isill' : 0, # standardize sills? (0=no, 1=yes), int\n'sills' : [100], # variance used to std the sills, array('f') with bounds (nv)\n'ivtail' : [1], # tail var., array('i') with bounds (nvarg), nvarg is number of variograms\n'ivhead' : [1], # head var., array('i') with bounds (nvarg)\n'ivtype' : [7], # variogram type, array('i') with bounds (nvarg)\n'maxclp' : 50000} # maximum number of variogram point cloud to use, input int\n\n'''\nRemember this is GSLIB... 
use this code to define variograms\ntype 1 = traditional semivariogram\n 2 = traditional cross semivariogram\n 3 = covariance\n 4 = correlogram\n 5 = general relative semivariogram\n 6 = pairwise relative semivariogram\n 7 = semivariogram of logarithms\n 8 = semimadogram\n\n''' \n\n#check the variogram is ok\nassert pygslib.gslib.check_gamv_par(parameters_exp)==1 , 'sorry this parameter file is wrong' \n\n\n#Now we are ready to calculate the veriogram\npdis,pgam, phm,ptm,phv,ptv,pnump, cldi, cldj, cldg, cldh = pygslib.gslib.gamv(parameters_exp)\n\n\nnvrg = pdis.shape[0]\nndir = pdis.shape[1]\nnlag = pdis.shape[2]-2",
"Plotting results",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\n#plotting the variogram 1 only\nv=0\n\n# in all the directions calculated\nfor d in range(ndir):\n dip=parameters_exp['dip'][d]\n azm=parameters_exp['azm'][d]\n plt.plot (pdis[v, d, 1:], pgam[v, d, 1:], '-o', label=str(dip) + '-->' + str(azm))\n\n# adding nice features to the plot\nplt.legend()\nplt.grid(True)\nplt.show()\n",
"Fit the variogram\nWe are using variogram of logarithms...",
"#rotatios matrix (one per structure)\nrmatrix_d1=pygslib.gslib.setrot(ang1=0,ang2=0,ang3=0,anis1=1,anis2=1,ind=1,maxrot=2) #rotation structure 1\nrmatrix_d2=pygslib.gslib.setrot(ang1=0,ang2=0,ang3=0,anis1=1,anis2=1,ind=2,maxrot=2) #rotation structure 2\n\nrmatrix=rmatrix_d1+rmatrix_d2\n\nprint (rmatrix)\n\nparameters_mod = { \n 'x1' : 0, # X coordinates, point 1\n 'y1' : 0, # Y coordinates, point 1\n 'z1' : 0, # Z coordinates, point 1\n 'x2' : 1, # X coordinates, point 2\n 'y2' : 0, # Y coordinates, point 2\n 'z2' : 0, # Z coordinates, point 2\n 'nst' : 2, # number of nested structures, array('i') with bounds (ivarg), \n # ivarg is variogram number\n 'c0' : [0.01], # nugget, array('f') with bounds (ivarg) \n 'it' : [3, 3], # structure type, array('i') with bounds (ivarg) \n 'cc' : [1, 1.4], # variance, array('f') with bounds (nvarg*nst[0])\n 'aa' : [8., 22.], # parameter a (or range), array('f') with bounds (nvarg*nst[0])\n 'irot' : 1, # index of the rotation matrix for the first nested structure\n # the second nested structure will use irot+1, the third irot+2, and so on\n 'rotmat' : rmatrix} # rotation matrices (output of the funtion setrot)\n\n# this is the covariance between the points x1, x2\ncmax,cova=pygslib.gslib.cova3(parameters_mod)\nprint (cmax, cova)\n\nres=300\nmh=np.linspace(0,40, res)\nmc=np.zeros(res)\nmv=np.zeros(res)\n\nfor i,h in enumerate(mh):\n parameters_mod['x2']=h\n cmax,cova=pygslib.gslib.cova3(parameters_mod)\n mc[i]=cova\n mv[i]=cmax- cova\n \n \n\n#plotting the variogram 1 only\nv=0\n\n# in all the directions calculated\nfor d in range(ndir):\n dip=parameters_exp['dip'][d]\n azm=parameters_exp['azm'][d]\n plt.plot (pdis[v, d, 1:], pgam[v, d, 1:], '-o', label=str(dip) + '-->' + str(azm))\n\n# add model\nplt.plot (mh, mv, '-', label = 'model')\n \n# adding nice features to the plot\nplt.legend()\nplt.grid(True)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
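The `gamv` call in the record above bins half the squared differences of data pairs by separation distance. A minimal omnidirectional version in plain NumPy (a hypothetical helper, not the GSLIB routine; it ignores azimuth/dip tolerances, bandwidths, and trimming limits) might be:

```python
import numpy as np

def semivariogram(x, y, values, nlag, xlag):
    """Classical omnidirectional semivariogram: gamma(h) = mean of 0.5*(z_i - z_j)^2 per lag bin."""
    gamma = np.zeros(nlag)
    npairs = np.zeros(nlag, dtype=int)
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            h = np.hypot(x[i] - x[j], y[i] - y[j])
            k = int(h // xlag)  # lag bin index
            if k < nlag:
                gamma[k] += 0.5 * (values[i] - values[j]) ** 2
                npairs[k] += 1
    mask = npairs > 0
    gamma[mask] /= npairs[mask]
    return gamma, npairs

# four collinear points with unit spacing and alternating values
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.zeros(4)
z = np.array([0.0, 1.0, 0.0, 1.0])
g, n = semivariogram(x, y, z, nlag=3, xlag=1.0)
print(g, n)
```

GSLIB's `gamv` adds directional tolerances, lag tolerance, head/tail variables, and the other variogram types listed in the parameter comments, but the binning idea is the same.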
andres-root/AIND
|
Therm2/rnn/RNN_project.ipynb
|
mit
|
[
"Artificial Intelligence Nanodegree\nRecurrent Neural Network Projects\nWelcome to the Recurrent Neural Network Project in the Artificial Intelligence Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\nImplementation TODOs in this notebook\nThis notebook contains two problems, cut into a variety of TODOs. Make sure to complete each section containing a TODO marker throughout the notebook. For convenience we provide links to each of these sections below.\nTODO #1: Implement a function to window time series\nTODO #2: Create a simple RNN model using keras to perform regression\nTODO #3: Finish cleaning a large text corpus\nTODO #4: Implement a function to window a large text corpus\nTODO #5: Create a simple RNN model using keras to perform multiclass classification\nTODO #6: Generate text using a fully trained RNN model and a variety of input sequences\nProblem 1: Perform time series prediction\nIn this project you will perform time series prediction using a Recurrent Neural Network regressor. In particular you will re-create the figure shown in the notes - where the stock price of Apple was forecasted (or predicted) 7 days in advance. 
In completing this exercise you will learn how to construct RNNs using Keras, which will also aid in completing the second project in this notebook.\nThe particular network architecture we will employ for our RNN is known as Long Short-Term Memory (LSTM), which significantly helps avoid technical problems with the optimization of RNNs. \n1.1 Getting started\nFirst we must load in our time series - a history of around 140 days of Apple's stock price. Then we need to perform a number of pre-processing steps to prepare it for use with an RNN model. First off, it is good practice to normalize a time series - e.g., by normalizing its range. This helps us avoid serious numerical issues associated with how common activation functions (like tanh) transform very large (positive or negative) numbers, as well as helping us to avoid related issues when computing derivatives.\nHere we normalize the series to lie in the range [0,1] using this scikit function, but it is also commonplace to normalize by a series' standard deviation.",
"### Load in necessary libraries for data input and normalization\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%load_ext autoreload\n%autoreload 2\n\nfrom my_answers import *\n\n%load_ext autoreload\n%autoreload 2\n\nfrom my_answers import *\n\n### load in and normalize the dataset\ndataset = np.loadtxt('datasets/normalized_apple_prices.csv')",
"Lets take a quick look at the (normalized) time series we'll be performing predictions on.",
"# lets take a look at our time series\nplt.plot(dataset)\nplt.xlabel('time period')\nplt.ylabel('normalized series value')",
"1.2 Cutting our time series into sequences\nRemember, our time series is a sequence of numbers that we can represent in general mathematically as \n$$s_{0},s_{1},s_{2},...,s_{P}$$\nwhere $s_{p}$ is the numerical value of the time series at time period $p$ and where $P$ is the total length of the series. In order to apply our RNN we treat the time series prediction problem as a regression problem, and so need to use a sliding window to construct a set of associated input/output pairs to regress on. This process is animated in the gif below.\n<img src=\"images/timeseries_windowing_training.gif\" width=600 height=600/>\nFor example - using a window of size T = 5 (as illustrated in the gif above) we produce a set of input/output pairs like the one shown in the table below\n$$\\begin{array}{c|c}\n\\text{Input} & \\text{Output}\\\n\\hline \\color{CornflowerBlue} {\\langle s_{1},s_{2},s_{3},s_{4},s_{5}\\rangle} & \\color{Goldenrod}{ s_{6}} \\\n\\ \\color{CornflowerBlue} {\\langle s_{2},s_{3},s_{4},s_{5},s_{6} \\rangle } & \\color{Goldenrod} {s_{7} } \\\n\\color{CornflowerBlue} {\\vdots} & \\color{Goldenrod} {\\vdots}\\\n\\color{CornflowerBlue} { \\langle s_{P-5},s_{P-4},s_{P-3},s_{P-2},s_{P-1} \\rangle } & \\color{Goldenrod} {s_{P}}\n\\end{array}$$\nNotice here that each input is a sequence (or vector) of length 5 (and in general has length equal to the window size T) while each corresponding output is a scalar value. Notice also how given a time series of length P and window size T = 5 as shown above, we created P - 5 input/output pairs. More generally, for a window size T we create P - T such pairs.\nNow its time for you to window the input time series as described above! \n<a id='TODO_1'></a>\nTODO: Implement the function called window_transform_series in my_answers.py so that it runs a sliding window along the input series and creates associated input/output pairs. 
Note that this function should input a) the series and b) the window length, and return the input/output subsequences. Make sure to format returned input/output as generally shown in table above (where window_size = 5), and make sure your returned input is a numpy array.\n\nYou can test your function on the list of odd numbers given below",
"odd_nums = np.array([1,3,5,7,9,11,13])",
"Here is a hard-coded solution for odd_nums. You can compare its results with what you get from your window_transform_series implementation.",
"# run a window of size 2 over the odd number sequence and display the results\nwindow_size = 2\n\nX = []\nX.append(odd_nums[0:2])\nX.append(odd_nums[1:3])\nX.append(odd_nums[2:4])\nX.append(odd_nums[3:5])\nX.append(odd_nums[4:6])\n\ny = odd_nums[2:]\n\nX = np.asarray(X)\ny = np.asarray(y)\ny = np.reshape(y, (len(y),1)) #optional\n\nassert(type(X).__name__ == 'ndarray')\nassert(type(y).__name__ == 'ndarray')\nassert(X.shape == (5,2))\nassert(y.shape in [(5,1), (5,)])\n\n# print out input/output pairs --> here input = X, corresponding output = y\nprint ('--- the input X will look like ----')\nprint (X)\n\nprint ('--- the associated output y will look like ----')\nprint (y)",
"Again - you can check that your completed window_transform_series function works correctly by trying it on the odd_nums sequence - you should get the above output.",
"### TODO: implement the function window_transform_series in the file my_answers.py\nfrom my_answers import window_transform_series",
"With this function in place apply it to the series in the Python cell below. We use a window_size = 7 for these experiments.",
"# window the data using your windowing function\nwindow_size = 7\nX,y = window_transform_series(series = dataset,window_size = window_size)",
"1.3 Splitting into training and testing sets\nIn order to perform proper testing on our dataset we will lop off the last 1/3 of it for validation (or testing). This is that once we train our model we have something to test it on (like any regression problem!). This splitting into training/testing sets is done in the cell below.\nNote how here we are not splitting the dataset randomly as one typically would do when validating a regression model. This is because our input/output pairs are related temporally. We don't want to validate our model by training on a random subset of the series and then testing on another random subset, as this simulates the scenario that we receive new points within the timeframe of our training set. \nWe want to train on one solid chunk of the series (in our case, the first full 2/3 of it), and validate on a later chunk (the last 1/3) as this simulates how we would predict future values of a time series.",
"# split our dataset into training / testing sets\ntrain_test_split = int(np.ceil(2*len(y)/float(3))) # set the split point\n\n# partition the training set\nX_train = X[:train_test_split,:]\ny_train = y[:train_test_split]\n\n# keep the last chunk for testing\nX_test = X[train_test_split:,:]\ny_test = y[train_test_split:]\n\n# NOTE: to use keras's RNN LSTM module our input must be reshaped to [samples, window size, stepsize] \nX_train = np.asarray(np.reshape(X_train, (X_train.shape[0], window_size, 1)))\nX_test = np.asarray(np.reshape(X_test, (X_test.shape[0], window_size, 1)))",
"<a id='TODO_2'></a>\n1.4 Build and run an RNN regression model\nHaving created input/output pairs out of our time series and cut this into training/testing sets, we can now begin setting up our RNN. We use Keras to quickly build a two hidden layer RNN of the following specifications\n\nlayer 1 uses an LSTM module with 5 hidden units (note here the input_shape = (window_size,1))\nlayer 2 uses a fully connected module with one unit\nthe 'mean_squared_error' loss should be used (remember: we are performing regression here)\n\nThis can be constructed using just a few lines - see e.g., the general Keras documentation and the LSTM documentation in particular for examples of how to quickly use Keras to build neural network models. Make sure you are initializing your optimizer given the keras-recommended approach for RNNs \n(given in the cell below). (remember to copy your completed function into the script my_answers.py function titled build_part1_RNN before submitting your project)",
"### TODO: create required RNN model\n# import keras network libraries\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import LSTM\nimport keras\n\n# given - fix random seed - so we can all reproduce the same results on our default time series\nnp.random.seed(0)\n\n\n# TODO: implement build_part1_RNN in my_answers.py\nfrom my_answers import build_part1_RNN\nmodel = build_part1_RNN(window_size)\n\n# build model using keras documentation recommended optimizer initialization\noptimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)\n\n# compile the model\nmodel.compile(loss='mean_squared_error', optimizer=optimizer)",
"With your model built you can now fit the model by activating the cell below! Note: the number of epochs (np_epochs) and batch_size are preset (so we can all produce the same results). You can choose to toggle the verbose parameter - which gives you regular updates on the progress of the algorithm - on and off by setting it to 1 or 0 respectively.",
"# run your model!\nmodel.fit(X_train, y_train, epochs=1000, batch_size=50, verbose=1)",
"1.5 Checking model performance\nWith your model fit we can now make predictions on both our training and testing sets.",
"# generate predictions for training\ntrain_predict = model.predict(X_train)\ntest_predict = model.predict(X_test)",
"In the next cell we compute training and testing errors using our trained model - you should be able to achieve at least\ntraining_error < 0.02\nand \ntesting_error < 0.02\nwith your fully trained model. \nIf either or both of your accuracies are larger than 0.02 re-train your model - increasing the number of epochs you take (a maximum of around 1,000 should do the job) and/or adjusting your batch_size.",
"# print out training and testing errors\ntraining_error = model.evaluate(X_train, y_train, verbose=0)\nprint('training error = ' + str(training_error))\n\ntesting_error = model.evaluate(X_test, y_test, verbose=0)\nprint('testing error = ' + str(testing_error))",
"Activating the next cell plots the original data, as well as both predictions on the training and testing sets.",
"### Plot everything - the original series as well as predictions on training and testing sets\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# plot original series\nplt.plot(dataset,color = 'k')\n\n# plot training set prediction\nsplit_pt = train_test_split + window_size \nplt.plot(np.arange(window_size,split_pt,1),train_predict,color = 'b')\n\n# plot testing set prediction\nplt.plot(np.arange(split_pt,split_pt + len(test_predict),1),test_predict,color = 'r')\n\n# pretty up graph\nplt.xlabel('day')\nplt.ylabel('(normalized) price of Apple stock')\nplt.legend(['original series','training fit','testing fit'],loc='center left', bbox_to_anchor=(1, 0.5))\nplt.show()",
"Note: you can try out any time series for this exercise! If you would like to try another see e.g., this site containing thousands of time series and pick another one!\nProblem 2: Create a sequence generator\n2.1 Getting started\nIn this project you will implement a popular Recurrent Neural Network (RNN) architecture to create an English language sequence generator capable of building semi-coherent English sentences from scratch by building them up character-by-character. This will require a substantial amount amount of parameter tuning on a large training corpus (at least 100,000 characters long). In particular for this project we will be using a complete version of Sir Arthur Conan Doyle's classic book The Adventures of Sherlock Holmes.\nHow can we train a machine learning model to generate text automatically, character-by-character? By showing the model many training examples so it can learn a pattern between input and output. With this type of text generation each input is a string of valid characters like this one\ndogs are grea\nwhile the corresponding output is the next character in the sentence - which here is 't' (since the complete sentence is 'dogs are great'). We need to show a model many such examples in order for it to make reasonable predictions.\nFun note: For those interested in how text generation is being used check out some of the following fun resources:\n\n\nGenerate wacky sentences with this academic RNN text generator\n\n\nVarious twitter bots that tweet automatically generated text likethis one.\n\n\nthe NanoGenMo annual contest to automatically produce a 50,000+ novel automatically\n\n\nRobot Shakespeare a text generator that automatically produces Shakespear-esk sentences\n\n\n2.2 Preprocessing a text dataset\nOur first task is to get a large text corpus for use in training, and on it we perform a several light pre-processing tasks. 
The default corpus we will use is the classic book Sherlock Holmes, but you can use a variety of others as well - so long as they are fairly large (around 100,000 characters or more).",
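As a tiny, self-contained illustration of the input/output pairing described above (the variable names here are only for this sketch, not from the project files):

```python
# Illustrative sketch: a character-level input/output pair for the
# example sentence used in the text above.
sentence = "dogs are great"

sample_input = sentence[:13]   # 'dogs are grea'
sample_output = sentence[13]   # 't' - the next character is the target

print(repr(sample_input), '->', repr(sample_output))
```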
"# read in the text, transforming everything to lower case\ntext = open('datasets/holmes.txt').read().lower()\nprint('our original text has ' + str(len(text)) + ' characters')",
"Next, lets examine a bit of the raw text. Because we are interested in creating sentences of English words automatically by building up each word character-by-character, we only want to train on valid English words. In other words - we need to remove all of the other characters that are not part of English words.",
"### print out the first 1000 characters of the raw text to get a sense of what we need to throw out\ntext[:2000]",
"Wow - there's a lot of junk here (i.e., weird uncommon character combinations - as this first character chunk contains the title and author page, as well as table of contents)! To keep things simple, we want to train our RNN on a large chunk of more typical English sentences - we don't want it to start thinking non-english words or strange characters are valid! - so lets clean up the data a bit.\nFirst, since the dataset is so large and the first few hundred characters contain a lot of junk, lets cut it out. Lets also find-and-replace those newline tags with empty spaces.",
"### find and replace '\\n' and '\\r' symbols - replacing them \ntext = text[1302:]\ntext = text.replace('\\n',' ') # replacing '\\n' with '' simply removes the sequence\ntext = text.replace('\\r',' ')",
"Lets see how the first 1000 characters of our text looks now!",
"### print out the first 1000 characters of the raw text to get a sense of what we need to throw out\ntext[:1000]",
"<a id='TODO_3'></a>\nTODO: finish cleaning the text\nLets make sure we haven't left any other atypical characters (commas, periods, etc., are ok) lurking around in the depths of the text. You can do this by enumerating all the text's unique characters, examining them, and then replacing any unwanted characters with empty spaces! Once we find all of the text's unique characters, we can remove all of the atypical ones in the next cell. Note: don't remove the punctuation marks given in my_answers.py.",
"### TODO: implement cleaned_text in my_answers.py\nfrom my_answers import cleaned_text\n\nprint(text)\ntext = cleaned_text(text)\n\n\n# shorten any extra dead space created above\ntext = text.replace(' ',' ')",
"With your chosen characters removed print out the first few hundred lines again just to double check that everything looks good.",
"### print out the first 2000 characters of the raw text to get a sense of what we need to throw out\ntext[:2000]",
"Now that we have thrown out a good number of non-English characters/character sequences lets print out some statistics about the dataset - including number of total characters and number of unique characters.",
"# count the number of unique characters in the text\nchars = sorted(list(set(text)))\n\n# print some of the text, as well as statistics\nprint (\"this corpus has \" + str(len(text)) + \" total number of characters\")\nprint (\"this corpus has \" + str(len(chars)) + \" unique characters\")",
"2.3 Cutting data into input/output pairs\nNow that we have our text all cleaned up, how can we use it to train a model to generate sentences automatically? First we need to train a machine learning model - and in order to do that we need a set of input/output pairs for a model to train on. How can we create a set of input/output pairs from our text to train on?\nRemember in part 1 of this notebook how we used a sliding window to extract input/output pairs from a time series? We do the same thing here! We slide a window of length $T$ along our giant text corpus - everything in the window becomes one input while the character following becomes its corresponding output. This process of extracting input/output pairs is illustrated in the gif below on a small example text using a window size of T = 5.\n<img src=\"images/text_windowing_training.gif\" width=400 height=400/>\nNotice one aspect of the sliding window in this gif that does not mirror the analogous gif for time series shown in part 1 of the notebook - we do not need to slide the window along one character at a time but can move by a fixed step size $M$ greater than 1 (in the gif indeed $M = 1$). This is done with large input texts (like ours which has over 500,000 characters!) when sliding the window along one character at a time we would create far too many input/output pairs to be able to reasonably compute with.\nMore formally lets denote our text corpus - which is one long string of characters - as follows\n$$s_{0},s_{1},s_{2},...,s_{P}$$\nwhere $P$ is the length of the text (again for our text $P \\approx 500,000!$). 
Sliding a window of size T = 5 with a step length of M = 1 (these are the parameters shown in the gif above) over this sequence produces the following list of input/output pairs\n$$\\begin{array}{c|c}\n\\text{Input} & \\text{Output}\\\\\n\\hline \\color{CornflowerBlue} {\\langle s_{1},s_{2},s_{3},s_{4},s_{5}\\rangle} & \\color{Goldenrod}{ s_{6}} \\\\\n\\ \\color{CornflowerBlue} {\\langle s_{2},s_{3},s_{4},s_{5},s_{6} \\rangle } & \\color{Goldenrod} {s_{7} } \\\\\n\\color{CornflowerBlue} {\\vdots} & \\color{Goldenrod} {\\vdots}\\\\\n\\color{CornflowerBlue} { \\langle s_{P-5},s_{P-4},s_{P-3},s_{P-2},s_{P-1} \\rangle } & \\color{Goldenrod} {s_{P}}\n\\end{array}$$\nNotice here that each input is a sequence (or vector) of 5 characters (and in general has length equal to the window size T) while each corresponding output is a single character. This creates around $P$ total input/output pairs (for a general step size M we create around ceil(P/M) pairs).\n<a id='TODO_4'></a>\nNow it's time for you to window the input text as described above! \nTODO: Create a function that runs a sliding window along the input text and creates associated input/output pairs. A skeleton function has been provided for you. Note that this function should input a) the text b) the window size and c) the step size, and return the input/output sequences. Note: the return items should be lists - not numpy arrays.\n(remember to copy your completed function into the script my_answers.py function titled window_transform_text before submitting your project)",
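A minimal sketch of such a windowing function is below - one possible implementation only (the graded version belongs in my_answers.py, and the name here is deliberately different):

```python
def window_transform_text_sketch(text, window_size, step_size):
    # slide a window of length window_size along the text, advancing
    # step_size characters each time; the character just past the
    # window is the corresponding target output
    inputs, outputs = [], []
    for start in range(0, len(text) - window_size, step_size):
        inputs.append(text[start:start + window_size])
        outputs.append(text[start + window_size])
    return inputs, outputs

ins, outs = window_transform_text_sketch('abcdefgh', window_size=3, step_size=2)
print(ins)   # ['abc', 'cde', 'efg']
print(outs)  # ['d', 'f', 'h']
```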
"### TODO: implement window_transform_text in my_answers.py\nfrom my_answers import window_transform_text",
"With our function complete we can now use it to produce input/output pairs! We employ the function in the next cell, where the window_size = 50 and step_size = 5.",
"# run your text window-ing function \nwindow_size = 100\nstep_size = 5\ninputs, outputs = window_transform_text(text,window_size,step_size)",
"Lets print out a few input/output pairs to verify that we have made the right sort of stuff!",
"# print out a few of the input/output pairs to verify that we've made the right kind of stuff to learn from\nprint('input = ' + inputs[2])\nprint('output = ' + outputs[2])\nprint('--------------')\nprint('input = ' + inputs[100])\nprint('output = ' + outputs[100])",
"Looks good!\n2.4 Wait, what kind of problem is text generation again?\nIn part 1 of this notebook we used the same pre-processing technique - the sliding window - to produce a set of training input/output pairs to tackle the problem of time series prediction by treating the problem as one of regression. So what sort of problem do we have here now, with text generation? Well, the time series prediction was a regression problem because the output (one value of the time series) was a continuous value. Here - for character-by-character text generation - each output is a single character. This isn't a continuous value - but a distinct class - therefore character-by-character text generation is a classification problem. \nHow many classes are there in the data? Well, the number of classes is equal to the number of unique characters we have to predict! How many of those were there in our dataset again? Lets print out the value again.",
"# print out the number of unique characters in the dataset\nchars = sorted(list(set(text)))\nprint (\"this corpus has \" + str(len(chars)) + \" unique characters\")\nprint ('and these characters are ')\nprint (chars)",
"Rockin' - so we have a multiclass classification problem on our hands!\n2.5 One-hot encoding characters\nThe last issue we have to deal with is representing our text data as numerical data so that we can use it as an input to a neural network. One of the conceptually simplest ways of doing this is via a 'one-hot encoding' scheme. Here's how it works.\nWe transform each character in our inputs/outputs into a vector with length equal to the number of unique characters in our text. This vector is all zeros except one location where we place a 1 - and this location is unique to each character type. e.g., we transform 'a', 'b', and 'c' as follows\n$$a\\longleftarrow\\left[\\begin{array}{c}\n1\\\n0\\\n0\\\n\\vdots\\\n0\\\n0\n\\end{array}\\right]\\,\\,\\,\\,\\,\\,\\,b\\longleftarrow\\left[\\begin{array}{c}\n0\\\n1\\\n0\\\n\\vdots\\\n0\\\n0\n\\end{array}\\right]\\,\\,\\,\\,\\,c\\longleftarrow\\left[\\begin{array}{c}\n0\\\n0\\\n1\\\n\\vdots\\\n0\\\n0 \n\\end{array}\\right]\\cdots$$\nwhere each vector has 32 entries (or in general: number of entries = number of unique characters in text).\nThe first practical step towards doing this one-hot encoding is to form a dictionary mapping each unique character to a unique integer, and one dictionary to do the reverse mapping. We can then use these dictionaries to quickly make our one-hot encodings, as well as re-translate (from integers to characters) the results of our trained RNN classification model.",
"# this dictionary is a function mapping each unique character to a unique integer\nchars_to_indices = dict((c, i) for i, c in enumerate(chars)) # map each unique character to unique integer\nprint(chars_to_indices)\n\n# this dictionary is a function mapping each unique integer back to a unique character\nindices_to_chars = dict((i, c) for i, c in enumerate(chars)) # map each unique integer back to unique character\nprint(indices_to_chars)",
"Now we can transform our input/output pairs - consisting of characters - to equivalent input/output pairs made up of one-hot encoded vectors. In the next cell we provide a function for doing just this: it takes in the raw character input/outputs and returns their numerical versions. In particular the numerical input is given as $\\bf{X}$, and numerical output is given as the $\\bf{y}$",
"# transform character-based input/output into equivalent numerical versions\ndef encode_io_pairs(text,window_size,step_size):\n # number of unique chars\n chars = sorted(list(set(text)))\n num_chars = len(chars)\n \n # cut up text into character input/output pairs\n inputs, outputs = window_transform_text(text,window_size,step_size)\n \n # create empty vessels for one-hot encoded input/output\n X = np.zeros((len(inputs), window_size, num_chars), dtype=np.bool)\n y = np.zeros((len(inputs), num_chars), dtype=np.bool)\n \n # loop over inputs/outputs and transform and store in X/y\n for i, sentence in enumerate(inputs):\n for t, char in enumerate(sentence):\n X[i, t, chars_to_indices[char]] = 1\n y[i, chars_to_indices[outputs[i]]] = 1\n \n return X,y",
"Now run the one-hot encoding function by activating the cell below and transform our input/output pairs!",
"# use your function\nwindow_size = 100\nstep_size = 5\nX,y = encode_io_pairs(text,window_size,step_size)",
"<a id='TODO_5'></a>\n2.6 Setting up our RNN\nWith our dataset loaded and the input/output pairs extracted / transformed we can now begin setting up our RNN for training. Again we will use Keras to quickly build a single hidden layer RNN - where our hidden layer consists of LSTM modules.\nTime to get to work: build a 3 layer RNN model of the following specification\n\nlayer 1 should be an LSTM module with 200 hidden units --> note this should have input_shape = (window_size,len(chars)) where len(chars) = number of unique characters in your cleaned text\nlayer 2 should be a linear module, fully connected, with len(chars) hidden units --> where len(chars) = number of unique characters in your cleaned text\nlayer 3 should be a softmax activation ( since we are solving a multiclass classification)\nUse the categorical_crossentropy loss \n\nThis network can be constructed using just a few lines - as with the RNN network you made in part 1 of this notebook. See e.g., the general Keras documentation and the LSTM documentation in particular for examples of how to quickly use Keras to build neural network models.",
"### necessary functions from the keras library\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation, LSTM\nfrom keras.optimizers import RMSprop\nfrom keras.utils.data_utils import get_file\nimport keras\nimport random\n\n# TODO implement build_part2_RNN in my_answers.py\nfrom my_answers import build_part2_RNN\n\nmodel = build_part2_RNN(window_size, len(chars))\n\n# initialize optimizer\noptimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)\n\n# compile model --> make sure initialized optimizer and callbacks - as defined above - are used\nmodel.compile(loss='categorical_crossentropy', optimizer=optimizer)",
"2.7 Training our RNN model for text generation\nWith our RNN setup we can now train it! Lets begin by trying it out on a small subset of the larger version. In the next cell we take the first 10,000 input/output pairs from our training database to learn on.",
"# a small subset of our input/output pairs\nXsmall = X[:10000,:,:]\nysmall = y[:10000,:]",
"Now lets fit our model!",
"# train the model\nmodel.fit(Xsmall, ysmall, batch_size=500, epochs=40,verbose = 1)\n\n# save weights\nmodel.save_weights('model_weights/best_RNN_small_textdata_weights.hdf5')",
"How do we make a given number of predictions (characters) based on this fitted model? \nFirst we predict the next character after following any chunk of characters in the text of length equal to our chosen window size. Then we remove the first character in our input sequence and tack our prediction onto the end. This gives us a slightly changed sequence of inputs that still has length equal to the size of our window. We then feed in this updated input sequence into the model to predict the another character. Together then we have two predicted characters following our original input sequence. Repeating this process N times gives us N predicted characters.\nIn the next Python cell we provide you with a completed function that does just this - it makes predictions when given a) a trained RNN model, b) a subset of (window_size) characters from the text, and c) a number of characters to predict (to follow our input subset).",
"# function that uses trained model to predict a desired number of future characters\ndef predict_next_chars(model,input_chars,num_to_predict): \n # create output\n predicted_chars = ''\n for i in range(num_to_predict):\n # convert this round's predicted characters to numerical input \n x_test = np.zeros((1, window_size, len(chars)))\n for t, char in enumerate(input_chars):\n x_test[0, t, chars_to_indices[char]] = 1.\n\n # make this round's prediction\n test_predict = model.predict(x_test,verbose = 0)[0]\n\n # translate numerical prediction back to characters\n r = np.argmax(test_predict) # predict class of each test input\n d = indices_to_chars[r] \n\n # update predicted_chars and input\n predicted_chars+=d\n input_chars+=d\n input_chars = input_chars[1:]\n return predicted_chars",
"<a id='TODO_6'></a>\nWith your trained model try a few subsets of the complete text as input - note the length of each must be exactly equal to the window size. For each subset use the function above to predict the next 100 characters that follow each input.",
"# TODO: choose an input sequence and use the prediction function in the previous Python cell to predict 100 characters following it\n# get an appropriately sized chunk of characters from the text\nstart_inds = [0, 10, 20]\n\n# load in weights\nmodel.load_weights('model_weights/best_RNN_small_textdata_weights.hdf5')\nfor s in start_inds:\n start_index = s\n input_chars = text[start_index: start_index + window_size]\n\n # use the prediction function\n predict_input = predict_next_chars(model,input_chars,num_to_predict = 100)\n\n # print out input characters\n print('------------------')\n input_line = 'input chars = ' + '\\n' + input_chars + '\"' + '\\n'\n print(input_line)\n\n # print out predicted characters\n line = 'predicted chars = ' + '\\n' + predict_input + '\"' + '\\n'\n print(line)",
"This looks ok, but not great. Now lets try the same experiment with a larger chunk of the data - with the first 100,000 input/output pairs. \nTuning RNNs for a typical character dataset like the one we will use here is a computationally intensive endeavour and thus timely on a typical CPU. Using a reasonably sized cloud-based GPU can speed up training by a factor of 10. Also because of the long training time it is highly recommended that you carefully write the output of each step of your process to file. This is so that all of your results are saved even if you close the web browser you're working out of, as the processes will continue processing in the background but variables/output in the notebook system will not update when you open it again.\nIn the next cell we show you how to create a text file in Python and record data to it. This sort of setup can be used to record your final predictions.",
"### A simple way to write output to file\nf = open('my_test_output.txt', 'w') # create an output file to write too\nf.write('this is only a test ' + '\\n') # print some output text\nx = 2\nf.write('the value of x is ' + str(x) + '\\n') # record a variable value\nf.close() \n\n# print out the contents of my_test_output.txt\nf = open('my_test_output.txt', 'r') # create an output file to write too\nf.read()",
"With this recording devices we can now more safely perform experiments on larger portions of the text. In the next cell we will use the first 100,000 input/output pairs to train our RNN model.\nFirst we fit our model to the dataset, then generate text using the trained model in precisely the same generation method applied before on the small dataset.\nNote: your generated words should be - by and large - more realistic than with the small dataset, but you won't be able to generate perfect English sentences even with this amount of data. A rule of thumb: your model is working well if you generate sentences that largely contain real English words.",
"# a small subset of our input/output pairs\nXlarge = X[:100000,:,:]\nylarge = y[:100000,:]\n\n# TODO: fit to our larger dataset\nmodel.fit(Xlarge, ylarge, batch_size=500, epochs=30, verbose=1)\n\n# save weights\nmodel.save_weights('model_weights/best_RNN_large_textdata_weights.hdf5')\n\n# TODO: choose an input sequence and use the prediction function in the previous Python cell to predict 100 characters following it\n# get an appropriately sized chunk of characters from the text\nstart_inds = []\n\n# save output\nf = open('text_gen_output/RNN_large_textdata_output.txt', 'w') # create an output file to write too\n\n# load weights\nmodel.load_weights('model_weights/best_RNN_large_textdata_weights.hdf5')\nfor s in start_inds:\n start_index = s\n input_chars = text[start_index: start_index + window_size]\n\n # use the prediction function\n predict_input = predict_next_chars(model,input_chars,num_to_predict = 100)\n\n # print out input characters\n line = '-------------------' + '\\n'\n print(line)\n f.write(line)\n\n input_line = 'input chars = ' + '\\n' + input_chars + '\"' + '\\n'\n print(input_line)\n f.write(input_line)\n\n # print out predicted characters\n predict_line = 'predicted chars = ' + '\\n' + predict_input + '\"' + '\\n'\n print(predict_line)\n f.write(predict_line)\nf.close()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sdpython/ensae_teaching_cs
|
_doc/notebooks/sklearn_ensae_course/07_application_to_face_recognition.ipynb
|
mit
|
[
"2A.ML101.7: Example from Image Processing\nHere we'll take a look at a simple facial recognition example.\nSource: Course on machine learning with scikit-learn by Gaël Varoquaux",
"%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot as plt",
"Ideally, we would use a dataset consisting of a\nsubset of the Labeled Faces in the Wild\ndata that is available within scikit-learn with the 'datasets.fetch_lfw_people' function. However, this is a relatively large download (~200MB) so we will do the tutorial on a simpler, less rich dataset. Feel free to explore the LFW dataset at home.",
"from sklearn import datasets\nfaces = datasets.fetch_olivetti_faces()\nfaces.data.shape",
"Let's visualize these faces to see what we're working with:",
"fig = plt.figure(figsize=(8, 6))\n# plot several images\nfor i in range(15):\n ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[])\n ax.imshow(faces.images[i], cmap=plt.cm.bone)",
"One thing to note is that these faces have already been localized and scaled\nto a common size. This is an important preprocessing piece for facial\nrecognition, and is a process that can require a large collection of training\ndata. This can be done in scikit-learn, but the challenge is gathering a\nsufficient amount of training data for the algorithm to work\nFortunately, this piece is common enough that it has been done. One good\nresource is OpenCV, the\nOpen Computer Vision Library.\nWe'll perform a Support Vector classification of the images. We'll\ndo a typical train-test split on the images to make this happen:",
"from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(faces.data,\n faces.target, random_state=0)\n\nprint(X_train.shape, X_test.shape)",
"Preprocessing: Principal Component Analysis\n1850 dimensions is a lot for SVM. We can use PCA to reduce these 1850 features to a manageable\nsize, while maintaining most of the information in the dataset. Here it is useful to use a variant\nof PCA called RandomizedPCA, which is an approximation of PCA that can be much faster for large\ndatasets. The interface is the same as the normal PCA we saw earlier:",
"from sklearn import decomposition\npca = decomposition.PCA(n_components=150, whiten=True, svd_solver='randomized')\npca.fit(X_train)",
"One interesting part of PCA is that it computes the \"mean\" face, which can be\ninteresting to examine:",
"plt.imshow(pca.mean_.reshape(faces.images[0].shape), cmap=plt.cm.bone);",
"The principal components measure deviations about this mean along orthogonal axes.\nIt is also interesting to visualize these principal components:",
"print(pca.components_.shape)\n\nfig = plt.figure(figsize=(16, 6))\nfor i in range(30):\n ax = fig.add_subplot(3, 10, i + 1, xticks=[], yticks=[])\n ax.imshow(pca.components_[i].reshape(faces.images[0].shape), cmap=plt.cm.bone)",
"The components (\"eigenfaces\") are ordered by their importance from top-left to bottom-right.\nWe see that the first few components seem to primarily take care of lighting\nconditions; the remaining components pull out certain identifying features:\nthe nose, eyes, eyebrows, etc.\nWith this projection computed, we can now project our original training\nand test data onto the PCA basis:",
"X_train_pca = pca.transform(X_train)\nX_test_pca = pca.transform(X_test)\n\nprint(X_train_pca.shape)\nprint(X_test_pca.shape)",
"These projected components correspond to factors in a linear combination of\ncomponent images such that the combination approaches the original face.\nDoing the Learning: Support Vector Machines\nNow we'll perform support-vector-machine classification on this reduced dataset:",
"from sklearn import svm\nclf = svm.SVC(C=5., gamma=0.001)\nclf.fit(X_train_pca, y_train)",
"Finally, we can evaluate how well this classification did. First, we might plot a\nfew of the test-cases with the labels learned from the training set:",
"fig = plt.figure(figsize=(8, 6))\nfor i in range(15):\n ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[])\n ax.imshow(X_test[i].reshape(faces.images[0].shape),\n cmap=plt.cm.bone)\n y_pred = clf.predict(X_test_pca[i, np.newaxis])[0]\n color = ('black' if y_pred == y_test[i] else 'red')\n ax.set_title(faces.target[y_pred],\n fontsize='small', color=color)",
"The classifier is correct on an impressive number of images given the simplicity\nof its learning model! Using a linear classifier on 150 features derived from\nthe pixel-level data, the algorithm correctly identifies a large number of the\npeople in the images.\nAgain, we can\nquantify this effectiveness using one of several measures from the sklearn.metrics\nmodule. First we can do the classification report, which shows the precision,\nrecall and other measures of the \"goodness\" of the classification:",
"from sklearn import metrics\ny_pred = clf.predict(X_test_pca)\nprint(metrics.classification_report(y_test, y_pred))",
"Another interesting metric is the confusion matrix, which indicates how often\nany two items are mixed-up. The confusion matrix of a perfect classifier\nwould only have nonzero entries on the diagonal, with zeros on the off-diagonal.",
"print(metrics.confusion_matrix(y_test, y_pred))",
"Pipelining\nAbove we used PCA as a pre-processing step before applying our support vector machine classifier.\nPlugging the output of one estimator directly into the input of a second estimator is a commonly\nused pattern; for this reason scikit-learn provides a Pipeline object which automates this\nprocess. The above problem can be re-expressed as a pipeline as follows:",
"from sklearn.pipeline import Pipeline\n\nclf = Pipeline([('pca', decomposition.PCA(n_components=150, whiten=True, svd_solver='randomized')),\n ('svm', svm.LinearSVC(C=1.0))])\n\nclf.fit(X_train, y_train)\n\ny_pred = clf.predict(X_test)\n\nprint(metrics.confusion_matrix(y_pred, y_test))",
"The results are not identical because we used the randomized version of the PCA -- because the\nprojection varies slightly each time, the results vary slightly as well.\nA Quick Note on Facial Recognition\nHere we have used PCA \"eigenfaces\" as a pre-processing step for facial recognition.\nThe reason we chose this is because PCA is a broadly-applicable technique, which can\nbe useful for a wide array of data types. Research in the field of facial recognition\nin particular, however, has shown that other more specific feature extraction methods\nare can be much more effective."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
seanjmcm/TrafficSign
|
Traffic_Sign_Classifier_sept2.ipynb
|
mit
|
[
"Self-Driving Car Engineer Nanodegree\nDeep Learning\nProject: Build a Traffic Sign Recognition Classifier\nIn this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. \n\nNote: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \\n\",\n \"File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission. \n\nIn addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.\nThe rubric contains \"Stand Out Suggestions\" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the \"stand out suggestions\", you can include the code in this Ipython notebook and also discuss the results in the writeup file.\n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\n\nStep 0: Load The Data",
"# Load pickled data\nimport pickle\nimport cv2 # for grayscale and normalize\n\n# TODO: Fill this in based on where you saved the training and testing data\n\ntraining_file ='traffic-signs-data/train.p'\nvalidation_file='traffic-signs-data/valid.p'\ntesting_file = 'traffic-signs-data/test.p'\n\nwith open(training_file, mode='rb') as f:\n train = pickle.load(f)\nwith open(validation_file, mode='rb') as f:\n valid = pickle.load(f)\nwith open(testing_file, mode='rb') as f:\n test = pickle.load(f)\n \nX_trainLd, y_trainLd = train['features'], train['labels']\nX_validLd, y_validLd = valid['features'], valid['labels']\nX_test, y_test = test['features'], test['labels']\n\n#X_trainLd=X_trainLd.astype(float) \n#y_trainLd=y_trainLd.astype(float) \n#X_validLd=X_validLd.astype(float)\n#y_validLd=y_validLd.astype(float)\n\nprint(\"Xtrain shape : \"+str(X_trainLd.shape))\nprint(\"ytrain shape : \"+str(y_trainLd.shape))\nprint(\"ytrain shape : \"+str(y_trainLd.shape))\nprint(\"label : \"+str(y_trainLd[22]))\n\nfrom sklearn.model_selection import train_test_split",
"Step 1: Dataset Summary & Exploration\nThe pickled data is a dictionary with 4 key/value pairs:\n\n'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).\n'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.\n'sizes' is a list containing tuples, (width, height) representing the original width and height the image.\n'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES\n\nComplete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results. \nProvide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas",
"### Replace each question mark with the appropriate value. \n### Use python, pandas or numpy methods rather than hard coding the results\nimport numpy as np\n\n# TODO: Number of training examples\nn_train = X_trainLd.shape[0]\n\n# TODO: Number of validation examples\nn_validation = X_validLd.shape[0]\n\n# TODO: Number of testing examples.\nn_test = X_test.shape[0]\n\n# TODO: What's the shape of an traffic sign image?\nimage_shape = X_trainLd.shape[1:4]\n\n# TODO: How many unique classes/labels there are in the dataset.\n#n_classes = n_train+n_validation+n_test -- this doesn't seem correct 43 in excel file\nn_classes = 43\n\nprint(\"Number of training examples =\", n_train)\nprint(\"Number of testing examples =\", n_test)\nprint(\"Image data shape =\", image_shape)\nprint(\"Number of classes =\", n_classes)",
"Include an exploratory visualization of the dataset\nVisualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. \nThe Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.\nNOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?",
"import random\n### Data exploration visualization code goes here.\n### Feel free to use as many code cells as needed.\nimport matplotlib.pyplot as plt\n# Visualizations will be shown in the notebook.\n%matplotlib inline\n\nindex = random.randint(0, len(X_trainLd))\nimage = X_trainLd[100] #squeeze : Remove single-dimensional entries from the shape of an array.\n\nimage = image.astype(float)\n\n#normalise\ndef normit(img):\n\n size = img.shape[2]\n imagenorm = cv2.normalize(img, dst =image_shape, alpha=0, beta=25, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8UC1)\n image = img.astype(float)\n norm = (image-128.0)/128.0\n return norm\n \n\ntemp = normit(image)\n\n\nplt.figure(figsize=(1,1))\nplt.imshow(temp.squeeze())\n\n\n\n",
"Step 2: Design and Test a Model Architecture\nDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.\nThe LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! \nWith the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. \nThere are various aspects to consider when thinking about this problem:\n\nNeural network architecture (is the network over or underfitting?)\nPlay around preprocessing techniques (normalization, rgb to grayscale, etc)\nNumber of examples per label (some have more than others).\nGenerate fake data.\n\nHere is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.\nPre-process the Data Set (normalization, grayscale, etc.)\nMinimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project. \nOther pre-processing steps are optional. You can try different techniques to see if it improves performance. \nUse the code cell (or multiple code cells, if necessary) to implement the first step of your project.",
"### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include \n### converting to grayscale, etc.\n### Feel free to use as many code cells as needed.\nimport cv2\nfrom sklearn.utils import shuffle\n\nprint(\"Test\")\n\n## xtrain\ngrey_X_train = np.zeros(shape=[X_trainLd.shape[0],X_trainLd.shape[1],X_trainLd.shape[2]])\nnorm_X_train = np.zeros(shape=[X_trainLd.shape[0],X_trainLd.shape[1],X_trainLd.shape[2],3])\nnorm_X_train = norm_X_train.astype(float)\n\nX_train, y_train = shuffle(X_trainLd, y_trainLd)\nshuff_X_train, shuff_y_train =X_train, y_train\nX_valid, y_valid = X_validLd, y_validLd\n\ni=0 \n\nfor p in X_train:\n \n t = normit(p)\n norm_X_train[i] = t\n i=i+1\nprint(\"after normalise\")\n\n \n##validate\n\nnorm_X_valid = np.zeros(shape=[X_validLd.shape[0],X_validLd.shape[1],X_validLd.shape[2],3])\nnorm_X_valid=norm_X_valid.astype(float)\ni=0\n \nfor v in X_valid:\n \n tv = normit(v)\n #tempv = tv.reshape(32,32,1)\n \n norm_X_valid[i] = tv\n #print(norm_X_valid[i])\n i=i+1\n\n\n##test\n\nnorm_X_test = np.zeros(shape=[X_test.shape[0],X_test.shape[1],X_test.shape[2],3])\nnorm_X_test=X_test.astype(float)\ni=0\n \nfor z in X_test:\n \n tt = normit(z)\n norm_X_test[i] = tt\n i=i+1\n \n\nprint(\"fin\")\n\n\nimage22 = norm_X_train[110] #squeeze : Remove single-dimensional entries from the shape of an array\nimageb4 = X_train[110]\nimaget=norm_X_test[100]\n\nplt.figure(figsize=(1,1))\nplt.imshow(imaget.squeeze())\n",
"Model Architecture\nTrain, Validate and Test the Model",
"### Define your architecture here.\n### Feel free to use as many code cells as needed.\n\nimport tensorflow as tf\n\nEPOCHS = 25\nBATCH_SIZE = 128 #SMcM change to 256 from 128\n#X_train=X_train.astype(float)\nX_train=norm_X_train\n#print(X_train[20])\n#X_train=shuff_X_train\n\n#X_valid=norm_X_valid\n\nfrom tensorflow.contrib.layers import flatten\n\ndef LeNet(x): \n # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer\n mu = 0.0\n sigma = 0.1 #SMcM changed from 0.1 to 0.2\n \n # SOLUTION: Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.\n conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5,3, 6), mean = mu, stddev = sigma)) #SMcM depth cahnged to 3\n conv1_b = tf.Variable(tf.zeros(6))\n conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b #try same should be better (padding)\n\n # SOLUTION: Activation.\n conv1 = tf.nn.relu(conv1)\n #conv1 = tf.nn.relu(conv1) #SMcM add an extra relu\n\n # SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.\n conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n # SOLUTION: Layer 2: Convolutional. Output = 10x10x16.\n conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))\n conv2_b = tf.Variable(tf.zeros(16))\n conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b\n \n # SOLUTION: Activation.\n conv2 = tf.nn.relu(conv2)\n\n # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.\n conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n # SOLUTION: Flatten. Input = 5x5x16. Output = 400.\n fc0 = flatten(conv2)\n \n # SOLUTION: Layer 3: Fully Connected. Input = 400. 
Output = 120.\n fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))\n fc1_b = tf.Variable(tf.zeros(120))\n fc1 = tf.matmul(fc0, fc1_W) + fc1_b\n \n # SOLUTION: Activation.\n fc1 = tf.nn.relu(fc1)\n\n # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.\n fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))\n fc2_b = tf.Variable(tf.zeros(84))\n fc2 = tf.matmul(fc1, fc2_W) + fc2_b\n \n # SOLUTION: Activation.\n fc2 = tf.nn.relu(fc2)\n\n # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.\n fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))\n fc3_b = tf.Variable(tf.zeros(43))\n logits = tf.matmul(fc2, fc3_W) + fc3_b\n \n return logits\n\nprint(\"model\")\n\nimage22 = X_train[110] #squeeze : Remove single-dimensional entries from the shape of an array\n\nprint(norm_X_train.shape)\nprint(X_train.shape)\n\nplt.figure(figsize=(1,1))\nplt.imshow(image22.squeeze())\n#print(image22)\n",
"A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation\nsets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.",
"### Train your model here.\n### Calculate and report the accuracy on the training and validation set.\n### Once a final model architecture is selected, \n### the accuracy on the test set should be calculated and reported as well.\n### Feel free to use as many code cells as needed.\n\n#Features and Labels\nx = tf.placeholder(tf.float32, (None, 32, 32, 3))\ny = tf.placeholder(tf.int32, (None))\none_hot_y = tf.one_hot(y, 43)\n\nprint(\"start\")\n\n#Training Pipeline\nrate = 0.0025 # SMCM decreased rate to .0008 from 0.001\n\nlogits = LeNet(x)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)\nloss_operation = tf.reduce_mean(cross_entropy)\noptimizer = tf.train.AdamOptimizer(learning_rate = rate)\ntraining_operation = optimizer.minimize(loss_operation)\n\ncorrect_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))\naccuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nsaver = tf.train.Saver()\n\n#Model Evaluation\ndef evaluate(X_data, y_data):\n num_examples = len(X_data)\n total_accuracy = 0\n sess = tf.get_default_session()\n for offset in range(0, num_examples, BATCH_SIZE):\n batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\n accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})\n total_accuracy += (accuracy * len(batch_x))\n return total_accuracy / num_examples\n\n#Train the Model\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n num_examples = len(X_train)\n \n print(\"Training...\")\n print()\n for i in range(EPOCHS):\n X_train, y_train = shuffle(X_train, y_train)\n for offset in range(0, num_examples, BATCH_SIZE):\n end = offset + BATCH_SIZE\n batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})\n \n validation_accuracy = evaluate(norm_X_valid, y_valid)\n print(\"EPOCH {} ...\".format(i+1))\n print(\"Validation Accuracy = 
{:.3f}\".format(validation_accuracy))\n print()\n if (validation_accuracy) > .944 :\n break\n \n saver.save(sess, './lenet')\n print(\"Model saved\")\n\n\n",
"Evaluate the Model\nevaluate the performance of the model on the test set.",
"with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('.'))\n\n test_accuracy = evaluate(norm_X_test, y_test)\n print(\"Test Accuracy = {:.3f}\".format(test_accuracy)) ",
"Step 3: Test a Model on New Images\nTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.\nYou may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.\nLoad and Output the Images",
"### Load the images and plot them here.\n### Feel free to use as many code cells as needed.\n\n### Load the images and plot them here.\n### Feel free to use as many code cells as needed.\nimport numpy as np\nimport random\n### Data exploration visualization code goes here.\n### Feel free to use as many code cells as needed.\nimport matplotlib.pyplot as plt\nimport os\nimport tensorflow as tf\n\nimport cv2\n#os.listdir(\"path\")\n\nfile5 =['GTSRB5/00015b.bmp','GTSRB5/02329b.bmp','GTSRB5/03363b.bmp','GTSRB5/05312b.bmp','GTSRB5/03978b.bmp']\n#img5 = np.empty(5, dtype=object)\n\nimg5=[]\nimg5 = np.zeros(shape=[5, 32, 32, 3])\nimg5 = img5.astype(float)\nlabel5 = [2,2,34,1,35]\n\ni=0\n# Load an color image in grayscale\nfor file in file5:\n\n temp = cv2.imread(file,3)\n img5[i] =temp \n i=i+1\n \nprint(img5[1].shape)\n\n#file1 = cv2.imread(file1,3)\n#print(file1)\n#print(img5.shape)\ntemp=img5[1]\n\ni=0\nfor p in img5:\n \n t = normit(p)\n img5[i] = t\n i=i+1\n\nfor img in img5:\n plt.figure(figsize=(1,1))\n plt.imshow(img.squeeze())\n",
"Predict the Sign Type for Each Image",
"### Run the predictions here and use the model to output the prediction for each image.\n### Make sure to pre-process the images with the same pre-processing pipeline used earlier.\n### Feel free to use as many code cells as needed.\n\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('.')) \n #test_accuracy = evaluate(img5, label5)\n #print(\"Test Accuracy = {:.3f}\".format(test_accuracy))\n lmax = tf.argmax(logits, 1)\n sm = sess.run(lmax,feed_dict={x: img5})\n print(\"The Predictions are\")\n print ( sm)\n print(\"The Labels are :\")\n print(label5)\n\nprint(\"Guide 3 = Speed limit (60km/h) 35 = Ahead Only 17 = No entry 4 =Speed limit (70km/h) 9=No passing\")",
"Analyze Performance",
"### Calculate the accuracy for these 5 new images. \n### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.\n\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('.'))\n\n test_accuracy = evaluate(img5, label5)*100\n print(\"Test Accuracy = {:.2f}%\".format(test_accuracy))",
"Output Top 5 Softmax Probabilities For Each Image Found on the Web\nFor each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here. \nThe example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.\ntf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.\nTake this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tk.nn.top_k is used to choose the three classes with the highest probability:\n```\n(5, 6) array\na = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,\n 0.12789202],\n [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,\n 0.15899337],\n [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,\n 0.23892179],\n [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,\n 0.16505091],\n [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,\n 0.09155967]])\n```\nRunning it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:\nTopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],\n [ 0.28086119, 0.27569815, 0.18063401],\n [ 0.26076848, 0.23892179, 0.23664738],\n [ 0.29198961, 0.26234032, 0.16505091],\n [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],\n [0, 1, 4],\n [0, 5, 1],\n [1, 3, 5],\n [1, 4, 3]], dtype=int32))\nLooking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.",
"### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. \n### Feel free to use as many code cells as needed.\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('.'))\n \n softmax = tf.nn.softmax(logits)\n top = tf.nn.top_k(softmax,5) \n sm = sess.run(top,feed_dict={x:img5})\n print (sm)\n ",
"Project Writeup\nOnce you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file. \n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \\n\",\n \"File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.\n\n\nStep 4 (Optional): Visualize the Neural Network's State with Test Images\nThis Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.\nProvided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. 
The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the LeNet lab's feature maps looked like for it's second convolutional layer you could enter conv2 as the tf_activation variable.\nFor an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.\n<figure>\n <img src=\"visualize_cnn.png\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above)</p> \n </figcaption>\n</figure>\n<p></p>",
"### Visualize your network's feature maps here.\n### Feel free to use as many code cells as needed.\n\n# image_input: the test image being fed into the network to produce the feature maps\n# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer\n# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output\n# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry\n\ndef outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):\n # Here make sure to preprocess your image_input in a way your network expects\n # with size, normalization, ect if needed\n # image_input =\n # Note: x should be the same name as your network's tensorflow data placeholder variable\n # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function\n activation = tf_activation.eval(session=sess,feed_dict={x : image_input})\n featuremaps = activation.shape[3]\n plt.figure(plt_num, figsize=(15,15))\n for featuremap in range(featuremaps):\n plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column\n plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number\n if activation_min != -1 & activation_max != -1:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", vmin =activation_min, vmax=activation_max, cmap=\"gray\")\n elif activation_max != -1:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", vmax=activation_max, cmap=\"gray\")\n elif activation_min !=-1:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", vmin=activation_min, cmap=\"gray\")\n else:\n plt.imshow(activation[0,:,:, featuremap], 
interpolation=\"nearest\", cmap=\"gray\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
VandyAstroML/Vanderbilt_Computational_Bootcamp
|
notebooks/Week_04/04_Python_Structures.ipynb
|
mit
|
[
"Week 4 - Python\nToday we will cover some basic python techniques and structures that are really useful for analyzing data\nToday's Agenda\n\nBasics of Python\nList Comprehension\nDictionaries\nFunctions\nClasses\n\nBasics of Python\nThe minimal Python script\nUnlike many other languages, a simple Python script does not require any sort of header information in the code. So, we can look at the standard programming example, Hello World, in Python (below). Here we're simply printing to screen. If we put that single line into a blank file (called, say, HelloWorld.py]) and then run that in the command line by typing 'python HelloWorld.py' the script should run with no problems. This also shows off the first Python function, print, which can be used to print strings or numbers.",
"print \"Hello World!\"",
"There are, however, a few lines that you will usually see in a Python script. The first line often starts with #! and is called the shebang. For a Python script, an example of the shebang line would be \"#!/usr/bin/env python\"\nWithin Python, any line that starts with # is a comment, and won't be executed when running the script. The shebang, though, is there for the shell. If you run the script by calling python explicitly, then the script will be executed in Python. If, however, you want to make the script an executable (which can be run just by typing \"./HelloWorld.py\") then the shell won't know what language the script should be run in. This is the information included in the shebang line. You don't need it, in general, but it's a good habit to have in case you ever decide to run a script as an executable.\nAnother common thing at the starts of scripts is several lines that start with 'import'. These lines allow you to allow import individual functions or entire modules (files that contain multiple functions). These can be those you write yourself, or things like numpy, matplotlib, etc.\nPython variables\nSome languages require that every variable be defined by a variable type. For example, in C++, you have to define a variable type, first. For example a line like \"int x\" would define the variable x, and specify that it be an an integer. Python, however, using dynamic typing. That means that variable types are entirely defined by what the variable is stored.\nIn the below example, we can see a few things happening. First of all, we can see that x behaves initally as a number (specifically, an integer, which is why 42/4=10). However, we can put a string in there instead with no problems. However, we can't treat it as a number anymore and add to it.\nTry commenting out the 5th line (print x+10) by adding a # to the front of that line, and we'll see that Python will still add strings to it.",
"x=42\nprint x+10\nprint x/4\nx=\"42\"\nprint x+10\nprint x+\"10\"",
"Lists\nThe basic way for storing larger amounts of data in Python (and without using other modules like numpy) is Python's default option, lists. A list is, by its definition, one dimensional. If we'd like to store more dimensions, then we are using what are referred to as lists of lists. This is not the same thing as an array, which is what numpy will use. Let's take a look at what a list does.\nWe'll start off with a nice simple list below. Here the list stores integers. Printing it back, we get exactly what we expect. However, because it's being treated as a list, not an array, it gets a little bit weird when we try to do addition or multiplication. Feel free to try changing the operations that we're using and see what causes errors, and what causes unexpected results.",
"x=[1, 2, 3]\ny=[4,5, 6]\nprint x\nprint x*2\nprint x+y",
"We can also set up a quick list if we want to using the range function. If we use just a single number, then we'll get a list of integers from 0 to 1 less than the number we gave it.\nIf we want a bit fancier of a list, then we can also include the number to start at (first parameter) and the step size (last parameter). All three of these have to be integers.\nIf we need it, we can also set up blank lists.",
"print range(10)\nprint range(20, 50, 3)\nprint []",
"If we want to, we can refer to subsets of the list. For just a single term, we can just use the number corresponding to that position. An important thing with Python is that the list index starts at 0, not at 1, starting from the first term. If we're more concerned about the last number in the list, then we can use negative numbers as the index. The last item in the list is -1, the item before that is -2, and so on.\nWe can also select a set of numbers by using a : to separate list indices. If you use this, and don't specify first or last index, it will presume you meant the start or end of the list, respectively.\nAfter you try running the sample examples below, try to get the following results:\n* [6] (using two methods)\n* [3,4,5,6]\n* [0,1,2,3,4,5,6]\n* [7,8,9]",
"x=range(10)\nprint x\nprint \"First value\", x[0]\nprint \"Last value\", x[-1]\nprint \"Fourth to sixth values\", x[3:5]",
"Modifying lists\nThe simplest change we can make to a list is to change it at a specific index just be redefining it, like in the second line in the code below.\nThere's three other handy ways to modify a list. append will add whatever we want as the next item in the list, but this means if we're adding more than a single value, it will add a list into our list, which may not always be what we want.\nextend will expand the list to include the additional values, but only if it's a list, it won't work on a single integer (go ahead and try that).\nFinally, insert will let us insert a value anywhere within the list. To do this, it requires a number for what spot in the list it should go, and also what we want to add into the list.",
"x=[1,2,3,4,5]\nx[2]=8\nprint x\n\nprint \"Testing append\"\nx.append(6)\nprint x\nx.append([7,8])\nprint x\n\nprint \"testing extend\"\nx=[1,2,3,4,5]\n#x.extend(6)\n#print x\nx.extend([7,8])\nprint x\n\nprint \"testing insert\"\nx=[1,2,3,4,5]\nx.insert(3, \"in\")\nprint x",
"Loops and List Comprehension\nLike most languages, we can write loops in Python. One of the most standard loops is a for loop, so we'll focus on that one. Below is a 'standard' way of writing a 'for' loop. We'll do something simple, where all we want is to get the square of each number in the array.",
"x=range(1,11,1)\nprint x\nx_2=[]\nfor i in x:\n i_2=i*i\n x_2.append(i_2)\nprint x_2",
"While that loop works, even this pretty simple example can be condensed into something a bit shorter. We have to set up a blank list, and then after that, the loop itself was 3 lines, so just getting the squares of all these values took 4 lines. We can do it in one with list comprehension.\nThis is basically a different way of writing a for loop, and will return a list, so we don't have to set up an empty list for the results.",
"x=range(1,11,1)\nprint x\nx_2=[i*i for i in x]\nprint x_2",
"Dictionaries\nDictionaries are another way of storing a large amount of data in Python, except instead of being referenced by an ordered set of numbers like in a list, they are referenced by either strings/characters or numbers, referred to as keys.",
"x={}\nx['answer']=42\nprint x['answer']",
"These are particularly useful if you'll have a handful of values you'd like to call back to often. For an astronomy example, we can set up a dictionary that contains the absolute magnitude of the Sun in a bunch of bands (from Binney & Merrifield). We can now have a code that easily calls absolute magnitudes whenever needed using that dictionary.\nWe can also list out the dictionary, if needed, with AbMag.items(). There's some other tools for more advanced tricks with dictionaries, but this covers the basics.",
"AbMag={'U':5.61, 'B':5.48, 'V':4.83, 'R':4.42, 'I':4.08}\nprint AbMag['U']\nprint AbMag.items()\n",
"Functions\nAt a certain point you'll be writing the same bits of code over and over again. That means that if you want to update it, you'll have to update it in every single spot you did the same thing. This is.... less than optimal use of time, and it also means it's really easy to screw up by forgetting to keep one spot the same as the rest.\nWe can try out a function by writing a crude function for the sum of a geometric series.\n$$\\frac{1}{r} + \\frac{1}{r^2} + \\frac{1}{r^3} + \\frac{1}{r^4} + \\ldots $$\nConveniently, so long as r is larger than 1, there's a known solution to this series. We can use that to see that this function works.\n$$ \\frac{1}{r-1} $$\nThis means we can call the function repeatedly and not need to change anything. In this case, you can try using this GeoSum function for several different numbers (remember, r>1), and see how closely this works, by just changing TermValue",
"def GeoSum(r):\n powers=range(1,11,1) #set up a list for the exponents 1 to 10\n terms=[(1./(r**x)) for x in powers] #calculate each term in the series\n return sum(terms) #return the sum of the list\n\nTermValue=2\nprint GeoSum(TermValue), (1.)/(TermValue-1)",
"Classes\nTo steal a good line for this, \"Classes can be thought of as blueprints for creating objects.\"\nWith a class, we can create an object with a whole set of properties that we can access. This can be very useful when you want to deal with many objects with the same set of parameters, rather than trying to keep track of related variables over multiple lists, or even just having a single object's properties all stored in some hard to manage list or dictionary.\nHere we'll just use a class that's set up to do some basic math. Note that the class consists of several smaller functions inside of it. The first function, called init, is going to be run as soon as we create an object belonging to this class, and so that'll create two attributes to that object, value and square. The other function, powerraise, only gets called if we call it. Try adding some other subfunctions in there to try this out. They don't need to have anything new passed to them to be run.",
"class SampleClass:\n def __init__(self, value): #run on initial setup of the class, provide a value\n self.value = value\n self.square = value**2\n \n def powerraise(self, powerval): #only run when we call it, provide powerval\n self.powerval=powerval\n self.raisedpower=self.value**powerval\n\nMyNum=SampleClass(3)\nprint MyNum.value\nprint MyNum.square\nMyNum.powerraise(4)\nprint MyNum.powerval\nprint MyNum.raisedpower\nprint MyNum.value,'^',MyNum.powerval,'=',MyNum.raisedpower",
"Next week, the first modules!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ahmadia/py4ultimate
|
Excel Data.ipynb
|
mit
|
[
"%matplotlib inline\n\nimport pandas as pd\n\nnd = pd.read_excel('nicki_data.xlsx')\n\nnd.head()\n\nnd['q1.2'].hist()\n\nnd.columns\n\ndrop_columns = [column for column in nd.columns if 'spec' in column]\n\nprint(drop_columns)\n\nnd = nd.drop(drop_columns, axis=1);\n\nnd.head()\n\nnd['q1.1'].hist()",
"See http://pandas.pydata.org/pandas-docs/stable/visualization.html",
"from pandas.tools.plotting import scatter_matrix\n\np = scatter_matrix(nd.loc[:, use_col[:3]], alpha=0.2, figsize=(18, 12), diagonal='kde')\n\n%%script bash --bg --out script_out\nbokeh-server\n\nnd.dtypes[:5]\n\nuse_col = [True if dtype in ['int64', 'float64'] else False for dtype in nd.dtypes]\n\nfrom bokeh.crossfilter.models import CrossFilter\nfrom bokeh.sampledata.autompg import autompg\nfrom bokeh.document import Document\nfrom bokeh.session import Session\nfrom bokeh.plotting import *\n\napp = CrossFilter.create(df=nd.loc[:, use_col])\ndocument = Document()\nsession = Session()\nsession.use_doc('crossfilter')\nsession.load_document(document)\n\ndocument.add(app)\nsession.store_document(document)\nsession.show(app)",
"not tested...\nwriter = pd.ExcelWriter('nd_out.xlsx')\nnd.to_excel(writer,'Sheet1')\nwriter.save()",
"nd.to_csv('nd_out.csv')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
ljishen/BSFD
|
playbook/bench/visualize.ipynb
|
mit
|
[
"import pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\n\npd.set_option('display.max_colwidth', 120)\n\npd_res = pd.read_csv('out/results/parse_res.prof')\n\nBASE_MACHINE_NAME = 'kv7'\n\ndef plot_hist(category_name, yscale, xmargin, tick_rotation, xlabel, ylabel, ylim=None):\n category = pd_res[pd_res['benchmark'].str.contains(category_name)].sort_values('benchmark')\n\n # recreate a new view with benchmarks with all machines\n category_df = pd.DataFrame(data=category['benchmark'], columns=['benchmark'])\n machines = category['machine'].unique()\n\n for m in machines:\n category_df[m] = category[category['machine'] == m]['result']\n\n category_df[BASE_MACHINE_NAME] = category[category['machine'] == machines[0]]['base_result']\n category_df = category_df.reset_index(drop=True)\n\n # start to draw figrue\n all_machines = machines.tolist() + [BASE_MACHINE_NAME]\n names = category_df['benchmark'].tolist()\n values = [category_df[m].tolist() for m in all_machines]\n colors = cm.gist_rainbow(np.linspace(0, 1, 5))\n\n pos = np.arange(len(values[0]))\n width = 1. / (5 + len(values))\n\n bars = []\n fig, ax = plt.subplots()\n \n if ylim is not None:\n ax.set_ylim(ylim[0], ylim[1])\n \n for idx, (v, color) in enumerate(zip(values, colors)):\n bars.append(ax.bar(left=pos + idx * width, height=v, width=width, alpha=0.7, color=color))\n\n ax.legend([bars[i] for i in range(len(all_machines))], all_machines, loc='center', bbox_to_anchor=(1.3, 0.5))\n ax.set_yscale(yscale)\n ax.margins(xmargin, None)\n ax.set_xticks(pos + width)\n ax.set_xticklabels(names, rotation=tick_rotation)\n \n plt.xlabel(xlabel)\n plt.ylabel(ylabel)\n plt.show()\n\n return category_df",
"SysBench: CPU performance test\nWhen running with the CPU workload, sysbench will verify prime numbers by doing standard division of the number by all numbers between 2 and the square root of the number. If any number gives a remainder of 0, the next number is calculated.",
"sysbench_cpu_df = plot_hist(category_name='sysbench_cpu', yscale='log',\n xmargin=0.1 , tick_rotation='vertical',\n xlabel='Maximum prime number', ylabel='Elapsed time (sec)')\nsysbench_cpu_df",
"SysBench: Memory functions speed test\nWhen using the memory workload, sysbench will allocate a buffer (1KByte in this case) and each execution will read or write to this memory in a random or sequential manner. This is then total execution time in seconds is reached (300 seconds).\nCommand arguments: memory-block-size=1K, memory-oper=read/write, memory-access-mode=seq/rnd, max-time=300, max-requests=0 num-threads=[1,2,3,4,5]",
"sysbench_memory_df = plot_hist(category_name='sysbench_memory', yscale='linear',\n xmargin=0.1 , tick_rotation='vertical',\n xlabel='Operation', ylabel='# of operations (ops/sec)')\nsysbench_memory_df\n\nstress_ng_cpu_df = plot_hist(category_name='stress-ng_cpu', yscale='log',\n xmargin=0.05 , tick_rotation='vertical',\n xlabel='CPU Module', ylabel='# of operations (ops/sec)')\nstress_ng_cpu_df\n\nstress_ng_memory_df = plot_hist(category_name='stress-ng_memory', yscale='log',\n xmargin=0.05 , tick_rotation='vertical',\n xlabel='RAM Module', ylabel='# of operations (ops/sec)')\nstress_ng_memory_df",
"SysBench: File I/O test\nUse direct I/O for data to avoiding the buffer cache, but the drive write-back caching is activated.\nCommand arguments: --file-num=128 --file-total-size=8G --file-block-size=1048576 --max-time=60 --max-requests=0 --file-test-mode=seqwr --file-extra-flags=direct --file-fsync-end=on --file-fsync-mode=fsync --num-threads=1 run\n\nfile-num: number of files to create\nfile-total-size: total size of files to create\nfile-block-size: block size to use in all IO operations (KB)\nmax-time: limit for total execution time in seconds\nmax-requests: limit for total number of requests (0 for unlimited)\nfile-test-mode: test mode (seqwr, seqrd, rndwr, rndrd)\nfile-extra-flags=STRING additional flags to use on opening files {sync,dsync,direct} []\nfile-fsync-end=[on|off] do fsync() at the end of test [on]\nfile-fsync-mode=STRING which method to use for synchronization {fsync, fdatasync} [fsync]\nnum-threads: number of threads to use\n\n Defaults arguments: \n- file-io-mode=sync: file operations mode {sync,async,mmap}\n- file-fsync-all=off: do fsync() after each write operation (It forces flushing to disk before moving onto the next write)\n- file-merged-requests=0: merge at most this number of IO requests if possible (0 - don't merge)\n\"direct\" for oflag=flag\nUse direct I/O for data, avoiding the buffer cache. Note that the kernel may impose restrictions on read or write buffer sizes. 
For example, with an ext4 destination file system and a Linux-based kernel, using ‘oflag=direct’ will cause writes to fail with EINVAL if the output buffer size is not a multiple of 512.\n Original output from kv7 \n```\nsysbench 1.0: multi-threaded system evaluation benchmark\nRunning the test with following options:\nNumber of threads: 44\nInitializing random number generator from current time\nExtra file open flags: 0\n128 files, 800MiB each\n100GiB total file size\nBlock size 1KiB\nNumber of IO requests: 0\nRead/Write ratio for combined random IO test: 1.50\nPeriodic FSYNC enabled, calling fsync() each 100 requests.\nCalling fsync() at the end of test, Enabled.\nUsing synchronous I/O mode\nDoing random write test\nInitializing worker threads...\nThreads started!\nFile operations:\n reads/s: 0.00\n writes/s: 510.89\n fsyncs/s: 653.22\nThroughput:\n read, MiB/s: 0.00\n written, MiB/s: 0.50\nGeneral statistics:\n total time: 60.0915s\n total number of events: 30700\n total time taken by event execution: 239.9327s\n response time:\n min: 0.01ms\n avg: 7.82ms\n max: 192.07ms\n approx. 95 percentile: 46.77ms\nThreads fairness:\n events (avg/stddev): 697.7273/66.06\n execution time (avg/stddev): 5.4530/0.58\n```\n\n Events/min: total number of events / total time (in second) * 60\n Req/min: writes/s * 60\n Bandwidth: throughput - written, MiB/s",
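The derived metrics defined above can be computed directly from the sample output (numbers copied from the kv7 run shown; a minimal sketch):

```python
# values taken from the sysbench output above
total_events = 30700      # total number of events
total_time_s = 60.0915    # total time in seconds
writes_per_s = 510.89     # writes/s

events_per_min = total_events / total_time_s * 60
req_per_min = writes_per_s * 60

print(round(events_per_min, 1))  # events per minute, ~30653.3
print(round(req_per_min, 1))     # write requests per minute
```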
"sysbench_fileio_df = plot_hist(category_name='sysbench_fileio_ssd_run.*?bw.*', yscale='log',\n xmargin=0.1 , tick_rotation='vertical',\n xlabel='Metrics (# of Events, # of Operations, Bandwidth)', ylabel='')\nsysbench_fileio_df",
"SSD\n\nsequential write: 70.46 205.35 (2.9 times faster)\nsequential read: 67.60 528.22 (7.8 times faster)\nrandom read: 49.86 526.35 (10.6 times faster)\nrandom write: 38.11 200.33 (5.3 times faster)",
"sysbench_fileio_df = plot_hist(category_name='sysbench_fileio_run.*?bw.*', yscale='log',\n xmargin=0.1 , tick_rotation='vertical',\n xlabel='Metrics (# of Events, # of Operations, Bandwidth)', ylabel='')\nsysbench_fileio_df",
"HDD\n\nsequential write: 70.46 84.90 (1.2 times faster)\nsequential read: 66.57 161.06 (2.4 times faster)\nrandom read: 48.99 115.13 (2.4 times faster)\nrandom write: 38.71 24.87 (0.6 times faster)",
"stress_ng_io_df = plot_hist(category_name='stress-ng_io', yscale='log',\n xmargin=0.05 , tick_rotation='vertical',\n xlabel='I/O Module', ylabel='# of operations (ops/sec)')\nstress_ng_io_df\n\nstress_ng_network_df = plot_hist(category_name='stress-ng_network', yscale='log',\n xmargin=0.05 , tick_rotation='vertical',\n xlabel='Network Module', ylabel='# of operations (ops/sec)')\nstress_ng_network_df\n\niperf3_df = plot_hist(category_name='iperf3', yscale='linear',\n xmargin=0.05 , tick_rotation='vertical',\n xlabel='window size in KByte', ylabel='Bandwidth (Mbits/sec)')\niperf3_df",
"Resources\n\nSysBench manual: http://imysql.com/wp-content/uploads/2014/10/sysbench-manual.pdf\nSysbench usage: https://wiki.gentoo.org/wiki/Sysbench#Using_the_memory_workload\nIperf Tutorial: https://www.es.net/assets/Uploads/201007-JTIperf.pdf\niPerf 3 user documentation: https://iperf.fr/iperf-doc.php#3doc\nGNU Coreutils: dd invocation: https://www.gnu.org/software/coreutils/manual/html_node/dd-invocation.html\n\nSummary\nConfiguration of KV Drive (many of configurations are hidden)\n\nCPU: 4 cores\nRAM: 2GB, no swap partition\nHDD: 2x 1.8T (2000G)\n5.4K rpm (Revolutions Per Minute)\n8MB Cache\nRAID Array (Raid Level : raid1)\nSSD: 1x 256GB\n\nConfiguration of issdm-6\n10 years old machine\n\nCPU: 2x Opteron 2212\n2.0GHz\nDual-Core\n2MB L2 Cache\nRAM: 8GB (4x 2GB DDR2)\nHDD: 4x Seagate BarracudaES (ST3250620NS)\n250GB\nSATA 3Gb/s\n7.2K rpm\n16MB Cache\n\nResults\nCPU\nBased on primality test (SysBench) and cpu, cpu-cache benchmarks (stress-ng) using 1 thread.\nKV drive is 16 times slower than issdm-6 (1 thread)\nMemory\nBased on SysBench memory functions speed test (compare the peak performance)\n\nsequential write: (1.5 times faster than issdm-6)\nsequential read: (1.5 times faster)\nrandom read: (5.4 times slower)\nrandom write: (1.9 times slower)\n\nFile I/O (SSD vs HDD)\nBased on SysBench File I/O test (compare the peak performance).\nThis experiment use direct I/O for data to avoid the buffer cache, but the drive write-back caching is avtivated.\n\nsequential write: (SSD on KV drive is 2.9 times faster than the HDD on issdm-6 / HDD on KV drive is 1.2 faster than the HDD on issdm-6)\nsequential read: (7.8 times faster / 2.4)\nrandom read: (10.6 times faster / 2.4)\nrandom write: (5.3 times faster / 0.6)\n\nNetwork bandwidth\nBased on stress-ng network test and iperf3 Network bandwidth performance test\nUDP and TCP are almost the same (1 thread)."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
maestrotf/pymepps
|
examples/example_plot_xr_accessor.ipynb
|
gpl-3.0
|
[
"How to use Xarray accessor\nThis example shows how to use the SpatialData accessor to extend the capabilities of xarray.\nTo extend xarray.DataArray you need only to load also pymepps with \"import pymepps\". The extensions could be used with the property xarray.DataArray.pp",
"import matplotlib.pyplot as plt\nimport xarray as xr\nimport pymepps",
"To use the full power of pymepps, you have to set a grid. If you load the data with the xarray functions you have to set the grid afterwards. So the next step is to load a NetCDF model file with xarray. There are also pymepps functions to load model data. These are shown in another example.",
"ds = xr.open_dataset('../data/model/GFS_Global_0p25deg_20161219_0600.nc')\nt2m_max = ds['Maximum_temperature_height_above_ground_Mixed_intervals_Maximum']\nprint(t2m_max)",
"The grid definition is inspired by the climate data operators. So you could either generate your own grid (done in this example), or you could load a cdo-conform grid file.\nWe could see that the grid is a structured latitude and longitude grid with a resolution of 0.25 degree.",
"grid_dict = dict(\n gridtype='lonlat',\n xsize=t2m_max['lon'].size,\n ysize=t2m_max['lat'].size,\n xfirst=t2m_max['lon'].values[0],\n xinc=0.25,\n yfirst=t2m_max['lat'].values[0],\n yinc=-0.25,\n)",
"We created our grid dict with the information. Now we have to build the grid. In pymepps you could use the GridBuilder to build the grid with given grid_dict.",
"builder = pymepps.GridBuilder(grid_dict)\ngrid = builder.build_grid()\nprint(grid)",
"The next step is to set the grid for our dataset. For this we could use the set_grid method of the SpatialAccessor.",
"t2m_max = t2m_max.pp.set_grid(grid)\nprint(t2m_max.pp.grid)",
"Now we set the grid. It is also possible to normalize the coordinates to allow a consistent processing of the model data.",
"# Before normalization\nprint('Before:\\n{0:s}\\n'.format(str(t2m_max)))\n\nt2m_max = t2m_max.pp.normalize_coords()\n# After normalization\nprint('After:\\n{0:s}'.format(str(t2m_max)))",
"We could see that the height_above_ground and the time variable are renamed to a more common name. The ensemble member is set to the default value 'det', while the runtime is set to the missing value None. Now lets plot the data with the xarray internal plot method.",
"t2m_max.plot()\nplt.show()",
"Lets make use of the SpatialAccessor to slice an area over germany. We would also transform the temperature unit to degree celsius. For this we could use the normal xarray.DataArray mathematical operations. After the transformation lets plot the temperature.",
"# sphinx_gallery_thumbnail_number = 2\nger_t2m_max = t2m_max.pp.sellonlatbox([5, 55, 15, 45])\n# K to deg C\nger_t2m_max -= 273.15\nger_t2m_max.plot()\nplt.show()",
"If we use a xarray.DataArray method where the DataArray instance is copied, we have to set a new grid. This behaviour coud seen in the following code block.",
"stacked_array = t2m_max.stack(stacked=('runtime', 'validtime'))\n# we have to catch the error for sphinx documentation\ntry:\n print(stacked_array.pp.grid)\nexcept TypeError:\n print('This DataArray has no grid defined!')",
"This seen behavior arises from the fact that the grid is depending on the grid coordinates of the DataArray and they could be changed with a xarray.DataArray method."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mattmcd/PyBayes
|
scripts/qfe_20220221.ipynb
|
apache-2.0
|
[
"Utility Functions\nDate: 2022-02-21\nAuthor: Matt McDonnell @mattmcd\nLooking at 'Quantitative Financial Economics' by Cuthbertson and Nitzsche to understand utility functions and\nindifference curves.",
"import sympy as sp\nfrom sympy.interactive import printing\nprinting.init_printing(use_latex=True)\n\nfrom sympy.stats import Bernoulli, LogNormal, density, sample, P as Prob, E as Expected, variance",
"Start by looking at a fair lottery random. A Bernoulli distribution can be used to represent a fair lottery\nwith proability $p$ for payoff $k_1$, probability $1-p$ for payoff of $k_2$. If the lottery is fair it has \nexpected value of zero, which can be used to solve for $p$.",
"k1, k2 = sp.symbols('k1 k2', real=True)\np = sp.symbols('p', nonnegative=True)\nXs = sp.symbols('X')\n\nX = Bernoulli('X', p=p, succ=k1, fail=k2)\n\nExpected(X)\n\nsp.Eq(p, sp.solve(Expected(X), p)[0])",
"Playing around with random variables",
"# Doesn't work - same RV?\n# FairCoin = X.subs({p: sp.S.Half, k1: 1, k2: -1})\n# FairCoin2 = X.subs({p: sp.S.Half, k1: -1, k2: 1})\n\n# Works\nFairCoin = Bernoulli('X1', p=sp.S.Half, succ=1, fail=-1)\nFairCoin2 = Bernoulli('X2', p=sp.S.Half, succ=1, fail=-1)\n\nsample(FairCoin + FairCoin2, size=(10,))\n\nProb(X.subs({p: 1/2, k1:1, k2:-1}) > 0)",
"Worked example and definition of Utility\nBelow we follow the example from p14-17, of a bet on a fair coin flip, $p=1/2$,\nthat costs \\$10 to enter, and pays off \\$16 for a win, \\$4 for a loss.",
"# Expected value of FairCoin toss with payoff $16 and $4. \n# No risk aversion -> would pay up to this amount for the bet\nPayoff = X.subs({p: sp.S.Half, k1: 16, k2: 4})\nExpected(Payoff)",
"Now consider a utility function of the form $U(W) = \\sqrt{W}$, where $W$ is the wealth of the player.",
"# Expected utility of the FairCoin toss with payoff $16 and $4 for sqrt utility\n# i.e. U(W) = sqrt(W)\nd = sp.Dummy()\nU = sp.Lambda(d, sp.sqrt(d))\nExpected(U(Payoff))",
"We see that the expected utility of the bet is less than the utility of the original \\$10:",
"# Utility of keeping $10 rather than paying $10 for bet\nU(10).evalf()",
"Calculate the risk premium: what would you pay not to have to take the bet?",
"# Calculate the risk premium: what would you pay not to have to take the bet?\ninitial_wealth = 10\nbet_cost = 10\nsp.solve(U(initial_wealth-d) - Expected(U(initial_wealth + Payoff - bet_cost)), d)[0].evalf()",
"We can go back to the general form for the lottery and calculate the risk premium $\\pi$ as a function of the other\nparameters, assuming that the cost to enter the bet is given by the fair value of the lottery.\n(Sidenote: I'm not a fan of using $\\pi$ to represent risk premium, seems like it's asking for trouble if $\\pi$\nthe number crops up in expressions.)",
"# General form\nW, c, pis = sp.symbols('W, c, pi', real=True)\n# Solve for cost\nsp.Eq(c, sp.solve(Expected(X - c), c)[0].collect([k1, k2]))\n\nsp.Eq(pis, sp.solve(U(W-pis) - Expected(U(W + X - Expected(X))), pis)[0].simplify().collect(p))",
"Relate risk premium to utility function curvature.\nAssume general RV for (payoff - cost), assume this has $E[x] == 0$ i.e. cost is expected payoff.\nThis also means var(x) = $E[x^2]$.",
"x = sp.stats.rv.RandomSymbol('x') \nUs = sp.symbols('U', cls=sp.Function)\nsp.Eq(Us(W - pis), Expected(Us(W + x)))\n\nlhs = sp.series(Us(W - pis), pis, n=2).removeO().simplify()\nlhs\n\nsigma_sq_x = sp.symbols('sigma_x^2', positive=True)\nrhs = Expected(\n sp.series(Us(W + x), x, n=3).removeO()\n).collect(Us(W)).subs(Expected(x), 0).subs({Expected(x**2): sigma_sq_x})\nrhs\n\npi = sp.solve(lhs - rhs, pis)[0]\nsp.Eq(pis, pi)",
"We get to the form for risk premium given in the text, using SymPy to do the required manipulations.\nRisk Aversion Coefficients\nWe can extract the term containing $U(W)$ into the coefficient for absolute risk aversion $R_A$ and from this the \ncoefficient for relative risk aversion $R_R$",
"Ras, Rrs = sp.symbols('R_A R_R', positive=True)\nRa = pi/(sigma_sq_x/2)\nRr = W*Ras\nRau = lambda U: Ra.subs(Us(W), U).simplify().powsimp()\nsp.Eq(Ras, Ra), sp.Eq(Rrs, Rr)",
"Then we can apply these definitions to some utility functions of interest: constant relative risk aversion, \nconstant absolute risk aversion, quadratic",
"g, a, b, c = sp.symbols('gamma a b c', positive=True)\nU_crras, U_caras, U_qs = sp.symbols('U_{CRRA} U_{CARA} U_Q', cls=sp.Function)\nU_crra = W**(1-g)/(1-g)\nU_cara = a - b*sp.exp(-c*W)\nU_q = W - b/2*W**2",
"Constant Relative Risk Aversion",
"sp.Eq(U_crras(W), U_crra)\n\nsp.Eq(Ras, Rau(U_crra)), sp.Eq(Rrs, Rr.subs(Ras, Rau(U_crra)))",
"Constant Absolute Risk Aversion",
"sp.Eq(U_caras(W), U_cara)\n\nsp.Eq(Ras, Rau(U_cara)), sp.Eq(Rrs, Rr.subs(Ras, Rau(U_cara)))",
"Quadratic Utility",
"sp.Eq(U_qs(W), U_q)\n\nsp.Eq(Ras, Rau(U_q)), sp.Eq(Rrs, Rr.subs(Ras, Rau(U_q)))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
atlury/deep-opencl
|
DL0110EN/5.1.2dropoutRegressionAssignemnt.ipynb
|
lgpl-3.0
|
[
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <a href=\"http://cocl.us/pytorch_link_top\"><img src = \"http://cocl.us/Pytorch_top\" width = 950, align = \"center\"></a>\n\n<img src = \"https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png\" width = 200, align = \"center\">\n\n<h1 align=center><font size = 5>Using Dropout in Regression Assignment </font></h1> \n\n# Table of Contents\nin this lab, you will see how adding dropout to your model will decrease overfitting with <code>nn.Sequential</code>.\n\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n<li><a href=\"#ref0\">Make Some Data </a></li>\n<li><a href=\"#ref1\">Create the Model and Cost Function the Pytorch way</a></li>\n<li><a href=\"#ref2\">Batch Gradient Descent</a></li>\n<br>\n<p></p>\nEstimated Time Needed: <strong>20 min</strong>\n</div>\n\n<hr>\n\nImport all the libraries you need for this lab:",
"import torch\nimport matplotlib.pyplot as plt\nimport torch.nn as nn\n\nimport numpy as np",
"<a id=\"ref0\"></a>\n<h2 align=center>Get Some Data </h2>\n\nCreate polynomial dataset objects:",
"from torch.utils.data import Dataset, DataLoader\n\nclass Data(Dataset):\n def __init__(self,N_SAMPLES = 40,noise_std=1,train=True):\n \n \n self.x = torch.linspace(-1, 1, N_SAMPLES).view(-1,1)\n self.f=self.x**2\n \n if train!=True:\n torch.manual_seed(1)\n \n self.y = self.f+noise_std*torch.randn(self.f.size())\n self.y=self.y.view(-1,1)\n torch.manual_seed(0)\n else:\n self.y = self.f+noise_std*torch.randn(self.f.size())\n self.y=self.y.view(-1,1)\n def __getitem__(self,index): \n return self.x[index],self.y[index]\n def __len__(self):\n return self.len\n def plot(self):\n plt.figure(figsize=(6.1, 10))\n plt.scatter(self.x.numpy(), self.y.numpy(), label=\"Samples\")\n plt.plot(self.x.numpy(), self.f.numpy() ,label=\"True function\",color='orange')\n plt.xlabel(\"x\")\n plt.ylabel(\"y\")\n plt.xlim((-1, 1))\n plt.ylim((-2, 2.5))\n plt.legend(loc=\"best\")\n plt.show()",
"Create a dataset object:",
"data_set=Data()\ndata_set.plot()",
"Get some validation data:",
"torch.manual_seed(0) \nvalidation_set=Data(train=False)",
"<a id=\"ref1\"></a>\n<h2 align=center>Create the Model, Optimizer, and Total Loss Function (cost)</h2>",
"torch.manual_seed(4) ",
"Create a three-layer neural network <code>model</code> with a ReLU() activation function for regression. All the appropriate layers should be 300 units.\nDouble-click here for the solution.\n<!-- Your answer is below:\n\nn_hidden = 30\nmodel= torch.nn.Sequential(\n torch.nn.Linear(1, n_hidden), \n torch.nn.ReLU(),\n torch.nn.Linear(n_hidden, n_hidden),\n\n torch.nn.ReLU(),\n torch.nn.Linear(n_hidden, 1),\n)\n-->\n\nCreate a three-layer neural network <code>model_drop</code> with a ReLU() activation function for regression. All the appropriate layers should be 300 units. Apply dropout to all but the last layer and make the probability of dropout is 50%.\nDouble-click here for the solution.\n<!-- Your answer is below:\nn_hidden = 300\nmodel_drop= torch.nn.Sequential(\n torch.nn.Linear(1, n_hidden),\n torch.nn.Dropout(0.5), \n torch.nn.ReLU(),\n torch.nn.Linear(n_hidden, n_hidden),\n torch.nn.Dropout(0.5), \n torch.nn.ReLU(),\n torch.nn.Linear(n_hidden, 1),\n)\n-->\n\n<a id=\"ref2\"></a>\n<h2 align=center>Train the Model via Mini-Batch Gradient Descent </h2>\n\nSet the model using dropout to training mode; this is the default mode, but it's a good practice.",
"model_drop.train()",
"Train the model by using the Adam optimizer. See the unit on other optimizers. Use the mean square loss:",
"optimizer_ofit = torch.optim.Adam(model.parameters(), lr=0.01)\noptimizer_drop = torch.optim.Adam(model_drop.parameters(), lr=0.01)\ncriterion = torch.nn.MSELoss()",
"Initialize a dictionary that stores the training and validation loss for each model:",
"LOSS={}\nLOSS['training data no dropout']=[]\nLOSS['validation data no dropout']=[]\nLOSS['training data dropout']=[]\nLOSS['validation data dropout']=[]",
"Run 500 iterations of batch gradient decent:",
"epochs=500\n\nfor epoch in range(epochs):\n #make a prediction for both models \n yhat = model(data_set.x)\n yhat_drop = model_drop(data_set.x)\n #calculate the lossf or both models \n loss = criterion(yhat, data_set.y)\n loss_drop = criterion(yhat_drop, data_set.y)\n \n #store the loss for both the training and validation data for both models \n LOSS['training data no dropout'].append(loss.item())\n LOSS['validation data no dropout'].append(criterion(model(validation_set.x), validation_set.y).item())\n LOSS['training data dropout'].append(loss_drop.item())\n model_drop.eval()\n LOSS['validation data dropout'].append(criterion(model_drop(validation_set.x), validation_set.y).item())\n model_drop.train()\n \n #clear gradient \n optimizer_ofit.zero_grad()\n optimizer_drop.zero_grad()\n #Backward pass: compute gradient of the loss with respect to all the learnable parameters\n loss.backward()\n loss_drop.backward()\n #the step function on an Optimizer makes an update to its parameters\n optimizer_ofit.step()\n optimizer_drop.step()",
"Make a prediction by using the test set assign <code>model</code> to yhat and <code>model_drop</code> to yhat_drop.\nDouble-click here for the solution.\n<!-- Your answer is below:\nyhat=model(data_set.x)\nmodel_drop.eval()\nyhat_drop=model_drop(data_set.x),\n)\n-->\n\nPlot predictions of both models. Compare them to the training points and the true function:",
"plt.figure(figsize=(6.1, 10))\n\nplt.scatter(data_set.x.numpy(), data_set.y.numpy(), label=\"Samples\")\nplt.plot(data_set.x.numpy(), data_set.f.numpy() ,label=\"True function\",color='orange')\nplt.plot(data_set.x.numpy(),yhat.detach().numpy(),label='no dropout',c='r')\nplt.plot(data_set.x.numpy(),yhat_drop.detach().numpy(),label=\"dropout\",c='g')\n\n\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.xlim((-1, 1))\nplt.ylim((-2, 2.5))\nplt.legend(loc=\"best\")\nplt.show()",
"You can see that the model using dropout does better at tracking the function that generated the data. \nPlot out the loss for training and validation data on both models:",
"plt.figure(figsize=(6.1, 10))\nfor key, value in LOSS.items():\n plt.plot(np.log(np.array(value)),label=key)\n plt.legend()\n plt.xlabel(\"iterations\")\n plt.ylabel(\"Log of cost or total loss\")",
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <a href=\"http://cocl.us/pytorch_link_bottom\"><img src = \"http://cocl.us/pytorch_image_bottom\" width = 950, align = \"center\"></a>\n\n### About the Authors: \n\n [Joseph Santarcangelo]( https://www.linkedin.com/in/joseph-s-50398b136/) has a PhD in Electrical Engineering. His research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. \n\nOther contributors: [Michelle Carey]( https://www.linkedin.com/in/michelleccarey/) ,[Morvan Youtube channel]( https://www.youtube.com/channel/UCdyjiB5H8Pu7aDTNVXTTpcg), [Mavis Zhou]( https://www.linkedin.com/in/jiahui-mavis-zhou-a4537814a/)\n\n<hr>\n\nCopyright © 2018 [cognitiveclass.ai](cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/)."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
taliamo/Final_Project
|
organ_pitch/.ipynb_checkpoints/upload_pitch_data-checkpoint.ipynb
|
mit
|
[
"T. Martz-Oberlander, 2015-11-12, CO2 and Speed of Sound\nFormatting PITCH pipe organ data for Python operations\nThe entire script looks for mathematical relationships between CO2 concentration changes and pitch changes from a pipe organ. This script uploads, cleans data and organizes new dataframes, creates figures, and performs statistical tests on the relationships between variable CO2 and frequency of sound from a note played on a pipe organ.\nThis uploader script:\n1) Uploads organ note pitch data files\n2) Munges it (creates a Date Time column for the time stamps), establishes column contents as floats\nHere I pursue data analysis route 1 (as mentionted in my notebook.md file), which involves comparing one pitch dataframe with one dataframe of environmental characteristics taken at one sensor location. Both dataframes are compared by the time of data recorded.",
"# I import useful libraries (with functions) so I can visualize my data\n# I use Pandas because this dataset has word/string column titles and I like the readability features of commands and finish visual products that Pandas offers\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport re\nimport numpy as np\n\n%matplotlib inline\n\n#I want to be able to easily scroll through this notebook so I limit the length of the appearance of my dataframes \nfrom pandas import set_option\nset_option('display.max_rows', 10)",
"Uploaded data into Python¶\nFirst I upload my data sets. I am working with two: one for pitch measurements and another for environmental characteristics (CO2, temperature (deg C), and relative humidity (RH) (%) measurements). My data comes from environmental sensing logger devices in the \"Choir Division\" section of the organ consul.",
"#I import a pitch data file\n\n#comment by nick changed the path you upload that data from making in compatible with clone copies of your project\npitch=pd.read_table('../Data/pitches.csv', sep=',')\n\n#assigning columns names\npitch.columns=[['date_time','section','note','freq1','freq2','freq3', 'freq4', 'freq5', 'freq6', 'freq7', 'freq8', 'freq9']]\n\n#I display my dataframe\npitch\n\n#Tell python that my date_time column has a \"datetime\" values, so it won't read as a string or object\npitch['date_time']= pd.to_datetime(env_choir_div['Date_time'])\n\n#print the new table and the type of data to check that all columns are in line with the column names\nprint(pitch)\n\n#Check the type of data in each column. This shows there are integers and floats, and datetime. This is good for analysing.\npitch.dtypes",
"Next\n1. Find the average pitch value for each date_time\n2. Select out the pitch data for one division at a time. Make an argument\n3. Append other pitch files"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mediagit2016/workcamp-maschinelles-lernen-grundlagen
|
17-12-11-workcamp-ml/2017-12-11-workcamp-ml-beispiel-hauspreise-40.ipynb
|
gpl-3.0
|
[
"<h1><font color=\"#8888\">Workcamp Maschinelles Lernen - 11.12.2017</font></h1>\nMaschinelles Lernen in Python\nan einem Beispiel: California Housing - Wir bestimmen die Bestimmungsfaktoren\nfür Hauspreise in Kalifornien - Spielt ein Tesla eine Rolle ?",
"# Laden der Bibliotheken numpy, pandas und matplotlib\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n# Einen %magic Befehl ausführen\n% pylab inline \n# %matplotlib inline\n\n#Aus der scikit learn bibliothek werden module geladen\nfrom sklearn import base, pipeline, preprocessing\nfrom sklearn import svm, linear_model, tree, ensemble, neighbors\n\ndf_file = pd.read_csv(\"california_housing.csv\")\nXtrain = df_file[df_file[\"sample_id\"] == 0].copy()\nytrain = np.asarray(Xtrain[\"y\"]).copy()\ndel(Xtrain[\"sample_id\"], Xtrain[\"y\"])\nXtest = df_file[df_file[\"sample_id\"] == 1].copy()\nytest = np.asarray(Xtest[\"y\"]).copy()\ndel(Xtest[\"sample_id\"], Xtest[\"y\"])\ndf_file.head()\n\ndf_file.describe()",
"Laden der Daten (Trainings- und Validierungsdatensatz)",
"#from utilities import load_data\n#Xtrain, ytrain, Xtest, ytest = load_data()\nXtrain.head()\n\nXtrain.tail()\n\nXtrain.describe()\n\nytrain\n\nXtest.head()\n\nXtest.describe()\n\nytest",
"Plotten der Verteilung der Trainingsdaten",
"plt.hist(ytrain, bins=50)\nplt.show()\n\nplt.hist(ytrain, bins=20)\nplt.show()",
"Einfaches Modell: Mittelwert",
"#from utilities import evaluate\n#from utilities import evaluate\ntrivialprognose = np.mean(ytrain)\nprediction = trivialprognose\n#Plot der Daten\ntruth = ytest\n#evaluate(trivialprognose, ytest)\nif isinstance(prediction, np.ndarray):\n p = plt.hist(prediction, bins=50, color=\"g\", label='Vorhersage')\nelse:\n p = plt.bar(prediction, 250, width=0.125, color=\"g\", label='Vorhersage')\nt = plt.hist(truth, bins=50, color=\"b\", label='Wahrheit')\nplt.ylabel(\"Anzahl\")\nplt.xlabel(\"logarithmierter Hauspreis.\")\nplt.legend()\nprint(\"Mittlere absolute Abweichung: {}\".format(np.mean(np.abs(prediction - truth))))\nprint(\"Mittlere quadratische Abweichung: {}\".format(np.mean(np.square(prediction - truth))))",
"Modell A: Lineare Regression",
"from sklearn.linear_model import LinearRegression\n\nest = LinearRegression()\n\nest.fit(Xtrain, ytrain)\nprediction = est.predict(Xtest)\n#Plot der Daten\ntruth=ytest\n\nif isinstance(prediction, np.ndarray):\n p = plt.hist(prediction, bins=50, color=\"g\", label='Vorhersage')\nelse:\n p = plt.bar(prediction, 250, width=0.125, color=\"g\", label='Vorhersage')\nt = plt.hist(truth, bins=50, color=\"b\", label='Wahrheit')\nplt.ylabel(\"Anzahl\")\nplt.xlabel(\"logarithmierter Hauspreis.\")\nplt.legend()\nprint(\"Mittlere absolute Abweichung: {}\".format(np.mean(np.abs(prediction - truth))))\nprint(\"Mittlere quadratische Abweichung: {}\".format(np.mean(np.square(prediction - truth))))\n\n#evaluate(prediction, ytest)",
"Model B: support vector regression",
"from sklearn.svm import SVR\n\nest = SVR(max_iter=5000)\nest.fit(Xtrain, ytrain)\n\nprediction = est.predict(Xtest)\n# Plot the data\ntruth = ytest\nif isinstance(prediction, np.ndarray):\n    p = plt.hist(prediction, bins=50, color=\"g\", label='Prediction')\nelse:\n    p = plt.bar(prediction, 250, width=0.125, color=\"g\", label='Prediction')\nt = plt.hist(truth, bins=50, color=\"b\", label='Truth')\nplt.ylabel(\"Count\")\nplt.xlabel(\"log house price\")\nplt.legend()\nprint(\"Mean absolute deviation: {}\".format(np.mean(np.abs(prediction - truth))))\nprint(\"Mean squared deviation: {}\".format(np.mean(np.square(prediction - truth))))\n#evaluate(prediction, ytest)",
"Model B extended with feature scaling",
"from sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import StandardScaler\n\npipe = Pipeline([\n    ('scaler', StandardScaler()),\n    ('regressor', SVR(max_iter=5000))\n])\npipe.fit(Xtrain, ytrain)\n\nprediction = pipe.predict(Xtest)\n\n# Plot the data\ntruth = ytest\nif isinstance(prediction, np.ndarray):\n    p = plt.hist(prediction, bins=50, color=\"g\", label='Prediction')\nelse:\n    p = plt.bar(prediction, 250, width=0.125, color=\"g\", label='Prediction')\nt = plt.hist(truth, bins=50, color=\"b\", label='Truth')\nplt.ylabel(\"Count\")\nplt.xlabel(\"log house price\")\nplt.legend()\nprint(\"Mean absolute deviation: {}\".format(np.mean(np.abs(prediction - truth))))\nprint(\"Mean squared deviation: {}\".format(np.mean(np.square(prediction - truth))))\n\n#evaluate(prediction, ytest)",
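What StandardScaler contributes inside the pipeline can be illustrated in isolation: each feature column is shifted to mean 0 and rescaled to unit variance, so no single feature dominates the SVR's kernel distances. A minimal numpy sketch on toy data (not the housing features):

```python
import numpy as np

# Toy feature matrix with very different column scales.
X_toy = np.array([[1.0, 10.0],
                  [3.0, 30.0],
                  [5.0, 50.0]])

# Per-column standardization, as StandardScaler does internally.
mu = X_toy.mean(axis=0)
sigma = X_toy.std(axis=0)
X_scaled = (X_toy - mu) / sigma
```

After this transform, both columns contribute on the same scale.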
"Further analysis: two-dimensional correlations",
"df_file = pd.read_csv(\"california_housing.csv\")\nXtrain = df_file[df_file[\"sample_id\"] == 0].copy()\nytrain = np.asarray(Xtrain[\"y\"]).copy()\ndel(Xtrain[\"sample_id\"], Xtrain[\"y\"])\nXtest = df_file[df_file[\"sample_id\"] == 1].copy()\nytest = np.asarray(Xtest[\"y\"]).copy()\ndel(Xtest[\"sample_id\"], Xtest[\"y\"])\n#Xtrain, ytrain, Xtest, ytest = load_data()\n\n#from utilities import visualize\n\n#def visualize(geo_prediction):\n#Xtrain, ytrain, Xtest, ytest = load_data()\ngeo_prediction = ytest\nx1 = np.asarray(Xtest['Longitude'])\nx2 = np.asarray(Xtest['Latitude'])\n\nfrom matplotlib import colors\ncm = plt.cm.get_cmap('RdYlBu')\n\nsc = plt.scatter(x1, x2, c=geo_prediction, s=20, vmin=0, vmax=5)\nplt.colorbar(sc)\nplt.show()\n\n\n\n#visualize(ytest)",
"Extension: separate regression on the geo-coordinates",
"from utilities import RegressionOnSubset, load_data\nfrom sklearn.neighbors import KNeighborsRegressor\n\nXtrain, ytrain, Xtest, ytest = load_data()\n\ncolumns = [\"Longitude\", \"Latitude\"]\n\npipe = Pipeline([\n    ('geo_regressor', RegressionOnSubset(\n        KNeighborsRegressor(), columns)),\n    ('scaler', StandardScaler()),\n    ('regressor', SVR(max_iter=5000)),\n])\n\npipe.fit(Xtrain, ytrain)\n",
"Displaying the results",
"from utilities import load_data, visualize, evaluate\n\nXtrain, ytrain, Xtest, ytest = load_data()\nprediction = pipe.predict(Xtest)\n\nXtest.head()\nvisualize(Xtest.knearest)\n\nevaluate(prediction, ytest)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
awjuliani/DeepRL-Agents
|
Contextual-Policy.ipynb
|
mit
|
[
"Simple Reinforcement Learning in Tensorflow Part 1.5:\nThe Contextual Bandits\nThis tutorial contains a simple example of how to build a policy-gradient based agent that can solve the contextual bandit problem. For more information, see this Medium post.\nFor more Reinforcement Learning algorithms, including DQN and Model-based learning in Tensorflow, see my Github repo, DeepRL-Agents.",
"import tensorflow as tf\nimport tensorflow.contrib.slim as slim\nimport numpy as np",
"The Contextual Bandits\nHere we define our contextual bandits. In this example, we are using three four-armed bandits. This means that each bandit has four arms that can be pulled. Each bandit has different success probabilities for each arm, and as such requires different actions to obtain the best result. The pullArm function generates a random number from a normal distribution with a mean of 0. The lower the arm's value, the more likely a positive reward will be returned. We want our agent to learn to always choose the arm that will most often give a positive reward, depending on the bandit presented.",
"class contextual_bandit():\n def __init__(self):\n self.state = 0\n #List out our bandits. Currently arms 4, 2, and 1 (respectively) are the most optimal.\n self.bandits = np.array([[0.2,0,-0.0,-5],[0.1,-5,1,0.25],[-5,5,5,5]])\n self.num_bandits = self.bandits.shape[0]\n self.num_actions = self.bandits.shape[1]\n \n def getBandit(self):\n self.state = np.random.randint(0,len(self.bandits)) #Returns a random state for each episode.\n return self.state\n \n def pullArm(self,action):\n #Get a random number.\n bandit = self.bandits[self.state,action]\n result = np.random.randn(1)\n if result > bandit:\n #return a positive reward.\n return 1\n else:\n #return a negative reward.\n return -1",
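The reward logic in pullArm can be checked empirically: an arm pays +1 with probability P(randn() > arm_value), so lower arm values are better. A small simulation sketch, with arm values taken from the bandit matrix above:

```python
import numpy as np

np.random.seed(0)

def positive_reward_rate(arm_value, trials=20000):
    # Fraction of pulls returning +1, mirroring pullArm's comparison
    # of a standard-normal draw against the arm's value.
    return np.mean(np.random.randn(trials) > arm_value)

p_best = positive_reward_rate(-5.0)   # an optimal arm: almost always rewarded
p_worst = positive_reward_rate(5.0)   # a bad arm: almost never rewarded
```

This is why the evaluation at the end checks the learned weights against `np.argmin(cBandit.bandits[a])`.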
"The Policy-Based Agent\nThe code below establishes our simple neural agent. It takes the current state as input and returns an action. This allows the agent to take actions which are conditioned on the state of the environment, a critical step toward being able to solve full RL problems. The agent uses a single set of weights, within which each value is an estimate of the value of the return from choosing a particular arm given a bandit. We use a policy gradient method to update the agent by moving the value for the selected action toward the received reward.",
"class agent():\n    def __init__(self, lr, s_size, a_size):\n        # These lines establish the feed-forward part of the network. The agent takes a state and produces an action.\n        self.state_in = tf.placeholder(shape=[1], dtype=tf.int32)\n        state_in_OH = slim.one_hot_encoding(self.state_in, s_size)\n        output = slim.fully_connected(state_in_OH, a_size,\\\n            biases_initializer=None, activation_fn=tf.nn.sigmoid, weights_initializer=tf.ones_initializer())\n        self.output = tf.reshape(output, [-1])\n        self.chosen_action = tf.argmax(self.output, 0)\n\n        # The next six lines establish the training procedure. We feed the reward and chosen action into the network\n        # to compute the loss, and use it to update the network.\n        self.reward_holder = tf.placeholder(shape=[1], dtype=tf.float32)\n        self.action_holder = tf.placeholder(shape=[1], dtype=tf.int32)\n        self.responsible_weight = tf.slice(self.output, self.action_holder, [1])\n        self.loss = -(tf.log(self.responsible_weight) * self.reward_holder)\n        optimizer = tf.train.GradientDescentOptimizer(learning_rate=lr)\n        self.update = optimizer.minimize(self.loss)",
"Training the Agent\nWe will train our agent by getting a state from the environment, taking an action, and receiving a reward. Using these three things, we know how to properly update our network so that, given a state, it more often chooses the actions that will yield the highest rewards over time.",
"tf.reset_default_graph() #Clear the Tensorflow graph.\n\ncBandit = contextual_bandit() #Load the bandits.\nmyAgent = agent(lr=0.001,s_size=cBandit.num_bandits,a_size=cBandit.num_actions) #Load the agent.\nweights = tf.trainable_variables()[0] #The weights we will evaluate to look into the network.\n\ntotal_episodes = 10000 #Set total number of episodes to train agent on.\ntotal_reward = np.zeros([cBandit.num_bandits,cBandit.num_actions]) #Set scoreboard for bandits to 0.\ne = 0.1 #Set the chance of taking a random action.\n\ninit = tf.global_variables_initializer()\n\n# Launch the tensorflow graph\nwith tf.Session() as sess:\n sess.run(init)\n i = 0\n while i < total_episodes:\n s = cBandit.getBandit() #Get a state from the environment.\n \n #Choose either a random action or one from our network.\n if np.random.rand(1) < e:\n action = np.random.randint(cBandit.num_actions)\n else:\n action = sess.run(myAgent.chosen_action,feed_dict={myAgent.state_in:[s]})\n \n reward = cBandit.pullArm(action) #Get our reward for taking an action given a bandit.\n \n #Update the network.\n feed_dict={myAgent.reward_holder:[reward],myAgent.action_holder:[action],myAgent.state_in:[s]}\n _,ww = sess.run([myAgent.update,weights], feed_dict=feed_dict)\n \n #Update our running tally of scores.\n total_reward[s,action] += reward\n if i % 500 == 0:\n print(\"Mean reward for each of the \" + str(cBandit.num_bandits) + \" bandits: \" + str(np.mean(total_reward,axis=1)))\n i+=1\nfor a in range(cBandit.num_bandits):\n print(\"The agent thinks action \" + str(np.argmax(ww[a])+1) + \" for bandit \" + str(a+1) + \" is the most promising....\")\n if np.argmax(ww[a]) == np.argmin(cBandit.bandits[a]):\n print(\"...and it was right!\")\n else:\n print(\"...and it was wrong!\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
agushman/coursera
|
src/cours_3/week_4/edit_CookingLDA_PA.ipynb
|
mit
|
[
"Programming Assignment:\nCooking up LDA from recipes\nAs you already know, topic modeling assumes that word order in a document does not matter for determining its topics; this is the \"bag of words\" hypothesis. Today we will work with a collection that is somewhat unusual for topic modeling and could be called a \"bag of ingredients\", because it consists of recipes from different cuisines. Topic models look for words that often co-occur in documents and assemble them into topics. We will try to apply this idea to recipes and discover culinary \"topics\". This collection is convenient because it requires no preprocessing. In addition, the task illustrates quite clearly how topic models work.\nBesides the libraries commonly used in this course, the assignment requires the json and gensim modules. The first is included in the Anaconda distribution; the second can be installed with \npip install gensim\nBuilding a model takes some time. On a laptop with an Intel Core i7 processor at 2400 MHz, building one model takes less than 10 minutes.\nLoading the data\nThe collection is given in json format: for each recipe we know its id, its cuisine, and the list of ingredients it contains. The data can be loaded with the json module (included in the Anaconda distribution):",
"import json\n\nwith open(\"recipes.json\") as f:\n recipes = json.load(f)\n\nprint(recipes[0])",
"Building the corpus",
"from gensim import corpora, models\nimport numpy as np",
"Our collection is small and fits entirely in RAM. Gensim can work with such data and does not require saving it to disk in a special format. For this, the collection must be represented as a list of lists, where each inner list corresponds to a single document and consists of its words. An example collection of two documents: \n[[\"hello\", \"world\"], [\"programming\", \"in\", \"python\"]]\nLet's convert our data to this format and then create the corpus and dictionary objects that the model will work with.",
"texts = [recipe[\"ingredients\"] for recipe in recipes]\ndictionary = corpora.Dictionary(texts)  # build the dictionary\ncorpus = [dictionary.doc2bow(text) for text in texts]  # build the document corpus\n\nprint(texts[0])\nprint(corpus[0])",
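What Dictionary and doc2bow compute can be mimicked in a few lines of plain Python. This is a sketch on toy "recipes", not the gensim implementation itself:

```python
# Toy documents standing in for the real ingredient lists.
texts_toy = [["salt", "water", "salt"], ["sugar", "water"]]

# Assign each token a stable integer id (the role of corpora.Dictionary).
token2id = {}
for doc in texts_toy:
    for tok in doc:
        token2id.setdefault(tok, len(token2id))

def doc2bow(doc):
    # Count token occurrences and return sorted (token_id, count) pairs.
    counts = {}
    for tok in doc:
        tid = token2id[tok]
        counts[tid] = counts.get(tid, 0) + 1
    return sorted(counts.items())

bow0 = doc2bow(texts_toy[0])
```

The bag-of-words pairs are exactly what the LDA model consumes: token ids with per-document counts, with word order discarded.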
"The dictionary object has a useful attribute, dictionary.token2id, that maps ingredients to their indices.\nTraining the model\nYou may need the LDA documentation in gensim.\nTask 1. Train an LDA model with 40 topics, setting the number of passes over the collection to 5 and leaving the other parameters at their defaults. \nThen call the model's show_topics method with 40 topics and 10 tokens, and save the result (the top ingredients per topic) in a separate variable. If you call show_topics with formatted=True, the tops are convenient to print; with formatted=False, the resulting list is convenient to process programmatically. Print the tops, examine the topics, and then answer the question:\nHow many times do the ingredients \"salt\", \"sugar\", \"water\", \"mushrooms\", \"chicken\", \"eggs\" occur among the top-10 of all 40 topics? Do not count compound ingredients such as \"hot water\".\nPass the 6 numbers to the save_answers1 function and upload the generated file to the form.\nGensim has no way to fix the random state through method parameters, but the library uses numpy to initialize its matrices. Therefore, according to the library's author, the random state should be fixed with the command written in the next cell. Always insert that random.seed line directly before the line of code that builds a model.",
"np.random.seed(76543)\n# code to build the model:\nlda_1 = models.LdaModel(corpus, id2word=dictionary, num_topics=40, passes=5)\n\ntopics = lda_1.show_topics(num_topics=40, num_words=10, formatted=False)\n\nc_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs = 0, 0, 0, 0, 0, 0\nfor _, top_words in lda_1.print_topics(num_topics=40, num_words=10):\n    c_salt += top_words.count(u'salt')\n    c_sugar += top_words.count(u'sugar')\n    c_water += top_words.count(u'water')\n    c_mushrooms += top_words.count(u'mushrooms')\n    c_chicken += top_words.count(u'chicken')\n    c_eggs += top_words.count(u'eggs')\n\ndef save_answers1(c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs):\n    with open(\"cooking_LDA_pa_task1.txt\", \"w\") as fout:\n        fout.write(\" \".join([str(el) for el in [c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs]]))\n\nprint(c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs)\nsave_answers1(c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs)",
"Filtering the dictionary\nThe first three of the ingredients above appear in topic tops far more often than the last three, yet the presence of chicken, eggs, or mushrooms in a recipe tells us more about what we are going to cook than the presence of salt, sugar, or water. So even recipes contain words that occur frequently and carry little meaning, and we would rather not see them in topics. The simplest way to fight such background elements is to filter the dictionary by frequency. The dictionary is usually filtered from both sides: very rare words are removed (to save memory) as well as very frequent ones (to improve topic interpretability). We will remove only the frequent words.",
"import copy\ndictionary2 = copy.deepcopy(dictionary)",
"Task 2. The dictionary2 object has an attribute dfs — a dictionary whose keys are token ids and whose values are the number of times the word occurred in the whole collection. Save the ingredients that occurred more than 4000 times in the collection into a separate list. Call the dictionary's filter_tokens method, passing this list of popular ingredients as the first argument. Compute two values, dict_size_before and dict_size_after — the dictionary size before and after filtering.\nThen, using the new dictionary, build a new document corpus, corpus2, by analogy with the beginning of the notebook. Compute two values, corpus_size_before and corpus_size_after — the total number of ingredients in the corpus (for each document, count the number of distinct ingredients in it and sum over all documents) before and after filtering.\nPass dict_size_before, dict_size_after, corpus_size_before, and corpus_size_after to the save_answers2 function and upload the generated file to the form.",
"frequent_words = list()\nfor el in dictionary2.dfs:\n if dictionary2.dfs[el] > 4000:\n frequent_words.append(el)\nprint(frequent_words)\n\ndict_size_before = len(dictionary2.dfs)\ndictionary2.filter_tokens(frequent_words)\ndict_size_after = len(dictionary2.dfs)\n\ncorpus2 = [dictionary2.doc2bow(text) for text in texts]\n\ncorpus_size_before = 0\nfor i in corpus:\n corpus_size_before += len(i)\n \ncorpus_size_after = 0\nfor i in corpus2:\n corpus_size_after += len(i)\n\ndef save_answers2(dict_size_before, dict_size_after, corpus_size_before, corpus_size_after):\n with open(\"cooking_LDA_pa_task2.txt\", \"w\") as fout:\n fout.write(\" \".join([str(el) for el in [dict_size_before, dict_size_after, corpus_size_before, corpus_size_after]]))\n\nprint(dict_size_before, dict_size_after, corpus_size_before, corpus_size_after)\nsave_answers2(dict_size_before, dict_size_after, corpus_size_before, corpus_size_after)",
"Comparing coherences\nTask 3. Build one more model on corpus2 and dictionary2, keeping the other parameters the same as for the first model. Save the new model in a different variable (do not overwrite the previous model). Don't forget to fix the seed!\nThen use the model's top_topics method to compute its coherence, passing the corresponding corpus as the argument. The method returns a list of (top tokens, coherence) tuples sorted by descending coherence. Compute the mean coherence over all topics for each of the two models and pass the values to save_answers3.",
"np.random.seed(76543)\nlda_2 = models.LdaModel(corpus2, id2word=dictionary2, num_topics = 40, passes = 5)\n\ntop_topics_1 = lda_1.top_topics(corpus)\ntop_topics_2 = lda_2.top_topics(corpus2)\n\ndef topics_mean(all_topics):\n return np.mean([one_topics[1] for one_topics in all_topics])\n\ncoherence_1 = topics_mean(top_topics_1)\ncoherence_2 = topics_mean(top_topics_2)\n\ndef save_answers3(coherence_1, coherence_2):\n with open(\"cooking_LDA_pa_task3.txt\", \"w\") as fout:\n fout.write(\" \".join([\"%3f\"%el for el in [coherence_1, coherence_2]]))\n\nprint(coherence_1, coherence_2)\nsave_answers3(coherence_1, coherence_2)",
"Coherence is believed to correlate well with human judgments of topic interpretability, so on large text collections coherence usually increases when background vocabulary is removed. In our case, however, this did not happen. \nStudying the effect of the alpha hyperparameter\nIn this section we will work with the second model, i.e. the one built on the reduced corpus. \nSo far we have only looked at the topic-word matrix; now let's look at the topic-document matrix. Print the topics of the zeroth (or any other) document in the corpus using the second model's get_document_topics method:",
"lda_2.get_document_topics(corpus2[0])",
"Also print the contents of the second model's .alpha attribute:",
"lda_2.alpha",
"You should find that the document is characterized by a small number of topics. Let's try changing the alpha hyperparameter, which defines the Dirichlet prior over the topic distributions of documents.\nTask 4. Train a third model: use the reduced corpus (corpus2 and dictionary2) and set alpha=1, passes=5. Don't forget to fix the seed! Print the new model's topics for the zeroth document; the distribution over topics should come out nearly uniform. To confirm that documents are described by much sparser distributions under the second model than under the third, count the total number of elements exceeding 0.01 in the topic-document matrices of both models. In other words, request each model's topics for every document with minimum_probability=0.01 and sum the number of elements in the resulting arrays. Pass the two sums (first for the model with default alpha, then for the model with alpha=1) to save_answers4.",
"np.random.seed(76543)\n\nlda_3 = models.ldamodel.LdaModel(corpus2, id2word=dictionary2, num_topics=40, passes=5, alpha = 1)\n\nlda_3.get_document_topics(corpus2[0])\n\ndef sum_doc_topics(model, corpus):\n return sum([len(model.get_document_topics(i, minimum_probability=0.01)) for i in corpus])\n\ncount_lda_2 = sum_doc_topics(lda_2,corpus2)\ncount_lda_3 = sum_doc_topics(lda_3,corpus2)\n\ndef save_answers4(count_model_2, count_model_3):\n with open(\"cooking_LDA_pa_task4.txt\", \"w\") as fout:\n fout.write(\" \".join([str(el) for el in [count_model_2, count_model_3]]))\n\nprint(count_lda_2, count_lda_3)\nsave_answers4(count_lda_2, count_lda_3)",
"Thus the alpha hyperparameter controls the sparsity of the topic distributions of documents. Similarly, the eta hyperparameter controls the sparsity of the word distributions of topics.\nLDA as a dimensionality reduction method\nSometimes the topic distributions found by LDA are added to the object-feature matrix as extra, semantic features, which can improve the quality of the solution. For simplicity, let's just train a classifier of recipes into cuisines on the features obtained from LDA and measure the accuracy.\nTask 5. Use the model built on the reduced collection with default alpha (the second model). Build the matrix $\\Theta = p(t|d)$ of topic probabilities in documents; you can use the same get_document_topics method, along with the vector of correct answers y (in the same order as the recipes in the recipes variable). Create a RandomForestClassifier object with 100 trees, compute the mean accuracy over three folds with cross_val_score (no shuffling is needed), and pass it to save_answers5.",
"from sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import cross_val_score\n\nX = np.zeros((len(recipes), 40))\ny = [recipe['cuisine'] for recipe in recipes]\n\nfor i in range(len(recipes)):\n    for top in lda_2.get_document_topics(corpus2[i]):\n        X[i, top[0]] = top[1]\n\nRFC = RandomForestClassifier(n_estimators=100)\nestimator = cross_val_score(RFC, X, y, cv=3).mean()\n\ndef save_answers5(accuracy):\n    with open(\"cooking_LDA_pa_task5.txt\", \"w\") as fout:\n        fout.write(str(accuracy))\n\nprint(estimator)\nsave_answers5(estimator)",
"For this many classes, this is decent accuracy. You can try training a RandomForest on the original word-frequency matrix, which has a much higher dimensionality, and see that accuracy increases by 10–15%. So LDA collected not all, but a rather large part of the information in the sample, in a low-rank matrix.\nLDA is a probabilistic model\nThe matrix factorization used in LDA is interpreted as the following document generation process.\nFor a document $d$ of length $n_d$:\n1. Generate a distribution over topics from the Dirichlet prior with parameter alpha: $\\theta_d \\sim Dirichlet(\\alpha)$\n1. For each word $w = 1, \\dots, n_d$:\n 1. Generate a topic from the discrete distribution $t \\sim \\theta_{d}$\n 1. Generate a word from the discrete distribution $w \\sim \\phi_{t}$.\nMore details are in Wikipedia.\nIn the context of our task, this generative process can be used to create new recipes: you can pass a model and a number of ingredients into the function below and generate a recipe :)",
"def generate_recipe(model, num_ingredients):\n    theta = np.random.dirichlet(model.alpha)\n    for i in range(num_ingredients):\n        t = np.random.choice(np.arange(model.num_topics), p=theta)\n        topic = model.show_topic(t, topn=model.num_terms)\n        topic_distr = [x[1] for x in topic]\n        terms = [x[0] for x in topic]\n        w = np.random.choice(terms, p=topic_distr)\n        print(w)\n\ngenerate_recipe(lda_1, 5)\nprint('\\n')\ngenerate_recipe(lda_2, 5)\nprint('\\n')\ngenerate_recipe(lda_3, 5)",
"Interpreting the model\nYou can examine the top ingredients of each topic. Most topics look like recipes themselves; some collect products of a single kind, e.g. fresh fruit or different kinds of cheese.\nLet's try to empirically match our topics to national cuisines (cuisine). Build a matrix $A$ of size topics $x$ cuisines, whose elements $a_{tc}$ are the sums of $p(t|d)$ over all documents $d$ assigned to cuisine $c$. We normalize the matrix by the recipe counts of the different cuisines to avoid imbalance between them. The following function takes the model object, the corpus object, and the raw data, and returns the normalized matrix $A$. It is convenient to visualize it with seaborn.",
"import pandas\nimport seaborn\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\ndef compute_topic_cuisine_matrix(model, corpus, recipes):\n    # build the vector of target labels\n    targets = list(set([recipe[\"cuisine\"] for recipe in recipes]))\n    # build the matrix\n    tc_matrix = pandas.DataFrame(data=np.zeros((model.num_topics, len(targets))), columns=targets)\n    for recipe, bow in zip(recipes, corpus):\n        recipe_topic = model.get_document_topics(bow)\n        for t, prob in recipe_topic:\n            tc_matrix[recipe[\"cuisine\"]][t] += prob\n    # normalize the matrix\n    target_sums = pandas.DataFrame(data=np.zeros((1, len(targets))), columns=targets)\n    for recipe in recipes:\n        target_sums[recipe[\"cuisine\"]] += 1\n    return pandas.DataFrame(tc_matrix.values/target_sums.values, columns=tc_matrix.columns)\n\ndef plot_matrix(tc_matrix):\n    plt.figure(figsize=(10, 10))\n    seaborn.heatmap(tc_matrix, square=True)\n\n# Visualize the matrices\nplot_matrix(compute_topic_cuisine_matrix(lda_1, corpus, recipes))\n\nplot_matrix(compute_topic_cuisine_matrix(lda_2, corpus2, recipes))\n\nplot_matrix(compute_topic_cuisine_matrix(lda_3, corpus2, recipes))",
"The darker a square of the matrix, the stronger the link between that topic and the given cuisine. We can see that there are topics associated with several cuisines. Such topics show sets of ingredients popular in the cuisines of several nations, i.e. they point to the similarity of those cuisines. Some topics are spread evenly over all cuisines; they show sets of products commonly used in cooking in every country. \nIt's a pity the dataset has no recipe names, otherwise the topics would be easier to interpret...\nConclusion\nIn this assignment you built several LDA models, saw what the model's hyperparameters affect, and learned how the resulting model can be used."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/earthengine-api
|
python/examples/ipynb/Earth_Engine_asset_from_cloud_geotiff.ipynb
|
apache-2.0
|
[
"#@title Copyright 2020 Google LLC. { display-mode: \"form\" }\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"<table class=\"ee-notebook-buttons\" align=\"left\"><td>\n<a target=\"_blank\" href=\"http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_asset_from_cloud_geotiff.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /> Run in Google Colab</a>\n</td><td>\n<a target=\"_blank\" href=\"https://github.com/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_asset_from_cloud_geotiff.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /> View source on GitHub</a></td></table>\n\nCloud GeoTiff Backed Earth Engine Assets\nNote: The REST API contains new and advanced features that may not be suitable for all users. If you are new to Earth Engine, please get started with the JavaScript guide.\nEarth Engine can load images from Cloud Optimized GeoTiffs (COGs) in Google Cloud Storage (learn more). This notebook demonstrates how to create Earth Engine assets backed by COGs. An advantage of COG-backed assets is that the spatial and metadata fields of the image will be indexed at asset creation time, making the image more performant in collections. (In contrast, an image created through ee.Image.loadGeoTIFF and put into a collection will require a read of the GeoTiff for filtering operations on the collection.) A disadvantage of COG-backed assets is that they may be several times slower than standard assets when used in computations.\nTo create a COG-backed asset, make a POST request to the Earth Engine CreateAsset endpoint. As shown in the following, this request must be authorized to create an asset in your user folder.\nStart an authorized session\nTo be able to make an Earth Engine asset in your user folder, you need to be able to authenticate as you when you make the request. The Earth Engine Python authenticator can be leveraged as a client app that is able to pass your credentials along. 
Follow the instructions in the cell output to authenticate. (Note that this auth flow is not supported if this notebook is being run in playground mode; make a copy before proceeding.)\nFor more details, see this guide on obtaining credentials in this manner, this reference on the Flow module, this reference for the client secrets format, and oauth.py from the Earth Engine python library.",
"# This has details about the Earth Engine Python Authenticator client.\nfrom ee import oauth\nfrom google_auth_oauthlib.flow import Flow\nimport json\n\n# Build the `client_secrets.json` file by borrowing the\n# Earth Engine python authenticator.\nclient_secrets = {\n 'web': {\n 'client_id': oauth.CLIENT_ID,\n 'client_secret': oauth.CLIENT_SECRET,\n 'redirect_uris': [oauth.REDIRECT_URI],\n 'auth_uri': 'https://accounts.google.com/o/oauth2/auth',\n 'token_uri': 'https://accounts.google.com/o/oauth2/token'\n }\n}\n\n# Write to a json file.\nclient_secrets_file = 'client_secrets.json'\nwith open(client_secrets_file, 'w') as f:\n json.dump(client_secrets, f, indent=2)\n\n# Start the flow using the client_secrets.json file.\nflow = Flow.from_client_secrets_file(client_secrets_file,\n scopes=oauth.SCOPES,\n redirect_uri=oauth.REDIRECT_URI)\n\n# Get the authorization URL from the flow.\nauth_url, _ = flow.authorization_url(prompt='consent')\n\n# Print instructions to go to the authorization URL.\noauth._display_auth_instructions_with_print(auth_url)\nprint('\\n')\n\n# The user will get an authorization code.\n# This code is used to get the access token.\ncode = input('Enter the authorization code: \\n')\nflow.fetch_token(code=code)\n\n# Get an authorized session from the flow.\nsession = flow.authorized_session()",
"Request body\nThe request body is an instance of an EarthEngineAsset. This is where the path to the COG is specified, along with other useful properties. Note that the image is a small area exported from the composite made in this example script. See this doc for details on exporting a COG.\nEarth Engine will determine the bands, geometry, and other relevant information from the metadata of the TIFF. The only other fields that are accepted when creating a COG-backed asset are properties, start_time, and end_time.",
"# Request body as a dictionary.\nrequest = {\n 'type': 'IMAGE',\n 'gcs_location': {\n 'uris': ['gs://ee-docs-demos/COG_demo.tif']\n },\n 'properties': {\n 'source': 'https://code.earthengine.google.com/d541cf8b268b2f9d8f834c255698201d'\n },\n 'startTime': '2016-01-01T00:00:00.000000000Z',\n 'endTime': '2016-12-31T15:01:23.000000000Z',\n}\n\nfrom pprint import pprint\npprint(json.dumps(request))",
"Send the request\nMake the POST request to the Earth Engine CreateAsset endpoint.",
"# Where Earth Engine assets are kept.\nproject_folder = 'earthengine-legacy'\n# Your user folder name and new asset name.\nasset_id = 'users/user_folder_name/asset_name'\n\nurl = 'https://earthengine.googleapis.com/v1alpha/projects/{}/assets?assetId={}'\n\nresponse = session.post(\n url = url.format(project_folder, asset_id),\n data = json.dumps(request)\n)\n\npprint(json.loads(response.content))",
"Details on COG-backed assets\nPermissions\nThe ACLs of COG-backed Earth Engine assets and the underlying data are managed separately. If a COG-backed asset is shared in Earth Engine, it is the owner's responsibility to ensure that the data in GCS is shared with the same parties. If the data is not visible, Earth Engine will return an error of the form \"Failed to load the GeoTIFF at gs://my-bucket/my-object#123456\" (123456 is the generation of the object).\nGenerations\nWhen a COG-backed asset is created, Earth Engine reads the metadata of the TIFF in Cloud Storage and creates an asset store entry. The URI associated with that entry must have a generation. See the object versioning docs for details on generations. If a generation is specified (e.g., gs://foo/bar#123), Earth Engine will use it. If a generation is not specified, Earth Engine will use the latest generation of the object. \nThis means that if the object in GCS is updated, Earth Engine will return a \"Failed to load the GeoTIFF at gs://my-bucket/my-object#123456\" error because the expected object no longer exists (unless the bucket enables multiple object versions). This policy is designed to keep the metadata of the asset in sync with the metadata of the object. \nConfiguration\nIn terms of how a COG should be configured, the TIFF MUST be:\n\nTiled, where the tile dimensions are one of:\n16x16\n32x32\n64x64\n128x128\n256x256\n512x512\n1024x1024\n\nArranged so that all IFDs are at the beginning.\n\nFor best performance:\n\nUse tile dimensions of 128x128 or 256x256.\nInclude power of 2 overviews.\n\nSee this page for more details on an optimized configuration."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jbocharov-mids/W207-Machine-Learning
|
reference/Nearest_Neighbors.ipynb
|
apache-2.0
|
[
"Build and test a Nearest Neighbors classifier.\nLoad the relevant packages.",
"# This tells matplotlib not to try opening a new window for each plot.\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.datasets import load_iris",
"Load the Iris data to use for experiments. The data include 50 observations of each of 3 types of irises (150 total). Each observation includes 4 measurements: sepal and petal width and height. The goal is to predict the iris type from these measurements.\nhttp://en.wikipedia.org/wiki/Iris_flower_data_set",
"# Load the data, which is included in sklearn.\niris = load_iris()\nprint 'Iris target names:', iris.target_names\nprint 'Iris feature names:', iris.feature_names\nX, Y = iris.data, iris.target\n\n# Shuffle the data, but make sure that the features and accompanying labels stay in sync.\nnp.random.seed(0) # To ensure repeatability of results\nshuffle = np.random.permutation(np.arange(X.shape[0]))\nX, Y = X[shuffle], Y[shuffle]\n\n# Split into train and test.\ntrain_data, train_labels = X[:100], Y[:100]\ntest_data, test_labels = X[100:], Y[100:]",
"Create a distance function that returns the distance between 2 observations.",
"## Note: the assumption is len (v1) == len (v2)\ndef EuclideanDistance(v1, v2):\n sum = 0.0\n for index in range(len(v1)):\n sum += (v1[index] - v2[index]) ** 2\n return sum ** 0.5",
"This is just example code for computing the Euclidean distance. To write robust, production-quality code, you have to be prepared for situations where len(v1) != len(v2): missing data, bad data, wrong data types, etc. Make sure your functions are always prepared for such scenarios: as much as 50% of a data scientist's job is cleaning up the data. A great overview of \"what can go wrong with data\" is given, e.g., in this book: http://www.amazon.com/Bad-Data-Handbook-Cleaning-Back/dp/1449321887.\nJust for fun, let's compute all the pairwise distances in the training data and plot a histogram.",
"dists = []\nfor i in range(len(train_data) - 1):\n for j in range(i + 1, len(train_data)):\n dist = EuclideanDistance(train_data[i], train_data[j])\n dists.append(dist)\n \nfig = plt.hist(dists, 100) ## Play with different values of the parameter; see how the view changes",
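As a side note on the robustness point above, a defensive variant of the distance function might validate its inputs before computing anything. This is only a sketch in Python 3 syntax; the function name and the error message are mine, not part of the worksheet:

```python
import math

def euclidean_distance(v1, v2):
    # Refuse to compare vectors of different lengths instead of silently
    # truncating: a length mismatch usually indicates a data problem upstream.
    if len(v1) != len(v2):
        raise ValueError("length mismatch: %d vs %d" % (len(v1), len(v2)))
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

print(euclidean_distance([0.0, 0.0], [3.0, 4.0]))  # 5.0
```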
"Ok now let's create a class that implements a Nearest Neighbors classifier. We'll model it after the sklearn classifier implementations, with fit() and predict() methods.\nhttp://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier",
"class NearestNeighbors:\n # Initialize an instance of the class.\n def __init__(self, metric=EuclideanDistance):\n self.metric = metric\n \n # No training for Nearest Neighbors. Just store the data.\n def fit(self, train_data, train_labels):\n self.train_data = train_data\n self.train_labels = train_labels\n \n # Make predictions for each test example and return results.\n def predict(self, test_data):\n results = []\n for item in test_data:\n results.append(self._predict_item(item))\n return results\n \n # Private function for making a single prediction.\n def _predict_item(self, item):\n best_dist, best_label = 1.0e10, None\n for i in range(len(self.train_data)):\n dist = self.metric(self.train_data[i], item)\n if dist < best_dist:\n best_label = self.train_labels[i]\n best_dist = dist\n return best_label",
"Run an experiment with the classifier.",
"clf = NearestNeighbors()\nclf.fit(train_data, train_labels)\npreds = clf.predict(test_data)\n\ncorrect, total = 0, 0\nfor pred, label in zip(preds, test_labels):\n if pred == label: correct += 1\n total += 1\nprint 'total: %3d correct: %3d accuracy: %3.2f' %(total, correct, 1.0*correct/total)",
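The 1-NN classifier above extends naturally to k nearest neighbors with a majority vote. A possible sketch (Python 3 syntax; the class name and its defaults are mine, not part of the worksheet):

```python
from collections import Counter

def euclidean(v1, v2):
    return sum((a - b) ** 2 for a, b in zip(v1, v2)) ** 0.5

class KNearestNeighbors:
    def __init__(self, k=3, metric=euclidean):
        self.k = k
        self.metric = metric

    # As with 1-NN, "training" just stores the data.
    def fit(self, train_data, train_labels):
        self.train_data = list(train_data)
        self.train_labels = list(train_labels)

    def predict(self, test_data):
        return [self._predict_item(item) for item in test_data]

    def _predict_item(self, item):
        # Sort all training points by distance, then vote among the k nearest.
        dists = sorted((self.metric(x, item), label)
                       for x, label in zip(self.train_data, self.train_labels))
        top_labels = [label for _, label in dists[:self.k]]
        return Counter(top_labels).most_common(1)[0][0]

clf3 = KNearestNeighbors(k=3)
clf3.fit([[0], [1], [2], [10], [11], [12]], [0, 0, 0, 1, 1, 1])
print(clf3.predict([[1.5], [10.5]]))  # [0, 1]
```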
"Let's see what happens if we do not set the seed for the random number generator. When no seed is given, the RNG typically seeds itself from the current system time (UTC). Let us do that explicitly (change the second argument of the range() function before you run this loop):",
"X, Y = iris.data, iris.target\nimport time\nNow = time.time()\nprint long(Now.real * 100)\n\n# Now run the same code. Shuffle the data, but make sure that the features and accompanying labels stay in sync.\nfor i in range(0, 1):\n    myseed = long(Now.real) + i\n    np.random.seed(myseed)  # To ensure repeatability of results\n\n    shuffle = np.random.permutation(np.arange(X.shape[0]))\n    X, Y = X[shuffle], Y[shuffle]\n\n    # Split into train and test.\n    train_data, train_labels = X[:100], Y[:100]\n    test_data, test_labels = X[100:], Y[100:]\n\n    clf.fit(train_data, train_labels)\n    preds = clf.predict(test_data)\n\n    correct, total = 0, 0\n    for pred, label in zip(preds, test_labels):\n        if pred == label: correct += 1\n        total += 1\n    print 'seed: %ld total: %3d correct: %3d accuracy: %3.2f' % (myseed, total, correct, 1.0*correct/total)\n",
"The accuracy varies with the random seed. It is normal and expected. It is important to be aware of this phenomenon: shuffling does matter, and it does affect the classification accuracy. \nWhen you report findings with any ML methodology, or compare your results with someone else's, it is very important to keep it in mind and account for it.\nAs an optional homework, plot the histogram of accuracy after 100 iterations of the Nearest_Neighbors algorithm in this worksheet."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
midnightradio/gensim
|
docs/src/auto_examples/core/run_corpora_and_vector_spaces.ipynb
|
gpl-3.0
|
[
"%matplotlib inline",
"Corpora and Vector Spaces\nDemonstrates transforming text into a vector space representation.\nAlso introduces corpus streaming and persistence to disk in various formats.",
"import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)",
"First, let’s create a small corpus of nine short documents [1]_:\nFrom Strings to Vectors\nThis time, let's start from documents represented as strings:",
"documents = [\n \"Human machine interface for lab abc computer applications\",\n \"A survey of user opinion of computer system response time\",\n \"The EPS user interface management system\",\n \"System and human system engineering testing of EPS\",\n \"Relation of user perceived response time to error measurement\",\n \"The generation of random binary unordered trees\",\n \"The intersection graph of paths in trees\",\n \"Graph minors IV Widths of trees and well quasi ordering\",\n \"Graph minors A survey\",\n]",
"This is a tiny corpus of nine documents, each consisting of only a single sentence.\nFirst, let's tokenize the documents, remove common words (using a toy stoplist)\nas well as words that only appear once in the corpus:",
"from pprint import pprint # pretty-printer\nfrom collections import defaultdict\n\n# remove common words and tokenize\nstoplist = set('for a of the and to in'.split())\ntexts = [\n [word for word in document.lower().split() if word not in stoplist]\n for document in documents\n]\n\n# remove words that appear only once\nfrequency = defaultdict(int)\nfor text in texts:\n for token in text:\n frequency[token] += 1\n\ntexts = [\n [token for token in text if frequency[token] > 1]\n for text in texts\n]\n\npprint(texts)",
"Your way of processing the documents will likely vary; here, I only split on whitespace\nto tokenize, followed by lowercasing each word. In fact, I use this particular\n(simplistic and inefficient) setup to mimic the experiment done in Deerwester et al.'s\noriginal LSA article [1]_.\nThe ways to process documents are so varied and application- and language-dependent that I\ndecided to not constrain them by any interface. Instead, a document is represented\nby the features extracted from it, not by its \"surface\" string form: how you get to\nthe features is up to you. Below I describe one common, general-purpose approach (called\n:dfn:bag-of-words), but keep in mind that different application domains call for\ndifferent features, and, as always, it's garbage in, garbage out <http://en.wikipedia.org/wiki/Garbage_In,_Garbage_Out>_...\nTo convert documents to vectors, we'll use a document representation called\nbag-of-words <http://en.wikipedia.org/wiki/Bag_of_words>_. In this representation,\neach document is represented by one vector where each vector element represents\na question-answer pair, in the style of:\n\nQuestion: How many times does the word system appear in the document?\nAnswer: Once.\n\nIt is advantageous to represent the questions only by their (integer) ids. The mapping\nbetween the questions and ids is called a dictionary:",
"from gensim import corpora\ndictionary = corpora.Dictionary(texts)\ndictionary.save('/tmp/deerwester.dict') # store the dictionary, for future reference\nprint(dictionary)",
"Here we assigned a unique integer id to all words appearing in the corpus with the\n:class:gensim.corpora.dictionary.Dictionary class. This sweeps across the texts, collecting word counts\nand relevant statistics. In the end, we see there are twelve distinct words in the\nprocessed corpus, which means each document will be represented by twelve numbers (i.e., by a 12-D vector).\nTo see the mapping between words and their ids:",
"print(dictionary.token2id)",
"To actually convert tokenized documents to vectors:",
"new_doc = \"Human computer interaction\"\nnew_vec = dictionary.doc2bow(new_doc.lower().split())\nprint(new_vec) # the word \"interaction\" does not appear in the dictionary and is ignored",
"The function :func:doc2bow simply counts the number of occurrences of\neach distinct word, converts the word to its integer word id\nand returns the result as a sparse vector. The sparse vector [(0, 1), (1, 1)]\ntherefore reads: in the document \"Human computer interaction\", the words computer\n(id 0) and human (id 1) appear once; the other ten dictionary words appear (implicitly) zero times.",
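The counting behaviour just described is easy to mimic in plain Python. The sketch below is not gensim's implementation, just an illustration of the contract (the helper name is hypothetical):

```python
def doc2bow_sketch(tokens, token2id):
    # Count occurrences of known tokens; tokens absent from the dictionary
    # are silently ignored, mirroring doc2bow's behaviour.
    counts = {}
    for token in tokens:
        if token in token2id:
            token_id = token2id[token]
            counts[token_id] = counts.get(token_id, 0) + 1
    # Return a sparse vector: sorted (id, count) pairs, zeros left implicit.
    return sorted(counts.items())

toy_ids = {"computer": 0, "human": 1, "interface": 2}
print(doc2bow_sketch("human computer interaction".lower().split(), toy_ids))
# [(0, 1), (1, 1)] -- "interaction" is not in the dictionary
```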
"corpus = [dictionary.doc2bow(text) for text in texts]\ncorpora.MmCorpus.serialize('/tmp/deerwester.mm', corpus) # store to disk, for later use\nprint(corpus)",
"By now it should be clear that the vector feature with id=10 stands for the question \"How many\ntimes does the word graph appear in the document?\" and that the answer is \"zero\" for\nthe first six documents and \"one\" for the remaining three.\nCorpus Streaming -- One Document at a Time\nNote that corpus above resides fully in memory, as a plain Python list.\nIn this simple example, it doesn't matter much, but just to make things clear,\nlet's assume there are millions of documents in the corpus. Storing all of them in RAM won't do.\nInstead, let's assume the documents are stored in a file on disk, one document per line. Gensim\nonly requires that a corpus must be able to return one document vector at a time:",
"from smart_open import open # for transparently opening remote files\n\n\nclass MyCorpus:\n def __iter__(self):\n for line in open('https://radimrehurek.com/gensim/mycorpus.txt'):\n # assume there's one document per line, tokens separated by whitespace\n yield dictionary.doc2bow(line.lower().split())",
"The full power of Gensim comes from the fact that a corpus doesn't have to be\na list, or a NumPy array, or a Pandas dataframe, or whatever.\nGensim accepts any object that, when iterated over, successively yields\ndocuments.",
"# This flexibility allows you to create your own corpus classes that stream the\n# documents directly from disk, network, database, dataframes... The models\n# in Gensim are implemented such that they don't require all vectors to reside\n# in RAM at once. You can even create the documents on the fly!",
"Download the sample mycorpus.txt file here <./mycorpus.txt>_. The assumption that\neach document occupies one line in a single file is not important; you can mold\nthe __iter__ function to fit your input format, whatever it is.\nWalking directories, parsing XML, accessing the network...\nJust parse your input to retrieve a clean list of tokens in each document,\nthen convert the tokens via a dictionary to their ids and yield the resulting sparse vector inside __iter__.",
"corpus_memory_friendly = MyCorpus() # doesn't load the corpus into memory!\nprint(corpus_memory_friendly)",
"Corpus is now an object. We didn't define any way to print it, so print just outputs address\nof the object in memory. Not very useful. To see the constituent vectors, let's\niterate over the corpus and print each document vector (one at a time):",
"for vector in corpus_memory_friendly: # load one vector into memory at a time\n print(vector)",
"Although the output is the same as for the plain Python list, the corpus is now much\nmore memory friendly, because at most one vector resides in RAM at a time. Your\ncorpus can now be as large as you want.\nSimilarly, to construct the dictionary without loading all texts into memory:",
"# collect statistics about all tokens\ndictionary = corpora.Dictionary(line.lower().split() for line in open('https://radimrehurek.com/gensim/mycorpus.txt'))\n# remove stop words and words that appear only once\nstop_ids = [\n dictionary.token2id[stopword]\n for stopword in stoplist\n if stopword in dictionary.token2id\n]\nonce_ids = [tokenid for tokenid, docfreq in dictionary.dfs.items() if docfreq == 1]\ndictionary.filter_tokens(stop_ids + once_ids) # remove stop words and words that appear only once\ndictionary.compactify() # remove gaps in id sequence after words that were removed\nprint(dictionary)",
"And that is all there is to it! At least as far as bag-of-words representation is concerned.\nOf course, what we do with such a corpus is another question; it is not at all clear\nhow counting the frequency of distinct words could be useful. As it turns out, it isn't, and\nwe will need to apply a transformation on this simple representation first, before\nwe can use it to compute any meaningful document vs. document similarities.\nTransformations are covered in the next tutorial\n(sphx_glr_auto_examples_core_run_topics_and_transformations.py),\nbut before that, let's briefly turn our attention to corpus persistency.\nCorpus Formats\nThere exist several file formats for serializing a Vector Space corpus (~sequence of vectors) to disk.\nGensim implements them via the streaming corpus interface mentioned earlier:\ndocuments are read from (resp. stored to) disk in a lazy fashion, one document at\na time, without the whole corpus being read into main memory at once.\nOne of the more notable file formats is the Market Matrix format <http://math.nist.gov/MatrixMarket/formats.html>_.\nTo save a corpus in the Matrix Market format:\ncreate a toy corpus of 2 documents, as a plain Python list",
"corpus = [[(1, 0.5)], []] # make one document empty, for the heck of it\n\ncorpora.MmCorpus.serialize('/tmp/corpus.mm', corpus)",
"Other formats include Joachim's SVMlight format <http://svmlight.joachims.org/>,\nBlei's LDA-C format <http://www.cs.princeton.edu/~blei/lda-c/> and\nGibbsLDA++ format <http://gibbslda.sourceforge.net/>_.",
"corpora.SvmLightCorpus.serialize('/tmp/corpus.svmlight', corpus)\ncorpora.BleiCorpus.serialize('/tmp/corpus.lda-c', corpus)\ncorpora.LowCorpus.serialize('/tmp/corpus.low', corpus)",
"Conversely, to load a corpus iterator from a Matrix Market file:",
"corpus = corpora.MmCorpus('/tmp/corpus.mm')",
"Corpus objects are streams, so typically you won't be able to print them directly:",
"print(corpus)",
"Instead, to view the contents of a corpus:",
"# one way of printing a corpus: load it entirely into memory\nprint(list(corpus)) # calling list() will convert any sequence to a plain Python list",
"or",
"# another way of doing it: print one document at a time, making use of the streaming interface\nfor doc in corpus:\n print(doc)",
"The second way is obviously more memory-friendly, but for testing and development\npurposes, nothing beats the simplicity of calling list(corpus).\nTo save the same Matrix Market document stream in Blei's LDA-C format,",
"corpora.BleiCorpus.serialize('/tmp/corpus.lda-c', corpus)",
"In this way, gensim can also be used as a memory-efficient I/O format conversion tool:\njust load a document stream using one format and immediately save it in another format.\nAdding new formats is dead easy, check out the code for the SVMlight corpus\n<https://github.com/piskvorky/gensim/blob/develop/gensim/corpora/svmlightcorpus.py>_ for an example.\nCompatibility with NumPy and SciPy\nGensim also contains efficient utility functions <http://radimrehurek.com/gensim/matutils.html>_\nto help converting from/to numpy matrices",
"import gensim\nimport numpy as np\nnumpy_matrix = np.random.randint(10, size=[5, 2]) # random matrix as an example\ncorpus = gensim.matutils.Dense2Corpus(numpy_matrix)\n# numpy_matrix = gensim.matutils.corpus2dense(corpus, num_terms=number_of_corpus_features)",
"and from/to scipy.sparse matrices",
"import scipy.sparse\nscipy_sparse_matrix = scipy.sparse.random(5, 2) # random sparse matrix as example\ncorpus = gensim.matutils.Sparse2Corpus(scipy_sparse_matrix)\nscipy_csc_matrix = gensim.matutils.corpus2csc(corpus)",
"What Next\nRead about sphx_glr_auto_examples_core_run_topics_and_transformations.py.\nReferences\nFor a complete reference (Want to prune the dictionary to a smaller size?\nOptimize converting between corpora and NumPy/SciPy arrays?), see the apiref.\n.. [1] This is the same corpus as used in\n Deerwester et al. (1990): Indexing by Latent Semantic Analysis <http://www.cs.bham.ac.uk/~pxt/IDA/lsa_ind.pdf>_, Table 2.",
"import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimg = mpimg.imread('run_corpora_and_vector_spaces.png')\nimgplot = plt.imshow(img)\n_ = plt.axis('off')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rsignell-usgs/notebook
|
HOPS/.ipynb_checkpoints/hops2cf-checkpoint.ipynb
|
mit
|
[
"The problem: CF-compliant readers cannot read the HOPS dataset directly.\nThe solution: read with the netCDF4-python raw interface and create a CF object from the data.\nNOTE: Ideally this should be an nco script that could be run as a CLI tool and fix the files.\nHere I am using Python+iris. That works and could be written as a CLI script too.\nThe main advantage is that it takes care of the CF boilerplate.\nHowever, this approach is too \"heavy-weight\" to be applied to many variables and files.",
"from netCDF4 import Dataset\n\nurl = ('http://geoport.whoi.edu/thredds/dodsC/usgs/data2/rsignell/gdrive/'\n 'nsf-alpha/Data/MIT_MSEAS/MSEAS_Tides_20160317/mseas_tides_2015071612_2015081612_01h.nc')\n\nnc = Dataset(url)",
"Extract lon, lat variables from vgrid2 and u, v variables from vbaro.\nThe goal is to split the joint variables into individual CF compliant phenomena.",
"vtime = nc['time']\ncoords = nc['vgrid2']\nvbaro = nc['vbaro']",
"Using iris to create the CF object.\nNOTE: ideally lon, lat should be DimCoord like time and not AuxCoord,\nbut iris refuses to create 2D DimCoord. Not sure if CF enforces that though.\nFirst the Coordinates.\nFIXME: change to a full time slice later!",
"itime = -1\n\nimport iris\niris.FUTURE.netcdf_no_unlimited = True\n\ntime = iris.coords.DimCoord(vtime[itime],\n                            var_name='time',\n                            long_name=vtime.long_name,\n                            standard_name='time',\n                            units=vtime.units)\n\nlongitude = iris.coords.AuxCoord(coords[:, :, 0],\n                                 var_name='vlon',\n                                 standard_name='longitude',\n                                 units='degrees')\n\nlatitude = iris.coords.AuxCoord(coords[:, :, 1],\n                                var_name='vlat',\n                                standard_name='latitude',\n                                units='degrees')",
"Now the phenomena.\nNOTE: You don't need the broadcast_to trick if saving more than 1 time step.\nHere I just wanted the single time snapshot to have the time dimension to create a full example.",
"import numpy as np\n\nu = vbaro[itime, :, :, 0]\nu_cube = iris.cube.Cube(np.broadcast_to(u, (1,) + u.shape),\n units=vbaro.units,\n long_name=vbaro.long_name,\n var_name='u',\n standard_name='barotropic_eastward_sea_water_velocity',\n dim_coords_and_dims=[(time, 0)],\n aux_coords_and_dims=[(latitude, (1, 2)),\n (longitude, (1, 2))])\n\nv = vbaro[itime, :, :, 1]\nv_cube = iris.cube.Cube(np.broadcast_to(v, (1,) + v.shape),\n units=vbaro.units,\n long_name=vbaro.long_name,\n var_name='v',\n standard_name='barotropic_northward_sea_water_velocity',\n dim_coords_and_dims=[(time, 0)],\n aux_coords_and_dims=[(latitude, (1, 2)),\n (longitude, (1, 2))])",
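The broadcast_to trick used above prepends a length-1 time axis without copying the data. A minimal NumPy illustration (toy array, unrelated to the HOPS grid):

```python
import numpy as np

# A 2-D field standing in for one time snapshot on a small grid.
u = np.arange(6.0).reshape(2, 3)

# Prepend a length-1 "time" axis; this creates a read-only view, not a copy.
u_with_time = np.broadcast_to(u, (1,) + u.shape)

print(u_with_time.shape)                 # (1, 2, 3)
print(np.shares_memory(u_with_time, u))  # True
```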
"Join the individual CF phenomena into one dataset.",
"cubes = iris.cube.CubeList([u_cube, v_cube])",
"Save the CF-compliant file!",
"iris.save(cubes, 'hops.nc')\n\n!ncdump -h hops.nc"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/jax
|
docs/notebooks/neural_network_with_tfds_data.ipynb
|
apache-2.0
|
[
"Copyright 2018 Google LLC.\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nTraining a Simple Neural Network, with tensorflow/datasets Data Loading\n\nForked from neural_network_and_data_loading.ipynb\n\nLet's combine everything we showed in the quickstart notebook to train a simple neural network. We will first specify and train a simple MLP on MNIST using JAX for the computation. We will use tensorflow/datasets data loading API to load images and labels (because it's pretty great, and the world doesn't need yet another data loading library :P).\nOf course, you can use JAX with any API that is compatible with NumPy to make specifying the model a bit more plug-and-play. Here, just for explanatory purposes, we won't use any neural network libraries or special APIs for building our model.",
"import jax.numpy as jnp\nfrom jax import grad, jit, vmap\nfrom jax import random",
"Hyperparameters\nLet's get a few bookkeeping items out of the way.",
"# A helper function to randomly initialize weights and biases\n# for a dense neural network layer\ndef random_layer_params(m, n, key, scale=1e-2):\n w_key, b_key = random.split(key)\n return scale * random.normal(w_key, (n, m)), scale * random.normal(b_key, (n,))\n\n# Initialize all layers for a fully-connected neural network with sizes \"sizes\"\ndef init_network_params(sizes, key):\n keys = random.split(key, len(sizes))\n return [random_layer_params(m, n, k) for m, n, k in zip(sizes[:-1], sizes[1:], keys)]\n\nlayer_sizes = [784, 512, 512, 10]\nstep_size = 0.01\nnum_epochs = 10\nbatch_size = 128\nn_targets = 10\nparams = init_network_params(layer_sizes, random.PRNGKey(0))",
"Auto-batching predictions\nLet us first define our prediction function. Note that we're defining this for a single image example. We're going to use JAX's vmap function to automatically handle mini-batches, with no performance penalty.",
"from jax.scipy.special import logsumexp\n\ndef relu(x):\n return jnp.maximum(0, x)\n\ndef predict(params, image):\n # per-example predictions\n activations = image\n for w, b in params[:-1]:\n outputs = jnp.dot(w, activations) + b\n activations = relu(outputs)\n \n final_w, final_b = params[-1]\n logits = jnp.dot(final_w, activations) + final_b\n return logits - logsumexp(logits)",
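Subtracting logsumexp from the logits is the log-softmax, so predict returns log-probabilities. A plain-NumPy sketch (independent of JAX) showing that exponentiating such an output yields a proper probability distribution:

```python
import numpy as np

def log_softmax(logits):
    # Shift by the max for numerical stability, then normalise in log space.
    shifted = logits - np.max(logits)
    return shifted - np.log(np.sum(np.exp(shifted)))

log_probs = log_softmax(np.array([2.0, 1.0, 0.1]))
probs = np.exp(log_probs)
print(round(float(probs.sum()), 6))  # 1.0
```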
"Let's check that our prediction function only works on single images.",
"# This works on single examples\nrandom_flattened_image = random.normal(random.PRNGKey(1), (28 * 28,))\npreds = predict(params, random_flattened_image)\nprint(preds.shape)\n\n# Doesn't work with a batch\nrandom_flattened_images = random.normal(random.PRNGKey(1), (10, 28 * 28))\ntry:\n preds = predict(params, random_flattened_images)\nexcept TypeError:\n print('Invalid shapes!')\n\n# Let's upgrade it to handle batches using `vmap`\n\n# Make a batched version of the `predict` function\nbatched_predict = vmap(predict, in_axes=(None, 0))\n\n# `batched_predict` has the same call signature as `predict`\nbatched_preds = batched_predict(params, random_flattened_images)\nprint(batched_preds.shape)",
"At this point, we have all the ingredients we need to define our neural network and train it. We've built an auto-batched version of predict, which we should be able to use in a loss function. We should be able to use grad to take the derivative of the loss with respect to the neural network parameters. Last, we should be able to use jit to speed up everything.\nUtility and loss functions",
"def one_hot(x, k, dtype=jnp.float32):\n \"\"\"Create a one-hot encoding of x of size k.\"\"\"\n return jnp.array(x[:, None] == jnp.arange(k), dtype)\n \ndef accuracy(params, images, targets):\n target_class = jnp.argmax(targets, axis=1)\n predicted_class = jnp.argmax(batched_predict(params, images), axis=1)\n return jnp.mean(predicted_class == target_class)\n\ndef loss(params, images, targets):\n preds = batched_predict(params, images)\n return -jnp.mean(preds * targets)\n\n@jit\ndef update(params, x, y):\n grads = grad(loss)(params, x, y)\n return [(w - step_size * dw, b - step_size * db)\n for (w, b), (dw, db) in zip(params, grads)]",
"Data Loading with tensorflow/datasets\nJAX is laser-focused on program transformations and accelerator-backed NumPy, so we don't include data loading or munging in the JAX library. There are already a lot of great data loaders out there, so let's just use them instead of reinventing anything. We'll use the tensorflow/datasets data loader.",
"import tensorflow as tf\n# Ensure TF does not see GPU and grab all GPU memory.\ntf.config.set_visible_devices([], device_type='GPU')\n\nimport tensorflow_datasets as tfds\n\ndata_dir = '/tmp/tfds'\n\n# Fetch full datasets for evaluation\n# tfds.load returns tf.Tensors (or tf.data.Datasets if batch_size != -1)\n# You can convert them to NumPy arrays (or iterables of NumPy arrays) with tfds.dataset_as_numpy\nmnist_data, info = tfds.load(name=\"mnist\", batch_size=-1, data_dir=data_dir, with_info=True)\nmnist_data = tfds.as_numpy(mnist_data)\ntrain_data, test_data = mnist_data['train'], mnist_data['test']\nnum_labels = info.features['label'].num_classes\nh, w, c = info.features['image'].shape\nnum_pixels = h * w * c\n\n# Full train set\ntrain_images, train_labels = train_data['image'], train_data['label']\ntrain_images = jnp.reshape(train_images, (len(train_images), num_pixels))\ntrain_labels = one_hot(train_labels, num_labels)\n\n# Full test set\ntest_images, test_labels = test_data['image'], test_data['label']\ntest_images = jnp.reshape(test_images, (len(test_images), num_pixels))\ntest_labels = one_hot(test_labels, num_labels)\n\nprint('Train:', train_images.shape, train_labels.shape)\nprint('Test:', test_images.shape, test_labels.shape)",
"Training Loop",
"import time\n\ndef get_train_batches():\n # as_supervised=True gives us the (image, label) as a tuple instead of a dict\n ds = tfds.load(name='mnist', split='train', as_supervised=True, data_dir=data_dir)\n # You can build up an arbitrary tf.data input pipeline\n ds = ds.batch(batch_size).prefetch(1)\n # tfds.dataset_as_numpy converts the tf.data.Dataset into an iterable of NumPy arrays\n return tfds.as_numpy(ds)\n\nfor epoch in range(num_epochs):\n start_time = time.time()\n for x, y in get_train_batches():\n x = jnp.reshape(x, (len(x), num_pixels))\n y = one_hot(y, num_labels)\n params = update(params, x, y)\n epoch_time = time.time() - start_time\n\n train_acc = accuracy(params, train_images, train_labels)\n test_acc = accuracy(params, test_images, test_labels)\n print(\"Epoch {} in {:0.2f} sec\".format(epoch, epoch_time))\n print(\"Training set accuracy {}\".format(train_acc))\n print(\"Test set accuracy {}\".format(test_acc))",
"We've now used the whole of the JAX API: grad for derivatives, jit for speedups and vmap for auto-vectorization.\nWe used NumPy to specify all of our computation, and borrowed the great data loaders from tensorflow/datasets, and ran the whole thing on the GPU."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
msalvaris/gpu_monitor
|
examples/notebooks/FileLoggerExample.ipynb
|
mit
|
[
"File Logger Example\nThis notebook is a small demo of how to use gpumon in Jupyter notebooks and some convenience methods for working with GPUs",
"from gpumon.file import log_context",
"device_name and device_count will return the name of the GPU found on the system and the number of GPUs",
"from gpumon import device_name, device_count\n\ndevice_count() # Returns the number of GPUs available\n\ndevice_name() # Returns the type of GPU available\n\nfrom bokeh.io import output_notebook, show\nimport time\n\noutput_notebook()",
"Here we are simply going to tell the log context to record GPU measurements to the file test_gpu.txt",
"with log_context('test_gpu.txt') as log:\n time.sleep(10)",
"We can then cat the file to see what we recorded in the 10 seconds",
"!cat test_gpu.txt",
"By calling the log object we get all the data returned to us in a dataframe",
"df = log()\n\ndf",
"We can also call plot on the log object to plot the measurements we want",
"p = log.plot(gpu_measurement='pwr', num_gpus=4)\n\nshow(p)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JavierVLAB/DataAnalysisScience
|
Eleciones/Elections.ipynb
|
gpl-3.0
|
[
"<h1>Spanish Elections 2015/2016</h1>\n\nData of the elections of Spain in 2015 and 2016.<br>\nThe data is from http://elecciones.elperiodico.com/resultados/generales/2016/espana/",
"import pandas\n\n\nraw_elections_2015 = pandas.read_csv(\"test2015.csv\",delimiter='\\t')\nraw_elections_2016 = pandas.read_csv(\"test2016.csv\",delimiter='\\t')\n\ndef find_PODEMOS(words):\n if 'PODEMOS' in words:\n return 'PODEMOS'\n if 'EN COMU' == words:\n return 'PODEMOS'\n return words\n\nelections_2015 = raw_elections_2015\nelections_2016 = raw_elections_2016\n\nelections_2015['Partido'] = elections_2015['Partido'].apply(find_PODEMOS)\nelections_2016['Partido'] = elections_2016['Partido'].apply(find_PODEMOS)\n\nprint(elections_2015.columns)\n\n#The data is in spanish format (10,0 and 1.000) \ndef replace_dots(word):\n return word.replace('.','')\ndef replace_commas(word):\n return word.replace(',','.')\n\nelections_2015['Votos'] = elections_2015['Votos'].apply(replace_dots) \nelections_2015['%'] = elections_2015['%'].apply(replace_commas) \nelections_2016['Votos'] = elections_2016['Votos'].apply(replace_dots) \nelections_2016['%'] = elections_2016['%'].apply(replace_commas) \n\nelections_2015.info()",
"<p>All the data are strings</p>",
"\nelections_2015['Votos'] = pandas.to_numeric(elections_2015['Votos'], errors='coerce')\nelections_2015['Escannos'] = pandas.to_numeric(elections_2015['Escannos'], errors='coerce')\nelections_2015['%'] = pandas.to_numeric(elections_2015['%'], errors='coerce')\nelections_2016['Votos'] = pandas.to_numeric(elections_2016['Votos'], errors='coerce')\nelections_2016['Escannos'] = pandas.to_numeric(elections_2016['Escannos'], errors='coerce')\nelections_2016['%'] = pandas.to_numeric(elections_2016['%'], errors='coerce')\nprint(elections_2015.head())\nprint(elections_2016.head())\n\nelections_2015.describe()\n\nelections_2016 = elections_2016.groupby('Partido',as_index=False).sum()\nelections_2015 = elections_2015.groupby('Partido',as_index=False).sum()\n\nelections_2015.columns=['Partido','Votos 2015','% 2015', 'Escanos 2015']\nelections_2016.columns=['Partido','Votos 2016','% 2016', 'Escanos 2016']\n\nbelections = pandas.merge(elections_2015, elections_2016, on='Partido',how='outer')\nelections = pandas.merge(elections_2015, elections_2016, on='Partido',how='inner')\n\nelections.sort_values('Votos 2015', ascending = False)\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\nsns.set_style(\"whitegrid\")\nsns.set_style({\"grid.color\": \"1.\"})\nsns.set_context(\"notebook\", font_scale=1.5)\n\nelection_filtred = elections[elections['% 2015'] > 1.01]\n\nelection_filtred = election_filtred.sort_values('% 2016', ascending=False)\ncolumns = election_filtred['Partido']\n\n# Dataframe transposition for graph\ndict_pre = {}\nfor i in range(election_filtred.shape[0]):\n    dict_pre[election_filtred.iloc[i,0]] = [election_filtred.iloc[i,1], election_filtred.iloc[i,4]]\n\nelection_graph = pandas.DataFrame(dict_pre, index=[2015,2016])\nelection_graph.columns = columns\nelection_graph.plot(colormap='Paired')\n",
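The point of to_numeric's errors='coerce' option is that unparseable strings become NaN instead of raising, so a few bad rows don't abort the whole conversion. A small demonstration on toy data (not the election files):

```python
import math
import pandas as pd

s = pd.Series(['1000', '12.5', 'n/a'])
numeric = pd.to_numeric(s, errors='coerce')
print(numeric.tolist())
print(math.isnan(numeric.iloc[2]))  # True
```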
"<h3>Despite all the corruption cases from the PP, only this growh their votes in 2016</h3>",
"#elections.to_csv(\"elections_clean.csv\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
risantos/schoolwork
|
Física Computacional/Ficha 3.ipynb
|
mit
|
[
"Departamento de Física - Faculdade de Ciências e Tecnologia da Universidade de Coimbra\nFísica Computacional - Ficha 3 - Integração e Diferenciação Numérica\nRafael Isaque Santos - 2012144694 - Licenciatura em Física",
"from numpy import sin, cos, tan, pi, e, exp, log, copy, linspace\nfrom numpy.polynomial.legendre import leggauss\n\nn_list = [2, 4, 8, 10, 20, 30, 50, 100]",
"1 - Cálculo do integral $\\int _{0}^{\\pi} e^{x} \\cos(x) \\; dx$",
"f1 = lambda x: exp(x) * cos(x)\nf1_sol = -(exp(pi) + 1) / 2\n\ntrapezios_simples = lambda f, a, b: (b-a)/2 * (f(a) + f(b))\nsimpson13_simples = lambda f, a, b: ((b-a)/3) * (f(a) + 4*f((a + b)/2) + f(b))\nsimpson38_simples = lambda f, a, b: (3/8)*(b-a) * (f(a) + 3*f((2*a + b)/3) + 3*f((a + 2*b)/3) + f(b))\n\ndef trapezios_composta(f, a, b, n):\n h = (b-a)/n\n xi = a\n s_int = 0\n for i in range(n):\n s_int += f(xi) + f(xi+h)\n xi += h\n s_int *= h/2\n return s_int\n\n\ndef simpson13_composta(f, a, b, n):\n h = (b-a)/n\n x = linspace(a, b, n+1)\n s_int = 0\n for i in range(0, n, 2):\n s_int += f(x[i]) + 4*f(x[i+1]) + f(x[i+2])\n s_int *= h/3\n return s_int\n\nfrom sympy import oo # símbolo 'infinito'\n\ndef gausslegendre(f, a, b, x_pts, w_pts):\n x_gl = copy(x_pts)\n w_gl = copy(w_pts)\n def gl_sum(f, x_list, w_list):\n s_int = 0\n for x, w in zip(x_list, w_list):\n s_int += w * f(x)\n return s_int\n if (a == -1 and b == 1): return gl_sum(f, x_gl, w_gl)\n elif (a == 0 and b == oo):\n x_inf = list(map(lambda x: tan( pi/4 * (1+x)), copy(x_pts)))\n w_inf = list(map(lambda w, x: pi/4 * w/(cos(pi/4 * (1+x)))**2, copy(w_pts), copy(x_pts)))\n return gl_sum(f, x_inf, w_inf)\n\n else:\n h = (b-a)/2\n xi = list(map(lambda x: h * (x + 1) + a, x_gl))\n return h * gl_sum(f, xi, w_gl)\n\ndef erro_rel(est, real):\n if real == 0: return abs((est-real)/(est+real)) * 100\n else: return abs((est-real)/real) * 100\n\ndef aval_simples(f, a, b, real_value):\n print('Utilizando os métodos:')\n trap_si = trapezios_simples(f, a, b)\n print('Trapézio Simples: ' + str(trap_si) + ' Erro Relativo: ' + str(erro_rel(trap_si, real_value)) + ' %')\n\n simps13_si = simpson13_simples(f, a, b)\n print('Simpson 1/3 Simples: ' + str(simps13_si) + ' Erro Relativo: ' + str(erro_rel(simps13_si, real_value)) + ' %')\n\n simps38_si = simpson38_simples(f, a, b)\n print('Simpson 3/8 Simples: ' + str(simps38_si) + ' Erro Relativo: ' + str(erro_rel(simps38_si, real_value)) + ' %')\n\n\ndef aval_composta(f, a, b, 
n, x_n, w_n, real_value):\n print('Utilizando os métodos: [N = ' + str(n) + '] \\n')\n\n trap_c = trapezios_composta(f, a, b, n)\n print('Trapézios Composta: ' + str(trap_c) + ' Erro Relativo: ' + str(erro_rel(trap_c, real_value)))\n\n simp_13_c = simpson13_composta(f, a, b, n)\n print('Simpson Composta: ' + str(simp_13_c) + ' Erro Relativo: ' + str(erro_rel(simp_13_c, real_value)))\n\n gaule_ab = gausslegendre(f, a, b, x_n, w_n)\n print('Gauss-Legendre: ' + str(gaule_ab) + ' Erro Relativo: ' + str(erro_rel(gaule_ab, real_value)))\n print('\\n')",
"Integrando $e^{x} \\cos (x)$",
"aval_simples(f1, 0, pi, f1_sol)\n\nfor n in n_list:\n x_i, w_i = leggauss(n)\n aval_composta(f1, 0, pi, n, x_i, w_i, f1_sol)",
"Derivar $e^{x} \\sin(x) + e^{-x} \\cos(x)$ nos pontos:\nx = 0, $\\frac{\\pi}{4}$, $\\frac{\\pi}{2}$, $\\frac{3\\pi}{4}$ e $\\pi$.\nUtilizando as fórmulas a 2, 3 e 5 pontos.\ncom passos h = 0.1, 0.05, 0.01",
"f2 = lambda x: exp(x)*sin(x) + exp(-x)*cos(x)\nf2_sol = lambda x: (exp(x)-exp(-x)) * (sin(x) + cos(x))\nx_2 = [0, pi/4, pi/2, 3*pi/4, pi]\nh_2 = [0.1, 0.05, 0.01]\n\ndf_2pts = lambda f, x, h: (f(x+h) - f(x)) / h\ndf_3pts = lambda f, x, h: (-f(x + 2*h) + 4*f(x+h) - 3*f(x)) / (2*h)\ndf_5pts = lambda f, x, h: (-3*f(x+4*h) + 16*f(x+3*h) - 36*f(x+2*h) + 48*f(x+h) - 25*f(x)) / (12*h)\n\nfor x in x_2:\n print('\\nDerivada de f(' + str(x) + ') :' )\n d_sol = f2_sol(x)\n print('Valor real = ' + str(d_sol))\n for h in h_2:\n print('com passo \\'h\\' = ' + str(h) + ' :')\n d2r = df_2pts(f2, x, h)\n print('Fórmula a 2 pontos: ' + str(d2r) + ' Erro relativo: ' + str(erro_rel(d2r, d_sol)))\n d3r = df_3pts(f2, x, h)\n print('Fórmula a 3 pontos: ' + str(d3r) + ' Erro relativo: ' + str(erro_rel(d3r, d_sol)))\n d5r = df_5pts(f2, x, h)\n print('Fórmula a 5 pontos: ' + str(d5r) + ' Erro relativo: ' + str(erro_rel(d5r, d_sol)))",
"Calcular o integral:\n$\\int _{0}^{\\infty} \\frac{x dx}{(1+x)^{4}}$",
"xi, wi = leggauss(100)\n\nf3 = lambda x: x / (1+x)**4\ngausslegendre(f3, 0, oo, xi, wi)",
"Usando as transformações\n$x = \\frac{y}{1-y}$\n$x = \\tan \\left[ \\frac{\\pi}{4} (1+y) \\right]$\nIntegral Duplo de\n$\\int {0}^{1} \\left( \\int {-\\sqrt{1-y^{2}}} ^{\\sqrt{1-y^{2}}} \\, dx \\right) dy$",
"gausslegendre((lambda y: gausslegendre((lambda x: 1), -(1-y**2)**(1/2), (1-y**2)**(1/2) , xi, wi)), 0, 1, xi, wi)",
"Integral Duplo de\n$\\int {0}^{1} \\left( \\int {-\\sqrt{1-y^{2}}} ^{\\sqrt{1-y^{2}}} e^{-xy} \\, dx \\right) dy$",
"gausslegendre((lambda y: gausslegendre(lambda x: e**(-x*y), -(1-y**2)**(1/2), (1-y**2)**(1/2), xi, wi)), 0, 1, xi, wi)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
woobe/odsc_h2o_machine_learning
|
py_03b_regression_grid_search.ipynb
|
apache-2.0
|
[
"Machine Learning with H2O - Tutorial 3b: Regression Models (Grid Search)\n<hr>\n\nObjective:\n\nThis tutorial explains how to fine-tune regression models for better out-of-bag performance.\n\n<hr>\n\nWine Quality Dataset:\n\nSource: https://archive.ics.uci.edu/ml/datasets/Wine+Quality\nCSV (https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv)\n\n<hr>\n\nSteps:\n\nGBM with default settings\nGBM with manual settings\nGBM with manual settings & cross-validation\nGBM with manual settings, cross-validation and early stopping\nGBM with cross-validation, early stopping and full grid search\nGBM with cross-validation, early stopping and random grid search\nModel stacking (combining different GLM, DRF, GBM and DNN models)\n\n<hr>\n\nFull Technical Reference:\n\nhttp://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/modeling.html\n\n<br>",
"# Start and connect to a local H2O cluster\nimport h2o\nh2o.init(nthreads = -1)",
"<br>",
"# Import wine quality data from a local CSV file\nwine = h2o.import_file(\"winequality-white.csv\")\nwine.head(5)\n\n# Define features (or predictors)\nfeatures = list(wine.columns) # we want to use all the information\nfeatures.remove('quality') # we need to exclude the target 'quality' (otherwise there is nothing to predict)\nfeatures\n\n# Split the H2O data frame into training/test sets\n# so we can evaluate out-of-bag performance\nwine_split = wine.split_frame(ratios = [0.8], seed = 1234)\n\nwine_train = wine_split[0] # using 80% for training\nwine_test = wine_split[1] # using the rest 20% for out-of-bag evaluation\n\nwine_train.shape\n\nwine_test.shape",
"<br>\nStep 1 - Gradient Boosting Machines (GBM) with Default Settings",
"# Build a Gradient Boosting Machines (GBM) model with default settings\n\n# Import the function for GBM\nfrom h2o.estimators.gbm import H2OGradientBoostingEstimator\n\n# Set up GBM for regression\n# Add a seed for reproducibility\ngbm_default = H2OGradientBoostingEstimator(model_id = 'gbm_default', \n seed = 1234)\n\n# Use .train() to build the model\ngbm_default.train(x = features, \n y = 'quality', \n training_frame = wine_train)\n\n# Check the model performance on test dataset\ngbm_default.model_performance(wine_test)",
"<br>\nStep 2 - GBM with Manual Settings",
"# Build a GBM with manual settings\n\n# Set up GBM for regression\n# Add a seed for reproducibility\ngbm_manual = H2OGradientBoostingEstimator(model_id = 'gbm_manual', \n seed = 1234,\n ntrees = 100,\n sample_rate = 0.9,\n col_sample_rate = 0.9)\n\n# Use .train() to build the model\ngbm_manual.train(x = features, \n y = 'quality', \n training_frame = wine_train)\n\n# Check the model performance on test dataset\ngbm_manual.model_performance(wine_test)",
"<br>\nStep 3 - GBM with Manual Settings & Cross-Validation (CV)",
"# Build a GBM with manual settings & cross-validation\n\n# Set up GBM for regression\n# Add a seed for reproducibility\ngbm_manual_cv = H2OGradientBoostingEstimator(model_id = 'gbm_manual_cv', \n seed = 1234,\n ntrees = 100,\n sample_rate = 0.9,\n col_sample_rate = 0.9,\n nfolds = 5)\n \n# Use .train() to build the model\ngbm_manual_cv.train(x = features, \n y = 'quality', \n training_frame = wine_train)\n\n# Check the cross-validation model performance\ngbm_manual_cv\n\n# Check the model performance on test dataset\ngbm_manual_cv.model_performance(wine_test)\n# It should be the same as gbm_manual above as the model is trained with same parameters",
"<br>\nStep 4 - GBM with Manual Settings, CV and Early Stopping",
"# Build a GBM with manual settings, CV and early stopping\n\n# Set up GBM for regression\n# Add a seed for reproducibility\ngbm_manual_cv_es = H2OGradientBoostingEstimator(model_id = 'gbm_manual_cv_es', \n seed = 1234,\n ntrees = 10000, # increase the number of trees \n sample_rate = 0.9,\n col_sample_rate = 0.9,\n nfolds = 5,\n stopping_metric = 'mse', # let early stopping feature determine\n stopping_rounds = 15, # the optimal number of trees\n score_tree_interval = 1) # by looking at the MSE metric\n# Use .train() to build the model\ngbm_manual_cv_es.train(x = features, \n y = 'quality', \n training_frame = wine_train)\n\n# Check the model summary\ngbm_manual_cv_es.summary()\n\n# Check the cross-validation model performance\ngbm_manual_cv_es\n\n# Check the model performance on test dataset\ngbm_manual_cv_es.model_performance(wine_test)",
"<br>\nStep 5 - GBM with CV, Early Stopping and Full Grid Search",
"# import Grid Search\nfrom h2o.grid.grid_search import H2OGridSearch\n\n# define the criteria for full grid search\nsearch_criteria = {'strategy': \"Cartesian\"}\n\n# define the range of hyper-parameters for grid search\nhyper_params = {'sample_rate': [0.7, 0.8, 0.9],\n 'col_sample_rate': [0.7, 0.8, 0.9]}\n\n# Set up GBM grid search\n# Add a seed for reproducibility\ngbm_full_grid = H2OGridSearch(\n H2OGradientBoostingEstimator(\n model_id = 'gbm_full_grid', \n seed = 1234,\n ntrees = 10000, \n nfolds = 5,\n stopping_metric = 'mse', \n stopping_rounds = 15, \n score_tree_interval = 1),\n search_criteria = search_criteria, # full grid search\n hyper_params = hyper_params)\n\n# Use .train() to start the grid search\ngbm_full_grid.train(x = features, \n y = 'quality', \n training_frame = wine_train)\n\n# Sort and show the grid search results\ngbm_full_grid_sorted = gbm_full_grid.get_grid(sort_by='mse', decreasing=False)\nprint(gbm_full_grid_sorted)\n\n# Extract the best model from full grid search\nbest_model_id = gbm_full_grid_sorted.model_ids[0]\nbest_gbm_from_full_grid = h2o.get_model(best_model_id)\nbest_gbm_from_full_grid.summary()\n\n# Check the model performance on test dataset\nbest_gbm_from_full_grid.model_performance(wine_test)",
"GBM with CV, Early Stopping and Random Grid Search",
"# define the criteria for random grid search\nsearch_criteria = {'strategy': \"RandomDiscrete\", \n 'max_models': 9,\n 'seed': 1234}\n\n# define the range of hyper-parameters for grid search\n# 27 combinations in total\nhyper_params = {'sample_rate': [0.7, 0.8, 0.9],\n 'col_sample_rate': [0.7, 0.8, 0.9],\n 'max_depth': [3, 5, 7]}\n\n# Set up GBM grid search\n# Add a seed for reproducibility\ngbm_rand_grid = H2OGridSearch(\n H2OGradientBoostingEstimator(\n model_id = 'gbm_rand_grid', \n seed = 1234,\n ntrees = 10000, \n nfolds = 5,\n stopping_metric = 'mse', \n stopping_rounds = 15, \n score_tree_interval = 1),\n search_criteria = search_criteria, # full grid search\n hyper_params = hyper_params)\n\n# Use .train() to start the grid search\ngbm_rand_grid.train(x = features, \n y = 'quality', \n training_frame = wine_train)\n\n# Sort and show the grid search results\ngbm_rand_grid_sorted = gbm_rand_grid.get_grid(sort_by='mse', decreasing=False)\nprint(gbm_rand_grid_sorted)\n\n# Extract the best model from random grid search\nbest_model_id = gbm_rand_grid_sorted.model_ids[0]\nbest_gbm_from_rand_grid = h2o.get_model(best_model_id)\nbest_gbm_from_rand_grid.summary()\n\n# Check the model performance on test dataset\nbest_gbm_from_rand_grid.model_performance(wine_test)",
"<br>\nComparison of Model Performance on Test Data",
"print('GBM with Default Settings :', gbm_default.model_performance(wine_test).mse())\nprint('GBM with Manual Settings :', gbm_manual.model_performance(wine_test).mse())\nprint('GBM with Manual Settings & CV :', gbm_manual_cv.model_performance(wine_test).mse())\nprint('GBM with Manual Settings, CV & Early Stopping :', gbm_manual_cv_es.model_performance(wine_test).mse())\nprint('GBM with CV, Early Stopping & Full Grid Search :', \n best_gbm_from_full_grid.model_performance(wine_test).mse())\nprint('GBM with CV, Early Stopping & Random Grid Search :', \n best_gbm_from_rand_grid.model_performance(wine_test).mse())",
"<br>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
otavio-r-filho/AIND-Deep_Learning_Notebooks
|
autoencoder/Simple_Autoencoder.ipynb
|
mit
|
[
"A Simple Autoencoder\nWe'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.\n\nIn this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.",
"%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)",
"Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.",
"img = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')",
"We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.\n\n\nExercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.",
"# Size of the encoding layer (the hidden layer)\nencoding_dim = 32 # feel free to change this value\n\n# Setting the size of the image to be encode and decoded\nimg_size = 784\n\n# Setting the learning rate\n\nlearning_rate = 0.01\n\n# Input and target placeholders\ninputs_ = tf.placeholder(tf.float32, [None, img_size], name='inputs')\ntargets_ = tf.placeholder(tf.float32, [None, img_size], name='labels')\n\n# Output of hidden layer, single fully connected layer here with ReLU activation\nencoded = tf.layers.dense(inputs=inputs_, units=encoding_dim, activation=tf.nn.relu)\n\n# Output layer logits, fully connected layer with no activation\nlogits = tf.layers.dense(inputs=encoded, units=img_size, activation=None)\n# Sigmoid output from logits\ndecoded = tf.nn.sigmoid(logits, name='output')\n\n# Sigmoid cross-entropy loss\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\n# Mean of the loss\ncost = tf.reduce_mean(loss)\n\n# Adam optimizer\nopt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)",
"Training",
"# Create the session\nsess = tf.Session()",
"Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss. \nCalling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).",
"epochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n feed = {inputs_: batch[0], targets_: batch[0]}\n batch_cost, _ = sess.run([cost, opt], feed_dict=feed)\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))",
"Checking out the results\nBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.",
"fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)\n\nsess.close()",
"Up Next\nWe're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.\nIn practice, autoencoders aren't actually better at compression compared to typical methods like JPEGs and MP3s. But, they are being used for noise reduction, which you'll also build."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/tensorflow/c_batched.ipynb
|
apache-2.0
|
[
"<h1> 2c. Refactoring to add batching and feature-creation </h1>\n\nIn this notebook, we continue reading the same small dataset, but refactor our ML pipeline in two small, but significant, ways:\n<ol>\n<li> Refactor the input to read data in batches.\n<li> Refactor the feature creation so that it is not one-to-one with inputs.\n</ol>\nThe Pandas function in the previous notebook also batched, only after it had read the whole data into memory -- on a large dataset, this won't be an option.",
"import tensorflow.compat.v1 as tf\nimport numpy as np\nimport shutil\nprint(tf.__version__)",
"<h2> 1. Refactor the input </h2>\n\nRead data created in Lab1a, but this time make it more general and performant. Instead of using Pandas, we will use TensorFlow's Dataset API.",
"CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']\nLABEL_COLUMN = 'fare_amount'\nDEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]\n\ndef read_dataset(filename, mode, batch_size = 512):\n def _input_fn():\n def decode_csv(value_column):\n columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)\n features = dict(zip(CSV_COLUMNS, columns))\n label = features.pop(LABEL_COLUMN)\n return features, label\n\n # Create list of files that match pattern\n file_list = tf.gfile.Glob(filename)\n\n # Create dataset from file list\n dataset = tf.data.TextLineDataset(file_list).map(decode_csv)\n if mode == tf.estimator.ModeKeys.TRAIN:\n num_epochs = None # indefinitely\n dataset = dataset.shuffle(buffer_size = 10 * batch_size)\n else:\n num_epochs = 1 # end-of-input after this\n\n dataset = dataset.repeat(num_epochs).batch(batch_size)\n return dataset.make_one_shot_iterator().get_next()\n return _input_fn\n \n\ndef get_train():\n return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)\n\ndef get_valid():\n return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL)\n\ndef get_test():\n return read_dataset('./taxi-test.csv', mode = tf.estimator.ModeKeys.EVAL)",
"<h2> 2. Refactor the way features are created. </h2>\n\nFor now, pass these through (same as previous lab). However, refactoring this way will enable us to break the one-to-one relationship between inputs and features.",
"INPUT_COLUMNS = [\n tf.feature_column.numeric_column('pickuplon'),\n tf.feature_column.numeric_column('pickuplat'),\n tf.feature_column.numeric_column('dropofflat'),\n tf.feature_column.numeric_column('dropofflon'),\n tf.feature_column.numeric_column('passengers'),\n]\n\ndef add_more_features(feats):\n # Nothing to add (yet!)\n return feats\n\nfeature_cols = add_more_features(INPUT_COLUMNS)",
"<h2> Create and train the model </h2>\n\nNote that we train for num_steps * batch_size examples.",
"tf.logging.set_verbosity(tf.logging.INFO)\nOUTDIR = 'taxi_trained'\nshutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time\nmodel = tf.estimator.LinearRegressor(\n feature_columns = feature_cols, model_dir = OUTDIR)\nmodel.train(input_fn = get_train(), steps = 100);",
"<h3> Evaluate model </h3>\n\nAs before, evaluate on the validation data. We'll do the third refactoring (to move the evaluation into the training loop) in the next lab.",
"def print_rmse(model, name, input_fn):\n metrics = model.evaluate(input_fn = input_fn, steps = 1)\n print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))\nprint_rmse(model, 'validation', get_valid())",
"Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
openai/openai-python
|
examples/finetuning/finetuning-classification.ipynb
|
mit
|
[
"Fine tuning classification example\nWe will fine-tune an ada classifier to distinguish between the two sports: Baseball and Hockey.",
"from sklearn.datasets import fetch_20newsgroups\nimport pandas as pd\nimport openai\n\ncategories = ['rec.sport.baseball', 'rec.sport.hockey']\nsports_dataset = fetch_20newsgroups(subset='train', shuffle=True, random_state=42, categories=categories)",
"## Data exploration\n The newsgroup dataset can be loaded using sklearn. First we will look at the data itself:",
"print(sports_dataset['data'][0])\n\nsports_dataset.target_names[sports_dataset['target'][0]]\n\n\nlen_all, len_baseball, len_hockey = len(sports_dataset.data), len([e for e in sports_dataset.target if e == 0]), len([e for e in sports_dataset.target if e == 1])\nprint(f\"Total examples: {len_all}, Baseball examples: {len_baseball}, Hockey examples: {len_hockey}\")",
"One sample from the baseball category can be seen above. It is an email to a mailing list. We can observe that we have 1197 examples in total, which are evenly split between the two sports.\nData Preparation\nWe transform the dataset into a pandas dataframe, with a column for prompt and completion. The prompt contains the email from the mailing list, and the completion is a name of the sport, either hockey or baseball. For demonstration purposes only and speed of fine-tuning we take only 300 examples. In a real use case the more examples the better the performance.",
"import pandas as pd\n\nlabels = [sports_dataset.target_names[x].split('.')[-1] for x in sports_dataset['target']]\ntexts = [text.strip() for text in sports_dataset['data']]\ndf = pd.DataFrame(zip(texts, labels), columns = ['prompt','completion']) #[:300]\ndf.head()",
"Both baseball and hockey are single tokens. We save the dataset as a jsonl file.",
"df.to_json(\"sport2.jsonl\", orient='records', lines=True)",
"Data Preparation tool\nWe can now use a data preparation tool which will suggest a few improvements to our dataset before fine-tuning. Before launching the tool we update the openai library to ensure we're using the latest data preparation tool. We additionally specify -q which auto-accepts all suggestions.",
"!pip install --upgrade openai\n\n!openai tools fine_tunes.prepare_data -f sport2.jsonl -q",
"The tool helpfully suggests a few improvements to the dataset and splits the dataset into training and validation set.\nA suffix between a prompt and a completion is necessary to tell the model that the input text has stopped, and that it now needs to predict the class. Since we use the same separator in each example, the model is able to learn that it is meant to predict either baseball or hockey following the separator.\nA whitespace prefix in completions is useful, as most word tokens are tokenized with a space prefix.\nThe tool also recognized that this is likely a classification task, so it suggested to split the dataset into training and validation datasets. This will allow us to easily measure expected performance on new data.\nFine-tuning\nThe tool suggests we run the following command to train the dataset. Since this is a classification task, we would like to know what the generalization performance on the provided validation set is for our classification use case. The tool suggests to add --compute_classification_metrics --classification_positive_class \" baseball\" in order to compute the classification metrics.\nWe can simply copy the suggested command from the CLI tool. We specifically add -m ada to fine-tune a cheaper and faster ada model, which is usually comperable in performance to slower and more expensive models on classification use cases.",
"!openai api fine_tunes.create -t \"sport2_prepared_train.jsonl\" -v \"sport2_prepared_valid.jsonl\" --compute_classification_metrics --classification_positive_class \" baseball\" -m ada",
"The model is successfully trained in about ten minutes. We can see the model name is ada:ft-openai-2021-07-30-12-26-20, which we can use for doing inference.\n[Advanced] Results and expected model performance\nWe can now download the results file to observe the expected performance on a held out validation set.",
"!openai api fine_tunes.results -i ft-2zaA7qi0rxJduWQpdvOvmGn3 > result.csv\n\nresults = pd.read_csv('result.csv')\nresults[results['classification/accuracy'].notnull()].tail(1)",
"The accuracy reaches 99.6%. On the plot below we can see how accuracy on the validation set increases during the training run.",
"results[results['classification/accuracy'].notnull()]['classification/accuracy'].plot()",
"Using the model\nWe can now call the model to get the predictions.",
"test = pd.read_json('sport2_prepared_valid.jsonl', lines=True)\ntest.head()",
"We need to use the same separator following the prompt which we used during fine-tuning. In this case it is \\n\\n###\\n\\n. Since we're concerned with classification, we want the temperature to be as low as possible, and we only require one token completion to determine the prediction of the model.",
"ft_model = 'ada:ft-openai-2021-07-30-12-26-20'\nres = openai.Completion.create(model=ft_model, prompt=test['prompt'][0] + '\\n\\n###\\n\\n', max_tokens=1, temperature=0)\nres['choices'][0]['text']\n",
"To get the log probabilities, we can specify logprobs parameter on the completion request",
"res = openai.Completion.create(model=ft_model, prompt=test['prompt'][0] + '\\n\\n###\\n\\n', max_tokens=1, temperature=0, logprobs=2)\nres['choices'][0]['logprobs']['top_logprobs'][0]",
"We can see that the model predicts hockey as a lot more likely than baseball, which is the correct prediction. By requesting log_probs, we can see the prediction (log) probability for each class.\nGeneralization\nInterestingly, our fine-tuned classifier is quite versatile. Despite being trained on emails to different mailing lists, it also successfully predicts tweets.",
"sample_hockey_tweet = \"\"\"Thank you to the \n@Canes\n and all you amazing Caniacs that have been so supportive! You guys are some of the best fans in the NHL without a doubt! Really excited to start this new chapter in my career with the \n@DetroitRedWings\n !!\"\"\"\nres = openai.Completion.create(model=ft_model, prompt=sample_hockey_tweet + '\\n\\n###\\n\\n', max_tokens=1, temperature=0, logprobs=2)\nres['choices'][0]['text']\n\nsample_baseball_tweet=\"\"\"BREAKING: The Tampa Bay Rays are finalizing a deal to acquire slugger Nelson Cruz from the Minnesota Twins, sources tell ESPN.\"\"\"\nres = openai.Completion.create(model=ft_model, prompt=sample_baseball_tweet + '\\n\\n###\\n\\n', max_tokens=1, temperature=0, logprobs=2)\nres['choices'][0]['text']"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sorig/shogun
|
doc/ipython-notebooks/neuralnets/rbms_dbns.ipynb
|
bsd-3-clause
|
[
"Restricted Boltzmann Machines & Deep Belief Networks\nby Khaled Nasr as a part of a <a href=\"https://www.google-melange.com/gsoc/project/details/google/gsoc2014/khalednasr92/5657382461898752\">GSoC 2014 project</a> mentored by Theofanis Karaletsos and Sergey Lisitsyn\nIn this notebook we'll take a look at training and evaluating restricted Boltzmann machines and deep belief networks in Shogun.\nIntroduction\nRestricted Boltzmann Machines\nAn RBM is an energy based probabilistic model. It consists of two groups of variables: the visible variables $ v $ and the hidden variables $ h $. The key assumption that RBMs make is that the hidden units are conditionally independent given the visible units, and vice versa.\nThe RBM defines its distribution through an energy function $E(v,h)$, which is a function that assigns a number (called energy) to each possible state of the visible and hidden variables. The probability distribution is defined as:\n$$ P(v,h) := \\frac{\\exp(-E(v,h))}{Z} , \\qquad Z = \\sum_{v,h} \\exp(-E(v,h))$$\nwhere $Z$ is a constant that makes sure that the distribution sums/integrates to $1$. This distribution is also called a Gibbs distribution and $Z$ is sometimes called the partition function.\nFrom the definition of $P(v,h)$ we can see that the probability of a configuration increases as its energy decreases. 
Training an RBM in an unsupervised manner involves manipulating the energy function so that it would assign low energy (and therefore high probability) to values of $v$ that are similar to the training data, and high energy to values that are far from the training data.\nFor an RBM with binary visible and hidden variables the energy function is defined as:\n$ E(v,h) = -\\sum_i \\sum_j h_i W_{ij} v_j - \\sum_i h_i c_i - \\sum_j v_j b_j $\nwhere $b \\in \\mathbb{R^n} $ is the bias for the visible units, $c \\in \\mathbb{R^m}$ is the bias for hidden units and $ W \\in \\mathbb{R^{mxn}}$ is the weight matrix between the hidden units and the visible units.\nPlugging that definition into the definition of the probability distribution will yield the following conditional distributions for each of the hidden and visible variables:\n$$ P(h=1|v) = \\frac{1}{1+exp(-Wv-c)}, \\quad P(v=1|h) = \\frac{1}{1+exp(-W^T h-b)} $$\nWe can do a quick visualization of an RBM:",
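These two conditionals are cheap to evaluate directly. The NumPy sketch below uses random placeholder weights and follows the $W \in \mathbb{R}^{m \times n}$ convention above; sampling $h$ from $P(h=1|v)$ and then evaluating $P(v=1|h)$ is one half of a Gibbs sampling step:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 6                      # hidden and visible unit counts
W = rng.normal(size=(m, n))      # placeholder weights
b = np.zeros(n)                  # visible bias
c = np.zeros(m)                  # hidden bias

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

v = rng.integers(0, 2, size=n).astype(float)   # a binary visible state

# P(h=1 | v), then a sampled binary hidden state
p_h = sigmoid(W @ v + c)
h = (rng.random(m) < p_h).astype(float)

# P(v=1 | h): the other conditional, completing a half Gibbs step
p_v = sigmoid(W.T @ h + b)

assert p_h.shape == (m,) and p_v.shape == (n,)
assert np.all((p_h > 0) & (p_h < 1))
```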
"%pylab inline\n%matplotlib inline\nimport os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\nimport networkx as nx\n \nG = nx.Graph()\npos = {}\n\nfor i in range(8):\n pos['V'+str(i)] = (i,0)\n pos['H'+str(i)] = (i,1)\n \n for j in range(8): G.add_edge('V'+str(j),'H'+str(i))\n\nfigure(figsize=(7,2))\nnx.draw(G, pos, node_color='y', node_size=750)",
"The nodes labeled V are the visible units, the ones labeled H are the hidden units. There is an undirected connection between each hidden unit and all the visible units, and similarly for each visible unit. There are no connections among the visible units or among the hidden units, which implies that the hidden units are independent of each other given the visible units, and vice versa.\nDeep Belief Networks\nIf an RBM is properly trained, the hidden units learn to extract useful features from the training data. An obvious way to go further would be to transform the training data using the trained RBM, and train yet another RBM on the transformed data. The second RBM will learn to extract useful features from the features that the first RBM extracts. The process can be repeated to add a third RBM, and so on.\nWhen stacked on top of each other, those RBMs form a Deep Belief Network [1]. The network has directed connections going from the units in each layer to the units in the layer below it. The connections between the top layer and the layer below it are undirected. The process of stacking RBMs to form a DBN is called pre-training the DBN.\nAfter pre-training, the DBN can be used to initialize a similarly structured neural network which can be used for supervised classification.\nWe can do a visualization of a 4-layer DBN:",
"G = nx.DiGraph()\npos = {}\n\nfor i in range(8):\n pos['V'+str(i)] = (i,0)\n pos['H'+str(i)] = (i,1)\n pos['P'+str(i)] = (i,2)\n pos['Q'+str(i)] = (i,3)\n \n for j in range(8): \n G.add_edge('H'+str(j),'V'+str(i))\n G.add_edge('P'+str(j),'H'+str(i))\n G.add_edge('Q'+str(j),'P'+str(i))\n G.add_edge('P'+str(j),'Q'+str(i))\n\nfigure(figsize=(5,5))\nnx.draw(G, pos, node_color='y', node_size=750)",
"RBMs in Shogun\nRBMs in Shogun are handled through the CRBM class. We create one by specifying the number of visible units and their type (binary, Gaussian, and Softmax visible units are supported), and the number of hidden units (only binary hidden units are supported).\nIn this notebook we'll train a few RBMs on the USPS dataset for handwritten digits. We'll have one RBM for each digit class, making 10 RBMs in total:",
"from shogun import RBM, RBMVUT_BINARY, Math\n\n# initialize the random number generator with a fixed seed, for repeatability\nMath.init_random(10)\n\nrbms = []\nfor i in range(10):\n rbms.append(RBM(25, 256, RBMVUT_BINARY)) # 25 hidden units, 256 visible units (one for each pixel in a 16x16 binary image)\n rbms[i].initialize_neural_network()",
"Next we'll load the USPS dataset:",
"from scipy.io import loadmat\n\ndataset = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))\n\nXall = dataset['data']\n# the usps dataset has the digits labeled from 1 to 10 \n# we'll subtract 1 to make them in the 0-9 range instead\nYall = np.array(dataset['label'].squeeze(), dtype=np.double)-1 ",
"Now we'll move on to training the RBMs using Persistent Contrastive Divergence [2]. Training using regular Contrastive Divergence [3] is also supported. The optimization is performed using gradient descent. The training progress can be monitored using the reconstruction error or the pseudo-likelihood. Check the public attributes of the CRBM class for all the available training options.",
"from shogun import features, RBMMM_PSEUDO_LIKELIHOOD\n\n# uncomment this line to allow the training progress to be printed on the console\n#from shogun import MSG_INFO; rbms[0].io.set_loglevel(MSG_INFO)\n\nfor i in range(10):\n # obtain the data for digit i\n X_i = Xall[:,Yall==i]\n \n # binarize the data for use with the RBM\n X_i = (X_i>0).astype(float64)\n \n # set the number of contrastive divergence steps\n rbms[i].cd_num_steps = 5\n \n # set the gradient descent parameters\n rbms[i].gd_learning_rate = 0.005\n rbms[i].gd_mini_batch_size = 100\n rbms[i].max_num_epochs = 30\n \n # set the monitoring method to pseudo-likelihood\n rbms[i].monitoring_method = RBMMM_PSEUDO_LIKELIHOOD\n \n # start training\n rbms[i].train(features(X_i))",
"After training, we can draw samples from the RBMs to see what they've learned. Samples are drawn using Gibbs sampling. We'll draw 10 samples from each RBM and plot them:",
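Gibbs sampling here just alternates the two conditional distributions derived earlier. A schematic NumPy version (hypothetical placeholder weights, not what CRBM does internally) looks like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sample(W, b, c, n_steps, rng):
    """Run one Gibbs chain for n_steps and return the final visible state."""
    m, n = W.shape
    v = (rng.rand(n) < 0.5).astype(float)   # random initial visible state
    for _ in range(n_steps):
        # sample hidden units given visible, then visible given hidden
        h = (rng.rand(m) < sigmoid(W.dot(v) + c)).astype(float)
        v = (rng.rand(n) < sigmoid(W.T.dot(h) + b)).astype(float)
    return v

rng = np.random.RandomState(1)
W = 0.01 * rng.randn(25, 256)               # placeholder weights
sample = gibbs_sample(W, np.zeros(256), np.zeros(25), 1000, rng)
```

With trained weights instead of random ones, states resembling the training digits become high-probability fixed points of this chain, which is why the samples below look like digits.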
"samples = zeros((256,100))\nfor i in range(10):\n # initialize the sampling chain with a random state for the visible units\n rbms[i].reset_chain()\n \n # run 10 chains for a 1000 steps to obtain the samples\n samples[:,i*10:i*10+10] = rbms[i].sample_group(0, 1000, 10)\n\n# plot the samples\nfigure(figsize=(7,7))\nfor i in range(100):\n\tax=subplot(10,10,i+1)\n\tax.imshow(samples[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r)\n\tax.set_xticks([])\n\tax.set_yticks([])",
"DBNs in Shogun\nNow we'll create a DBN, pre-train it on the digits dataset, and use it to initialize a neural network which we can use for classification.\nDBNs are handled in Shogun through the CDeepBeliefNetwork class. We create a network by specifying the number of visible units it has, and then add the desired number of hidden layers using add_hidden_layer(). When done, we call initialize_neural_network() to initialize the network:",
"from shogun import DeepBeliefNetwork\n\ndbn = DeepBeliefNetwork(256) # 256 visible units\ndbn.add_hidden_layer(200) # 200 units in the first hidden layer\ndbn.add_hidden_layer(300) # 300 units in the second hidden layer\n\ndbn.initialize_neural_network()",
"Then we'll pre-train the DBN on the USPS dataset. Since we have 3 layers, the DBN will be pre-trained as two RBMs: one consisting of the visible layer and the first hidden layer, the other consisting of the first hidden layer and the second hidden layer. Pre-training parameters can be specified using the pt_* public attributes of the class. Each of those attributes is an SGVector whose length is the number of RBMs (2 in our case). It can be used to set the parameters for each RBM individually. SGVector's set_const() method can also be used to assign the same parameter value to all RBMs.",
"# take 3000 examples for training, the rest for testing\nXtrain = Xall[:,0:3000]\nYtrain = Yall[0:3000]\nXtest = Xall[:,3000:-1]\nYtest = Yall[3000:-1]\n\n# set the number of contrastive divergence steps\ndbn.pt_cd_num_steps.set_const(5)\n\n# set the gradient descent parameters\ndbn.pt_gd_learning_rate.set_const(0.01)\ndbn.pt_gd_mini_batch_size.set_const(100)\ndbn.pt_max_num_epochs.set_const(30)\n\n# binarize the data and start pre-training\ndbn.pre_train(features((Xtrain>0).astype(float64)))",
"After pre-training, we can visualize the features learned by the first hidden layer by plotting the weights between some hidden units and the visible units:",
"# obtain the weights of the first hidden layer\nw1 = dbn.get_weights(0)\n\n# plot the weights between the first 100 units in the hidden layer and the visible units\nfigure(figsize=(7,7))\nfor i in range(100):\n\tax1=subplot(10,10,i+1)\n\tax1.imshow(w1[i,:].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r)\n\tax1.set_xticks([])\n\tax1.set_yticks([])",
"Now, we'll use the DBN to initialize a CNeuralNetwork. This is done through the convert_to_neural_network() method. The neural network will consist of a CNeuralInputLayer with 256 neurons, a CNeuralLogisticLayer with 200 neurons, and another CNeuralLogisticLayer with 300 neurons. We'll also add a CNeuralSoftmaxLayer as an output layer so that we can train the network in a supervised manner. Then we'll train the network on the training set:",
"from shogun import NeuralSoftmaxLayer, MulticlassLabels\n\n# get the neural network\nnn = dbn.convert_to_neural_network(NeuralSoftmaxLayer(10))\n\n# add some L2 regularization\nnn.l2_coefficient = 0.0001\n\n# start training\nnn.put('labels', MulticlassLabels(Ytrain))\n_ = nn.train(features(Xtrain))",
"And finally we'll measure the classification accuracy on the test set:",
"from shogun import MulticlassAccuracy\n\npredictions = nn.apply_multiclass(features(Xtest))\naccuracy = MulticlassAccuracy().evaluate(predictions, MulticlassLabels(Ytest)) * 100\n\nprint(\"Classification accuracy on the test set =\", accuracy, \"%\")",
"References\n\n[1] A Fast Learning Algorithm for Deep Belief Nets, Hinton, 2006\n[2] Training Restricted Boltzmann Machines using Approximations to the Likelihood Gradient, Tieleman, 2008\n[3] Training Products of Experts by Minimizing Contrastive Divergence, Hinton, 2002"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GLiCom/CorpusAnalysis2016
|
notebooks/Simple descriptive statistics.ipynb
|
apache-2.0
|
[
"Descriptive statistics",
"import nltk\n#nltk.download('book')\n\ntext = '''Neither a recession nor a collapse in revenue has yet been enough to convince \\\nRussian President Vladimir Putin that it’s time to join with OPEC and cut oil output to \\\nboost prices. His reasons may be pragmatic rather than political. Russia’s Energy \\\nMinister Alexander Novak and his Saudi Arabian, Venezuelan and Qatari counterparts \\\nagreed to freeze output at January levels on Tuesday. The world’s second-largest crude \\\nproducer faces numerous obstacles to any deal that would actually cut production, even if \\\nPutin decides it’s in the national interest. Reducing the flow of crude might damage \\\nRussia’s fields and pipelines, require expensive new storage tanks or simply take too long. \\\nPrior to Tuesday’s agreement, Novak had said he could consider reductions if other producers \\\njoined in. Yet Igor Sechin, chief executive officer of the country’s largest oil company \\\nRosneft OJSC and a close Putin ally, has resisted, saying last week in London that \\\ncoordination would be difficult because no major producer seems willing to pare output. \\\n\"The history of relations with OPEC suggests that Russian companies are not keen to cut \\\nproduction,\" James Henderson, an oil and gas industry analyst at the Oxford Institute for \\\nEnergy Studies, said by phone. \"There are certain practical difficulties, and the companies \\\nwould rather somebody else did that, and they could benefit once the price goes up.\"'''",
"Average word length (for real this time)\nFor now we will work with a single sentence. To do this, we first split the text into sentences:",
"sentences = nltk.sent_tokenize(text)\n\nsentences",
"And select the first sentence:",
"sentence = sentences[0]\n\nsentence\n\nwords = nltk.word_tokenize(sentence)\n\nwords",
"Recall the approximate solution:",
"len(sentence)/len(words)",
"But the average is actually computed like this:\n$$\\mu = {\\sum_{w \\in words}{|w|} \\over {|words|}}$$\nThat is, we sum the length of every word and divide by the number of words.\nFirst we compute the length of each word:",
"word_len = [len(w) for w in words]\n\nprint(word_len)",
"As an aside, we can also view each word together with its length:",
"[(w,len(w)) for w in words]",
"We sum all the lengths:",
"word_len_sum = sum(word_len)\n\nword_len_sum",
"And divide by the number of words:",
"word_len_avg = word_len_sum/len(words)\n\nword_len_avg",
"Distributions\nThe FreqDist() function gives us the frequency distribution of the values in a list, e.g.:",
"nltk.FreqDist(['a','b','c','a','a','c','c','c'])",
"We can apply the same to the word lengths:",
"len_dist = nltk.FreqDist(word_len)\n\nlen_dist",
"FreqDist includes functionality for visualizing the distribution.\nWe need to include the following (in particular %matplotlib inline) so that the plots appear directly inside the notebook.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nlen_dist.plot()",
"To order it by word length rather than by frequency is a bit more involved:\nWe have to create two lists (of identical size). The first contains only the labels (in our case the word lengths, but they could also be the words themselves or something else), and the second contains the values corresponding to each label.\nWe can use matplotlib to create the charts:",
"x=[length for length in len_dist.keys()]\ny=[len_dist[length] for length in len_dist.keys()]\n\nx,y\n\nplt.bar(x,y)",
"Dispersion metrics\nDispersion metrics are usually based on the difference between the average value and the value of each example.\nWe therefore compute, for each word, the difference between its length and the expected (average) length:",
"word_len_diff = [len(w)-word_len_avg for w in words]\n\nword_len_diff",
"Standard deviation\nTo obtain a dispersion metric we need to aggregate all these differences.\nAs we can see, the subtraction yields positive or negative values, depending on whether the example's value is larger or smaller than the expected value. We first have to make sure that each difference counts as a positive value (otherwise they would cancel each other out). The most common way to do this is to square the difference:",
"word_len_sq_diff = [diff**2 for diff in word_len_diff]\n\nword_len_sq_diff",
"Now we aggregate all these values by computing their average (the sum of the values divided by their count), which gives us the variance:",
"variance = sum(word_len_sq_diff)/len(word_len_sq_diff)\n\nvariance",
"And finally the standard deviation (the most widely used metric) is the square root of the variance:",
"import math\nstd_dev = math.sqrt(variance)\n\nstd_dev",
"We can also do all these calculations in a single step:",
"math.sqrt(sum([(len(w)-word_len_avg)**2 for w in words])/len(words))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kousukekikuchi1984/kaggle
|
reducing_commercial_aviation_fatalities.ipynb
|
cc0-1.0
|
[
"<a href=\"https://colab.research.google.com/github/kousukekikuchi1984/kaggle/blob/master/reducing_commercial_aviation_fatalities.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"from google.colab import drive\ndrive.mount('/content/drive')\n\n# create a .kaggle folder on Colab\n!mkdir -p ~/.kaggle\n\n# copy kaggle.json into the .kaggle folder and fix its permissions\n!cp /content/drive/'My Drive'/Kaggle/kaggle.json ~/.kaggle/\n!chmod 600 ~/.kaggle/kaggle.json\n\n!ls /root/.kaggle\n\n!pip install kaggle\n\n!kaggle competitions list\n\n!kaggle competitions download -c reducing-commercial-aviation-fatalities\n\n!ls | grep .zip | xargs -I{} unzip {}\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\n\npd.options.display.max_rows = 999\npd.options.display.max_columns = 999\n\ntrain = pd.read_csv(\"train.csv\")\ntest = pd.read_csv(\"test.csv\")\n\ntrain.info()",
"Task\n\nPredict the pilot's state from the dataset below.\nA multi-class classification problem over SS, CA and DA.\nSubmissions are evaluated with multi-class log loss.\nThe sensor data contains a lot of noise, which needs to be handled automatically.\n\ncrew\n\nApparently a number identifying the crew pairing\n\nexperiment\n\nexperiment - One of CA, DA, SS or LOFT. The first 3 comprise the training set. The latter the test set.\nCA is the channelized attention (focused) state\nDA is the diverted attention (distracted) state\nSS is the startle/surprise state\n\nseat\n\nIndicates whether the pilot is sitting in the left seat or the right seat\nPresumably this distinguishes the captain from the first officer.\n\neeg*\n\nElectroencephalogram, i.e. brain-wave signals\n\necg\n\n3-point Electrocardiogram signal. The sensor had a resolution/bit of .012215 µV and a range of -100mV to +100mV. The data are provided in microvolts.\nThe ECG waveform\n\nr\n\nRespiration, a measure of the rise and fall of the chest. The sensor had a resolution/bit of .2384186 µV and a range of -2.0V to +2.0V. The data are provided in microvolts.\nA respiration sensor whose values are given in microvolts\n\ngsr\n\nGalvanic Skin Response, a measure of electrodermal activity. The sensor had a resolution/bit of .2384186 µV and a range of -2.0V to +2.0V. The data are provided in microvolts.\nGalvanic skin response, which exploits the change in skin potential under stress.\n\nevent\n\nA = baseline, B = SS, C = CA, D = DA",
"train[(train[\"crew\"] == 1) & (train[\"seat\"] == 0)].describe()",
"Basic strategy\nThink per pilot\n\nHuman sensor data generally varies a lot between individuals, so noise handling should be done per individual rather than on the whole dataset.\nFor example, fill in outliers with the mean of the surrounding values\nTo do this per pilot, the data needs to be split accordingly\n\nSignals\n\nIt is unclear whether they follow a normal distribution. They probably do, but this should be checked\n\nInitial direction\n\nWe want to know which columns are related to the event class.",
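The per-pilot clean-up sketched above (replace outliers with values interpolated from their neighbours) could look like this in pandas. The z-score threshold and the toy data are illustrative assumptions, not part of the competition data:

```python
import pandas as pd

def smooth_outliers(series, z_thresh=2.0):
    """Mask points whose z-score exceeds z_thresh, then fill them by
    linear interpolation between the neighbouring samples."""
    z = (series - series.mean()) / series.std()
    cleaned = series.mask(z.abs() > z_thresh)
    return cleaned.interpolate(limit_direction="both")

# toy frame standing in for one pilot's respiration signal 'r'
df = pd.DataFrame({"crew": 1, "seat": 0,
                   "r": [10.0, 10.2, 9.9, 500.0, 10.1, 10.0]})
subset = df[(df["crew"] == 1) & (df["seat"] == 0)]
df.loc[subset.index, "r"] = smooth_outliers(subset["r"])
```

Applying the function per `(crew, seat)` group keeps each pilot's baseline intact, which matters because the same reading can be normal for one pilot and anomalous for another.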
"train.head()\n\nfrom scipy import signal\n\n# low-pass Butterworth filter (order 8, normalized cutoff 0.05)\nb, a = signal.butter(8, 0.05)\n\n# filter the respiration signal of a single pilot (crew 1, left seat)\nsubset = train[(train[\"crew\"] == 1) & (train[\"seat\"] == 0)]\ny = signal.filtfilt(b, a, subset['r'], padlen=150)\nplt.plot(y[3000:4024])\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
xiongzhenggang/xiongzhenggang.github.io
|
data-science/28-密度和轮廓图.ipynb
|
gpl-3.0
|
[
"Density and contour plots\nSometimes it is useful to display three-dimensional data in two dimensions using contours or color-coded regions. Three Matplotlib functions help with this task: plt.contour for contour plots, plt.contourf for filled contour plots, and plt.imshow for displaying images. This section walks through several examples of using them. We start by setting up the notebook for plotting and importing the functions we will use:",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-white')\nimport numpy as np",
"Visualizing a three-dimensional function\nWe will start by demonstrating a contour plot using a function z = f(x, y), choosing the following particular form for f (we saw this before when we used it as an example of array broadcasting):",
"def f(x, y):\n return np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)\n",
"A contour plot can be created with the plt.contour function. It takes three arguments: a grid of x values, a grid of y values, and a grid of z values. The x and y values represent positions on the plot, and the z values are represented by the contour levels. Perhaps the most straightforward way to prepare such data is to use the np.meshgrid function, which builds two-dimensional grids from one-dimensional arrays:",
"x = np.linspace(0, 5, 50)\ny = np.linspace(0, 5, 40)\n\nX, Y = np.meshgrid(x, y)\nZ = f(X, Y)",
"Now let's look at this with a standard line-only contour plot:",
"plt.contour(X, Y, Z, colors='black');",
"By default, when a single color is used, negative values are represented by dashed lines and positive values by solid lines. Alternatively, the lines can be color-coded by specifying a colormap with the cmap argument. Here we also specify that we want more lines to be drawn: 20 equally spaced intervals within the data range:",
"plt.contour(X, Y, Z, 20, cmap='RdGy');",
"Here we chose the RdGy (short for Red-Gray) colormap, a good choice for centered data. Matplotlib has a wide range of colormaps available, which you can easily browse in IPython by doing tab completion on the plt.cm module:\n\nplt.cm.<TAB>\n\nOur plot is looking nicer, but the spaces between the lines may be a bit distracting. We can change this by switching to a filled contour plot using the plt.contourf() function (notice the f at the end), which uses largely the same syntax as plt.contour().\nAdditionally, we will add a plt.colorbar() command, which automatically creates an additional axis with labeled color information for the plot:",
"plt.contourf(X, Y, Z, 20, cmap='RdGy')\nplt.colorbar();",
"The colorbar makes it clear that the black regions are \"peaks\" while the red regions are \"valleys\".\nOne potential issue with this plot is that it is a bit \"splotchy\": the color steps are discrete rather than continuous, which is not always desirable. This could be remedied by setting the number of contours to a very high number, but that makes the plot inefficient to render, since Matplotlib must draw a new polygon for each step in the level. A better way to handle this is to use the plt.imshow() function, which interprets a two-dimensional grid of data as an image.\nThe following code shows this:",
"plt.imshow(Z, extent=[0, 5, 0, 5], origin='lower',\n cmap='RdGy')\nplt.colorbar()\nplt.axis(aspect='image');\n\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bsafdi/NPTFit
|
examples/Example7_Manual_nonPoissonian_Likelihood.ipynb
|
mit
|
[
"Example 7: Manual evaluation of non-Poissonian Likelihood\nIn this example we show how to manually evaluate the non-Poissonian likelihood. This can be used, for example, to interface nptfit with parameter estimation packages other than MultiNest. We also show how to extract the prior cube.\nWe will take exactly the same analysis as considered in the previous example, and show that the likelihood peaks at exactly the same location for the normalisation of the non-Poissonian template.\nNB: This example makes use of the Fermi Data, which needs to already be installed. See Example 1 for details.",
"# Import relevant modules\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport numpy as np\nimport healpy as hp\nimport matplotlib.pyplot as plt\n\nfrom NPTFit import nptfit # module for performing scan\nfrom NPTFit import create_mask as cm # module for creating the mask\nfrom NPTFit import psf_correction as pc # module for determining the PSF correction\nfrom NPTFit import dnds_analysis # module for analysing the output\n\nfrom __future__ import print_function",
"Setup an identical instance of NPTFit to Example 6\nFirstly we initialize an instance of nptfit identical to that used in the previous example.",
"n = nptfit.NPTF(tag='non-Poissonian_Example')\n\nfermi_data = np.load('fermi_data/fermidata_counts.npy').astype(np.int32)\nfermi_exposure = np.load('fermi_data/fermidata_exposure.npy')\nn.load_data(fermi_data, fermi_exposure)\n\nanalysis_mask = cm.make_mask_total(mask_ring = True, inner = 0, outer = 5, ring_b = 90, ring_l = 0)\nn.load_mask(analysis_mask)\n\niso_p = np.load('fermi_data/template_iso.npy')\nn.add_template(iso_p, 'iso_p')\niso_np = np.ones(len(iso_p))\nn.add_template(iso_np, 'iso_np',units='PS')\n\nn.add_poiss_model('iso_p','$A_\\mathrm{iso}$', False, fixed=True, fixed_norm=1.51)\nn.add_non_poiss_model('iso_np',\n ['$A^\\mathrm{ps}_\\mathrm{iso}$','$n_1$','$n_2$','$S_b$'],\n [[-6,1],[2.05,30],[-2,1.95]],\n [True,False,False],\n fixed_params = [[3,172.52]])\n\npc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812)\nf_ary = pc_inst.f_ary\ndf_rho_div_f_ary = pc_inst.df_rho_div_f_ary\n\nn.configure_for_scan(f_ary=f_ary, df_rho_div_f_ary=df_rho_div_f_ary, nexp=1)",
"Evaluate the Likelihood Manually\nAfter configuring for the scan, the instance of nptfit.NPTF now has an associated function ll. This function was passed to MultiNest in the previous example, but we can also manually evaluate it.\nThe log likelihood function is called as: ll(theta), where theta is a flattened array of parameters. In the case above:\n$$ \\theta = \\left[ \\log_{10} \\left( A^\\mathrm{ps}_\\mathrm{iso} \\right), n_1, n_2 \\right] $$\nAs an example we can evaluate it at a few points around the best fit parameters:",
"print('Vary A: ', n.ll([-4.76+0.32,18.26,0.06]), n.ll([-4.76,18.26,0.06]), n.ll([-4.76-0.37,18.26,0.06]))\nprint('Vary n1:', n.ll([-4.76,18.26+7.98,0.06]), n.ll([-4.76,18.26,0.06]), n.ll([-4.76,18.26-9.46,0.06]))\nprint('Vary n2:', n.ll([-4.76,18.26,0.06+0.93]), n.ll([-4.76,18.26,0.06]), n.ll([-4.76,18.26,0.06-1.31]))",
"To make the point clearer we can fix $n_1$ and $n_2$ to their best fit values, and calculate a Test Statistic (TS) array as we vary $\\log_{10} \\left( A^\\mathrm{ps}_\\mathrm{iso} \\right)$. As shown, the likelihood is maximised approximately where MultiNest told us the best fit point for this parameter was.",
"Avals = np.arange(-6.0,-2.0,0.01)\nTSvals_A = np.array([2*(n.ll([-4.76,18.26,0.06])-n.ll([Avals[i],18.26,0.06])) for i in range(len(Avals))])\n\nplt.plot(Avals,TSvals_A,color='black', lw=1.5)\nplt.axvline(-4.76+0.32,ls='dashed',color='black')\nplt.axvline(-4.76,ls='dashed',color='black')\nplt.axvline(-4.76-0.37,ls='dashed',color='black')\nplt.axhline(0,ls='dashed',color='black')\nplt.xlim([-5.5,-4.0])\nplt.ylim([-5.0,15.0])\nplt.xlabel('$A^\\mathrm{ps}_\\mathrm{iso}$')\nplt.ylabel('$\\mathrm{TS}$')\nplt.show()",
"Next we do the same thing for $n_2$. This time we see that this parameter is much more poorly constrained than the value of the normalisation, as the TS is very flat.\nNB: it is important not to evaluate breaks exactly at a value of $n=1$. The reason for this is the analytic form of the likelihood involves $(n-1)^{-1}$.",
"n2vals = np.arange(-1.995,1.945,0.01)\nTSvals_n2 = np.array([2*(n.ll([-4.76,18.26,0.06])-n.ll([-4.76,18.26,n2vals[i]])) for i in range(len(n2vals))])\n\nplt.plot(n2vals,TSvals_n2,color='black', lw=1.5)\nplt.axvline(0.06+0.93,ls='dashed',color='black')\nplt.axvline(0.06,ls='dashed',color='black')\nplt.axvline(0.06-1.31,ls='dashed',color='black')\nplt.axhline(0,ls='dashed',color='black')\nplt.xlim([-2.0,1.5])\nplt.ylim([-5.0,15.0])\nplt.xlabel('$n_2$')\nplt.ylabel('$\\mathrm{TS}$')\nplt.show()",
"In general $\\theta$ will always be a flattened array of the floated parameters. Poissonian parameters always occur first, in the order in which they were added (via add_poiss_model), followed by non-Poissonian parameters in the order they were added (via add_non_poiss_model). To be explicit, if we have $m$ Poissonian templates and $n$ non-Poissonian templates with breaks $\\ell_n$, then:\n$$ \\theta = \\left[ A_\\mathrm{P}^1, \\ldots, A_\\mathrm{P}^m, A_\\mathrm{NP}^1, n_1^1, \\ldots, n_{\\ell_1+1}^1, S_b^{(1)~1}, \\ldots, S_b^{(\\ell_1)~1}, \\ldots, A_\\mathrm{NP}^n, n_1^n, \\ldots, n_{\\ell_n+1}^n, S_b^{(1)~n}, \\ldots, S_b^{(\\ell_n)~n} \\right]\n$$\nFixed parameters are deleted from the list, and any parameter entered with a log flat prior is replaced by $\\log_{10}$ of itself.\nExtract the Prior Cube Manually\nTo extract the prior cube, we use the internal function log_prior_cube. This requires two arguments: 1. cube, the unit cube of dimension equal to the number of floated parameters; and 2. ndim, the number of floated parameters.",
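For illustration, the flattening order can be sketched with a small helper. This is hypothetical code, not part of nptfit (which assembles the vector internally); it builds the full vector before any fixed parameters are deleted:

```python
def flatten_theta(poiss_norms, non_poiss):
    """Concatenate parameters in the conventional order described above:
    Poissonian normalisations first, then for each non-Poissonian
    template its normalisation, slope indices, and breaks."""
    theta = list(poiss_norms)
    for norm, indices, breaks in non_poiss:
        theta += [norm] + list(indices) + list(breaks)
    return theta

# one Poissonian template plus one non-Poissonian template with a single
# break, mirroring this notebook's isotropic example (values from above)
theta = flatten_theta([1.51], [(-4.76, [18.26, 0.06], [172.52])])
```

In the analysis above, $A_\mathrm{iso}$ and $S_b$ were fixed, so deleting them from this vector leaves exactly the three-element `theta` passed to `n.ll` earlier.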
"print(n.log_prior_cube(cube=[1,1,1],ndim=3))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fagonzalezo/is-2017-1
|
assign3.ipynb
|
mit
|
[
"Intelligent Systems Assignment 3\nBayes' net inference\nNames:\nIDs:",
"class Directions:\n NORTH = 'North'\n SOUTH = 'South'\n EAST = 'East'\n WEST = 'West'\n STOP = 'Stop'",
"a. Bayes' net for instant perception and position.\nBuild a Bayes' net that represents the relationships between the random variables. Based on it, write an expression for the joint probability distribution of all the variables.\nb. Probability functions calculated from the instant model.\nAssuming a uniform distribution for the Pacman position probability, write functions to calculate the following probabilities:\ni. $P(X=x|E_{N}=e_{N},E_{S}=e_{S})$",
"def P_1(eps, E_N, E_S):\n '''\n Calculates: P(X=x|E_{N}=e_{N},E_{S}=e_{S})\n Arguments: E_N, E_S \\in {True,False}\n 0 <= eps <= 1 (epsilon)\n Returns: dictionary of type int x int --> float\n '''\n pd = {(x,y):0 for x in range(1,7) for y in range(1,6)}\n return pd\n\nP_1(0, True, False)",
"ii. $P(E_{E}=e_{E}|E_{N}=e_{N},E_{S}=E_{S})$",
"def P_2(eps, E_N, E_S):\n '''\n Calculates: P(E_{E}=e_{E}|E_{N}=e_{N},E_{S}=E_{S})\n Arguments: E_N, E_S \\in {True,False}\n 0 <= eps <= 1\n Returns: dictionary of type (False, True) --> float\n '''\n pd = {True:0, False:0}\n return pd\n\nP_2(0.2, True, False)",
"iii. $P(S)$, where $S\\subseteq{e_{N},e_{S},e_{E},e_{W}}$",
"def P_3(eps, S):\n '''\n Calculates: P(S), where S\\subseteq\\{e_{N},e_{S},e_{E},e_{W}\\}\n Arguments: S a dictionary with keywords in Directions and values in\n {True,False}\n 0 <= eps <= 1\n Returns: float value representing P(S)\n '''\n return 0\n\nP_3(0.3, {Directions.EAST: True, Directions.SOUTH: False})",
"c. Bayes' net for dynamic perception and position.\nNow we will consider a scenario where the Pacman moves a finite number of steps $n$. In this case we have $n$\ndifferent variables for the positions $X_{1},\\dots,X_{n}$, as well as for each one of the perceptions, e.g.\n$E_{N_{1}},\\dots,E_{N_{n}}$ for the north perception. For the initial Pacman position, assume a uniform \ndistribution among the valid positions. Also assume that at each time step the Pacman chooses to move to one of the valid neighbor positions with uniform probability. Draw the corresponding Bayes' net for $n=4$.\nd. Probability functions calculated from the dynamic model.\nAssuming a uniform distribution for the Pacman position probability, write functions to calculate the following probabilities:\ni. $P(X_{4}=x_{4}|E_{1}=e_{1},E_{3}=e_{3})$",
"def P_4(eps, E_1, E_3):\n '''\n Calculates: P(X_{4}=x_{4}|E_{1}=e_{1},E_{3}=e_{3})\n Arguments: E_1, E_3 dictionaries of type Directions --> {True,False}\n 0 <= eps <= 1\n Returns: dictionary of type int x int --> float\n '''\n pd = {(x,y):0 for x in range(1,7) for y in range(1,6)}\n return pd\n\nE_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}\nE_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}\nP_4(0.1, E_1, E_3)",
"ii. $P(X_{2}=x_{2}|E_{2}=e_{2},E_{3}=e_{3},E_{4}=e_{4})$",
"def P_5(eps, E_2, E_3, E_4):\n '''\n Calculates: P(X_{2}=x_{2}|E_{2}=e_{2},E_{3}=e_{3},E_{4}=e_{4})\n Arguments: E_2, E_3, E_4 dictionaries of type Directions --> {True,False}\n 0 <= eps <= 1\n Returns: dictionary of type int x int --> float\n '''\n pd = {(x,y):0 for x in range(1,7) for y in range(1,6)}\n return pd\n\nE_2 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}\nE_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}\nE_4 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}\nP_5(0.1, E_2, E_3, E_4)",
"iii. $P(E_{4}=e_{4}|E_{1}=e_{1},E_{2}=e_{2},E_{3}=e_{3})$",
"def P_6(eps, E_1, E_2, E_3):\n '''\n Calculates: P(E_{4}=e_{4}|E_{1}=e_{1},E_{2}=e_{2},E_{3}=e_{3})\n Arguments: E_1, E_2, E_3 dictionaries of type Directions --> {True,False}\n 0 <= eps <= 1\n Returns: dictionary of type {False, True}^4 --> float\n '''\n pd = {(n, s, e, w): 0 for n in [False, True] for s in [False, True] \n for e in [False, True] for w in [False, True]}\n return pd\n\nE_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}\nE_2 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}\nE_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}\nP_6(0.1, E_1, E_2, E_3)",
"iv. $P(E_{E_{2}}=e_{E_{2}}|E_{N_{2}}=e_{N_{2}},E_{S_{2}}=E_{S_{2}})$",
"def P_7(eps, E_N, E_S):\n '''\n Calculates: P(E_{E_{2}}=e_{E_{2}}|E_{N_{2}}=e_{N_{2}},E_{S_{2}}=E_{S_{2}})\n Arguments: E_N_2, E_S_2 \\in {True,False}\n 0 <= eps <= 1\n Returns: dictionary of type (False, True) --> float\n '''\n pd = {True:0, False:0}\n return pd\n\nP_7(0.1, True, False)",
"Test functions\nYou can use the following functions to test your solutions.",
"def approx_equal(val1, val2):\n return abs(val1-val2) <= 0.00001\n\ndef test_P_1():\n pd = P_1(0.0, True, True)\n assert approx_equal(pd[(2, 1)], 0.1111111111111111)\n assert approx_equal(pd[(3, 1)], 0)\n pd = P_1(0.3, True, False)\n assert approx_equal(pd[(2, 1)], 0.03804347826086956)\n assert approx_equal(pd[(3, 1)], 0.016304347826086956)\n\ndef test_P_2():\n pd = P_2(0.0, True, True)\n assert approx_equal(pd[False], 1.0)\n pd = P_2(0.3, True, False)\n assert approx_equal(pd[False], 0.5514492753623188)\n\ndef test_P_3():\n pd = P_3(0.1, {Directions.EAST: True, Directions.WEST: True})\n assert approx_equal(pd, 0.2299999999999999)\n pd = P_3(0.1, {Directions.EAST: True})\n assert approx_equal(pd, 0.3999999999999999)\n pd = P_3(0.2, {Directions.EAST: False, Directions.WEST: True, Directions.SOUTH: True})\n assert approx_equal(pd, 0.0980000000000000)\n\ndef test_P_4():\n E_1 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: True}\n E_3 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: True}\n pd = P_4(0.0, E_1, E_3)\n assert approx_equal(pd[(6, 3)], 0.1842105263157895)\n assert approx_equal(pd[(4, 3)], 0.0)\n pd = P_4(0.2, E_1, E_3)\n assert approx_equal(pd[(6, 3)], 0.17777843398830864)\n assert approx_equal(pd[(4, 3)], 0.000578430282649176)\n E_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}\n E_3 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}\n pd = P_4(0.0, E_1, E_3)\n assert approx_equal(pd[(6, 2)], 0.3333333333333333)\n assert approx_equal(pd[(4, 3)], 0.0)\n\ndef test_P_5():\n E_2 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}\n E_3 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: False, Directions.WEST: False}\n E_4 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, 
Directions.WEST: False}\n pd = P_5(0, E_2, E_3, E_4)\n assert approx_equal(pd[(2, 5)], 0.5)\n assert approx_equal(pd[(4, 3)], 0.0)\n pd = P_5(0.3, E_2, E_3, E_4)\n assert approx_equal(pd[(2, 5)], 0.1739661245168835)\n assert approx_equal(pd[(4, 3)], 0.0787991740545979)\n\ndef test_P_6():\n E_1 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}\n E_2 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}\n E_3 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}\n pd = P_6(0.2, E_1, E_2, E_3)\n assert approx_equal(pd[(False, False, True, True)], 0.15696739914079486)\n assert approx_equal(pd[(True, True, False, False)], 0.20610191744824477)\n pd = P_6(0., E_1, E_2, E_3)\n assert approx_equal(pd[(False, False, True, True)], 0.5)\n assert approx_equal(pd[(False, True, False, False)], 0.0)\n\ndef test_P_7():\n pd = P_7(0.0, True, False)\n assert approx_equal(pd[False], 0.7142857142857143)\n pd = P_7(0.3, False, False)\n assert approx_equal(pd[False], 0.5023529411764706)\n \ntest_P_1()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/ja/hub/tutorials/bigbigan_with_tf_hub.ipynb
|
apache-2.0
|
[
"Copyright 2019 The TensorFlow Hub Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================",
"Generating Images with BigBiGAN\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n  <td><a target=\"_blank\" href=\"https://www.tensorflow.org/hub/tutorials/bigbigan_with_tf_hub\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">View on TensorFlow.org</a></td>\n  <td> <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/bigbigan_with_tf_hub.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a>\n</td>\n  <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/bigbigan_with_tf_hub.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View on GitHub</a></td>\n  <td> <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/bigbigan_with_tf_hub.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">Download notebook</a>\n</td>\n  <td> <a href=\"https://tfhub.dev/s?q=experts%2Fbert\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\">See TF Hub models</a> </td>\n</table>\n\nThis notebook is a demo for the BigBiGAN models available on <a>TF Hub</a>.\nBigBiGAN extends standard (Big)GANs by adding an encoder module which can be used for unsupervised representation learning. Roughly speaking, the encoder inverts the generator by predicting latents z given real data x. See the BigBiGAN paper on arXiv [1] for more information about these models.\nAfter connecting to a runtime, get started by following these instructions:\n\n(Optional) Update the selected module_path in the first code cell below to load a BigBiGAN generator for a different encoder architecture.\nClick Runtime > Run all to run each cell in order. Afterwards, the outputs, including visualizations of BigBiGAN samples and reconstructions, should automatically appear below.\n\nNote: if you run into any issues, it can help to click Runtime > Restart and run all... to restart your runtime and rerun all cells from scratch.\n[1] Jeff Donahue and Karen Simonyan, \"Large Scale Adversarial Representation Learning\", arxiv:1907.02544 (2019)\nFirst, set the module path. By default we load the BigBiGAN model with the smaller ResNet-50-based encoder from https://tfhub.dev/deepmind/bigbigan-resnet50/1. To load the larger RevNet-50-x4 based model used to achieve the best representation learning results, comment out the active module_path setting and uncomment the other.",
"module_path = 'https://tfhub.dev/deepmind/bigbigan-resnet50/1' # ResNet-50\n# module_path = 'https://tfhub.dev/deepmind/bigbigan-revnet50x4/1' # RevNet-50 x4",
"Setup",
"import io\nimport IPython.display\nimport PIL.Image\nfrom pprint import pformat\n\nimport numpy as np\n\nimport tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\n\nimport tensorflow_hub as hub",
"Define some functions to display images",
"def imgrid(imarray, cols=4, pad=1, padval=255, row_major=True):\n \"\"\"Lays out a [N, H, W, C] image array as a single image grid.\"\"\"\n pad = int(pad)\n if pad < 0:\n raise ValueError('pad must be non-negative')\n cols = int(cols)\n assert cols >= 1\n N, H, W, C = imarray.shape\n rows = N // cols + int(N % cols != 0)\n batch_pad = rows * cols - N\n assert batch_pad >= 0\n post_pad = [batch_pad, pad, pad, 0]\n pad_arg = [[0, p] for p in post_pad]\n imarray = np.pad(imarray, pad_arg, 'constant', constant_values=padval)\n H += pad\n W += pad\n grid = (imarray\n .reshape(rows, cols, H, W, C)\n .transpose(0, 2, 1, 3, 4)\n .reshape(rows*H, cols*W, C))\n if pad:\n grid = grid[:-pad, :-pad]\n return grid\n\ndef interleave(*args):\n \"\"\"Interleaves input arrays of the same shape along the batch axis.\"\"\"\n if not args:\n raise ValueError('At least one argument is required.')\n a0 = args[0]\n if any(a.shape != a0.shape for a in args):\n raise ValueError('All inputs must have the same shape.')\n if not a0.shape:\n raise ValueError('Inputs must have at least one axis.')\n out = np.transpose(args, [1, 0] + list(range(2, len(a0.shape) + 1)))\n out = out.reshape(-1, *a0.shape[1:])\n return out\n\ndef imshow(a, format='png', jpeg_fallback=True):\n \"\"\"Displays an image in the given format.\"\"\"\n a = a.astype(np.uint8)\n data = io.BytesIO()\n PIL.Image.fromarray(a).save(data, format)\n im_data = data.getvalue()\n try:\n disp = IPython.display.display(IPython.display.Image(im_data))\n except IOError:\n if jpeg_fallback and format != 'jpeg':\n print('Warning: image was too large to display in format \"{}\"; '\n 'trying jpeg instead.'.format(format))\n return imshow(a, format='jpeg')\n else:\n raise\n return disp\n\ndef image_to_uint8(x):\n \"\"\"Converts [-1, 1] float array to [0, 255] uint8.\"\"\"\n x = np.asarray(x)\n x = (256. / 2.) * (x + 1.)\n x = np.clip(x, 0, 255)\n x = x.astype(np.uint8)\n return x",
"Load a BigBiGAN TF Hub module and display its available functionality",
"# module = hub.Module(module_path, trainable=True, tags={'train'}) # training\nmodule = hub.Module(module_path) # inference\n\nfor signature in module.get_signature_names():\n print('Signature:', signature)\n print('Inputs:', pformat(module.get_input_info_dict(signature)))\n print('Outputs:', pformat(module.get_output_info_dict(signature)))\n print()",
"Define a wrapper class for convenient access to various functions",
"class BigBiGAN(object):\n\n def __init__(self, module):\n \"\"\"Initialize a BigBiGAN from the given TF Hub module.\"\"\"\n self._module = module\n\n def generate(self, z, upsample=False):\n \"\"\"Run a batch of latents z through the generator to generate images.\n\n Args:\n z: A batch of 120D Gaussian latents, shape [N, 120].\n\n Returns: a batch of generated RGB images, shape [N, 128, 128, 3], range\n [-1, 1].\n \"\"\"\n outputs = self._module(z, signature='generate', as_dict=True)\n return outputs['upsampled' if upsample else 'default']\n\n def make_generator_ph(self):\n \"\"\"Creates a tf.placeholder with the dtype & shape of generator inputs.\"\"\"\n info = self._module.get_input_info_dict('generate')['z']\n return tf.placeholder(dtype=info.dtype, shape=info.get_shape())\n\n def gen_pairs_for_disc(self, z):\n \"\"\"Compute generator input pairs (G(z), z) for discriminator, given z.\n\n Args:\n z: A batch of latents (120D standard Gaussians), shape [N, 120].\n\n Returns: a tuple (G(z), z) of discriminator inputs.\n \"\"\"\n # Downsample 256x256 image x for 128x128 discriminator input.\n x = self.generate(z)\n return x, z\n\n def encode(self, x, return_all_features=False):\n \"\"\"Run a batch of images x through the encoder.\n\n Args:\n x: A batch of data (256x256 RGB images), shape [N, 256, 256, 3], range\n [-1, 1].\n return_all_features: If True, return all features computed by the encoder.\n Otherwise (default) just return a sample z_hat.\n\n Returns: the sample z_hat of shape [N, 120] (or a dict of all features if\n return_all_features).\n \"\"\"\n outputs = self._module(x, signature='encode', as_dict=True)\n return outputs if return_all_features else outputs['z_sample']\n\n def make_encoder_ph(self):\n \"\"\"Creates a tf.placeholder with the dtype & shape of encoder inputs.\"\"\"\n info = self._module.get_input_info_dict('encode')['x']\n return tf.placeholder(dtype=info.dtype, shape=info.get_shape())\n\n def enc_pairs_for_disc(self, x):\n \"\"\"Compute 
encoder input pairs (x, E(x)) for discriminator, given x.\n\n Args:\n x: A batch of data (256x256 RGB images), shape [N, 256, 256, 3], range\n [-1, 1].\n\n Returns: a tuple (downsample(x), E(x)) of discriminator inputs.\n \"\"\"\n # Downsample 256x256 image x for 128x128 discriminator input.\n x_down = tf.nn.avg_pool(x, ksize=2, strides=2, padding='SAME')\n z = self.encode(x)\n return x_down, z\n\n def discriminate(self, x, z):\n \"\"\"Compute the discriminator scores for pairs of data (x, z).\n\n (x, z) must be batches with the same leading batch dimension, and joint\n scores are computed on corresponding pairs x[i] and z[i].\n\n Args:\n x: A batch of data (128x128 RGB images), shape [N, 128, 128, 3], range\n [-1, 1].\n z: A batch of latents (120D standard Gaussians), shape [N, 120].\n\n Returns:\n A dict of scores:\n score_xz: the joint scores for the (x, z) pairs.\n score_x: the unary scores for x only.\n score_z: the unary scores for z only.\n \"\"\"\n inputs = dict(x=x, z=z)\n return self._module(inputs, signature='discriminate', as_dict=True)\n\n def reconstruct_x(self, x, use_sample=True, upsample=False):\n \"\"\"Compute BigBiGAN reconstructions of images x via G(E(x)).\n\n Args:\n x: A batch of data (256x256 RGB images), shape [N, 256, 256, 3], range\n [-1, 1].\n use_sample: takes a sample z_hat ~ E(x). Otherwise, deterministically\n use the mean. (Though a sample z_hat may be far from the mean z,\n typically the resulting recons G(z_hat) and G(z) are very\n similar.\n upsample: if set, upsample the reconstruction to the input resolution\n (256x256). 
Otherwise return the raw lower resolution generator output\n (128x128).\n\n Returns: a batch of recons G(E(x)), shape [N, 256, 256, 3] if\n `upsample`, otherwise [N, 128, 128, 3].\n \"\"\"\n if use_sample:\n z = self.encode(x)\n else:\n z = self.encode(x, return_all_features=True)['z_mean']\n recons = self.generate(z, upsample=upsample)\n return recons\n\n def losses(self, x, z):\n \"\"\"Compute per-module BigBiGAN losses given data & latent sample batches.\n\n Args:\n x: A batch of data (256x256 RGB images), shape [N, 256, 256, 3], range\n [-1, 1].\n z: A batch of latents (120D standard Gaussians), shape [M, 120].\n\n For the original BigBiGAN losses, pass batches of size N=M=2048, with z's\n sampled from a 120D standard Gaussian (e.g., np.random.randn(2048, 120)),\n and x's sampled from the ImageNet (ILSVRC2012) training set with the\n \"ResNet-style\" preprocessing from:\n\n https://github.com/tensorflow/tpu/blob/master/models/official/resnet/resnet_preprocessing.py\n\n Returns:\n A dict of per-module losses:\n disc: loss for the discriminator.\n enc: loss for the encoder.\n gen: loss for the generator.\n \"\"\"\n # Compute discriminator scores on (x, E(x)) pairs.\n # Downsample 256x256 image x for 128x128 discriminator input.\n scores_enc_x_dict = self.discriminate(*self.enc_pairs_for_disc(x))\n scores_enc_x = tf.concat([scores_enc_x_dict['score_xz'],\n scores_enc_x_dict['score_x'],\n scores_enc_x_dict['score_z']], axis=0)\n\n # Compute discriminator scores on (G(z), z) pairs.\n scores_gen_z_dict = self.discriminate(*self.gen_pairs_for_disc(z))\n scores_gen_z = tf.concat([scores_gen_z_dict['score_xz'],\n scores_gen_z_dict['score_x'],\n scores_gen_z_dict['score_z']], axis=0)\n\n disc_loss_enc_x = tf.reduce_mean(tf.nn.relu(1. - scores_enc_x))\n disc_loss_gen_z = tf.reduce_mean(tf.nn.relu(1. 
+ scores_gen_z))\n disc_loss = disc_loss_enc_x + disc_loss_gen_z\n\n enc_loss = tf.reduce_mean(scores_enc_x)\n gen_loss = tf.reduce_mean(-scores_gen_z)\n\n return dict(disc=disc_loss, enc=enc_loss, gen=gen_loss)",
"Create tensors to be used later for computing samples, reconstructions, discriminator scores, and losses",
"bigbigan = BigBiGAN(module)\n\n# Make input placeholders for x (`enc_ph`) and z (`gen_ph`).\nenc_ph = bigbigan.make_encoder_ph()\ngen_ph = bigbigan.make_generator_ph()\n\n# Compute samples G(z) from encoder input z (`gen_ph`).\ngen_samples = bigbigan.generate(gen_ph)\n\n# Compute reconstructions G(E(x)) of encoder input x (`enc_ph`).\nrecon_x = bigbigan.reconstruct_x(enc_ph, upsample=True)\n\n# Compute encoder features used for representation learning evaluations given\n# encoder input x (`enc_ph`).\nenc_features = bigbigan.encode(enc_ph, return_all_features=True)\n\n# Compute discriminator scores for encoder pairs (x, E(x)) given x (`enc_ph`)\n# and generator pairs (G(z), z) given z (`gen_ph`).\ndisc_scores_enc = bigbigan.discriminate(*bigbigan.enc_pairs_for_disc(enc_ph))\ndisc_scores_gen = bigbigan.discriminate(*bigbigan.gen_pairs_for_disc(gen_ph))\n\n# Compute losses.\nlosses = bigbigan.losses(enc_ph, gen_ph)",
"Create a TensorFlow session and initialize variables",
"init = tf.global_variables_initializer()\nsess = tf.Session()\nsess.run(init)",
"Generator samples\nFirst, we'll visualize samples from the pretrained BigBiGAN generator by sampling generator inputs z from a standard Gaussian (via np.random.randn) and displaying the images it produces. So far we're not going beyond the capabilities of a standard GAN: we're just using the generator (and ignoring the encoder) for now.",
"feed_dict = {gen_ph: np.random.randn(32, 120)}\n_out_samples = sess.run(gen_samples, feed_dict=feed_dict)\nprint('samples shape:', _out_samples.shape)\nimshow(imgrid(image_to_uint8(_out_samples), cols=4))",
"Load test_images from the TF-Flowers dataset\nBigBiGAN is trained on ImageNet, but as it's too large to work with in this demo, we use the smaller TF-Flowers [1] dataset as our inputs for visualizing reconstructions and computing encoder features.\nIn this cell we load TF-Flowers (downloading the dataset if needed) and store a fixed batch of 256x256 RGB image samples in a NumPy array test_images.\n[1] https://www.tensorflow.org/datasets/catalog/tf_flowers",
"def get_flowers_data():\n \"\"\"Returns a [32, 256, 256, 3] np.array of preprocessed TF-Flowers samples.\"\"\"\n import tensorflow_datasets as tfds\n ds, info = tfds.load('tf_flowers', split='train', with_info=True)\n\n # Just get the images themselves as we don't need labels for this demo.\n ds = ds.map(lambda x: x['image'])\n\n # Filter out small images (with minor edge length <256).\n ds = ds.filter(lambda x: tf.reduce_min(tf.shape(x)[:2]) >= 256)\n\n # Take the center square crop of the image and resize to 256x256.\n def crop_and_resize(image):\n imsize = tf.shape(image)[:2]\n minor_edge = tf.reduce_min(imsize)\n start = (imsize - minor_edge) // 2\n stop = start + minor_edge\n cropped_image = image[start[0] : stop[0], start[1] : stop[1]]\n resized_image = tf.image.resize_bicubic([cropped_image], [256, 256])[0]\n return resized_image\n ds = ds.map(crop_and_resize)\n\n # Convert images from [0, 255] uint8 to [-1, 1] float32.\n ds = ds.map(lambda image: tf.cast(image, tf.float32) / (255. / 2.) - 1)\n\n # Take the first 32 samples.\n ds = ds.take(32)\n\n return np.array(list(tfds.as_numpy(ds)))\n\ntest_images = get_flowers_data()",
"Reconstructions\nNow we visualize BigBiGAN reconstructions by passing real images through the encoder and back through the generator, computing G(E(x)) given images x. Below, input images x are shown in the left column, and the corresponding reconstructions are shown on the right.\nNote that reconstructions are not pixel-perfect matches to the input images; rather, they tend to capture the higher-level semantic content of the input while \"forgetting\" most of the low-level detail. This suggests the BigBiGAN encoder may learn to capture the types of high-level semantic information about images that we'd like to see in a representation learning approach.\nAlso note that the raw reconstructions of the 256x256 input images are at the lower resolution produced by the generator, 128x128. We upsample them for visualization purposes.",
"test_images_batch = test_images[:16]\n_out_recons = sess.run(recon_x, feed_dict={enc_ph: test_images_batch})\nprint('reconstructions shape:', _out_recons.shape)\n\ninputs_and_recons = interleave(test_images_batch, _out_recons)\nprint('inputs_and_recons shape:', inputs_and_recons.shape)\nimshow(imgrid(image_to_uint8(inputs_and_recons), cols=2))",
"Encoder features\nHere we demonstrate how to compute features from the encoder used for standard representation learning evaluations.\nThese features could be used in a linear or nearest-neighbors-based classifier. We include the standard feature taken after global average pooling (key avepool_feat) as well as the larger \"BN+CReLU\" feature (key bn_crelu_feat) used to achieve the best results.",
"_out_features = sess.run(enc_features, feed_dict={enc_ph: test_images_batch})\nprint('AvePool features shape:', _out_features['avepool_feat'].shape)\nprint('BN+CReLU features shape:', _out_features['bn_crelu_feat'].shape)",
"Discriminator scores and losses\nFinally, we'll compute the discriminator scores and losses on batches of encoder and generator pairs. These losses could be passed into an optimizer to train BigBiGAN.\nWe use the batch of images above as the encoder inputs x, computing the encoder score as D(x, E(x)). For the generator inputs we sample z from a 120D standard Gaussian via np.random.randn, computing the generator score as D(G(z), z).\nThe discriminator predicts a joint score score_xz for the (x, z) pairs as well as unary scores score_x and score_z for x and z alone, respectively. It's trained to give high (positive) scores to encoder pairs and low (negative) scores to generator pairs. This mostly holds below, though the unary score_z is negative in both cases, indicating that the encoder outputs E(x) resemble actual samples from a Gaussian.",
"feed_dict = {enc_ph: test_images, gen_ph: np.random.randn(32, 120)}\n_out_scores_enc, _out_scores_gen, _out_losses = sess.run(\n [disc_scores_enc, disc_scores_gen, losses], feed_dict=feed_dict)\nprint('Encoder scores:', {k: v.mean() for k, v in _out_scores_enc.items()})\nprint('Generator scores:', {k: v.mean() for k, v in _out_scores_gen.items()})\nprint('Losses:', _out_losses)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
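The `image_to_uint8` helper in the BigBiGAN notebook above maps generator outputs in [-1, 1] to displayable [0, 255] uint8 pixels. A minimal standalone sketch of that conversion (NumPy only, no TF Hub module required):

```python
import numpy as np

def image_to_uint8(x):
    """Convert a [-1, 1] float array to a [0, 255] uint8 array."""
    x = np.asarray(x)
    x = (256. / 2.) * (x + 1.)  # map [-1, 1] onto [0, 256]
    x = np.clip(x, 0, 255)      # saturate the out-of-range endpoint 256
    return x.astype(np.uint8)

print(image_to_uint8(np.array([-1.0, 0.0, 1.0])))  # [  0 128 255]
```

Note that -1 maps to 0, 0 maps to the mid-grey 128, and 1 maps to 256 before being clipped to 255, which is why the clip is needed.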
martinjrobins/hobo
|
examples/interfaces/statsmodels-arima.ipynb
|
bsd-3-clause
|
[
"Interface to statsmodels: ARIMA time series models\nThis notebook provides a short exposition of how to interface with the cornucopia of time series models provided by the statsmodels package. Here, we illustrate how to fit the logistic ODE model when the errors are described by ARIMA models.",
"import pints\nimport pints.toy as toy\nimport pints.plot\nimport numpy as np\nimport matplotlib.pyplot as plt",
"ARMA errors\nWe assume that the observed data $y(t)$ follows\n$$y(t)= f(t; \\theta) + \\epsilon(t),$$\nwhere $f(t; \\theta)$ is the logistic model solution.\nUnder the ARMA(1,1) noise model, the error terms $\\epsilon(t)$ have 1 moving average term and 1 autoregressive term. Therefore, \n$$\\epsilon(t) = \\rho \\epsilon(t-1) + \\nu(t) + \\phi \\nu(t-1),$$\nwhere the white noise term $\\nu(t) \\overset{i.i.d.}{\\sim} \\mathcal{N}(0, \\sigma \\sqrt{(1 - \\rho^2) / (1 + 2 \\rho \\phi + \\phi^2)})$. The noise process standard deviation is such that the marginal distribution of $\\epsilon$ is\n$$\\epsilon\\sim\\mathcal{N}(0, \\sigma).$$\nThe ARMA(1,1) noise model is available in Pints using pints.ARMA11LogLikelihood. As before, the code below shows how to generate a time series with ARMA(1,1) noise and perform Bayesian inference using the Kalman filter provided by the statsmodels ARIMA module.\nNote that, whilst we do not show how to do this, it is possible to use the score function of the statsmodels package to calculate sensitivities of the log-likelihood.",
"# Load a forward model\nmodel = toy.LogisticModel()\n\n# Create some toy data\nreal_parameters = [0.015, 500]\ntimes = np.linspace(0, 1000, 1000)\norg_values = model.simulate(real_parameters, times)\n\n# Add noise\nnoise = 10\nrho = 0.9\nphi = 0.95\n## makes sigma comparable with estimate from statsmodel\nerrors = pints.noise.arma11(rho, phi, noise / np.sqrt((1-rho**2) / (1 + 2 * rho * phi + phi**2)), len(org_values))\nvalues = org_values + errors\n\n# Show the noisy data\nplt.figure()\nplt.plot(times, org_values)\nplt.plot(times, values)\nplt.xlabel('time')\nplt.ylabel('y')\nplt.legend(['true', 'observed'])\nplt.show()",
"Perform Bayesian inference using statsmodels' ARIMA Kalman filter\nHere, we fit an ARMA(1,1) model in a Bayesian framework. Note, this is different from the fit functionality in the statsmodels package, which estimates maximum likelihood parameter values.",
"from statsmodels.tsa.arima.model import ARIMA\n\nmodel = toy.LogisticModel()\n\nclass ARIMALogLikelihood(pints.ProblemLogLikelihood):\n def __init__(self, problem, arima_order):\n super(ARIMALogLikelihood, self).__init__(problem)\n self._nt = len(self._times) - 1\n self._no = problem.n_outputs()\n \n if len(arima_order) != 3:\n raise ValueError(\"ARIMA (p, d, q) orders must be tuple of length 3.\")\n self._arima_order = arima_order\n p = arima_order[0]\n d = arima_order[1]\n q = arima_order[2]\n self._p = p\n self._q = q\n self._d = d\n \n self._n_parameters = problem.n_parameters() + (p + q + 1) * self._no\n self._m = (self._p + self._q + 1) * self._no\n \n def __call__(self, x):\n # convert x to list to make it easier to append\n # nuisance params\n x = x.tolist()\n # p AR params; q MA params\n m = self._m\n \n # extract noise model params\n parameters = x[-m:]\n sol = self._problem.evaluate(x[:-m])\n model = ARIMA(endog=self._values,\n order=self._arima_order,\n exog=sol)\n # in statsmodels, parameters are variances\n # rather than std. 
deviations, so square\n sigma2 = parameters[-1]**2\n parameters = parameters[:-1] + [sigma2]\n \n # first param is trend (if model not differenced),\n # second is coefficient on ODE soln\n # see model.param_names\n if self._d == 0:\n full_params = [0, 1] + parameters\n else:\n full_params = [1] + parameters \n return model.loglike(full_params)\n \n# Create an object with links to the model and time series\nproblem = pints.SingleOutputProblem(model, times, values)\n\n# Create a log-likelihood function (adds an extra parameter!)\nlog_likelihood = ARIMALogLikelihood(problem, arima_order=(1, 0, 1))\n\n# Create a uniform prior over both the parameters and the new noise variable\nlog_prior = pints.UniformLogPrior(\n [0.01, 400, 0, 0, noise * 0.1],\n [0.02, 600, 1, 1, noise * 100],\n)\n\n# Create a posterior log-likelihood (log(likelihood * prior))\nlog_posterior = pints.LogPosterior(log_likelihood, log_prior)\n\n# Choose starting points for 3 mcmc chains\nreal_parameters = np.array(real_parameters + [rho, phi, 10])\nxs = [\n real_parameters * 1.05,\n real_parameters * 1,\n real_parameters * 1.025\n]\n\n# Create mcmc routine\nmcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.HaarioBardenetACMC)\n\n# Add stopping criterion\nmcmc.set_max_iterations(4000)\n\n# Disable logging\nmcmc.set_log_to_screen(False)\n\n# Run!\nprint('Running...')\nchains = mcmc.run()\nprint('Done!')",
"Look at results.",
"# Show traces and histograms\npints.plot.trace(chains,\n ref_parameters=real_parameters,\n parameter_names=[r'$r$', r'$k$', r'$\\rho$', r'$\\phi$', r'$\\sigma$'])\n\n# Discard warm up\nchains = chains[:, 2000:, :]\n\n# Look at distribution in chain 0\npints.plot.pairwise(chains[0],\n kde=False,\n ref_parameters=real_parameters,\n parameter_names=[r'$r$', r'$k$', r'$\\rho$', r'$\\phi$', r'$\\sigma$'])\n\n# Show graphs\nplt.show()",
"Look at results. Note that 'sigma' will be different to the value used to generate the data, due to a different definition.",
"results = pints.MCMCSummary(chains=chains,\n parameter_names=[\"r\", \"k\", \"rho\", \"phi\", \"sigma\"])\nprint(results)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
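The scaling applied in the `pints.noise.arma11` call above follows directly from the stationary variance of an ARMA(1,1) process. A NumPy-only sketch (the `arma11_noise` name is ours for illustration, not the Pints API) that generates errors with the recursion from the notebook and scales the white noise so the marginal standard deviation is approximately sigma:

```python
import numpy as np

def arma11_noise(rho, phi, sigma, n, seed=None):
    """ARMA(1,1) errors eps(t) = rho*eps(t-1) + nu(t) + phi*nu(t-1),
    with the white-noise std chosen so sd(eps) ~= sigma at stationarity."""
    rng = np.random.default_rng(seed)
    # Stationary variance: var(eps) = sd_nu**2 * (1 + 2*rho*phi + phi**2) / (1 - rho**2),
    # so invert that relation to pick the white-noise std.
    sd_nu = sigma * np.sqrt((1 - rho**2) / (1 + 2 * rho * phi + phi**2))
    nu = rng.normal(0.0, sd_nu, n + 1)
    eps = np.zeros(n + 1)
    for t in range(1, n + 1):
        eps[t] = rho * eps[t - 1] + nu[t] + phi * nu[t - 1]
    return eps[1:]

errors = arma11_noise(rho=0.9, phi=0.95, sigma=10.0, n=200_000, seed=0)
print(errors.std())  # close to the target sigma of 10
```

With the notebook's values rho=0.9 and phi=0.95, the empirical standard deviation of a long realization should sit near 10, matching the intended marginal N(0, sigma).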
hunterherrin/phys202-2015-work
|
assignments/assignment03/NumpyEx03.ipynb
|
mit
|
[
"Numpy Exercise 3\nImports",
"import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom math import exp\n\n\nimport antipackage\nimport github.ellisonbg.misc.vizarray as va",
"Geometric Brownian motion\nHere is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.",
"def brownian(maxt, n):\n \"\"\"Return one realization of a Brownian (Wiener) process with n steps and a max time of t.\"\"\"\n t = np.linspace(0.0,maxt,n)\n h = t[1]-t[0]\n Z = np.random.normal(0.0,1.0,n-1)\n dW = np.sqrt(h)*Z\n W = np.zeros(n)\n W[1:] = dW.cumsum()\n return t, W",
"Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.",
"t, W = brownian(1.0, 1000)\n\n\n\nassert isinstance(t, np.ndarray)\nassert isinstance(W, np.ndarray)\nassert t.dtype==np.dtype(float)\nassert W.dtype==np.dtype(float)\nassert len(t)==len(W)==1000",
"Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.",
"plt.plot(t, W)\nplt.xlabel('t')\nplt.ylabel('W(t)')\n\n\nassert True # this is for grading",
"Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.",
"dW = np.diff(W)\nprint(np.mean(dW))\nprint(np.std(dW))\n\nassert len(dW)==len(W)-1\nassert dW.dtype==np.dtype(float)",
"Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation:\n$$\nX(t) = X_0 e^{((\\mu - \\sigma^2/2)t + \\sigma W(t))}\n$$\nUse Numpy ufuncs and no loops in your function.",
"def geo_brownian(t, W, X0, mu, sigma):\n \"\"\"Return X(t) for geometric Brownian motion with drift mu and volatility sigma.\"\"\"\n y = (mu - (sigma**2)/2)*t + sigma*W\n b = X0*np.exp(y)\n return t, b\n\nassert True # leave this for grading",
"Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\\mu=0.5$ and $\\sigma=0.3$ with the Wiener process you computed above.\nVisualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.",
"q, r = geo_brownian(t, W, 1.0, 0.5, 0.3)\nplt.plot(q, r)\nplt.xlabel('t')\nplt.ylabel('X(t)')\n\nassert True # leave this for grading"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
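The Wiener-process exercise above rests on the fact that the increments dW are i.i.d. N(0, h), with h the step size. A seeded sketch that reproduces the notebook's `brownian` function and checks the increment statistics:

```python
import numpy as np

def brownian(maxt, n, seed=None):
    """One realization of a Wiener process with n steps and max time maxt."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, maxt, n)
    h = t[1] - t[0]                                 # step size
    dW = np.sqrt(h) * rng.normal(0.0, 1.0, n - 1)   # increments ~ N(0, h)
    W = np.zeros(n)
    W[1:] = dW.cumsum()                             # W(0) = 0, then accumulate
    return t, W

t, W = brownian(1.0, 100_000, seed=42)
dW = np.diff(W)
print(dW.mean(), dW.std())  # mean near 0, std near sqrt(h)
```

With 100,000 steps the sample mean of dW should be indistinguishable from 0 and the sample standard deviation very close to sqrt(h).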
gprMax/gprMax
|
tools/Jupyter_notebooks/example_Ascan_2D.ipynb
|
gpl-3.0
|
[
"A-scan from a metal cylinder (2D)\nThis example is the GPR modelling equivalent of 'Hello World'! It demonstrates how to simulate a single trace (A-scan) from a metal cylinder buried in a dielectric half-space. The input needed to create the model is:\nmy_cylinder_Ascan_2D.in",
"%%writefile ../../user_models/cylinder_Ascan_2D.in\n#title: A-scan from a metal cylinder buried in a dielectric half-space\n#domain: 0.240 0.210 0.002\n#dx_dy_dz: 0.002 0.002 0.002\n#time_window: 3e-9\n\n#material: 6 0 1 0 half_space\n\n#waveform: ricker 1 1.5e9 my_ricker\n#hertzian_dipole: z 0.100 0.170 0 my_ricker\n#rx: 0.140 0.170 0\n\n#box: 0 0 0 0.240 0.170 0.002 half_space\n#cylinder: 0.120 0.080 0 0.120 0.080 0.002 0.010 pec\n\n#geometry_view: 0 0 0 0.240 0.210 0.002 0.002 0.002 0.002 cylinder_half_space n",
"Geometry of a metal cylinder buried in a dielectric half-space\n<img style=\"float: left\" src=\"cylinder_half_space_geo.png\" width=\"50%\" height=\"50%\" />\nThe geometry of the scenario is straightforward - the transparent area around the boundary of the domain represents the PML region. The red cell is the source and the blue cell is the receiver.\nFor this initial example a detailed description of what each command in the input file does and why each command was used is given. The following steps explain the process of building the input file:\n1. Determine the constitutive parameters for the materials\nThere will be three different materials in the model representing air, the dielectric half-space, and the metal cylinder. Air (free space) already exists as a built-in material in gprMax which can be accessed using the free_space identifier. The metal cylinder will be modelled as a Perfect Electric Conductor, which again exists as a built-in material in gprMax and can be accessed using the pec identifier. So the only material which has to be defined is for the dielectric half-space. It is a non-magnetic material, i.e. $\\mu_r=1$ and $\\sigma_*=0$ and with a relative permittivity of six, $\\epsilon_r=6$, and zero conductivity, $\\sigma=0$. The identifier half_space will be used.\n#material: 6 0 1 0 half_space\n\n2. Determine the source type and excitation frequency\nThese should generally be known, often based on the GPR system or scenario being modelled. Low frequencies are used where significant penetration depth is important, whereas high frequencies are used where less penetration and better resolution are required. 
In this case a theoretical Hertzian dipole source fed with a Ricker waveform with a centre frequency of $f_c=1.5~\\textrm{GHz}$ will be used to simulate the GPR antenna (see the bowtie antenna example model for how to include a model of the actual GPR antenna in the simulation).\n#waveform: ricker 1 1.5e9 my_ricker\n#hertzian_dipole: z 0.100 0.170 0 my_ricker\n\nThe Ricker waveform is created with the #waveform command, specifying an amplitude of one, centre frequency of 1.5 GHz and picking an arbitrary identifier of my_ricker. The Hertzian dipole source is created using the #hertzian_dipole command, specifying a z direction polarisation (perpendicular to the survey direction if a B-scan were being created), location on the surface of the slab, and using the Ricker waveform already created.\n3. Calculate a spatial resolution and domain size\nIn the guidance section it was stated that a good rule-of-thumb was that the spatial resolution should be one tenth of the smallest wavelength present in the model. To determine the smallest wavelength, the highest frequency and lowest velocity present in the model are required. The highest frequency is not the centre frequency of the Ricker waveform! \nYou can use the following code to plot builtin waveforms and their FFTs.",
"%matplotlib inline\nfrom gprMax.waveforms import Waveform\nfrom tools.plot_source_wave import check_timewindow, mpl_plot\n\nw = Waveform()\nw.type = 'ricker'\nw.amp = 1\nw.freq = 1.5e9\ntimewindow = 3e-9\ndt = 1.926e-12\n\ntimewindow, iterations = check_timewindow(timewindow, dt)\nplt = mpl_plot(w, timewindow, dt, iterations, fft=True)",
"By examining the spectrum of a Ricker waveform it is evident much higher frequencies are present, i.e. at a level -40dB from the centre frequency, frequencies 2-3 times as high are present. In this case the highest significant frequency present in the model is likely to be around 4 GHz. To calculate the wavelength at 4 GHz in the half-space (which has the lowest velocity) use:\n$$\\lambda = \\frac{c}{f \\sqrt{\\epsilon_r}}$$",
"from math import sqrt\n\n# Speed of light in vacuum (m/s)\nc = 299792458\n\n# Highest relative permittivity present in model\ner = 6\n\n# Maximum frequency present in model\nfmax = 4e9\n\n# Minimum wavelength\nwmin = c / (fmax * sqrt(er))\n\n# Maximum spatial resolution\ndmin = wmin / 10\n\nprint('Minimum wavelength: {:g} m'.format(wmin))\nprint('Maximum spatial resolution: {:g} m'.format(dmin))",
"This would give a minimum spatial resolution of 3 mm. However, the diameter of the cylinder is 20 mm so would be resolved to 7 cells. Therefore a better choice would be 2 mm which resolves the diameter of the rebar to 10 cells.\n#dx_dy_dz: 0.002 0.002 0.002\n\nThe domain size should be enough to enclose the volume of interest, plus allow 10 cells (if using the default value) for the PML absorbing boundary conditions and approximately another 10 cells of between the PML and any objects of interest. In this case the plan is to take a B-scan of the scenario (in the next example) so the domain should be large enough to do that. Although this is a 2D model one cell must be specified in the infinite direction (in this case the z direction) of the domain.\n#domain: 0.240 0.210 0.002\n\n4. Choose a time window\nIt is desired to see the reflection from the cylinder, therefore the time window must be long enough to allow the electromagnetic waves to propagate from the source through the half-space to the cylinder and be reflected back to the receiver.",
"d = 0.090\nt = (2 * d) / (c / sqrt(6))\nprint('Minimum time window: {:g} s'.format(t))",
"This is the minimum time required, but the source waveform has a width of 1.2 ns, to allow for the entire source waveform to be reflected back to the receiver an initial time window of 3 ns will be tested.\n#time_window: 3e-9\n\nThe time step required for the model is automatically calculated using the CFL condition (for this case in 2D).\n5. Create the objects\nNow physical objects can be created for the half-space and the cylinder. First the #box command will be used to create the half-space and then the #cylinder command will be given which will overwrite the properties of the half-space with those of the cylinder at the location of the cylinder.\n#box: 0 0 0 0.240 0.170 0.002 half_space\n#cylinder: 0.120 0.080 0 0.120 0.080 0.002 0.010 pec\n\nRun the model\nYou can now run the model using:\npython -m gprMax user_models/cylinder_Ascan_2D.in\n\nTip: You can use the --geometry-only command line argument to build a model and produce any geometry views but not run the simulation. This option is useful for checking the geometry of the model is correct.",
"import os\nfrom gprMax.gprMax import api\n\nfilename = os.path.join(os.pardir, os.pardir, 'user_models', 'cylinder_Ascan_2D.in')\napi(filename, n=1, geometry_only=False)",
"View the results\nPlot the A-scan\nYou should have produced an output file cylinder_Ascan_2D.out. You can view the results using:\npython -m tools.plot_Ascan user_models/cylinder_Ascan_2D.out\n\nYou can use the following code to experiment with plotting different field/current components.",
"%matplotlib inline\nimport os\nfrom gprMax.receivers import Rx\nfrom tools.plot_Ascan import mpl_plot\n\nfilename = os.path.join(os.pardir, os.pardir, 'user_models', 'cylinder_Ascan_2D.out')\noutputs = Rx.defaultoutputs\n#outputs = ['Ez']\nplt = mpl_plot(filename, outputs, fft=False)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
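The time-window reasoning in the gprMax cells above (two-way travel from the source to the cylinder and back, through a half-space with relative permittivity 6) can be sketched as a standalone helper. This is an illustrative function of my own, not part of gprMax; it simply restates t = 2d / (c / sqrt(eps_r)):

```python
import math

C0 = 299792458.0  # free-space speed of light in m/s

def min_time_window(depth_m, rel_permittivity):
    """Two-way travel time to a target at depth_m and back.

    In a lossless dielectric the wave speed is c / sqrt(eps_r), so the
    minimum time window is the round-trip distance divided by that speed.
    """
    wave_speed = C0 / math.sqrt(rel_permittivity)
    return 2.0 * depth_m / wave_speed

# Cylinder buried 90 mm deep in a half-space with eps_r = 6, as in the example
t_min = min_time_window(0.090, 6)
```

This gives roughly 1.47 ns, which is why 3 ns (the minimum plus the 1.2 ns source width and some margin) is a comfortable first choice.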
ingmarschuster/distributions
|
Transforms_demo.ipynb
|
lgpl-3.0
|
[
"Ingmar's change-of-variables code\nFirst off, some plotting code",
"from __future__ import division, print_function, absolute_import\n\nimport numpy as np\nimport scipy as sp\nimport scipy.stats as stats\n\nfrom numpy import exp, log, sqrt\nfrom scipy.misc import logsumexp\n\nimport distributions as dist, distributions.transform as tr\n\nimport matplotlib.pyplot as plt\n\ndef apply_to_mg(func, *mg):\n #apply a function to points on a meshgrid\n x = np.vstack([e.flat for e in mg]).T\n return np.array([func(i) for i in x]).reshape(mg[0].shape)\n\ndef cont(f, coord, grid_density=100):\n fig = plt.figure()\n xx = np.linspace(coord[0][0], coord[0][1], grid_density)\n yy = np.linspace(coord[1][0], coord[1][1], grid_density)\n X, Y = np.meshgrid(xx,yy)\n Z = apply_to_mg(f, X, Y)\n plt.contour(X,Y,exp(Z))\n\ndef visualize(f, xin, yin, coord):\n fig = plt.figure()\n \n #\n plt.scatter(xin, yin)\n xx = np.linspace(coord[0][0], coord[0][1],100)\n yy = np.linspace(coord[1][0], coord[1][1],100)\n X, Y = np.meshgrid(xx,yy)\n Z = apply_to_mg(f, X, Y)\n plt.contour(X,Y,exp(Z))\n\ndef vis_dist(d, nsamps, coord):\n samps = d.rvs(nsamps).T\n visualize(d.logpdf, samps[0], samps[1], coord)\n \n try:\n for x in [np.array((0,8)), np.array((-1,10))]:\n g = d.logpdf_grad(x)\n print('d',g)\n g = x+g\n\n plt.arrow(x[0],x[1], g[0], g[1], head_width=0.05, head_length=0.1, fc='k', ec='k')\n except:\n pass\n plt.show()",
"Now let's separate a 2D Gaussian into 2 or 4 modes",
"vis_dist(tr.Separate(dist.mvnorm(np.zeros(2), np.eye(2)), [0, ], 0., 0.3), 1000, [[-2,2], [-2,2]])\nvis_dist(tr.Separate(dist.mvnorm(np.zeros(2), np.eye(2)), [0, 1], 0., 0.3), 1000, [[-2,2], [-2,2]])",
"Playing around with the power parameter of the Separate transform increases or decreases the separation",
"vis_dist(tr.Separate(dist.mvnorm(np.zeros(2), np.eye(2)), [0, 1], 0., 0.3), 1000, [[-2,2], [-2,2]])\nvis_dist(tr.Separate(dist.mvnorm(np.zeros(2), np.eye(2)), [0, 1], 0., 0.5), 1000, [[-2,2], [-2,2]])",
"And the transformation is fully invertible: we can get back to the Gaussian we started with (because of a bug in the implementation, the contours are messed up for the second plot)",
"np.random.seed(1)\nplt.scatter(*dist.mvnorm(np.zeros(2), np.eye(2)).rvs(100).T)\nnp.random.seed(1)\nvis_dist(tr.Separate(tr.Separate(dist.mvnorm(np.zeros(2), np.eye(2)), [0, 1], 0., 0.3), [0, 1], 0., 1./0.3), 100, [[-2,2], [-2,2]])\nnp.random.seed()",
"Demonstration of some of the other implemented transforms (Softplus, Power, multiplication of different dimensions)",
"vis_dist(tr.Softplus(dist.mvnorm(np.zeros(2), np.eye(2)), [0, 1]), 1000, [[-2,2], [-2,2]])\nvis_dist(tr.Power(dist.mvnorm(np.zeros(2), np.eye(2)), np.array([1]), 1.5, 0, 2), 1000, [[-7,7], [-5,10]])\nvis_dist(tr.TimesFirst(dist.mvnorm(np.zeros(2), np.eye(2)), [ 1]), 1000, [[-4,4], [-2,2]])\n#vis_dist(tr.DivByFirst(dist.mvnorm(np.zeros(2), np.eye(2)), [ 1]), 1000, [[-2,2], [-2,2]])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
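The transforms demonstrated above all rest on the change-of-variables rule p_Y(y) = p_X(g⁻¹(y)) · |d g⁻¹/dy|. A minimal stdlib-only sketch (independent of the `distributions` package; all function names here are my own) applies it to a softplus-transformed standard normal and checks that the resulting density still integrates to one:

```python
import math

def normal_logpdf(x):
    # log density of a standard normal
    return -0.5 * math.log(2 * math.pi) - 0.5 * x * x

def softplus(x):
    # forward transform g(x) = log(1 + e^x), mapping R onto (0, inf)
    return math.log1p(math.exp(x))

def softplus_inv(y):
    # inverse transform g^{-1}(y) = log(e^y - 1), defined for y > 0
    return math.log(math.expm1(y))

def transformed_logpdf(y):
    """log p_Y(y) for Y = softplus(X) with X ~ N(0, 1).

    Change of variables: p_Y(y) = p_X(g^{-1}(y)) * |d g^{-1}/dy|,
    and d/dy log(e^y - 1) = e^y / (e^y - 1).
    """
    x = softplus_inv(y)
    log_jacobian = y - math.log(math.expm1(y))  # log of e^y / (e^y - 1)
    return normal_logpdf(x) + log_jacobian

# sanity check: numerically integrate p_Y over (0, 12]; mass beyond is negligible
step = 1e-3
total = sum(math.exp(transformed_logpdf(1e-4 + i * step)) * step
            for i in range(12000))
```

Working in log space, as the `logpdf` methods above do, avoids underflow when the Jacobian term becomes large near the boundary of the transformed support.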
christinahedges/PyKE
|
docs/source/tutorials/ipython_notebooks/whatsnew31.ipynb
|
mit
|
[
"What's new in PyKE 3.1?\nUtility functions\nPyKE has included two convenience functions to convert between module.output and channel and vice versa:",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfrom pyke.utils import module_output_to_channel, channel_to_module_output\n\nmodule_output_to_channel(module=19, output=3)\n\nchannel_to_module_output(67)",
"PyKE 3.1 includes the KeplerQualityFlags class, which encodes the meaning of the Kepler QUALITY bitmask flags as documented in the Kepler Archive Manual (Table 2.3):",
"from pyke.utils import KeplerQualityFlags\n\nKeplerQualityFlags.decode(1)",
"It also can handle multiple flags:",
"KeplerQualityFlags.decode(1 + 1024 + 1048576)",
"A few quality flags are already computed:",
"KeplerQualityFlags.decode(KeplerQualityFlags.DEFAULT_BITMASK)\n\nKeplerQualityFlags.decode(KeplerQualityFlags.CONSERVATIVE_BITMASK)",
"Target Pixel File (TPF)\nPyKE 3.1 includes a class called KeplerTargetPixelFile which is used to handle target pixel files:",
"from pyke import KeplerTargetPixelFile",
"A KeplerTargetPixelFile can be instantiated either from a local file or a URL:",
"tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'\n '200100000/82000/ktwo200182949-c14_lpd-targ.fits.gz')",
"Additionally, we can mask out cadences that are flagged using the quality_bitmask argument in the constructor:",
"tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'\n '200100000/82000/ktwo200182949-c14_lpd-targ.fits.gz',\n quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)",
"Furthermore, we can mask out pixel values using the aperture_mask argument. The default behaviour is to use all pixels that have real values. This argument can also take the string value 'kepler-pipeline', in which case the default aperture used by Kepler's pipeline is applied.",
"tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'\n '200100000/82000/ktwo200182949-c14_lpd-targ.fits.gz',\n aperture_mask='kepler-pipeline',\n quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)\n\ntpf.aperture_mask",
"The TPF object stores both data and some metadata, e.g., channel number, EPIC number, reference column and row, module, and shape. The whole header is also available:",
"tpf.header(ext=0)",
"The pixel fluxes time series can be accessed using the flux property:",
"tpf.flux.shape",
"This shows that our TPF is a 35 x 35 image recorded over 3209 cadences.\nOne can visualize the pixel data at a given cadence using the plot method:",
"tpf.plot(frame=1)",
"We can perform aperture photometry using the method to_lightcurve:",
"lc = tpf.to_lightcurve()\n\nplt.figure(figsize=[17, 4])\nplt.plot(lc.time, lc.flux)",
"Light Curves\nLet's see how the previous light curve compares against the 'SAP_FLUX' produced by Kepler's pipeline. For that, we are going to explore the KeplerLightCurveFile class:",
"from pyke.lightcurve import KeplerLightCurveFile\n\nklc = KeplerLightCurveFile('https://archive.stsci.edu/missions/k2/lightcurves/'\n 'c14/200100000/82000/ktwo200182949-c14_llc.fits',\n quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)\n\nsap_lc = klc.SAP_FLUX\n\nplt.figure(figsize=[17, 4])\nplt.plot(lc.time, lc.flux)\nplt.plot(sap_lc.time, sap_lc.flux)\nplt.ylabel('Flux (e-/s)')\nplt.xlabel('Time (BJD - 2454833)')",
"Now, let's correct this light curve by fitting cotrending basis vectors. That can be achieved either with the KeplerCBVCorrector class or the compute_cotrended_lightcurve method in KeplerLightCurveFile. Let's try the latter:",
"klc_corrected = klc.compute_cotrended_lightcurve(cbvs=range(1, 17))\n\nplt.figure(figsize=[17, 4])\nplt.plot(klc_corrected.time, klc_corrected.flux)\nplt.ylabel('Flux (e-/s)')\nplt.xlabel('Time (BJD - 2454833)')\n\npdcsap_lc = klc.PDCSAP_FLUX\n\nplt.figure(figsize=[17, 4])\nplt.plot(klc_corrected.time, klc_corrected.flux)\nplt.plot(pdcsap_lc.time, pdcsap_lc.flux)\nplt.ylabel('Flux (e-/s)')\nplt.xlabel('Time (BJD - 2454833)')",
"Pixel Response Function (PRF) Photometry\nPyKE 3.1 also includes tools to perform PRF Photometry:",
"from pyke.prf import PRFPhotometry, SceneModel, SimpleKeplerPRF",
"For that, let's create a SceneModel which will be fitted to the target in the following TPF:",
"tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'\n '201500000/43000/ktwo201543306-c14_lpd-targ.fits.gz',\n quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)\n\ntpf.plot(frame=100)\n\nscene = SceneModel(prfs=[SimpleKeplerPRF(channel=tpf.channel, shape=tpf.shape[1:],\n column=tpf.column, row=tpf.row)])",
"We also need to define prior distributions on the parameters of our SceneModel model. Those parameters are\nthe flux, center positions of the target, and a constant background level. We can do that with oktopus:",
"from oktopus.prior import UniformPrior\n\nunif_prior = UniformPrior(lb=[0, 1090., 706., 0.],\n ub=[1e5, 1096., 712., 1e5])\n\nscene.plot(*unif_prior.mean)\n\nprf_phot = PRFPhotometry(scene_model=scene, prior=unif_prior)\n\nresults = prf_phot.fit(tpf.flux + tpf.flux_bkg)\n\nplt.imshow(prf_phot.residuals[1], origin='lower')\nplt.colorbar()\n\nflux = results[:, 0]\nxcenter = results[:, 1]\nycenter = results[:, 2]\nbkg_density = results[:, 3]\n\nplt.figure(figsize=[17, 4])\nplt.plot(tpf.time, flux)\nplt.ylabel('Flux (e-/s)')\nplt.xlabel('Time (BJD - 2454833)')\n\nplt.figure(figsize=[17, 4])\nplt.plot(tpf.time, xcenter)\nplt.ylabel('Column position')\nplt.xlabel('Time (BJD - 2454833)')\n\nplt.figure(figsize=[17, 4])\nplt.plot(tpf.time, ycenter)\nplt.ylabel('Row position')\nplt.xlabel('Time (BJD - 2454833)')\n\nplt.figure(figsize=[17, 4])\nplt.plot(tpf.time, bkg_density)\nplt.ylabel('Background density')\nplt.xlabel('Time (BJD - 2454833)')",
"For more examples on PRF photometry, please see our tutorials page: http://pyke.keplerscience.org/tutorials/index.html"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
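`KeplerQualityFlags.decode` in the PyKE cells above follows the standard pattern of testing each documented bit against an integer mask. A generic sketch of that pattern is below; the flag names and bit values are illustrative stand-ins, not the actual Table 2.3 definitions:

```python
# Hypothetical flag table: names and bit positions are illustrative only,
# not the real Kepler QUALITY definitions from the Archive Manual.
FLAG_MEANINGS = {
    1: "Attitude tweak",
    1024: "Sudden sensitivity dropout",
    1048576: "Thruster firing",
}

def decode_bitmask(quality):
    """Return the meaning of every known bit set in an integer bitmask."""
    return [meaning for bit, meaning in sorted(FLAG_MEANINGS.items())
            if quality & bit]

flags = decode_bitmask(1 + 1024 + 1048576)
```

Because each flag occupies its own bit, combined masks like `DEFAULT_BITMASK` are just bitwise ORs of the individual flags, and `quality & bit` recovers them independently.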
whitead/numerical_stats
|
unit_10/lectures/lecture_1.ipynb
|
gpl-3.0
|
[
"Hypothesis Tests (Parametric)\nUnit 10, Lecture 1\nNumerical Methods and Statistics\n\nReading\nLangley: Pages 137-189, 199-211, 230-245\n\nProf. Andrew White, April 3 2018\nGoals:\n\nUnderstand the meaning of a null hypothesis\nBe able to construct a null hypothesis\nBe able to calculate a p-value, understand its meaning, and test it against significance level\nVisualize the intervals used to compute p-values for $z$ and $t$ tests",
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom math import sqrt, pi, erf\nimport scipy.optimize",
"Hypothesis Testing\nHypothesis: Going to college increases your starting salary. How do you prove this?\nIt's not really possible to prove this directly. We can, however, disprove the opposite hypothesis. We construct the opposite hypothesis to what we're interested in, called the null hypothesis. The null hypothesis is an assumption of no difference and/or no correlation.\nIn our example, this seems simple at first: Going to college has no effect on your salary. \nBut maybe the null hypothesis is: For people who qualified and were accepted to college, attending college has no effect on their salary. \nOr it might be: People who can afford, are smart enough, and are motivated enough to go to college but did not, have the same salary as those that did.\nLet's assume we know well enough what our null hypothesis is. Hypothesis testing is the ability to use statistics to disprove the null hypothesis.\nNull Hypothesis: The simplest explanation, typically meaning no correlation or that all data is from the same population. Because a simpler explanation is preferred to a more complex one, the null hypothesis should be our default belief. We construct our null hypothesis as sort of the opposite of what we want to study. For example, if we want to know if a sample is significantly different than our population, our null hypothesis is that it is not significantly different, and then we aim to disprove the null hypothesis. Hypothesis testing is about showing significance by disproving or rejecting the null hypothesis.\nHypothesis Testing\nWe construct a null hypothesis and take it to be true. For example, we believe college has no influence on income. This allows us to build a simple probability model. For college then, we might take income to be normally distributed. Then we see how likely our observed data is according to that null hypothesis model. For example, we check to see if the sample mean of people who graduated from college matches our null hypothesis mean.\nHypothesis Test Example\nI open up a cash4gold store and people bring me their jewelry. I know the probability distribution for gold melting is normal with mean 1060 $^{\\circ}$C and my measurements have a standard deviation of 3 $^{\\circ}$C. I melt a sample at 1045 $^{\\circ}$C and want to know if I should be suspicious.\nNull Hypothesis: The sample is gold\nLet's see if I can disprove this. If the sample is gold, what is the probability of that measurement? Zero, because it's a single point. Let's instead ask how big of an interval we would need to include that measurement.",
"from scipy import stats as ss\n\nZ = abs(1045 - 1060.) / 3\ninterval_p = ss.norm.cdf(Z) - ss.norm.cdf(-Z)\n\nprint(interval_p)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nsample = 1045\nmu = 1060\nw = mu - sample\nx = np.linspace(mu - w - 2, w + mu + 2, 1000)\ny = ss.norm.pdf(x, loc=mu, scale=3)\nplt.plot(x,y)\nplt.fill_between(x, y, where= np.abs(x - mu) < w, color='lightblue')\nplt.text(mu,np.max(y) / 3, 'Area={}'.format(np.round(interval_p,6)), fontdict={'size':14}, horizontalalignment='center')\nplt.axvline(mu - w, linestyle='--', color='orange')\nplt.axvline(mu + w, linestyle='--', color='blue')\nplt.xticks([mu - w, mu + w], [sample, mu + w])\nplt.yticks([])\nplt.ylabel(r'$p(x)$')\nplt.xlabel(r'$x$')\nplt.show()",
"We would expect to see values outside or at the boundary of this interval 0.00000001% of the time.\nWhat is significant?\nWould we call it significant if it were 0.1%? What about 1%? It turns out the convention is 5%. This is called our $\\alpha$ or significance level.\nWe saw in the cash4gold example how to compare a single sample with a known population. What about when we don't know the standard deviation of the population?\nI open up a cash4gold store and people bring me their jewelry. I know the probability distribution for gold melting is normal with mean 1060 $^{\\circ}$C. I do not know the standard deviation. If I get a sample that melts at 1045 $^{\\circ}$C, should I be suspicious?\nI don't know. We have no way of estimating the standard deviation with one sample, so we can't say anything. If we have at least 3 samples though, we can compute the sample standard deviation.\nThis leads us to the beautiful part about hypothesis testing: since we're assuming the null hypothesis, that the samples are gold, we can use what we learned about normal distributions to estimate the population standard deviation from our samples.\n\nGeneral Strategy\nThe general approach for hypothesis tests is as follows:\n\nStart with a null hypothesis, $H_0$, that corresponds to a probability distribution/mass function\nCompute the probability of an interval which includes values as extreme as your sample data. Note this may be single- or double-sided depending on if \"extreme\" means both above and below or only above/below the mean. This is your $p$-value.\nReject the null hypothesis if the $p$-value is below your significance level ($\\alpha$), which is generally 0.05\n\nStudent's $t$-Test\nI open up a cash4gold store and people bring me their jewelry. I know the probability distribution for gold melting is normal with mean 1060 $^{\\circ}$C. I do not know the standard deviation. Someone brings in 5 samples and they melt at 1035, 1050, 1020, 1055, and 1046 $^{\\circ}$C. Should I reject the null hypothesis, that these are gold?\nLogical Steps:\n\nAssuming the null hypothesis, compute our uncertainty in the true mean confidence interval. This is our probability distribution\nWe happen to have the true mean, so then we see how big the confidence interval has to be to include it. This is us constructing our probability interval\nTake the area of that interval, our $p$-value, and compare with the significance level",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nsamples = np.array([1035., 1050., 1020., 1055., 1046.])\nsigma = np.sqrt(np.var(samples, ddof=1))\nsample_mean = np.mean(samples)\nmu = 1060\nw = mu - sample_mean\n\nx = np.linspace(mu - w - 2, w + mu + 2, 1000)\ny1 = ss.norm.pdf(x, loc=mu, scale=3)\ny2 = ss.t.pdf(x, loc=mu, scale=sigma / np.sqrt(len(samples)), df=len(samples) - 1)\nplt.plot(x,y1, label='single sample')\nplt.plot(x,y2, label='multiple samples')\nplt.fill_between(x, y1, where= np.abs(x - mu) < w, color='lightblue')\n\nplt.axvline(mu - w, linestyle='--', color='orange')\nplt.axvline(mu + w, linestyle='--', color='gray')\nplt.xticks([mu - w, mu + w], [sample_mean, mu + w])\nplt.yticks([])\nplt.ylabel(r'$p(x)$')\nplt.xlabel(r'$x$')\nplt.legend(loc='best')\nplt.show()\n\nsamples = np.array([1035., 1050., 1020., 1055., 1046.])\nsigma = np.sqrt(np.var(samples, ddof=1))\nsample_mean = np.mean(samples)\ntrue_mean = 1060.\n\nprint(sigma, sample_mean, true_mean)\n\nT = (sample_mean - true_mean) / (sigma / np.sqrt(len(samples)))\nprint(T)",
"Now we have a $T$ and we know $P(T)$. However, just like the $zM$ test, we can't compute $P(T)$ since that's a single point and we're using a continuous distribution. So instead, we build an interval and see how big it must be to catch that $T$. \nSpecifically, since $T$ is negative here, we want to find $1 - \\int_{T}^{-T} p(t)\\,dt$",
"print('T = ', T)\n\np = ss.t.cdf(T, len(samples) - 1) # The integral from -infinity to T\nprint(p, 'Is the single sided p-value')\n\np = 1 - (ss.t.cdf(-T, len(samples) - 1) - ss.t.cdf(T, len(samples) - 1)) # 1 - the integral from T to -T\nprint(p, 'Is the double sided p-value')\n\nprint('notice, just using 2 * the single-sided value gives the same answer')",
"What if we accidentally reverse our T-value?",
"T = (true_mean - sample_mean) / (sigma / sqrt(len(samples)))\nprint(T)\n\np = ss.t.cdf(T, len(samples) - 1)\nprint('CDF gives: ', p)\nprint('Recognize that this includes from -infty up to a positive T, so we need to find the complementary area')\n\nprint(1 - p, 'Is the single sided p-value')\nprint((1 - p) * 2, 'Is the double sided p-value')",
"Summary of Methods for Comparing Single Measurement with Normal Population\n$zM$ Test\nData Type: Measurements and Ranks\nCompares: A single sample vs a normally distributed population\nNull Hypothesis: The sample came from the population\nConditions: Standard deviation of population is known\nRelated Test 1: Student's $t$-test, which is used when the standard deviation is not known\nPython: Integrate an interval with a Z-statistic and erf or scipy.stats.norm.cdf\nStudent's $t$-test\nData Type: Measurements and Ranks\nCompares: A set of samples vs a normally distributed population\nNull Hypothesis: The sample came from the population\nConditions: Standard deviation of population is not known\nRelated Test 1: $zM$ test, which is used when standard deviation is known\nPython: Integrate an interval with a T-statistic and scipy.stats.t.cdf"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
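The t-statistic computed in the notebook above can be reduced to a small stdlib-only helper and compared against the tabulated two-sided 5% critical value for 4 degrees of freedom (about 2.776), reproducing the rejection decision without scipy. This is a sketch; the critical-value shortcut replaces the notebook's CDF integration:

```python
import math

def t_statistic(samples, mu0):
    """One-sample t statistic: (xbar - mu0) / (s / sqrt(n)), with s computed
    using the sample (ddof=1) variance, matching np.var(samples, ddof=1)."""
    n = len(samples)
    xbar = sum(samples) / n
    s2 = sum((x - xbar) ** 2 for x in samples) / (n - 1)
    return (xbar - mu0) / math.sqrt(s2 / n)

samples = [1035.0, 1050.0, 1020.0, 1055.0, 1046.0]
T = t_statistic(samples, 1060.0)

# With 4 degrees of freedom, the two-sided 5% critical value is about 2.776,
# so |T| > 2.776 means we reject the null hypothesis at alpha = 0.05.
reject = abs(T) > 2.776
```

Here T comes out near -3.01, beyond the critical value, consistent with the double-sided p-value of roughly 0.04 found in the notebook.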
bgroveben/python3_machine_learning_projects
|
learn_kaggle/machine_learning/data_leakage.ipynb
|
mit
|
[
"Data Leakage\nWhat is it?\nData leakage is one of the most important issues for a data scientist to understand.\nIf you don't know how to prevent it, leakage will come up frequently, and it will ruin your models in the most subtle and dangerous ways.\nSpecifically, leakage causes a model to look accurate until you start making decisions with the model, and then the model becomes very inaccurate.\nThis tutorial will show you what leakage is and how to avoid it.\nThere are two main types of leakage: Leaky Predictors and Leaky Validation Strategies.\nLeaky Predictors\nThis occurs when your predictors include data that will not be available at the time you make your predictions.\nFor example, imagine that you want to predict who will catch pneumonia.\nThe first few rows of your raw data might look like this:\n\nPeople take antibiotic medicines after getting pneumonia in order to recover.\nSo the raw data shows a strong relationship between those columns.\nBut took_antibiotic_medicine is frequently changed after the value for got_pneumonia is determined.\nThis is target leakage.\nThe model would see that anyone who has a value of False for took_antibiotic_medicine didn't have pneumonia.\nValidation data comes from the same source, so the pattern will repeat itself in validation, and the model will have great validation (or cross-validation) scores.\nHowever, the model will be less accurate when subsequently deployed in the real world.\nTo prevent this type of data leakage, any variable updated (or created) after the target value is realized should be excluded.\nBecause when we use this model to make new predictions, that data won't be available to the model.\n\nLeaky Validation Strategies\nA much different type of leak occurs when you aren't careful distinguishing training data from validation data.\nFor example, this happens if you run preprocessing (like fitting the Imputer for missing values) before calling train_test_split.\nValidation is meant to be a measure of how the model does on data it hasn't considered before.\nYou can corrupt this process in subtle ways if the validation data affects the preprocessing behavior.\nYour model will get very good validation scores, giving you great confidence in it, but perform poorly when you deploy it to make decisions.\nPreventing Leaky Predictors\nThere is no single solution that universally prevents leaky predictors.\nThat being said, there are a few common strategies you can use.\nLeaky predictors frequently have high statistical correlations to the target.\nTo screen for possible leaks, look for columns that are strongly correlated to your target.\nIf you then build your model and the results are very accurate, there is a good chance of a leakage problem.\nPreventing Leaky Validation Strategies\nIf your validation is based on a simple train-test split, exclude the validation data from any type of fitting, including the fitting of preprocessing steps.\nThis is another place where scikit-learn pipelines make themselves useful.\nWhen using cross-validation, it's very helpful to use pipelines and do your preprocessing inside the pipeline.\nNow for the code:\nWe will use a small dataset about credit card applications, and we will build a model predicting which applications were accepted (stored in a variable called card).",
"import pandas as pd\n\ndata = pd.read_csv('input/credit_card_data.csv', true_values=['yes'], false_values=['no'])\ndata.head()\n\ndata.shape",
"This can be considered a small dataset, so we'll use cross-validation to ensure accurate measures of model quality.",
"from sklearn.pipeline import make_pipeline\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import cross_val_score\n\ny = data.card\nX = data.drop(['card'], axis=1)\n\n# Using a pipeline is best practice, so it's included here even though\n# the absence of preprocessing makes it unnecessary.\nmodeling_pipeline = make_pipeline(RandomForestClassifier())\ncv_scores = cross_val_score(modeling_pipeline, X, y, scoring='accuracy')\nprint(\"Cross-validation accuracy: \")\nprint(cv_scores.mean())",
"With experience, you'll find that it's very rare to find models that are accurate 98% of the time.\nIt happens, but it's rare enough that we should inspect the data more closely to see if it is target leakage.\nHere is a summary of the data:\n* card: Dummy variable, 1 if application for credit card accepted, 0 if not\n* reports: Number of major derogatory reports\n* age: Age in years plus twelfths of a year\n* income: Yearly income (divided by 10,000)\n* share: Ratio of monthly credit card expenditure to yearly income\n* expenditure: Average monthly credit card expenditure\n* owner: 1 if owns their home, 0 if rent\n* selfempl: 1 if self employed, 0 if not\n* dependents: 1 + number of dependents\n* months: Months living at current address\n* majorcards: Number of major credit cards held\n* active: Number of active credit accounts\nA few variables look suspicious. For example, does expenditure mean expenditure on this card or on cards used before applying?\nAt this point, basic data comparisons can be very helpful:",
"expenditures_cardholders = data.expenditure[data.card]\nexpenditures_not_cardholders = data.expenditure[~data.card]\n((expenditures_cardholders == 0).mean())\n\n((expenditures_not_cardholders == 0).mean())",
"Everyone with card == False had no expenditures, while only 2% of those with card == True had no expenditures.\nIt's not surprising that our model appeared to have a high accuracy.\nBut this looks like a data leak, where expenditure probably means expenditure on the card they applied for.\nSince share is partially determined by expenditure, it should be excluded too.\nThe variables active and majorcards are a little less clear, but from the description, they may be affected.\nIn most situations, it's better to be safe than sorry if you can't track down the people who created the data to find out more.\nNow that that pitfall has presented itself, it's time to build a model that is more data-leakage resistant:",
"potential_leaks = ['expenditure', 'share', 'active', 'majorcards']\nX2 = X.drop(potential_leaks, axis=1)\ncv_scores = cross_val_score(modeling_pipeline, X2, y, scoring='accuracy')\ncv_scores.mean()",
"The accuracy is lower but much more realistic (and believable).\nData leakage can be a multi-million dollar mistake in many data science applications.\nCareful separation of training and validation data is a first step, and pipelines can help implement this separation.\nLeaking predictors are a more frequent issue, and harder to track down.\nA combination of caution, common sense and data exploration can help identify leaking predictors so you remove them from your model.\nReview the data in your ongoing project.\nAre there any predictors that may cause leakage?\nAs a hint, most datasets from Kaggle competitions don't have these variables.\nOnce you get past those carefully curated datasets, this becomes a common issue."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
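The preprocessing-before-split leak described in the tutorial above can be demonstrated with nothing more than mean imputation. This is a toy sketch with made-up numbers: fitting the fill value on train + validation rows pulls validation information into training, which is exactly what fitting an imputer before `train_test_split` does.

```python
# Toy data: one feature with a missing value in the training rows, and
# validation rows whose values are very different from the training ones.
train = [1.0, 2.0, None, 4.0]
valid = [100.0, 200.0]

def mean_of_known(rows):
    """Mean of the non-missing values, i.e. what a mean imputer 'fits'."""
    known = [x for x in rows if x is not None]
    return sum(known) / len(known)

# Wrong: fit the imputer on train + validation together (leaky)
leaky_fill = mean_of_known(train + valid)

# Right: fit the imputer on the training rows only (clean)
clean_fill = mean_of_known(train)
```

The leaky fill value is pulled far away from the training distribution by the validation rows, so the imputed training set now encodes information the model should never have seen; putting the imputer inside a pipeline makes the clean version happen automatically under cross-validation.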
dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark
|
Week 1 - Data Science Background and Course Software Setup/lab0_student.ipynb
|
mit
|
[
"First Notebook: Virtual machine test and assignment submission\nThis notebook will test that the virtual machine (VM) is functioning properly and will show you how to submit an assignment to the autograder. To move through the notebook just run each of the cells. You will not need to solve any problems to complete this lab. You can run a cell by pressing \"shift-enter\", which will compute the current cell and advance to the next cell, or by clicking in a cell and pressing \"control-enter\", which will compute the current cell and remain in that cell. At the end of the notebook you will export / download the notebook and submit it to the autograder.\n This notebook covers: \nPart 1: Test Spark functionality\nPart 2: Check class testing library\nPart 3: Check plotting\nPart 4: Check MathJax formulas\nPart 5: Export / download and submit\n Part 1: Test Spark functionality \n (1a) Parallelize, filter, and reduce",
"# Check that Spark is working\nlargeRange = sc.parallelize(xrange(100000))\nreduceTest = largeRange.reduce(lambda a, b: a + b)\nfilterReduceTest = largeRange.filter(lambda x: x % 7 == 0).sum()\n\nprint reduceTest\nprint filterReduceTest\n\n# If the Spark jobs don't work properly these will raise an AssertionError\nassert reduceTest == 4999950000\nassert filterReduceTest == 714264285",
"(1b) Loading a text file",
"# Check loading data with sc.textFile\nimport os.path\nbaseDir = os.path.join('data')\ninputPath = os.path.join('cs100', 'lab1', 'shakespeare.txt')\nfileName = os.path.join(baseDir, inputPath)\n\nrawData = sc.textFile(fileName)\nshakespeareCount = rawData.count()\n\nprint shakespeareCount\n\n# If the text file didn't load properly an AssertionError will be raised\nassert shakespeareCount == 122395",
"Part 2: Check class testing library \n (2a) Compare with hash",
"# TEST Compare with hash (2a)\n# Check our testing library/package\n# This should print '1 test passed.' on two lines\nfrom test_helper import Test\n\ntwelve = 12\nTest.assertEquals(twelve, 12, 'twelve should equal 12')\nTest.assertEqualsHashed(twelve, '7b52009b64fd0a2a49e6d8a939753077792b0554',\n 'twelve, once hashed, should equal the hashed value of 12')",
"(2b) Compare lists",
"# TEST Compare lists (2b)\n# This should print '1 test passed.'\nunsortedList = [(5, 'b'), (5, 'a'), (4, 'c'), (3, 'a')]\nTest.assertEquals(sorted(unsortedList), [(3, 'a'), (4, 'c'), (5, 'a'), (5, 'b')],\n 'unsortedList does not sort properly')",
"Part 3: Check plotting \n (3a) Our first plot \nAfter executing the code cell below, you should see a plot with 50 blue circles. The circles should start at the bottom left and end at the top right.",
"# Check matplotlib plotting\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nfrom math import log\n\n# function for generating plot layout\ndef preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999', gridWidth=1.0):\n plt.close()\n fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')\n ax.axes.tick_params(labelcolor='#999999', labelsize='10')\n for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:\n axis.set_ticks_position('none')\n axis.set_ticks(ticks)\n axis.label.set_color('#999999')\n if hideLabels: axis.set_ticklabels([])\n plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')\n map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])\n return fig, ax\n\n# generate layout and plot data\nx = range(1, 50)\ny = [log(x1 ** 2) for x1 in x]\nfig, ax = preparePlot(range(5, 60, 10), range(0, 12, 1))\nplt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)\nax.set_xlabel(r'$range(1, 50)$'), ax.set_ylabel(r'$\\log_e(x^2)$')\npass",
"Part 4: Check MathJax Formulas \n (4a) Gradient descent formula \nYou should see a formula on the line below this one: $$ \\scriptsize \\mathbf{w}_{i+1} = \\mathbf{w}_i - \\alpha_i \\sum_j (\\mathbf{w}_i^\\top\\mathbf{x}_j - y_j) \\mathbf{x}_j \\,.$$\nThis formula is included inline with the text and is $ \\scriptsize (\\mathbf{w}^\\top \\mathbf{x} - y) \\mathbf{x} $.\n (4b) Log loss formula \nThis formula shows log loss for single point. Log loss is defined as: $$ \\begin{align} \\scriptsize \\ell_{log}(p, y) = \\begin{cases} -\\log (p) & \\text{if } y = 1 \\\\ -\\log(1-p) & \\text{if } y = 0 \\end{cases} \\end{align} $$\n Part 5: Export / download and submit \n (5a) Time to submit \nYou have completed the lab. To submit the lab for grading you will need to download it from your IPython Notebook environment. You can do this by clicking on \"File\", then hovering your mouse over \"Download as\", and then clicking on \"Python (.py)\". This will export your IPython Notebook as a .py file to your computer.\nTo upload this file to the course autograder, go to the edX website and find the page for submitting this assignment. Click \"Choose file\", then navigate to and click on the downloaded .py file. Now click the \"Open\" button and then the \"Check\" button. Your submission will be graded shortly and will be available on the page where you submitted. Note that when submission volumes are high, it may take as long as an hour to receive results."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
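The values asserted in part (1a) of the Spark lab are plain arithmetic series, so they can be cross-checked locally without a Spark context. This sketch mirrors the parallelize/filter/reduce pipeline with Python built-ins:

```python
# Equivalent of sc.parallelize(xrange(100000)).reduce(lambda a, b: a + b)
reduce_test = sum(range(100000))

# Equivalent of largeRange.filter(lambda x: x % 7 == 0).sum()
filter_reduce_test = sum(x for x in range(100000) if x % 7 == 0)
```

The first sum is 99999 * 100000 / 2 = 4999950000, and the second is 7 times the sum 0 + 1 + ... + 14285, i.e. 714264285, matching the lab's assertions.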