Compute new attributes

Then we need to loop through the CityObjects and add the new attributes. Note that the `attributes` CityObject attribute is just a dictionary. Thus we compute the number of vertices of the CityObject and the area of its footprint, and then we are going to cluster these two variables. This is a completely arbitrary exercise which is simply meant to illustrate how to transform a city model into machine-learnable features.
for co_id, co in zurich.cityobjects.items():
    co.attributes['nr_vertices'] = len(co.get_vertices())
    co.attributes['fp_area'] = compute_footprint_area(co)
    zurich.cityobjects[co_id] = co
CC-BY-4.0
cjio_tutorial.ipynb
balazsdukai/foss4g2019
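The helper `compute_footprint_area` is used above but not defined in this excerpt; in the tutorial it is built around the CityObject's ground surface. As a minimal, self-contained sketch of the geometry involved, the shoelace formula computes the area of a 2D footprint ring (the name `footprint_area` and the plain list-of-tuples input are assumptions for illustration, not the tutorial's actual helper):

```python
def footprint_area(ring):
    """Area of a simple polygon via the shoelace formula.

    `ring` is a list of (x, y) vertex tuples; a closing vertex equal
    to the first one is tolerated and dropped before summing.
    """
    if ring[0] == ring[-1]:
        ring = ring[:-1]
    total = 0.0
    # Sum the cross products of consecutive vertex pairs, wrapping around.
    for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

print(footprint_area([(0, 0), (10, 0), (10, 5), (0, 5)]))  # 50.0
```

The real helper would first extract the CityObject's ground-surface vertices and project them to 2D before applying a formula like this (or delegate to a geometry library such as shapely).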
It is possible to export the city model into a pandas DataFrame. Note that only the CityObject attributes are exported into the DataFrame, with the CityObject IDs as the index of the DataFrame. Thus if you want to export the attributes of SemanticSurfaces for example, then you need to add them as CityObject attributes. The function below illustrates this operation.
def assign_cityobject_attribute(cm):
    """Copy the semantic surface attributes to CityObject attributes.
    Returns a copy of the citymodel.
    """
    new_cos = {}
    cm_copy = deepcopy(cm)
    for co_id, co in cm.cityobjects.items():
        for geom in co.geometry:
            for srf in geom.surfaces.values():
                if 'attributes' in srf:
                    for attr, a_v in srf['attributes'].items():
                        if (attr not in co.attributes) or (co.attributes[attr] is None):
                            co.attributes[attr] = [a_v]
                        else:
                            co.attributes[attr].append(a_v)
        new_cos[co_id] = co
    cm_copy.cityobjects = new_cos
    return cm_copy

df = zurich.to_dataframe()
df.head()
In order to have a nicer distribution of the data, we remove the missing values and apply a log-transform to the two variables. Note that `FunctionTransformer.transform` transforms a DataFrame into a numpy array that is ready to be used in `scikit-learn`. The details of a machine learning workflow are beyond the scope of this tutorial, however.
df_subset = df[df['Geomtype'].notnull() & (df['fp_area'] > 0.0)].loc[:, ['nr_vertices', 'fp_area']]
transformer = FunctionTransformer(np.log, validate=True)
df_logtransform = transformer.transform(df_subset)

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(df_logtransform[:, 0], df_logtransform[:, 1], alpha=0.3, s=1.0)
plt.show()

def plot_model_results(model, data):
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    colormap = np.array(['lightblue', 'red', 'lime', 'blue', 'black'])
    ax.scatter(data[:, 0], data[:, 1], c=colormap[model.labels_], s=10, alpha=0.5)
    ax.set_xlabel('Number of vertices [log]')
    ax.set_ylabel('Footprint area [log]')
    plt.title(f"DBSCAN clustering with estimated {len(set(model.labels_))} clusters")
    plt.show()
Since we transformed our DataFrame, we can fit any model in `scikit-learn`. I use DBSCAN because I want to find the data points on the fringes of the central cluster.
%matplotlib notebook
model = cluster.DBSCAN(eps=0.2).fit(df_logtransform)
plot_model_results(model, df_logtransform)

# merge the cluster labels back to the data frame
df_subset['dbscan'] = model.labels_
Save the results back to CityJSON

Then merge the DataFrame with the cluster labels back into the city model.
for co_id, co in zurich.cityobjects.items():
    if co_id in df_subset.index:
        ml_results = dict(df_subset.loc[co_id])
    else:
        ml_results = {'nr_vertices': 'nan', 'fp_area': 'nan', 'dbscan': 'nan'}
    new_attrs = {**co.attributes, **ml_results}
    co.attributes = new_attrs
    zurich.cityobjects[co_id] = co
Finally, the `save()` method saves the edited city model to a CityJSON file.
path_out = os.path.join('data', 'zurich_output.json')
cityjson.save(zurich, path_out)
Working With TileMatrixSets (other than WebMercator)

[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/developmentseed/titiler/master?filepath=docs%2Fexamples%2FWorking_with_nonWebMercatorTMS.ipynb)

TiTiler has built-in support for serving tiles in multiple projections by using [rio-tiler](https://github.com/cogeotiff/rio-tiler) and [morecantile](https://github.com/developmentseed/morecantile).

The default `cog` and `stac` endpoints (`titiler.endpoints.cog` and `titiler.endpoints.stac`) are built with multi-TMS support using the default grids provided by morecantile:

```python
from fastapi import FastAPI
from titiler.endpoints.factory import TilerFactory

# Create a Multi TMS Tiler using the `TilerFactory` factory
cog = TilerFactory(router_prefix="cog")

app = FastAPI()
app.include_router(cog.router, prefix="/cog", tags=["Cloud Optimized GeoTIFF"])
```

This notebook shows how to use and display tiles with a non-WebMercator TileMatrixSet.

Requirements
- ipyleaflet
- requests
# Uncomment if you need to install those modules within the notebook
# !pip install ipyleaflet requests

import json

import requests
from ipyleaflet import (
    Map,
    basemaps,
    basemap_to_tiles,
    TileLayer,
    WMSLayer,
    GeoJSON,
    projections
)

titiler_endpoint = "https://api.cogeo.xyz"  # Devseed Custom TiTiler endpoint
url = "https://s3.amazonaws.com/opendata.remotepixel.ca/cogs/natural_earth/world.tif"  # Natural Earth WORLD tif
MIT
docs/examples/Working_with_nonWebMercatorTMS.ipynb
Anagraph/titiler
List Supported TileMatrixSets
r = requests.get("https://api.cogeo.xyz/tileMatrixSets").json()

print("Supported TMS:")
for tms in r["tileMatrixSets"]:
    print("-", tms["id"])
WGS 84 -- WGS84 - World Geodetic System 1984 - EPSG:4326

https://epsg.io/4326
r = requests.get(
    "https://api.cogeo.xyz/cog/WorldCRS84Quad/tilejson.json",
    params={"url": url}
).json()

m = Map(center=(45, 0), zoom=4, basemap={}, crs=projections.EPSG4326)
layer = TileLayer(url=r["tiles"][0], opacity=1)
m.add_layer(layer)
m
WGS 84 / NSIDC Sea Ice Polar Stereographic North - EPSG:3413

https://epsg.io/3413
r = requests.get(
    "https://api.cogeo.xyz/cog/EPSG3413/tilejson.json",
    params={"url": url}
).json()

m = Map(center=(70, 0), zoom=1, basemap={}, crs=projections.EPSG3413)
layer = TileLayer(url=r["tiles"][0], opacity=1)
m.add_layer(layer)
m
ETRS89-extended / LAEA Europe - EPSG:3035

https://epsg.io/3035
r = requests.get(
    "https://api.cogeo.xyz/cog/EuropeanETRS89_LAEAQuad/tilejson.json",
    params={"url": url}
).json()

my_projection = {
    'name': 'EPSG:3035',
    'custom': True,  # This is important: it tells ipyleaflet that this projection is not one of the predefined ones.
    'proj4def': '+proj=laea +lat_0=52 +lon_0=10 +x_0=4321000 +y_0=3210000 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs',
    'origin': [6500000.0, 5500000.0],
    'resolutions': [
        8192.0,
        4096.0,
        2048.0,
        1024.0,
        512.0,
        256.0
    ]
}

m = Map(center=(50, 65), zoom=1, basemap={}, crs=my_projection)
layer = TileLayer(url=r["tiles"][0], opacity=1)
m.add_layer(layer)
m
100 pandas puzzles

Inspired by [100 Numpy exercises](https://github.com/rougier/numpy-100), here are 100* short puzzles for testing your knowledge of [pandas'](http://pandas.pydata.org/) power.

Since pandas is a large library with many different specialist features and functions, these exercises focus mainly on the fundamentals of manipulating data (indexing, grouping, aggregating, cleaning), making use of the core DataFrame and Series objects. Many of the exercises here are straightforward in that the solutions require no more than a few lines of code (in pandas or NumPy... don't go using pure Python or Cython!). Choosing the right methods and following best practices is the underlying goal.

The exercises are loosely divided into sections. Each section has a difficulty rating; these ratings are subjective, of course, but should be seen as a rough guide as to how inventive the required solution is.

If you're just starting out with pandas and you are looking for some other resources, the official documentation is very extensive. In particular, some good places to get a broader overview of pandas are...

- [10 minutes to pandas](http://pandas.pydata.org/pandas-docs/stable/10min.html)
- [pandas basics](http://pandas.pydata.org/pandas-docs/stable/basics.html)
- [tutorials](http://pandas.pydata.org/pandas-docs/stable/tutorials.html)
- [cookbook and idioms](http://pandas.pydata.org/pandas-docs/stable/cookbook.html#cookbook)

Enjoy the puzzles!

\* *the list of exercises is not yet complete! Pull requests or suggestions for additional exercises, corrections and improvements are welcomed.*

Importing pandas

Getting started and checking your pandas setup

Difficulty: *easy*

**1.** Import pandas under the alias `pd`.

**2.** Print the version of pandas that has been imported.

**3.** Print out all the *version* information of the libraries that are required by the pandas library.
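The warm-up puzzles above have one-liner answers built into pandas itself; as a hedged sketch (the puzzle book leaves these for the reader), the version checks look like this:

```python
import pandas as pd  # puzzle 1: import pandas under the alias pd

# Puzzle 2: the version of pandas that has been imported.
print(pd.__version__)

# Puzzle 3: version information for pandas and the libraries it requires.
pd.show_versions()
```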
DataFrame basics

A few of the fundamental routines for selecting, sorting, adding and aggregating data in DataFrames

Difficulty: *easy*

Note: remember to import numpy using:

```python
import numpy as np
```

Consider the following Python dictionary `data` and Python list `labels`:

```python
data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
        'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
        'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
        'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}

labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
```

(This is just some meaningless data I made up with the theme of animals and trips to a vet.)

**4.** Create a DataFrame `df` from this dictionary `data` which has the index `labels`.
import numpy as np

data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
        'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
        'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
        'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}

labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']

df = # (complete this line of code)
MIT
100-pandas-puzzles.ipynb
greenteausa/100-pandas-puzzles
**5.** Display a summary of the basic information about this DataFrame and its data (*hint: there is a single method that can be called on the DataFrame*).

**6.** Return the first 3 rows of the DataFrame `df`.

**7.** Select just the 'animal' and 'age' columns from the DataFrame `df`.

**8.** Select the data in rows `[3, 4, 8]` *and* in columns `['animal', 'age']`.

**9.** Select only the rows where the number of visits is greater than 3.

**10.** Select the rows where the age is missing, i.e. it is `NaN`.

**11.** Select the rows where the animal is a cat *and* the age is less than 3.

**12.** Select the rows where the age is between 2 and 4 (inclusive).

**13.** Change the age in row 'f' to 1.5.

**14.** Calculate the sum of all visits in `df` (i.e. find the total number of visits).

**15.** Calculate the mean age for each different animal in `df`.

**16.** Append a new row 'k' to `df` with your choice of values for each column. Then delete that row to return the original DataFrame.

**17.** Count the number of each type of animal in `df`.

**18.** Sort `df` first by the values in the 'age' column in *descending* order, then by the values in the 'visits' column in *ascending* order (so row `i` should be first, and row `d` should be last).

**19.** The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of boolean values: 'yes' should be `True` and 'no' should be `False`.

**20.** In the 'animal' column, change the 'snake' entries to 'python'.

**21.** For each animal type and each number of visits, find the mean age. In other words, each row is an animal, each column is a number of visits and the values are the mean ages (*hint: use a pivot table*).

DataFrames: beyond the basics

Slightly trickier: you may need to combine two or more methods to get the right answer

Difficulty: *medium*

The previous section was a tour through some basic but essential DataFrame operations.
Below are some ways that you might need to cut your data, but for which there is no single "out of the box" method.

**22.** You have a DataFrame `df` with a column 'A' of integers. For example:

```python
df = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7]})
```

How do you filter out rows which contain the same integer as the row immediately above?

You should be left with a column containing the following values:

```python
1, 2, 3, 4, 5, 6, 7
```

**23.** Given a DataFrame of numeric values, say

```python
df = pd.DataFrame(np.random.random(size=(5, 3)))  # a 5x3 frame of float values
```

how do you subtract the row mean from each element in the row?

**24.** Suppose you have a DataFrame with 10 columns of real numbers, for example:

```python
df = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))
```

Which column of numbers has the smallest sum? Return that column's label.

**25.** How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are duplicates)? As input, use a DataFrame of zeros and ones with 10 rows and 3 columns.

```python
df = pd.DataFrame(np.random.randint(0, 2, size=(10, 3)))
```

The next three puzzles are slightly harder.

**26.** In the cell below, you have a DataFrame `df` that consists of 10 columns of floating-point numbers. Exactly 5 entries in each row are NaN values. For each row of the DataFrame, find the *column* which contains the *third* NaN value.

You should return a Series of column labels: `e, c, d, h, d`
nan = np.nan

data = [[0.04,  nan,  nan, 0.25,  nan, 0.43, 0.71, 0.51,  nan,  nan],
        [ nan,  nan,  nan, 0.04, 0.76,  nan,  nan, 0.67, 0.76, 0.16],
        [ nan,  nan, 0.5 ,  nan, 0.31, 0.4 ,  nan,  nan, 0.24, 0.01],
        [0.49,  nan,  nan, 0.62, 0.73, 0.26, 0.85,  nan,  nan,  nan],
        [ nan,  nan, 0.41,  nan, 0.05,  nan, 0.61,  nan, 0.48, 0.68]]

columns = list('abcdefghij')

df = pd.DataFrame(data, columns=columns)

# write a solution to the question here
**27.** A DataFrame has a column of groups 'grps' and a column of integer values 'vals':

```python
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'),
                   'vals': [12, 345, 3, 1, 45, 14, 4, 52, 54, 23, 235, 21, 57, 3, 87]})
```

For each *group*, find the sum of the three greatest values. You should end up with the answer as follows:

```
grps
a    409
b    156
c    345
```
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'),
                   'vals': [12, 345, 3, 1, 45, 14, 4, 52, 54, 23, 235, 21, 57, 3, 87]})

# write a solution to the question here
**28.** The DataFrame `df` constructed below has two integer columns 'A' and 'B'. The values in 'A' are between 1 and 100 (inclusive).

For each group of 10 consecutive integers in 'A' (i.e. `(0, 10]`, `(10, 20]`, ...), calculate the sum of the corresponding values in column 'B'.

The answer should be a Series as follows:

```
A
(0, 10]      635
(10, 20]     360
(20, 30]     315
(30, 40]     306
(40, 50]     750
(50, 60]     284
(60, 70]     424
(70, 80]     526
(80, 90]     835
(90, 100]    852
```
df = pd.DataFrame(np.random.RandomState(8765).randint(1, 101, size=(100, 2)),
                  columns=["A", "B"])

# write a solution to the question here
DataFrames: harder problems

These might require a bit of thinking outside the box...

...but all are solvable using just the usual pandas/NumPy methods (and so avoid using explicit `for` loops).

Difficulty: *hard*

**29.** Consider a DataFrame `df` where there is an integer column 'X':

```python
df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})
```

For each value, count the difference back to the previous zero (or the start of the Series, whichever is closer). These values should therefore be

```python
[1, 2, 0, 1, 2, 3, 4, 0, 1, 2]
```

Make this a new column 'Y'.

**30.** Consider the DataFrame constructed below which contains rows and columns of numerical data. Create a list of the column-row index locations of the 3 largest values in this DataFrame. In this case, the answer should be:

```
[(5, 7), (6, 4), (2, 5)]
```
df = pd.DataFrame(np.random.RandomState(30).randint(1, 101, size=(8, 8)))
**31.** You are given the DataFrame below with a column of group IDs, 'grps', and a column of corresponding integer values, 'vals'.

```python
df = pd.DataFrame({"vals": np.random.RandomState(31).randint(-30, 30, size=15),
                   "grps": np.random.RandomState(31).choice(["A", "B"], 15)})
```

Create a new column 'patched_vals' which contains the same values as 'vals', but with any negative values in 'vals' replaced by the group mean:

```
    vals grps  patched_vals
0    -12    A          13.6
1     -7    B          28.0
2    -14    A          13.6
3      4    A           4.0
4     -7    A          13.6
5     28    B          28.0
6     -2    A          13.6
7     -1    A          13.6
8      8    A           8.0
9     -2    B          28.0
10    28    A          28.0
11    12    A          12.0
12    16    A          16.0
13   -24    A          13.6
14   -12    A          13.6
```

**32.** Implement a rolling mean over groups with window size 3, which ignores NaN values. For example consider the following DataFrame:

```python
>>> df = pd.DataFrame({'group': list('aabbabbbabab'),
                       'value': [1, 2, 3, np.nan, 2, 3, np.nan, 1, 7, 3, np.nan, 8]})
>>> df
   group  value
0      a    1.0
1      a    2.0
2      b    3.0
3      b    NaN
4      a    2.0
5      b    3.0
6      b    NaN
7      b    1.0
8      a    7.0
9      b    3.0
10     a    NaN
11     b    8.0
```

The goal is to compute the Series:

```
0     1.000000
1     1.500000
2     3.000000
3     3.000000
4     1.666667
5     3.000000
6     3.000000
7     2.000000
8     3.666667
9     2.000000
10    4.500000
11    4.000000
```

E.g. the first window of size three for group 'b' has values 3.0, NaN and 3.0 and occurs at row index 5. Instead of being NaN the value in the new column at this row index should be 3.0 (just the two non-NaN values are used to compute the mean (3+3)/2).

Series and DatetimeIndex

Exercises for creating and manipulating Series with datetime data

Difficulty: *easy/medium*

pandas is fantastic for working with dates and times. These puzzles explore some of this functionality.

**33.** Create a DatetimeIndex that contains each business day of 2015 and use it to index a Series of random numbers. Let's call this Series `s`.

**34.** Find the sum of the values in `s` for every Wednesday.

**35.** For each calendar month in `s`, find the mean of values.
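As a hedged sketch of the datetime machinery these last puzzles exercise (the seeded random values are an assumption for reproducibility):

```python
import numpy as np
import pandas as pd

# Puzzle 33: each business day of 2015 indexing a Series of random numbers.
dti = pd.bdate_range(start='2015-01-01', end='2015-12-31')
s = pd.Series(np.random.RandomState(0).rand(len(dti)), index=dti)

# Puzzle 34: sum of the values falling on a Wednesday (Monday == 0).
print(s[s.index.weekday == 2].sum())

# Puzzle 35: mean of values for each calendar month.
print(s.groupby(s.index.to_period('M')).mean())
```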
**36.** For each group of four consecutive calendar months in `s`, find the date on which the highest value occurred.

**37.** Create a DateTimeIndex consisting of the third Thursday in each month for the years 2015 and 2016.

Cleaning Data

Making a DataFrame easier to work with

Difficulty: *easy/medium*

It happens all the time: someone gives you data containing malformed strings, Python, lists and missing data. How do you tidy it up so you can get on with the analysis?

Take this monstrosity as the DataFrame to use in the following puzzles:

```python
df = pd.DataFrame({'From_To': ['LoNDon_paris', 'MAdrid_miLAN', 'londON_StockhOlm',
                               'Budapest_PaRis', 'Brussels_londOn'],
                   'FlightNumber': [10045, np.nan, 10065, np.nan, 10085],
                   'RecentDelays': [[23, 47], [], [24, 43, 87], [13], [67, 32]],
                   'Airline': ['KLM(!)', ' (12)', '(British Airways. )',
                               '12. Air France', '"Swiss Air"']})
```

Formatted, it looks like this:

```
            From_To  FlightNumber  RecentDelays              Airline
0      LoNDon_paris       10045.0      [23, 47]               KLM(!)
1      MAdrid_miLAN           NaN            []                 (12)
2  londON_StockhOlm       10065.0  [24, 43, 87]  (British Airways. )
3    Budapest_PaRis           NaN          [13]       12. Air France
4   Brussels_londOn       10085.0      [67, 32]          "Swiss Air"
```

(It's some flight data I made up; it's not meant to be accurate in any way.)

**38.** Some values in the **FlightNumber** column are missing (they are `NaN`). These numbers are meant to increase by 10 with each row, so 10055 and 10075 need to be put in place. Modify `df` to fill in these missing numbers and make the column an integer column (instead of a float column).

**39.** The **From\_To** column would be better as two separate columns! Split each string on the underscore delimiter `_` to give a new temporary DataFrame called 'temp' with the correct values. Assign the correct column names 'From' and 'To' to this temporary DataFrame.

**40.** Notice how the capitalisation of the city names is all mixed up in this temporary DataFrame 'temp'. Standardise the strings so that only the first letter is uppercase (e.g. 
"londON" should become "London".) **41.** Delete the **From_To** column from `df` and attach the temporary DataFrame 'temp' from the previous questions. **42**. In the **Airline** column, you can see some extra puctuation and symbols have appeared around the airline names. Pull out just the airline name. E.g. `'(British Airways. )'` should become `'British Airways'`. **43**. In the RecentDelays column, the values have been entered into the DataFrame as a list. We would like each first value in its own column, each second value in its own column, and so on. If there isn't an Nth value, the value should be NaN.Expand the Series of lists into a DataFrame named `delays`, rename the columns `delay_1`, `delay_2`, etc. and replace the unwanted RecentDelays column in `df` with `delays`. The DataFrame should look much better now.``` FlightNumber Airline From To delay_1 delay_2 delay_30 10045 KLM London Paris 23.0 47.0 NaN1 10055 Air France Madrid Milan NaN NaN NaN2 10065 British Airways London Stockholm 24.0 43.0 87.03 10075 Air France Budapest Paris 13.0 NaN NaN4 10085 Swiss Air Brussels London 67.0 32.0 NaN``` Using MultiIndexes Go beyond flat DataFrames with additional index levelsDifficulty: *medium*Previous exercises have seen us analysing data from DataFrames equipped with a single index level. However, pandas also gives you the possibilty of indexing your data using *multiple* levels. This is very much like adding new dimensions to a Series or a DataFrame. For example, a Series is 1D, but by using a MultiIndex with 2 levels we gain of much the same functionality as a 2D DataFrame.The set of puzzles below explores how you might use multiple index levels to enhance data analysis.To warm up, we'll look make a Series with two index levels. **44**. Given the lists `letters = ['A', 'B', 'C']` and `numbers = list(range(10))`, construct a MultiIndex object from the product of the two lists. Use it to index a Series of random numbers. Call this Series `s`. 
**45.** Check the index of `s` is lexicographically sorted (this is a necessary property for indexing to work correctly with a MultiIndex).

**46.** Select the labels `1`, `3` and `6` from the second level of the MultiIndexed Series.

**47.** Slice the Series `s`; slice up to label 'B' for the first level and from label 5 onwards for the second level.

**48.** Sum the values in `s` for each label in the first level (you should have a Series giving you a total for labels A, B and C).

**49.** Suppose that `sum()` (and other methods) did not accept a `level` keyword argument. How else could you perform the equivalent of `s.sum(level=1)`?

**50.** Exchange the levels of the MultiIndex so we have an index of the form (letters, numbers). Is this new Series properly lexsorted? If not, sort it.

Minesweeper

Generate the numbers for safe squares in a Minesweeper grid

Difficulty: *medium* to *hard*

If you've ever used an older version of Windows, there's a good chance you've played with Minesweeper:
- https://en.wikipedia.org/wiki/Minesweeper_(video_game)

If you're not familiar with the game, imagine a grid of squares: some of these squares conceal a mine. If you click on a mine, you lose instantly. If you click on a safe square, you reveal a number telling you how many mines are found in the squares that are immediately adjacent. The aim of the game is to uncover all squares in the grid that do not contain a mine.

In this section, we'll make a DataFrame that contains the necessary data for a game of Minesweeper: coordinates of the squares, whether the square contains a mine and the number of mines found on adjacent squares.

**51.** Let's suppose we're playing Minesweeper on a 5 by 4 grid, i.e.

```
X = 5
Y = 4
```

To begin, generate a DataFrame `df` with two columns, `'x'` and `'y'` containing every coordinate for this grid. That is, the DataFrame should start:

```
   x  y
0  0  0
1  0  1
2  0  2
```

**52.** For this DataFrame `df`, create a new column of zeros (safe) and ones (mine). 
The probability of a mine occurring at each location should be 0.4.

**53.** Now create a new column for this DataFrame called `'adjacent'`. This column should contain the number of mines found on adjacent squares in the grid. (E.g. for the first row, which is the entry for the coordinate `(0, 0)`, count how many mines are found on the coordinates `(0, 1)`, `(1, 0)` and `(1, 1)`.)

**54.** For rows of the DataFrame that contain a mine, set the value in the `'adjacent'` column to NaN.

**55.** Finally, convert the DataFrame to a grid of the adjacent mine counts: columns are the `x` coordinate, rows are the `y` coordinate.

Plotting

Visualize trends and patterns in data

Difficulty: *medium*

To really get a good understanding of the data contained in your DataFrame, it is often essential to create plots: if you're lucky, trends and anomalies will jump right out at you. This functionality is baked into pandas and the puzzles below explore some of what's possible with the library.

**56.** Pandas is highly integrated with the plotting library matplotlib, and makes plotting DataFrames very user-friendly! Plotting in a notebook environment usually makes use of the following boilerplate:

```python
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
```

matplotlib is the plotting library which pandas' plotting functionality is built upon, and it is usually aliased to `plt`.

`%matplotlib inline` tells the notebook to show plots inline, instead of creating them in a separate window.

`plt.style.use('ggplot')` is a style theme that most people find agreeable, based upon the styling of R's ggplot package.

For starters, make a scatter plot of this random data, but use black X's instead of the default markers.

```python
df = pd.DataFrame({"xs": [1, 5, 2, 8, 1], "ys": [4, 2, 1, 9, 6]})
```

Consult the [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html) if you get stuck!
**57.** Columns in your DataFrame can also be used to modify colors and sizes. Bill has been keeping track of his performance at work over time, as well as how good he was feeling that day, and whether he had a cup of coffee in the morning. Make a plot which incorporates all four features of this DataFrame.

(Hint: If you're having trouble seeing the plot, try multiplying the Series which you choose to represent size by 10 or more.)

*The chart doesn't have to be pretty: this isn't a course in data viz!*

```python
df = pd.DataFrame({"productivity": [5, 2, 3, 1, 4, 5, 6, 7, 8, 3, 4, 8, 9],
                   "hours_in":     [1, 9, 6, 5, 3, 9, 2, 9, 1, 7, 4, 2, 2],
                   "happiness":    [2, 1, 3, 2, 3, 1, 2, 3, 1, 2, 2, 1, 3],
                   "caffienated":  [0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0]})
```

**58.** What if we want to plot multiple things? Pandas allows you to pass in a matplotlib *Axes* object for plots, and plots will also return an Axes object.

Make a bar plot of monthly revenue with a line plot of monthly advertising spending (numbers in millions):

```python
df = pd.DataFrame({"revenue": [57, 68, 63, 71, 72, 90, 80, 62, 59, 51, 47, 52],
                   "advertising": [2.1, 1.9, 2.7, 3.0, 3.6, 3.2, 2.7, 2.4, 1.8, 1.6, 1.3, 1.9],
                   "month": range(12)})
```

Now we're finally ready to create a candlestick chart, which is a very common tool used to analyze stock price data. A candlestick chart shows the opening, closing, highest, and lowest price for a stock during a time window. The color of the "candle" (the thick part of the bar) is green if the stock closed above its opening price, or red if below.

![Candlestick Example](img/candle.jpg)

This was initially designed to be a pandas plotting challenge, but it just so happens that this type of plot is just not feasible using pandas' methods. If you are unfamiliar with matplotlib, we have provided a function that will plot the chart for you so long as you can use pandas to get the data into the correct format.

Your first step should be to get the data in the correct format using pandas' time-series grouping function. 
We would like each candle to represent an hour's worth of data. You can write your own aggregation function which returns the open/high/low/close, but pandas has a built-in which also does this. The cell below contains helper functions. Call `day_stock_data()` to generate a DataFrame containing the prices a hypothetical stock sold for, and the time the sale occurred. Call `plot_candlestick(df)` on your properly aggregated and formatted stock data to print the candlestick chart.
import numpy as np

def float_to_time(x):
    return str(int(x)) + ":" + str(int(x % 1 * 60)).zfill(2) + ":" + str(int(x * 60 % 1 * 60)).zfill(2)

def day_stock_data():
    # NYSE is open from 9:30 to 4:00
    time = 9.5
    price = 100
    results = [(float_to_time(time), price)]
    while time < 16:
        elapsed = np.random.exponential(.001)
        time += elapsed
        if time > 16:
            break
        price_diff = np.random.uniform(.999, 1.001)
        price *= price_diff
        results.append((float_to_time(time), price))
    df = pd.DataFrame(results, columns=['time', 'price'])
    df.time = pd.to_datetime(df.time)
    return df

# Don't read me unless you get stuck!
def plot_candlestick(agg):
    """
    agg is a DataFrame which has a DatetimeIndex and five columns:
    ["open", "high", "low", "close", "color"]
    """
    fig, ax = plt.subplots()
    for time in agg.index:
        ax.plot([time.hour] * 2, agg.loc[time, ["high", "low"]].values, color="black")
        ax.plot([time.hour] * 2, agg.loc[time, ["open", "close"]].values,
                color=agg.loc[time, "color"], linewidth=10)
    ax.set_xlim((8, 16))
    ax.set_ylabel("Price")
    ax.set_xlabel("Hour")
    ax.set_title("OHLC of Stock Value During Trading Day")
    plt.show()
**Inheritance in Python**

Object Oriented Programming is a coding paradigm that revolves around creating modular code and avoiding multiple uses of the same structure. It is aimed at increasing the stability and usability of code. It consists of some well-known concepts stated below:

1. Classes: These often represent a collection of functions and attributes that are bound to a specific name and represent an abstract container.
2. Attributes: Generally, the data that is associated with each class. Examples are variables declared during the creation of the class.
3. Objects: An instance generated from the class. There can be multiple objects of a class, and every individual object takes on the properties of the class.
# Implementation of Classes in Python

# Creating a Class Math with 2 functions
class Math:
    def subtract(self, i, j):
        return i - j

    def add(self, x, y):
        return x + y

# Creating an object of the class Math
math_child = Math()
test_int_A = 10
test_int_B = 20
print(math_child.subtract(test_int_B, test_int_A))

# Creating a Class Person with an attribute and an initialization function
class Person:
    name = 'George'

    def __init__(self):
        self.age = 34

# Creating an object of the class and printing its attributes
p1 = Person()
print(p1.name)
print(p1.age)
George
34
MIT
01. Getting Started with Python/Python_Revision_and_Statistical_Methods.ipynb
Jamess-ai/ai-with-python-series
**Constructors and Inheritance**

The constructor is an initialization function that is always called when a class's instance is created. The constructor is named `__init__()` in Python and defines the specifics of instantiating a class and its attributes.

Class inheritance is a concept of taking values of a class from its origin and giving the same properties to a child class. It creates relationship models like "Class A is a Class B", like a triangle (child class) is a shape (parent class). All the functions and attributes of a superclass are inherited by the subclass.

1. Overriding: During inheritance, the behavior of the child class or the subclass can be modified. Doing this modification on functions is called "overriding" and is achieved by declaring functions in the subclass with the same name. Functions created in the subclass will take precedence over those in the parent class.
2. Composition: Classes can also be built from other, smaller classes that support relationship models like "Class A has a Class B", like a Department has Students.
3. Polymorphism: The functionality of similar-looking functions can be changed at run-time, during their implementation. This is achieved using polymorphism, which includes two objects of different parent classes but having the same set of functions. The outward look of these functions is the same, but implementations differ.
# Creating a class and instantiating variables class Animal_Dog: species = "Canis" def __init__(self, name, age): self.name = name self.age = age # Instance method def description(self): return f"{self.name} is {self.age} years old" # Another instance method def animal_sound(self, sound): return f"{self.name} says {sound}" # Check the object’s type Animal_Dog("Bunny", 7) # Even though a and b are both instances of the Dog class, they represent two distinct objects in memory. a = Animal_Dog("Fog", 6) b = Animal_Dog("Bunny", 7) a == b # Instantiating objects with the class’s constructor arguments fog = Animal_Dog("Fog", 6) bunny = Animal_Dog("Bunny", 7) print (bunny.name) print (bunny.age) # Accessing attributes directly print (bunny.species) # Creating a new Object to access through instance functions fog = Animal_Dog("Fog", 6) fog.description() fog.animal_sound("Whoof Whoof") fog.animal_sound("Bhoof Whoof") # Inheriting the Class class GoldRet(Animal_Dog): def speak(self, sound="Warf"): return f"{self.name} says {sound}" bunny = GoldRet("Bunny", 5) bunny.speak() bunny.speak("Grrr Grrr") # Code Snippet 3: Variables and data types int_var = 100 # Integer variable float_var = 1000.0 # Float value string_var = "John" # String variable print (int_var) print (float_var) print (string_var)
100 1000.0 John
MIT
01. Getting Started with Python/Python_Revision_and_Statistical_Methods.ipynb
Jamess-ai/ai-with-python-series
Variables and Data Types in Python

Variables are reserved locations in the computer's memory that store the values defined within them. Whenever a variable is created, a piece of the computer's memory is allocated to it. Based on the data type of this declared variable, the interpreter allocates varied chunks of memory; therefore, assigning a variable as an integer, float, string, etc. invokes a different size of memory allocation.

• Declaration: Variables in Python do not need explicit declaration to reserve memory space. This happens automatically when a value is assigned. The (=) sign is used to assign values to variables.
• Multiple Assignment: Python allows multiple variables to hold a single value, and this declaration can be done together for all the variables.
• Deleting References: A memory reference, once created, can also be deleted. The 'del' statement is used to delete the reference to a number object, and it also supports deleting multiple objects at once.
• Strings: Strings are sets of characters, which Python represents through single or double quotes. String subsets can be formed using the slice operator ([ ] and [:]), where indexing starts from 0 on the left and -1 on the right. The (+) sign is the string concatenation operator and the (*) sign is the repetition operator.

Datatype Conversion

| Function | Description |
| --- | --- |
| int(x [,base]) | Converts the given input to an integer; base is used for string conversions. |
| long(x [,base]) | Converts the given input to a long integer (Python 2 only). |
| float(x) | Converts to a floating-point number. |
| complex(real [,imag]) | Creates a complex number. |
| str(x) | Converts any given object to a string. |
| eval(str) | Evaluates the given string and returns an object. |
| tuple(s) | Converts to a tuple. |
| list(s) | Converts the given input to a list. |
| set(s) | Converts the given value to a set. |
| unichr(x) | Converts an integer to a Unicode character (Python 2 only; chr in Python 3). |

Looking at Variables and Datatypes

Data stored in Python's variables is abstracted as objects. Data is represented by objects or through relations between individual objects. Therefore, every variable and its corresponding value is an object of a class, depending on the stored data.
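A quick Python 3 sketch of the conversion functions in the table above (`long()` and `unichr()` exist only in Python 2; `chr()` is the Python 3 counterpart of `unichr()`):

```python
# Datatype conversions in Python 3
print(int("ff", 16))     # string to integer using base 16 -> 255
print(float("3.5"))      # string to float -> 3.5
print(complex(2, 3))     # real and imaginary parts -> (2+3j)
print(str(255))          # any object to string -> '255'
print(eval("2 + 3"))     # evaluate a string expression -> 5
print(tuple([1, 2, 3]))  # list to tuple -> (1, 2, 3)
print(list("abc"))       # string to list -> ['a', 'b', 'c']
print(set([1, 1, 2]))    # list to set, duplicates removed -> {1, 2}
print(chr(65))           # integer to character (unichr in Python 2) -> 'A'
```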
# Multiple Assignment: all three names refer to the same memory location
a = b = c = 1

# Assigning multiple variables with multiple values
a, b, c = 1, 2, "jacob"

# Assigning and deleting variable references
var1 = 1
var2 = 10
del var1         # removes the reference of var1
del var2         # 'del' also accepts multiple names at once: del var1, var2

# Basic String Operations in Python
str = 'Hello World!'   # note: this shadows the built-in str() for the rest of this cell
print (str)

# Print the first character of the string variable
print (str[0])

# Print characters from the 3rd to 5th positions
print (str[2:5])

# Print the string twice
print (str * 2)

# Concatenate the string and print
print (str + "TEST")
_____no_output_____
MIT
01. Getting Started with Python/Python_Revision_and_Statistical_Methods.ipynb
Jamess-ai/ai-with-python-series
Models and features
#for route in df.route.unique(): for route in routes: try: df2 = df[df['route'] == route] sns.barplot(data=df2, x="model", y="R2_test", hue="feature_labels") y1,y2 = plt.ylim() plt.ylim((max(0,y1),y2)) plt.title(route) plt.ylabel("$R^2$(test)") f = plt.gcf() f.set_size_inches(15, 10) plt.savefig(f"figs\\models\\models_{route}.pdf", bbox_inches="tight") plt.savefig(f"figs\\models\\models_{route}.png", bbox_inches="tight") plt.show() except: pass #df[(df['route']=='Dunajska1') & (df['feature_labels']=="0655-1")]
_____no_output_____
MIT
analyse_regression_results.ipynb
SusTra/TraCo
Best features
fig, axs = plt.subplots(4, 2, sharey=False) #for i, (route,ax) in enumerate(zip(df.route.unique(), axs.flatten())): for i, (route,ax) in enumerate(zip(routes, axs.flatten())): try: df2 = df[df['route'] == route] df3 = pd.DataFrame() for features in df2.feature_labels.unique(): df_features = df2[df2['feature_labels'] == features] df_best_model = df_features[df_features['R2_test'] == df_features['R2_test'].max()] df3 = df3.append(df_best_model, ignore_index=True) #print(df_best_model.model) sns.barplot(data=df3, x="feature_labels", y="R2_test", ax=ax) #fig = plt.gcf() #fig.setsiz y1,y2 = ax.get_ylim() ax.set_ylim((max(0,y1),y2)) ax.set_title(route) ax.set_ylabel("$R^2$(test)") if i < 6: ax.set_xlabel("") else: ax.set_xlabel("features") except: pass fig.set_size_inches(15, 20) plt.savefig("figs\\models\\features.pdf", bbox_inches="tight") plt.savefig("figs\\models\\features.png", bbox_inches="tight") plt.show() fig, axs = plt.subplots(4, 2, sharey=False) #for i, (route,ax) in enumerate(zip(df.route.unique(), axs.flatten())): for i, (route,ax) in enumerate(zip(routes, axs.flatten())): try: df2 = df[df['route'] == route] df3 = pd.DataFrame() for features in df2.feature_labels.unique(): df_features = df2[df2['feature_labels'] == features] df_best_model = df_features[df_features['R2_train'] == df_features['R2_train'].max()] df3 = df3.append(df_best_model, ignore_index=True) #print(df_best_model.model) sns.barplot(data=df3, x="feature_labels", y="R2_train", ax=ax) #fig = plt.gcf() #fig.setsiz y1,y2 = ax.get_ylim() ax.set_ylim((max(0,y1),y2)) ax.set_title(route) ax.set_ylabel("$R^2$(train)") if i < 6: ax.set_xlabel("") else: ax.set_xlabel("features") except: pass fig.set_size_inches(15, 20) plt.savefig("figs\\models\\features_train.pdf", bbox_inches="tight") plt.savefig("figs\\models\\features_train.png", bbox_inches="tight") plt.show()
_____no_output_____
MIT
analyse_regression_results.ipynb
SusTra/TraCo
Best models
#for features in df.features.unique(): fig, axs = plt.subplots(4, 2, sharey=False) #for i, (route,ax) in enumerate(zip(df.route.unique(), axs.flatten())): for i, (route,ax) in enumerate(zip(routes, axs.flatten())): try: df2 = df[df['route'] == route] df3 = pd.DataFrame() #features = df2.features.unique() #max_feature = sorted(features, key=len, reverse=True)[0] #df2 = df2[df2['features']==max_feature] for model in df2.model.unique(): df_model = df2[df2['model'] == model] df_best_model = df_model[df_model['R2_test'] == df_model['R2_test'].max()] df3 = df3.append(df_best_model, ignore_index=True) #print(df_best_model.feature_labels) sns.barplot(data=df3, x="model", y="R2_test", ax=ax) ax.set_title(route) ax.set_ylabel("$R^2$(test)") if i < 6: ax.set_xlabel("") else: ax.set_xlabel("models") except: pass fig.set_size_inches(15, 20) plt.savefig("figs\\models\\models.pdf", bbox_inches="tight") plt.savefig("figs\\models\\models.png", bbox_inches="tight") plt.show() #for features in df.features.unique(): fig, axs = plt.subplots(4, 2, sharey=False) #for i, (route,ax) in enumerate(zip(df.route.unique(), axs.flatten())): for i, (route,ax) in enumerate(zip(routes, axs.flatten())): try: df2 = df[df['route'] == route] df3 = pd.DataFrame() #features = df2.features.unique() #max_feature = sorted(features, key=len, reverse=True)[0] #df2 = df2[df2['features']==max_feature] for model in df2.model.unique(): df_model = df2[df2['model'] == model] df_best_model = df_model[df_model['R2_train'] == df_model['R2_train'].max()] df3 = df3.append(df_best_model, ignore_index=True) #print(df_best_model.feature_labels) sns.barplot(data=df3, x="model", y="R2_train", ax=ax) ax.set_title(route) ax.set_ylabel("$R^2$(train)") if i < 6: ax.set_xlabel("") else: ax.set_xlabel("models") except: pass fig.set_size_inches(15, 20) plt.savefig("figs\\models\\models_train.pdf", bbox_inches="tight") plt.savefig("figs\\models\\models_train.png", bbox_inches="tight") plt.show()
_____no_output_____
MIT
analyse_regression_results.ipynb
SusTra/TraCo
Best results
df_best = pd.read_csv("regression_results_best.csv") df_best['feature_labels'] = df_best['features'].map(lambda x: set_feature_labels(x, sep=", ")) df_best['R2_test'] = round(df_best['R2_test'],3) df_best['R2_train'] = round(df_best['R2_train'],3) df_best = df_best[['route', 'feature_labels','model', 'R2_train','R2_test']] df_best.columns = ['segment', 'features', 'best model', 'R2(train)', 'R2(test)'] f = open("best_results.txt", "w") print(df_best.to_latex(index=False), file=f) f.close() df_best
_____no_output_____
MIT
analyse_regression_results.ipynb
SusTra/TraCo
Sentiment Analysis Using XGBoost in SageMaker

_Deep Learning Nanodegree Program | Deployment_

---

In this example of using Amazon's SageMaker service we will construct a gradient boosted tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson, although it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon.

Instructions

Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!

In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.

> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can typically be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.

Step 1: Downloading the data

The dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.

> Maas, Andrew L., et al.
[Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.We begin by using some Jupyter Notebook magic to download and extract the dataset.
%mkdir ../data !wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
mkdir: cannot create directory ‘../data’: File exists --2020-08-26 07:30:30-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10 Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84125825 (80M) [application/x-gzip] Saving to: ‘../data/aclImdb_v1.tar.gz’ ../data/aclImdb_v1. 100%[===================>] 80.23M 6.67MB/s in 16s 2020-08-26 07:30:47 (5.05 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
Step 2: Preparing the data

The data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
import os import glob def read_imdb_data(data_dir='../data/aclImdb'): data = {} labels = {} for data_type in ['train', 'test']: data[data_type] = {} labels[data_type] = {} for sentiment in ['pos', 'neg']: data[data_type][sentiment] = [] labels[data_type][sentiment] = [] path = os.path.join(data_dir, data_type, sentiment, '*.txt') files = glob.glob(path) for f in files: with open(f) as review: data[data_type][sentiment].append(review.read()) # Here we represent a positive review by '1' and a negative review by '0' labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0) assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \ "{}/{} data size does not match labels size".format(data_type, sentiment) return data, labels data, labels = read_imdb_data() print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format( len(data['train']['pos']), len(data['train']['neg']), len(data['test']['pos']), len(data['test']['neg']))) from sklearn.utils import shuffle def prepare_imdb_data(data, labels): """Prepare training and test sets from IMDb movie reviews.""" #Combine positive and negative reviews and labels data_train = data['train']['pos'] + data['train']['neg'] data_test = data['test']['pos'] + data['test']['neg'] labels_train = labels['train']['pos'] + labels['train']['neg'] labels_test = labels['test']['pos'] + labels['test']['neg'] #Shuffle reviews and corresponding labels within training and test sets data_train, labels_train = shuffle(data_train, labels_train) data_test, labels_test = shuffle(data_test, labels_test) # Return a unified training data, test data, training labels, test labets return data_train, data_test, labels_train, labels_test train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels) print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X))) train_X[100]
_____no_output_____
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
Step 3: Processing the data

Now that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something that will be usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
import nltk nltk.download("stopwords") from nltk.corpus import stopwords from nltk.stem.porter import * stemmer = PorterStemmer() import re from bs4 import BeautifulSoup def review_to_words(review): text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case words = text.split() # Split string into words words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords words = [PorterStemmer().stem(w) for w in words] # stem return words import pickle cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists def preprocess_data(data_train, data_test, labels_train, labels_test, cache_dir=cache_dir, cache_file="preprocessed_data.pkl"): """Convert each review to words; read from cache if available.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = pickle.load(f) print("Read preprocessed data from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Preprocess training and test data to obtain words for each review #words_train = list(map(review_to_words, data_train)) #words_test = list(map(review_to_words, data_test)) words_train = [review_to_words(review) for review in data_train] words_test = [review_to_words(review) for review in data_test] # Write to cache file for future runs if cache_file is not None: cache_data = dict(words_train=words_train, words_test=words_test, labels_train=labels_train, labels_test=labels_test) with open(os.path.join(cache_dir, cache_file), "wb") as f: pickle.dump(cache_data, f) print("Wrote preprocessed data to cache file:", cache_file) else: # Unpack data loaded from cache file words_train, words_test, 
labels_train, labels_test = (cache_data['words_train'], cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test']) return words_train, words_test, labels_train, labels_test # Preprocess data train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
Read preprocessed data from cache file: preprocessed_data.pkl
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
Extract Bag-of-Words features

For the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set, so our transformer can only use the training set to construct a representation.
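A minimal pure-Python sketch of the Bag-of-Words idea (the actual extraction below uses sklearn's `CountVectorizer`): the vocabulary is built from the training documents only, and words not in that vocabulary are simply dropped when later documents are transformed.

```python
# Tokenized "training" documents (already split into words, like our reviews)
train_docs = [["the", "movie", "was", "great"],
              ["the", "movie", "was", "terrible"]]

# Build the vocabulary from the *training* documents only
vocabulary = {w: i for i, w in enumerate(sorted({w for d in train_docs for w in d}))}

def to_counts(doc, vocabulary):
    """Count occurrences of each vocabulary word; unknown words are ignored."""
    row = [0] * len(vocabulary)
    for w in doc:
        if w in vocabulary:
            row[vocabulary[w]] += 1
    return row

print([to_counts(d, vocabulary) for d in train_docs])
# A "test" document containing an unseen word ("acting") drops that word:
print(to_counts(["great", "acting"], vocabulary))
```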
import numpy as np from sklearn.feature_extraction.text import CountVectorizer import joblib # joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays def extract_BoW_features(words_train, words_test, vocabulary_size=5000, cache_dir=cache_dir, cache_file="bow_features.pkl"): """Extract Bag-of-Words for a given set of documents, already preprocessed into words.""" # If cache_file is not None, try to read from it first cache_data = None if cache_file is not None: try: with open(os.path.join(cache_dir, cache_file), "rb") as f: cache_data = joblib.load(f) print("Read features from cache file:", cache_file) except: pass # unable to read from cache, but that's okay # If cache is missing, then do the heavy lifting if cache_data is None: # Fit a vectorizer to training documents and use it to transform them # NOTE: Training documents have already been preprocessed and tokenized into words; # pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x vectorizer = CountVectorizer(max_features=vocabulary_size, preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed features_train = vectorizer.fit_transform(words_train).toarray() # Apply the same vectorizer to transform the test documents (ignore unknown words) features_test = vectorizer.transform(words_test).toarray() # NOTE: Remember to convert the features using .toarray() for a compact representation # Write to cache file for future runs (store vocabulary as well) if cache_file is not None: vocabulary = vectorizer.vocabulary_ cache_data = dict(features_train=features_train, features_test=features_test, vocabulary=vocabulary) with open(os.path.join(cache_dir, cache_file), "wb") as f: joblib.dump(cache_data, f) print("Wrote features to cache file:", cache_file) else: # Unpack data loaded from cache file features_train, features_test, vocabulary = (cache_data['features_train'], cache_data['features_test'], cache_data['vocabulary']) # Return both the extracted 
features as well as the vocabulary return features_train, features_test, vocabulary # Extract Bag of Words features for both training and test datasets train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
Read features from cache file: bow_features.pkl
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
Step 4: Classification using XGBoost

Now that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker.

Writing the dataset

The XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to a file and upload the files to S3. In addition, we will write the test set input to a file and upload the file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
import pandas as pd val_X = pd.DataFrame(train_X[:10000]) train_X = pd.DataFrame(train_X[10000:]) val_y = pd.DataFrame(train_y[:10000]) train_y = pd.DataFrame(train_y[10000:]) test_y = pd.DataFrame(test_y) test_X = pd.DataFrame(test_X)
_____no_output_____
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets should contain no headers or index and that for the training and validation data, the label should occur first for each sample.For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists. data_dir = '../data/xgboost' if not os.path.exists(data_dir): os.makedirs(data_dir) # First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth # labels, instead we will use them later to compare with our model output. pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False) pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False) pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False) # To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None. train_X = val_X = train_y = val_y = None
_____no_output_____
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
Uploading Training / Validation files to S3

Amazon's S3 service allows us to store files that can be accessed both by built-in training models such as the XGBoost model we will be using and by custom models such as the one we will see a little later.

For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low-level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high-level functionality, in which certain choices have been made on the user's behalf. The low-level approach benefits from allowing the user a great deal of flexibility, while the high-level approach makes development much quicker. For our purposes we will opt for the high-level approach, although using the low-level approach is certainly an option.

Recall the method `upload_data()`, which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the `key_prefix` variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.

For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and in addition the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
import sagemaker session = sagemaker.Session() # Store the current SageMaker session # S3 prefix (which folder will we use) prefix = 'sentiment-xgboost' test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix) val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix) train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
_____no_output_____
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
(TODO) Creating a hypertuned XGBoost model

Now that the data has been uploaded it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
from sagemaker import get_execution_role

# Our current execution role is required when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()

# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri

container = get_image_uri(session.boto_region_name, 'xgboost')

# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
#       It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
#       recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
#       output path.
xgb = sagemaker.estimator.Estimator(container,
                                    role,
                                    train_instance_count=1,
                                    train_instance_type='ml.m4.xlarge',
                                    output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
                                    sagemaker_session=session)

# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
#       label so we should be using the 'binary:logistic' objective.
xgb.set_hyperparameters(max_depth=5,
                        eta=0.2,
                        gamma=5,
                        min_child_weight=6,
                        subsample=0.8,
                        objective='binary:logistic',
                        early_stopping_rounds=10,  # the hyperparameter is early_stopping_rounds, not early_stopping
                        num_round=300)
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
(TODO) Create the hyperparameter tuner

Now that the base estimator has been set up, we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.

**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure not to set the total number of models (jobs) too high.
# First, make sure to import the relevant objects used to construct the tuner from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner # TODO: Create the hyperparameter tuner object xgb_hyperparameter_tuner = HyperparameterTuner(estimator=xgb, objective_metric_name='validation:rmse', objective_type='Minimize', max_jobs=6, max_parallel_jobs=3, hyperparameter_ranges={ 'max_depth': IntegerParameter(3,6), 'eta': ContinuousParameter(0.05, 0.5), 'gamma': IntegerParameter(2,8), 'min_child_weight': IntegerParameter(3,8), 'subsample': ContinuousParameter(0.5, 0.9) })
_____no_output_____
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
Fit the hyperparameter tuner

Now that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='csv') s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='csv') xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
_____no_output_____
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our training job we need to call the `wait()` method.
xgb_hyperparameter_tuner.wait()
..................................................................................................................................................................................................................................................................................................................!
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
(TODO) Testing the model

Now that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be performing an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set.

Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best trained job.
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge # should be more than enough. xgb_transformer = xgb_attached.transformer(instance_count=1, instance_type='ml.m4.xlarge')
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data. xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
_____no_output_____
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
xgb_transformer.wait()
............................Arguments: serve [2020-08-26 08:02:52 +0000] [1] [INFO] Starting gunicorn 19.7.1 [2020-08-26 08:02:52 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1) [2020-08-26 08:02:52 +0000] [1] [INFO] Using worker: gevent [2020-08-26 08:02:52 +0000] [36] [INFO] Booting worker with pid: 36 [2020-08-26 08:02:52 +0000] [37] [INFO] Booting worker with pid: 37 [2020-08-26 08:02:52 +0000] [38] [INFO] Booting worker with pid: 38 [2020-08-26 08:02:52 +0000] [39] [INFO] Booting worker with pid: 39 [2020-08-26:08:02:52:INFO] Model loaded successfully for worker : 36 [2020-08-26:08:02:52:INFO] Model loaded successfully for worker : 37 [2020-08-26:08:02:52:INFO] Model loaded successfully for worker : 38 [2020-08-26:08:02:52:INFO] Model loaded successfully for worker : 39 [2020-08-26:08:02:52:INFO] Sniff delimiter as ',' [2020-08-26:08:02:52:INFO] Determined delimiter of CSV input is ',' 2020-08-26T08:02:52.548:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD (repeated "Sniff delimiter" / "Determined delimiter" log lines omitted)
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
download: s3://sagemaker-ap-south-1-714138043953/xgboost-200826-0732-001-281d2aa4-2020-08-26-07-58-22-744/test.csv.out to ../data/xgboost/test.csv.out
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
The last step is to read in the output from our model and convert it to something a little more usable; in this case we want the sentiment to be either `1` (positive) or `0` (negative). Then we compare against the ground-truth labels.
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None) predictions = [round(num) for num in predictions.squeeze().values] from sklearn.metrics import accuracy_score accuracy_score(test_y, predictions)
_____no_output_____
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
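Accuracy alone can hide class-specific errors; a confusion matrix additionally shows how many positive and negative reviews were misclassified. A minimal hand-rolled sketch of what `sklearn.metrics.confusion_matrix` computes, shown here on toy labels (in the notebook you would pass `test_y` and `predictions` from the cells above):

```python
def confusion_counts(y_true, y_pred):
    """Return (tn, fp, fn, tp) for binary 0/1 labels."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tn, fp, fn, tp

# Toy ground-truth labels and rounded model outputs.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]
print(confusion_counts(y_true, y_pred))  # (2, 1, 1, 2)
```

Accuracy is then simply `(tn + tp) / len(y_true)`, matching what `accuracy_score` reports.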
Optional: Clean up. The default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
# First we will remove all of the files contained in the data_dir directory !rm $data_dir/* # And then we delete the directory itself !rmdir $data_dir # Similarly we will remove the files in the cache_dir directory and the directory itself !rm $cache_dir/* !rmdir $cache_dir
_____no_output_____
MIT
Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb
itirkaa/sagemaker-deployment
The Boston housing data is a built-in dataset in Python's sklearn library. You will be using two of the variables from this dataset, which are stored in **df**: the median home price in thousands of dollars and the crime per capita in the area of the home, shown above.`1.` Use this dataframe to fit a linear model to predict the home price based on the crime rate. Use your output to answer the first quiz below. Don't forget an intercept.
df['intercept'] = 1 lm = sms.OLS(df['MedianHomePrice'], df[['intercept', 'CrimePerCapita']]) results = lm.fit() results.summary()
_____no_output_____
Unlicense
03 Stats/14 Regression/HomesVCrime - Solution.ipynb
Alashmony/DA_Udacity
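As a sanity check on the OLS summary above, the slope and intercept for a single predictor can be recovered with the closed-form least-squares estimates. A minimal sketch on synthetic data (not the Boston data; the helper name `linfit` is illustrative):

```python
from statistics import mean

def linfit(x, y):
    """Closed-form OLS for one predictor: returns (slope, intercept)."""
    xbar, ybar = mean(x), mean(y)
    # slope = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
            sum((xi - xbar) ** 2 for xi in x)
    return slope, ybar - slope * xbar

# y = 2x + 1 exactly, so the fit should recover slope 2 and intercept 1.
x = list(range(10))
y = [2 * xi + 1 for xi in x]
print(linfit(x, y))  # (2.0, 1.0)
```

These are the same estimates reported in the `results.summary()` table, just computed by hand.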
`2.` Plot the relationship between the crime rate and median home price below. Use your plot and the results from the first question as necessary to answer the remaining quiz questions below.
plt.scatter(df['CrimePerCapita'], df['MedianHomePrice']); plt.xlabel('Crime/Capita'); plt.ylabel('Median Home Price'); plt.title('Median Home Price vs. CrimePerCapita'); ## To show the line that was fit I used the following code from ## https://plot.ly/matplotlib/linear-fits/ ## It isn't the greatest fit... but it isn't awful either import plotly.plotly as py import plotly.graph_objs as go # MatPlotlib import matplotlib.pyplot as plt from matplotlib import pylab # Scientific libraries from numpy import arange,array,ones from scipy import stats xi = arange(0,100) A = array([ xi, ones(100)]) # (Almost) linear sequence y = df['MedianHomePrice'] x = df['CrimePerCapita'] # Generated linear fit slope, intercept, r_value, p_value, std_err = stats.linregress(x,y) line = slope*xi+intercept plt.plot(x,y,'o', xi, line); plt.xlabel('Crime/Capita'); plt.ylabel('Median Home Price'); pylab.title('Median Home Price vs. CrimePerCapita');
_____no_output_____
Unlicense
03 Stats/14 Regression/HomesVCrime - Solution.ipynb
Alashmony/DA_Udacity
Continuous Control --- In this notebook, you will learn how to use the Unity ML-Agents environment for the second project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program. 1. Start the Environment. We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
import numpy as np import torch import matplotlib.pyplot as plt import time from unityagents import UnityEnvironment from collections import deque from itertools import count import datetime from ddpg import DDPG, ReplayBuffer %matplotlib inline
_____no_output_____
MIT
Continuous_Control.ipynb
bobiblazeski/reacher
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.

- **Mac**: `"path/to/Reacher.app"`
- **Windows** (x86): `"path/to/Reacher_Windows_x86/Reacher.exe"`
- **Windows** (x86_64): `"path/to/Reacher_Windows_x86_64/Reacher.exe"`
- **Linux** (x86): `"path/to/Reacher_Linux/Reacher.x86"`
- **Linux** (x86_64): `"path/to/Reacher_Linux/Reacher.x86_64"`
- **Linux** (x86, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86"`
- **Linux** (x86_64, headless): `"path/to/Reacher_Linux_NoVis/Reacher.x86_64"`

For instance, if you are using a Mac, then you downloaded `Reacher.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:

```python
env = UnityEnvironment(file_name="Reacher.app")
```
#env = UnityEnvironment(file_name='envs/Reacher_Linux_NoVis_20/Reacher.x86_64') # Headless env = UnityEnvironment(file_name='envs/Reacher_Linux_20/Reacher.x86_64') # Visual
INFO:unityagents: 'Academy' started successfully! Unity Academy name: Academy Number of Brains: 1 Number of External Brains : 1 Lesson number : 0 Reset Parameters : goal_speed -> 1.0 goal_size -> 5.0 Unity brain name: ReacherBrain Number of Visual Observations (per agent): 0 Vector Observation space type: continuous Vector Observation space size (per agent): 33 Number of stacked Vector Observation: 1 Vector Action space type: continuous Vector Action space size (per agent): 4 Vector Action descriptions: , , ,
MIT
Continuous_Control.ipynb
bobiblazeski/reacher
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
# get the default brain brain_name = env.brain_names[0] brain = env.brains[brain_name]
_____no_output_____
MIT
Continuous_Control.ipynb
bobiblazeski/reacher
2. Examine the State and Action Spaces. In this environment, a double-jointed arm can move to target locations. A reward of `+0.1` is provided for each step that the agent's hand is in the goal location. Thus, the goal of your agent is to maintain its position at the target location for as many time steps as possible. The observation space consists of `33` variables corresponding to position, rotation, velocity, and angular velocities of the arm. Each action is a vector with four numbers, corresponding to torque applicable to two joints. Every entry in the action vector must be a number between `-1` and `1`. Run the code cell below to print some information about the environment.
# reset the environment env_info = env.reset(train_mode=True)[brain_name] # number of agents num_agents = len(env_info.agents) print('Number of agents:', num_agents) # size of each action action_size = brain.vector_action_space_size print('Size of each action:', action_size) # examine the state space states = env_info.vector_observations state_size = states.shape[1] print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size)) print('The state for the first agent looks like:', states[0])
Number of agents: 20 Size of each action: 4 There are 20 agents. Each observes a state with length: 33 The state for the first agent looks like: [ 0.00000000e+00 -4.00000000e+00 0.00000000e+00 1.00000000e+00 -0.00000000e+00 -0.00000000e+00 -4.37113883e-08 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 -1.00000000e+01 0.00000000e+00 1.00000000e+00 -0.00000000e+00 -0.00000000e+00 -4.37113883e-08 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 5.75471878e+00 -1.00000000e+00 5.55726624e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00 -1.68164849e-01]
MIT
Continuous_Control.ipynb
bobiblazeski/reacher
3. Take Random Actions in the Environment. In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment. Once this cell is executed, you will watch the agent's performance, if it selects an action at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment. Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
env_info = env.reset(train_mode=False)[brain_name]     # reset the environment
states = env_info.vector_observations                  # get the current state (for each agent)
scores = np.zeros(num_agents)                          # initialize the score (for each agent)
while True:
    actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
    actions = np.clip(actions, -1, 1)                  # all actions between -1 and 1
    env_info = env.step(actions)[brain_name]           # send all actions to the environment
    next_states = env_info.vector_observations         # get next state (for each agent)
    rewards = env_info.rewards                         # get reward (for each agent)
    dones = env_info.local_done                        # see if episode finished
    scores += env_info.rewards                         # update the score (for each agent)
    states = next_states                               # roll over states to next time step
    if np.any(dones):                                  # exit loop if episode finished
        break
    break                                              # take a single step only
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
Total score (averaged over agents) this episode: 0.0
MIT
Continuous_Control.ipynb
bobiblazeski/reacher
When finished, you can close the environment. 4. It's Your Turn! Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:

```python
env_info = env.reset(train_mode=True)[brain_name]
```
BUFFER_SIZE = int(5e5)   # replay buffer size
CACHE_SIZE = int(6e4)
BATCH_SIZE = 256         # minibatch size
GAMMA = 0.99             # discount factor
TAU = 1e-3               # for soft update of target parameters
LR_ACTOR = 1e-3          # learning rate of the actor
LR_CRITIC = 1e-3         # learning rate of the critic
WEIGHT_DECAY = 0         # L2 weight decay
UPDATE_EVERY = 20        # timesteps between updates
NUM_UPDATES = 15         # num of update passes when updating
EPSILON = 1.0            # epsilon for the noise process added to the actions
EPSILON_DECAY = 1e-6     # decay for epsilon above
NOISE_SIGMA = 0.05

# 96 neurons solves the environment consistently and usually fastest
fc1_units = 96
fc2_units = 96
random_seed = 23

def store(buffers, states, actions, rewards, next_states, dones, timestep):
    memory, cache = buffers
    for state, action, reward, next_state, done in zip(states, actions, rewards, next_states, dones):
        memory.add(state, action, reward, next_state, done)
        cache.add(state, action, reward, next_state, done)

def learn(agent, buffers, timestep):
    memory, cache = buffers
    if len(memory) > BATCH_SIZE and timestep % UPDATE_EVERY == 0:
        for _ in range(NUM_UPDATES):
            experiences = memory.sample()
            agent.learn(experiences, GAMMA)
        for _ in range(3):
            experiences = cache.sample()
            agent.learn(experiences, GAMMA)

avg_over = 100
print_every = 10

def ddpg(agent, buffers, n_episodes=200, stopOnSolved=True):
    print('Start: ', datetime.datetime.now())
    scores_deque = deque(maxlen=avg_over)
    scores_global = []
    average_global = []
    min_global = []
    max_global = []
    best_avg = -np.inf
    tic = time.time()
    print('\rEpis,EpAvg,GlAvg, Max, Min, Time')
    for i_episode in range(1, n_episodes + 1):
        env_info = env.reset(train_mode=True)[brain_name]  # reset the environment
        states = env_info.vector_observations              # get the current state (for each agent)
        scores = np.zeros(num_agents)                      # initialize the score (for each agent)
        agent.reset()
        score_average = 0
        timestep = time.time()
        for t in count():
            actions = agent.act(states, add_noise=True)
            env_info = env.step(actions)[brain_name]       # send all actions to the environment
            next_states = env_info.vector_observations     # get next state (for each agent)
            rewards = env_info.rewards                     # get reward (for each agent)
            dones = env_info.local_done                    # see if episode finished
            store(buffers, states, actions, rewards, next_states, dones, t)
            learn(agent, buffers, t)
            states = next_states                           # roll over states to next time step
            scores += rewards                              # update the score (for each agent)
            if np.any(dones):                              # exit loop if episode finished
                break
        score = np.mean(scores)
        scores_deque.append(score)
        score_average = np.mean(scores_deque)
        scores_global.append(score)
        average_global.append(score_average)
        min_global.append(np.min(scores))
        max_global.append(np.max(scores))
        print('\r {}, {:.2f}, {:.2f}, {:.2f}, {:.2f}, {:.2f}'
              .format(str(i_episode).zfill(3), score, score_average,
                      np.max(scores), np.min(scores), time.time() - timestep), end="\n")
        if i_episode % print_every == 0:
            agent.save('./')
        if stopOnSolved and score_average >= 30.0:
            toc = time.time()
            print('\nSolved in {:d} episodes!\tAvg Score: {:.2f}, time: {}'.format(i_episode, score_average, toc - tic))
            agent.save('./' + str(i_episode) + '_')
            break
    print('End: ', datetime.datetime.now())
    return scores_global, average_global, max_global, min_global

# Create new empty buffers to start training from scratch
buffers = [ReplayBuffer(action_size, BUFFER_SIZE, BATCH_SIZE, random_seed),
           ReplayBuffer(action_size, CACHE_SIZE, BATCH_SIZE, random_seed)]
agent = DDPG(state_size=state_size, action_size=action_size, random_seed=23,
             fc1_units=96, fc2_units=96)
scores, averages, maxima, minima = ddpg(agent, buffers, n_episodes=130)

plt.plot(np.arange(1, len(scores)+1), scores)
plt.plot(np.arange(1, len(averages)+1), averages)
plt.plot(np.arange(1, len(maxima)+1), maxima)
plt.plot(np.arange(1, len(minima)+1), minima)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(['EpAvg', 'GlAvg', 'Max', 'Min'], loc='upper left')
plt.show()

# Smaller agent learning this task from larger agent experiences
agent = DDPG(state_size=state_size, action_size=action_size, random_seed=23,
             fc1_units=48, fc2_units=48)
scores, averages, maxima, minima = ddpg(agent, buffers, n_episodes=200)

plt.plot(np.arange(1, len(scores)+1), scores)
plt.plot(np.arange(1, len(averages)+1), averages)
plt.plot(np.arange(1, len(maxima)+1), maxima)
plt.plot(np.arange(1, len(minima)+1), minima)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend(['EpAvg', 'GlAvg', 'Max', 'Min'], loc='lower center')
plt.show()
_____no_output_____
MIT
Continuous_Control.ipynb
bobiblazeski/reacher
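The `TAU` hyperparameter above controls the soft update of the DDPG target networks, which is implemented inside `ddpg.py` and not shown here. The standard DDPG rule blends the local weights into the target weights a little at a time; a dependency-free sketch on plain Python lists (the real agent applies this to PyTorch parameters):

```python
TAU = 1e-3  # same value as in the hyperparameter cell

def soft_update(local_weights, target_weights, tau):
    """theta_target <- tau * theta_local + (1 - tau) * theta_target."""
    return [tau * l + (1.0 - tau) * t
            for l, t in zip(local_weights, target_weights)]

# With tau = 1e-3, the target moves only 0.1% toward the local weights per call.
local = [1.0, 2.0]
target = [0.0, 0.0]
target = soft_update(local, target, TAU)
print(target)  # [0.001, 0.002]
```

Small `TAU` keeps the target networks slowly moving, which stabilizes the critic's learning targets.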
Saves experiences for training future agents. Warning: the file is quite large.
memory, cache = buffers memory.save('experiences.pkl') #env.close()
_____no_output_____
MIT
Continuous_Control.ipynb
bobiblazeski/reacher
5. See the pre-trained agent in action
agent = DDPG(state_size=state_size, action_size=action_size, random_seed=23,
             fc1_units=96, fc2_units=96)
agent.load('./saves/96_96_108_actor.pth', './saves/96_96_108_critic.pth')

def play(agent, episodes=3):
    for i_episode in range(episodes):
        env_info = env.reset(train_mode=False)[brain_name]  # reset the environment
        states = env_info.vector_observations               # get the current state (for each agent)
        scores = np.zeros(num_agents)                       # initialize the score (for each agent)
        while True:
            actions = agent.act(states, add_noise=False)    # all actions between -1 and 1
            env_info = env.step(actions)[brain_name]        # send all actions to the environment
            next_states = env_info.vector_observations      # get next state (for each agent)
            rewards = env_info.rewards                      # get reward (for each agent)
            dones = env_info.local_done                     # see if episode finished
            scores += env_info.rewards                      # update the score (for each agent)
            states = next_states                            # roll over states to next time step
            if np.any(dones):                               # exit loop if episode finished
                break
        print('Ep No: {} Total score (averaged over agents): {}'.format(i_episode, np.mean(scores)))

play(agent, 10)
Ep No: 0 Total score (averaged over agents): 37.69499915745109 Ep No: 1 Total score (averaged over agents): 36.70099917966873 Ep No: 2 Total score (averaged over agents): 36.49249918432906 Ep No: 3 Total score (averaged over agents): 37.94049915196374 Ep No: 4 Total score (averaged over agents): 37.35449916506186 Ep No: 5 Total score (averaged over agents): 37.17449916908517 Ep No: 6 Total score (averaged over agents): 36.74749917862937 Ep No: 7 Total score (averaged over agents): 37.175499169062824 Ep No: 8 Total score (averaged over agents): 37.99799915067852 Ep No: 9 Total score (averaged over agents): 36.05999919399619
MIT
Continuous_Control.ipynb
bobiblazeski/reacher
6. Experiences. Experiences from the Replay Buffer can be saved and loaded to train different agents. As an example I've provided `experiences.pkl.7z`, which you should unpack with your favorite archiver. Create a new ReplayBuffer and load the saved experiences:
savedBuffer = ReplayBuffer(action_size, BUFFER_SIZE, BATCH_SIZE, random_seed) savedBuffer.load('experiences.pkl')
_____no_output_____
MIT
Continuous_Control.ipynb
bobiblazeski/reacher
Afterward you can use it to train your agent
savedBuffer.sample()
_____no_output_____
MIT
Continuous_Control.ipynb
bobiblazeski/reacher
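The `ReplayBuffer` used throughout stores `(state, action, reward, next_state, done)` tuples and hands back random minibatches. A minimal dependency-free sketch of the idea (`MiniReplayBuffer` is illustrative only; the project's class in `ddpg.py` additionally stacks the sampled fields into tensors and supports `save`/`load`):

```python
import random
from collections import deque, namedtuple

Experience = namedtuple("Experience",
                        ["state", "action", "reward", "next_state", "done"])

class MiniReplayBuffer:
    def __init__(self, buffer_size, batch_size, seed=23):
        self.memory = deque(maxlen=buffer_size)  # oldest experiences fall off
        self.batch_size = batch_size
        self.rng = random.Random(seed)

    def add(self, state, action, reward, next_state, done):
        self.memory.append(Experience(state, action, reward, next_state, done))

    def sample(self):
        # Uniform random minibatch, sampled without replacement.
        return self.rng.sample(list(self.memory), k=self.batch_size)

    def __len__(self):
        return len(self.memory)

buf = MiniReplayBuffer(buffer_size=100, batch_size=4)
for i in range(10):
    buf.add(i, 0, 0.1, i + 1, False)
batch = buf.sample()
print(len(buf), len(batch))  # 10 4
```

Sampling uniformly from old and new experiences is what breaks the temporal correlation between consecutive environment steps.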
NumPy Exercise - Task. Now that we have already learned quite a bit about NumPy, we can test our knowledge. We start with a few simple tasks and work up to more complicated problems.
**Import NumPy as np**
**Create an array of 10 zeros**
**Create an array of 10 ones**
**Create an array of 10 fives**
**Create an array of the numbers from 10 to 50**
**Create an array of all even numbers between 10 and 50**
**Create a 3x3 matrix containing the numbers from 0 to 8**
**Create a 3x3 identity matrix**
**Use NumPy to generate a random number between 0 and 1**
**Use NumPy to create an array of 25 random numbers drawn from a standard normal distribution**
**Create the following matrix:**
**Create an array of 20 evenly spaced points between 0 and 1**
NumPy Indexing and Selection. You will now see some matrices, and your task is to reproduce the result outputs shown:
# Write your code here to reproduce the result.
# Be careful not to run the cell below, or you will
# no longer be able to see the result.

# Write your code here to reproduce the result.
# Be careful not to run the cell below, or you will
# no longer be able to see the result.

# Write your code here to reproduce the result.
# Be careful not to run the cell below, or you will
# no longer be able to see the result.

# Write your code here to reproduce the result.
# Be careful not to run the cell below, or you will
# no longer be able to see the result.

# Write your code here to reproduce the result.
# Be careful not to run the cell below, or you will
# no longer be able to see the result.
_____no_output_____
BSD-3-Clause
2-Data-Analysis/1-Numpy/4-Numpy Uebung - Aufgabe.ipynb
Klaynie/Jupyter-Test
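For reference, one possible set of solutions for the deterministic tasks above, as a sketch (the random-number tasks are omitted here because their output varies; they can be solved with `np.random.rand()` and `np.random.randn(25)`):

```python
import numpy as np

zeros = np.zeros(10)              # array of 10 zeros
ones = np.ones(10)                # array of 10 ones
fives = np.ones(10) * 5           # array of 10 fives
ten_to_fifty = np.arange(10, 51)  # numbers 10 .. 50 inclusive
evens = np.arange(10, 51, 2)      # even numbers between 10 and 50
mat = np.arange(9).reshape(3, 3)  # 3x3 matrix of 0 .. 8
eye = np.eye(3)                   # 3x3 identity matrix
points = np.linspace(0, 1, 20)    # 20 evenly spaced points between 0 and 1
print(mat)
```

Note that `np.arange` excludes its stop value, so the stop must be 51 to include 50, while `np.linspace` includes both endpoints.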
Species distribution modeling. Modeling species' geographic distributions is an important problem in conservation biology. In this example we model the geographic distribution of two South American mammals given past observations and 14 environmental variables. Since we have only positive examples (there are no unsuccessful observations), we cast this problem as a density estimation problem and use the :class:`sklearn.svm.OneClassSVM` as our modeling tool. The dataset is provided by Phillips et al. (2006). If available, the example uses `basemap `_ to plot the coast lines and national boundaries of South America. The two species are: - `"Bradypus variegatus" `_ , the Brown-throated Sloth. - `"Microryzomys minutus" `_ , also known as the Forest Small Rice Rat, a rodent that lives in Colombia, Ecuador, Peru, and Venezuela. References ---------- * `"Maximum entropy modeling of species geographic distributions" `_ S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling, 190:231-259, 2006.
# Authors: Peter Prettenhofer <peter.prettenhofer@gmail.com>
#          Jake Vanderplas <vanderplas@astro.washington.edu>
#
# License: BSD 3 clause

from time import time

import numpy as np
import matplotlib.pyplot as plt

from sklearn.utils import Bunch
from sklearn.datasets import fetch_species_distributions
from sklearn import svm, metrics

# if basemap is available, we'll use it.
# otherwise, we'll improvise later...
try:
    from mpl_toolkits.basemap import Basemap
    basemap = True
except ImportError:
    basemap = False

print(__doc__)

def construct_grids(batch):
    """Construct the map grid from the batch object

    Parameters
    ----------
    batch : Batch object
        The object returned by :func:`fetch_species_distributions`

    Returns
    -------
    (xgrid, ygrid) : 1-D arrays
        The grid corresponding to the values in batch.coverages
    """
    # x,y coordinates for corner cells
    xmin = batch.x_left_lower_corner + batch.grid_size
    xmax = xmin + (batch.Nx * batch.grid_size)
    ymin = batch.y_left_lower_corner + batch.grid_size
    ymax = ymin + (batch.Ny * batch.grid_size)

    # x coordinates of the grid cells
    xgrid = np.arange(xmin, xmax, batch.grid_size)
    # y coordinates of the grid cells
    ygrid = np.arange(ymin, ymax, batch.grid_size)

    return (xgrid, ygrid)

def create_species_bunch(species_name, train, test, coverages, xgrid, ygrid):
    """Create a bunch with information about a particular organism

    This will use the test/train record arrays to extract the
    data specific to the given species name.
    """
    bunch = Bunch(name=' '.join(species_name.split("_")[:2]))
    species_name = species_name.encode('ascii')
    points = dict(test=test, train=train)

    for label, pts in points.items():
        # choose points associated with the desired species
        pts = pts[pts['species'] == species_name]
        bunch['pts_%s' % label] = pts

        # determine coverage values for each of the training & testing points
        ix = np.searchsorted(xgrid, pts['dd long'])
        iy = np.searchsorted(ygrid, pts['dd lat'])
        bunch['cov_%s' % label] = coverages[:, -iy, ix].T

    return bunch

def plot_species_distribution(species=("bradypus_variegatus_0",
                                       "microryzomys_minutus_0")):
    """
    Plot the species distribution.
    """
    if len(species) > 2:
        print("Note: when more than two species are provided,"
              " only the first two will be used")

    t0 = time()

    # Load the compressed data
    data = fetch_species_distributions()

    # Set up the data grid
    xgrid, ygrid = construct_grids(data)

    # The grid in x,y coordinates
    X, Y = np.meshgrid(xgrid, ygrid[::-1])

    # create a bunch for each species
    BV_bunch = create_species_bunch(species[0],
                                    data.train, data.test,
                                    data.coverages, xgrid, ygrid)
    MM_bunch = create_species_bunch(species[1],
                                    data.train, data.test,
                                    data.coverages, xgrid, ygrid)

    # background points (grid coordinates) for evaluation
    np.random.seed(13)
    background_points = np.c_[np.random.randint(low=0, high=data.Ny, size=10000),
                              np.random.randint(low=0, high=data.Nx, size=10000)].T

    # We'll make use of the fact that coverages[6] has measurements at all
    # land points. This will help us decide between land and water.
    land_reference = data.coverages[6]

    # Fit, predict, and plot for each species.
    for i, species in enumerate([BV_bunch, MM_bunch]):
        print("_" * 80)
        print("Modeling distribution of species '%s'" % species.name)

        # Standardize features
        mean = species.cov_train.mean(axis=0)
        std = species.cov_train.std(axis=0)
        train_cover_std = (species.cov_train - mean) / std

        # Fit OneClassSVM
        print(" - fit OneClassSVM ... ", end='')
        clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.5)
        clf.fit(train_cover_std)
        print("done.")

        # Plot map of South America
        plt.subplot(1, 2, i + 1)
        if basemap:
            print(" - plot coastlines using basemap")
            m = Basemap(projection='cyl', llcrnrlat=Y.min(),
                        urcrnrlat=Y.max(), llcrnrlon=X.min(),
                        urcrnrlon=X.max(), resolution='c')
            m.drawcoastlines()
            m.drawcountries()
        else:
            print(" - plot coastlines from coverage")
            plt.contour(X, Y, land_reference,
                        levels=[-9998], colors="k",
                        linestyles="solid")
            plt.xticks([])
            plt.yticks([])

        print(" - predict species distribution")

        # Predict species distribution using the training data
        Z = np.ones((data.Ny, data.Nx), dtype=np.float64)

        # We'll predict only for the land points.
        idx = np.where(land_reference > -9999)
        coverages_land = data.coverages[:, idx[0], idx[1]].T

        pred = clf.decision_function((coverages_land - mean) / std)
        Z *= pred.min()
        Z[idx[0], idx[1]] = pred

        levels = np.linspace(Z.min(), Z.max(), 25)
        Z[land_reference == -9999] = -9999

        # plot contours of the prediction
        plt.contourf(X, Y, Z, levels=levels, cmap=plt.cm.Reds)
        plt.colorbar(format='%.2f')

        # scatter training/testing points
        plt.scatter(species.pts_train['dd long'], species.pts_train['dd lat'],
                    s=2 ** 2, c='black',
                    marker='^', label='train')
        plt.scatter(species.pts_test['dd long'], species.pts_test['dd lat'],
                    s=2 ** 2, c='black',
                    marker='x', label='test')
        plt.legend()
        plt.title(species.name)
        plt.axis('equal')

        # Compute AUC with regards to background points
        pred_background = Z[background_points[0], background_points[1]]
        pred_test = clf.decision_function((species.cov_test - mean) / std)
        scores = np.r_[pred_test, pred_background]
        y = np.r_[np.ones(pred_test.shape), np.zeros(pred_background.shape)]
        fpr, tpr, thresholds = metrics.roc_curve(y, scores)
        roc_auc = metrics.auc(fpr, tpr)
        plt.text(-35, -70, "AUC: %.3f" % roc_auc, ha="right")
        print("\n Area under the ROC curve : %f" % roc_auc)

    print("\ntime elapsed: %.2fs" % (time() - t0))

plot_species_distribution()
plt.show()
_____no_output_____
MIT
sklearn/sklearn learning/demonstration/auto_examples_jupyter/applications/plot_species_distribution_modeling.ipynb
wangyendt/deeplearning_models
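The heart of the example above is the one-class SVM fit on standardized features. Stripped of the mapping code, that step looks roughly like this (a minimal sketch: the toy random data and variable names are ours, not from the example):

```python
import numpy as np
from sklearn import svm

# Toy stand-in for the standardized coverage features
# (the real example standardizes species.cov_train first).
rng = np.random.RandomState(42)
X_train = rng.randn(100, 2)

clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.5)
clf.fit(X_train)

# decision_function returns one signed score per sample:
# larger values mean "more similar to the training distribution".
scores = clf.decision_function(X_train)
labels = clf.predict(X_train)  # +1 for inliers, -1 for outliers
```

In the species example those scores are then mapped back onto the land grid and drawn as contour levels.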
First create some random 3d data points
import numpy as np

N = 10  # the number of points
points = np.random.rand(N, 3)
_____no_output_____
MIT
NaiveCoverage.ipynb
dk1010101/astroplay
Now create a KDTree from these points so that we can look for the neighbours. TBH we don't really need a KDTree: we could probably do this better and more easily with a distance matrix, but it will do for now.
# assuming scikit-learn's KDTree, whose query(X, k, return_distance) signature
# matches the calls below
from sklearn.neighbors import KDTree

kdt = KDTree(points)
_____no_output_____
MIT
NaiveCoverage.ipynb
dk1010101/astroplay
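As the note above says, a KDTree is overkill at this scale; the same neighbour lookup can be done with a brute-force distance matrix. A minimal sketch (the `nearest_neighbours` helper is our own name, not from the notebook):

```python
import numpy as np

def nearest_neighbours(points, i, k):
    """Indices of the k nearest points to points[i], closest first.

    The query point itself comes first, at distance zero."""
    diffs = points[:, None, :] - points[None, :, :]  # (N, N, 3) pairwise differences
    dists = np.sqrt((diffs ** 2).sum(axis=-1))       # (N, N) Euclidean distances
    return np.argsort(dists[i])[:k]

pts = np.random.rand(10, 3)
idx = nearest_neighbours(pts, 0, 3)
```

This is O(N^2) in memory and time, which is perfectly fine for ten points.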
Test by looking for the two neighbours of the first point (we ask for 3 because the query returns the point itself as well)
kdt.query([points[0]], 3, False)
_____no_output_____
MIT
NaiveCoverage.ipynb
dk1010101/astroplay
So the neighbours of point 0 are points 2 and 4. OK. Let's plot the 3D points and see them
import matplotlib.pyplot as plt

x = [p[0] for p in points]
y = [p[1] for p in points]
z = [p[2] for p in points]

fig = plt.figure(figsize=(8, 8), constrained_layout=True)
ax = fig.add_subplot(projection='3d')
ax.scatter(points[0][0], points[0][1], points[0][2], c='yellow', s=75)
ax.scatter(x[1:], y[1:], z[1:], c='blue', s=45)
for i, p in enumerate(points):
    ax.text(p[0], p[1], p[2], str(i), fontsize=14)
plt.show()
_____no_output_____
MIT
NaiveCoverage.ipynb
dk1010101/astroplay
Now we will look at the algorithm and see if it works...
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

def gen_tris(points):
    processed_points = set()
    points_to_do = set(range(len(points)))
    tris = []
    # pick the first three points
    start = 0
    nns = kdt.query([points[start]], N, False)
    work_pts = nns[0][:3]
    tris.append(Poly3DCollection([[points[i] for i in work_pts]],
                                 edgecolors='black', facecolors='w',
                                 linewidths=1, alpha=0.8))
    for p in work_pts:
        processed_points.add(p)
    print(f'added tri [{work_pts[0]}, {work_pts[1]}, {work_pts[2]}]')
    start = work_pts[1]
    while True:
        nns = kdt.query([points[start]], N, False)
        for p in nns[0]:
            if p in processed_points:
                continue
            nns2 = kdt.query([points[p]], N, False)
            for p2 in nns2[0]:
                if p2 in processed_points and p2 != start:
                    break
            print(f'added tri [{start}, {p}, {p2}]')
            tris.append(Poly3DCollection([[points[start], points[p], points[p2]]],
                                         edgecolors='black', facecolors='w',
                                         linewidths=1, alpha=0.8))
            processed_points.add(p)
            start = p
            break
        if len(processed_points) == len(points):
            break
    return tris


tris = gen_tris(points)

# and show the points and the triangles
fig = plt.figure(figsize=(10, 10), constrained_layout=True)
# ax = Axes3D(fig, auto_add_to_figure=False)
ax = fig.add_subplot(111, projection='3d')
fig.add_axes(ax)
ax.scatter(points[0][0], points[0][1], points[0][2], c='yellow', s=75)
ax.scatter(x[1:], y[1:], z[1:], c='blue', s=45)
for p in tris:
    ax.add_collection3d(p)
for i, p in enumerate(points):
    ax.text(p[0], p[1], p[2], str(i), fontsize=16)
plt.show()
added tri [0, 1, 6]
added tri [1, 2, 0]
added tri [2, 4, 6]
added tri [4, 3, 6]
added tri [3, 8, 6]
added tri [8, 7, 3]
added tri [7, 9, 3]
added tri [9, 5, 3]
MIT
NaiveCoverage.ipynb
dk1010101/astroplay
Introduction to the Python language

**Note**: This notebook is not really a ready-to-use tutorial but rather serves as a table of contents that we will fill during the short course. It might later be useful as a memo, but it clearly lacks important notes and explanations.

There are lots of tutorials that you can find online, though. A useful resource is for example [The Python Tutorial](https://docs.python.org/3/tutorial/).

Topics covered:

- Primitives (use Python as a calculator)
- Control flows (for, while, if...)
- Containers (tuple, list, dict)
- Some Python specifics!
  - Immutable vs. mutable
  - Variables: names bound to objects
  - Typing
  - List comprehensions
- Functions
- Modules
- Basic (text) File IO

Comments
# this is a comment
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Using Python as a calculator
2 / 2
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Automatic type casting for int and float (more on that later)
2 + 2.
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Automatic float conversion for division (only in Python 3 !!!)
2 / 3
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
**Tip**: if you don't want integer division, use float explicitly (works with both Python 2 and 3)
2. / 3
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Integer division (in Python: returns floor)
2 // 3
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
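Closely related to floor division are the modulo operator and `divmod`; a few examples of our own:

```python
q = 7 // 3           # 2
r = 7 % 3            # 1
pair = divmod(7, 3)  # (2, 1): quotient and remainder at once
neg = -7 // 3        # -3: floor division rounds toward negative infinity
```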
Import math module for built-in math functions (more on how to import modules later)
import math

math.sin(math.pi / 2)
math.log(2.)
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
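A few more functions from the `math` module (our own picks, not from the course):

```python
import math

root = math.sqrt(16.0)                      # 4.0
down, up = math.floor(2.7), math.ceil(2.3)  # 2 and 3
half_turn = math.degrees(math.pi)           # 180.0, up to floating-point rounding
```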
**Tip**: to get help interactively for a function, press shift-tab when the cursor is on the function, or alternatively use `?` or `help()`
math.log?
help(math.log)
Help on built-in function log in module math:

log(...)
    log(x[, base])
    
    Return the logarithm of x to the given base.
    If the base not specified, returns the natural logarithm (base e) of x.
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Complex numbers built in the language
0+1j**2
(3+4j).real
(3+4j).imag
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
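Two more things worth knowing about complex numbers: `abs` gives the modulus, and `**` binds tighter than `+`, so `0+1j**2` above is parsed as `0 + (1j**2)`. A quick sketch of our own:

```python
z = 3 + 4j
modulus = abs(z)      # 5.0, since sqrt(3**2 + 4**2) == 5
conj = z.conjugate()  # (3-4j)
square = (1j) ** 2    # (-1+0j): squaring the imaginary unit
```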
Create variables, or rather bind values (objects) to identifiers (more on that later)
earth_radius = 6.371e6
earth_radius * 2
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
*Note*: Python instructions are usually separated by new line characters
a = 1
a + 2
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
It is possible to write several instructions on a single line using semi-colons, but it is strongly discouraged
a = 1; a + 1
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
In a notebook, only the output of the last line executed in the cell is shown
a = 10
2 + 2
a
2 + 2
2 + 1
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
To show intermediate results, you need to use the `print()` built-in function, or write code in separate notebook cells
print(2 + 2)
print(2 + 1)
4
3
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Strings

Strings are created using single or double quotes
food = "bradwurst"
dessert = 'cake'
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
You may need to include a single (double) quote in a string
s = 'you\'ll need the \\ character'
s
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
We still see two "\". Why??? This is actually what you want when printing the string
print(s)
you'll need the \ character
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Other special characters (e.g., line return)
two_lines = "frist_line\n\tsecond_line"
two_lines
print(two_lines)
frist_line
	second_line
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Long strings
lunch = """
Menu
Main courses
"""
lunch
print(lunch)
Menu
Main courses
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Concatenate strings using the `+` operator
food + ' and ' + dessert
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
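Besides `+`, strings also support repetition with the `*` operator (our own example):

```python
cheer = "na" * 4 + " batman"  # repetition first, then concatenation
```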
Concatenate strings using `join()`
s = ' '.join([food, 'and', dessert, 'coffee'])
s
s = '\n'.join([food, 'and', dessert, 'coffee'])
print(s)
bradwurst
and
cake
coffee
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Some useful string manipulation (see https://docs.python.org/3/library/stdtypes.html#string-methods)
food = ' bradwurst '
food.strip()
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
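A few more of those string methods in action (our own examples):

```python
s = "Hello World"
lower = s.lower()                       # 'hello world'
swapped = s.replace("World", "Python")  # 'Hello Python'
parts = s.split()                       # ['Hello', 'World'], split on whitespace
starts = s.startswith("Hello")          # True
```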
Format strings

For more info, see this very nice user guide: https://pyformat.info/
nb = 2
"{} bradwursts bitte!".format(nb)
"{number} bradwursts bitte!".format(number=nb)
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
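A few more formatting patterns in the same spirit (our own examples; the last one uses f-strings, available since Python 3.6):

```python
price = 3.14159
two_dp = "{:.2f} EUR".format(price)  # two decimal places
padded = "{:>8}".format("menu")      # right-aligned in 8 characters
named = "{a} and {b}".format(a="food", b="dessert")
fstr = f"{2} bradwursts bitte!"      # f-string: expressions inlined in the literal
```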
Control flow

Example of an if/else statement
x = -1
if x < 0:
    print("negative")
negative
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Indentation is important!
x = 1
if x < 0:
    print("negative")
    print(x)
print(x)
1
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
**Warning**: don't mix tabs and spaces!!! Visually it may look properly indented, but for Python tab and space are different.

A more complete example: if/elif/else + comparison operators (==, !=, <, <=, >, >=) + logical operators (and, or, not)
x = -1
if x < 0:
    x = 0
    print("negative and changed to zero")
elif x == 0:
    print("zero")
elif x == 1:
    print("Single")
else:
    print("More")

True and False
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
The `range()` function, used in a `for` loop
for i in range(10):
    print(i)
0
1
2
3
4
5
6
7
8
9
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
*Note*: by default, range starts from 0 (this is consistent with other behavior that we'll see later). Also, it stops just before the given value.

Range can be used with more parameters (see help). For example: start, stop, step:
for i in range(1, 11, 2):
    print(i)
1
3
5
7
9
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
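Wrapping `range` in `list` makes it easy to see what the start/stop/step parameters produce (our own examples):

```python
first_five = list(range(5))         # [0, 1, 2, 3, 4]
offset = list(range(2, 5))          # [2, 3, 4]
countdown = list(range(10, 0, -2))  # [10, 8, 6, 4, 2]: a negative step counts down
```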
A loop can also be used to iterate through values other than incrementing numbers (more on how to create iterables later).
words = ['cat', 'open', 'window', 'floor 20', 'be careful']
for w in words:
    print(w)
cat
open
window
floor 20
be careful
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
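When you need the position as well as the value while iterating, `enumerate` is the usual tool (our own example):

```python
words = ['cat', 'open', 'window']
pairs = []
for i, w in enumerate(words):  # enumerate yields (index, value) pairs
    pairs.append((i, w))
```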
Control the loop: the continue statement
for w in words:
    if w == 'open':
        continue
    print(w)
cat
window
floor 20
be careful
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
More possibilities, e.g., a `while` loop and the `break` statement
i = 0
while True:
    i = i + 1
    print(i)
    if i > 9:
        break
1
2
3
4
5
6
7
8
9
10
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Containers

Lists
a = [1, 2, 3, 4]
a
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Lists may contain different types of values
a = [1, "2", 3., 4+0j]
a
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Lists may contain lists (nested)
c = [1, [2, 3], 4]
c
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
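To reach an element of a nested list, chain the indexing (our own example):

```python
c = [1, [2, 3], 4]
inner = c[1]       # the whole inner list: [2, 3]
element = c[1][0]  # first element of the inner list: 2
```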
"Indexing": retrieve elements of a list by position

**Warning**: Unlike Fortran and Matlab, positions start at zero!!
c[0]
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
A negative position starts counting from the end of the list
a = [1, 2, 3, 4]
a[-1]
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
"Slicing": extract a sublist
a
list(range(4))
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
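The slicing syntax itself is `a[start:stop:step]`, with each part optional; a few examples of our own:

```python
a = [1, 2, 3, 4]
middle = a[1:3]  # [2, 3]: stop is excluded, like range
head = a[:2]     # [1, 2]: start defaults to 0
evens = a[::2]   # [1, 3]: every second element
rev = a[::-1]    # [4, 3, 2, 1]: a reversed copy
```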
$$[0, 4[$$

`range(4)` covers this half-open interval: 0 is included, 4 is not.

Iterate through a list
for i in a:
    print(i)
1
2
3
4
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
Tuples

Very similar to lists
t = (1, 2, 3, 4)
t
_____no_output_____
CC-BY-4.0
notebooks/lectures_potsdam_201802/python_intro.ipynb
benbovy/python_short_course
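The big difference from lists is that tuples are immutable: indexing and unpacking work, but item assignment does not (our own example, anticipating the immutable vs. mutable topic listed in the introduction):

```python
t = (1, 2, 3, 4)
first = t[0]    # indexing works exactly like lists
a, b, c, d = t  # tuple unpacking
try:
    t[0] = 9    # tuples cannot be modified in place
except TypeError as err:
    message = str(err)
```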