Now our DataFrame has all six columns.

## 11. Create a DataFrame from the clipboard

Let's say that you have some data stored in an Excel spreadsheet or a [Google Sheet](https://docs.google.com/spreadsheets/d/1ipv_HAykbky8OXUubs9eLL-LQ1rAkexXG61-B4jd0Rc/edit?usp=sharing), and you want to get it into a DataFrame as quickly as possible. Just select the data and copy it to the clipboard. Then, you can use the `read_clipboard()` function to read it into a DataFrame:

```python
df = pd.read_clipboard()
df
```
Just like the `read_csv()` function, `read_clipboard()` automatically detects the correct data type for each column:

```python
df.dtypes
```
Let's copy one other dataset to the clipboard:

```python
df = pd.read_clipboard()
df
```

Amazingly, pandas has even identified the first column as the index:

```python
df.index
```
Keep in mind that if you want your work to be reproducible in the future, `read_clipboard()` is not the recommended approach.

## 12. Split a DataFrame into two random subsets

Let's say that you want to split a DataFrame into two parts, randomly assigning 75% of the rows to one DataFrame and the other 25% to a second DataFrame. For example, we have a DataFrame of movie ratings with 979 rows:

```python
len(movies)
```
We can use the `sample()` method to randomly select 75% of the rows and assign them to the "movies_1" DataFrame:

```python
movies_1 = movies.sample(frac=0.75, random_state=1234)
```

Then we can use the `drop()` method to drop all rows that are in "movies_1" and assign the remaining rows to "movies_2":

```python
movies_2 = movies.drop(movies_1.index)
```

You can see that the total number of rows is correct:

```python
len(movies_1) + len(movies_2)
```
And you can see from the index that every movie is in either "movies_1":

```python
movies_1.index.sort_values()
```

...or "movies_2":

```python
movies_2.index.sort_values()
```
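As an aside, this kind of split can be sanity-checked end-to-end on a toy DataFrame. The data below is a hypothetical stand-in (the real movies dataset isn't constructed in this notebook); the checks confirm the two subsets are disjoint and together cover every row:

```python
import pandas as pd

# Hypothetical stand-in for the movies DataFrame (100 rows).
movies = pd.DataFrame({'genre': ['Action', 'Drama', 'Comedy', 'Western'] * 25})

movies_1 = movies.sample(frac=0.75, random_state=1234)
movies_2 = movies.drop(movies_1.index)

# The two subsets share no index labels, and their sizes add up.
assert movies_1.index.intersection(movies_2.index).empty
assert len(movies_1) + len(movies_2) == len(movies)
```

Because `drop()` matches on index labels rather than positions, this disjointness check is exactly what breaks when the index contains duplicates.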
Keep in mind that this approach will not work if your index values are not unique.

## 13. Filter a DataFrame by multiple categories

Let's take a look at the movies DataFrame:

```python
movies.head()
```
One of the columns is genre:

```python
movies.genre.unique()
```
If we wanted to filter the DataFrame to only show movies with the genre Action or Drama or Western, we could use multiple conditions separated by the "or" operator:

```python
movies[(movies.genre == 'Action') |
       (movies.genre == 'Drama') |
       (movies.genre == 'Western')].head()
```
However, you can actually rewrite this code more clearly by using the `isin()` method and passing it a list of genres:

```python
movies[movies.genre.isin(['Action', 'Drama', 'Western'])].head()
```

And if you want to reverse this filter, so that you are excluding (rather than including) those three genres, you can put a tilde in front of the condition:

```python
movies[~movies.genre.isin(['Action', 'Drama', 'Western'])].head()
```
This works because tilde is the "not" operator in Python.

## 14. Filter a DataFrame by largest categories

Let's say that you needed to filter the movies DataFrame by genre, but only include the 3 largest genres. We'll start by taking the `value_counts()` of genre and saving it as a Series called counts:

```python
counts = movies.genre.value_counts()
counts
```
The Series method `nlargest()` makes it easy to select the 3 largest values in this Series:

```python
counts.nlargest(3)
```

And all we actually need from this Series is the index:

```python
counts.nlargest(3).index
```

Finally, we can pass the index object to `isin()`, and it will be treated like a list of genres:

```python
movies[movies.genre.isin(counts.nlargest(3).index)].head()
```
Thus, only Drama and Comedy and Action movies remain in the DataFrame.

## 15. Handle missing values

Let's look at a dataset of UFO sightings:

```python
ufo.head()
```
You'll notice that some of the values are missing. To find out how many values are missing in each column, you can use the `isna()` method and then take the `sum()`:

```python
ufo.isna().sum()
```

`isna()` generated a DataFrame of True and False values, and `sum()` converted all of the True values to 1 and added them up. Similarly, you can find out the percentage of values that are missing by taking the `mean()` of `isna()`:

```python
ufo.isna().mean()
```
If you want to drop the columns that have any missing values, you can use the `dropna()` method:

```python
ufo.dropna(axis='columns').head()
```

Or if you want to drop columns in which more than 10% of the values are missing, you can set a threshold for `dropna()`:

```python
ufo.dropna(thresh=len(ufo)*0.9, axis='columns').head()
```
`len(ufo)` returns the total number of rows, and then we multiply that by 0.9 to tell pandas to only keep columns in which at least 90% of the values are not missing.

## 16. Split a string into multiple columns

Let's create another example DataFrame:

```python
df = pd.DataFrame({'name':['John Arthur Doe', 'Jane Ann Smith'],
                   'location':['Los Angeles, CA', 'Washington, DC']})
df
```
What if we wanted to split the "name" column into three separate columns, for first, middle, and last name? We would use the `str.split()` method and tell it to split on a space character and expand the results into a DataFrame:

```python
df.name.str.split(' ', expand=True)
```
These three columns can actually be saved to the original DataFrame in a single assignment statement:

```python
df[['first', 'middle', 'last']] = df.name.str.split(' ', expand=True)
df
```
What if we wanted to split a string, but only keep one of the resulting columns? For example, let's split the location column on "comma space":

```python
df.location.str.split(', ', expand=True)
```
If we only cared about saving the city name in column 0, we can just select that column and save it to the DataFrame:

```python
df['city'] = df.location.str.split(', ', expand=True)[0]
df
```
## 17. Expand a Series of lists into a DataFrame

Let's create another example DataFrame:

```python
df = pd.DataFrame({'col_one':['a', 'b', 'c'], 'col_two':[[10, 40], [20, 50], [30, 60]]})
df
```
There are two columns, and the second column contains regular Python lists of integers. If we wanted to expand the second column into its own DataFrame, we can use the `apply()` method on that column and pass it the Series constructor:

```python
df_new = df.col_two.apply(pd.Series)
df_new
```
And by using the `concat()` function, you can combine the original DataFrame with the new DataFrame:

```python
pd.concat([df, df_new], axis='columns')
```
## 18. Aggregate by multiple functions

Let's look at a DataFrame of orders from the Chipotle restaurant chain:

```python
orders.head(10)
```
Each order has an order_id and consists of one or more rows. To figure out the total price of an order, you sum the item_price for that order_id. For example, here's the total price of order number 1:

```python
orders[orders.order_id == 1].item_price.sum()
```
If you wanted to calculate the total price of every order, you would `groupby()` order_id and then take the sum of item_price for each group:

```python
orders.groupby('order_id').item_price.sum().head()
```
However, you're not actually limited to aggregating by a single function such as `sum()`. To aggregate by multiple functions, you use the `agg()` method and pass it a list of functions such as `sum()` and `count()`:

```python
orders.groupby('order_id').item_price.agg(['sum', 'count']).head()
```
That gives us the total price of each order as well as the number of items in each order.

## 19. Combine the output of an aggregation with a DataFrame

Let's take another look at the orders DataFrame:

```python
orders.head(10)
```
What if we wanted to create a new column listing the total price of each order? Recall that we calculated the total price using the `sum()` method:

```python
orders.groupby('order_id').item_price.sum().head()
```
`sum()` is an aggregation function, which means that it returns a reduced version of the input data. In other words, the output of the `sum()` function:

```python
len(orders.groupby('order_id').item_price.sum())
```

...is smaller than the input to the function:

```python
len(orders.item_price)
```
The solution is to use the `transform()` method, which performs the same calculation but returns output data that is the same shape as the input data:

```python
total_price = orders.groupby('order_id').item_price.transform('sum')
len(total_price)
```
We'll store the results in a new DataFrame column called total_price:

```python
orders['total_price'] = total_price
orders.head(10)
```
As you can see, the total price of each order is now listed on every single line. That makes it easy to calculate the percentage of the total order price that each line represents:

```python
orders['percent_of_total'] = orders.item_price / orders.total_price
orders.head(10)
```
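The `transform()` pattern above can be reproduced end-to-end on a small hypothetical orders table (the prices below are made up for illustration, not taken from the real Chipotle data):

```python
import pandas as pd

# Hypothetical stand-in for the orders DataFrame.
orders = pd.DataFrame({
    'order_id':   [1, 1, 2, 2, 2],
    'item_price': [2.39, 3.39, 10.98, 1.69, 1.25],
})

# transform('sum') returns one value per input row, so the result
# can be assigned directly as a new column.
orders['total_price'] = orders.groupby('order_id').item_price.transform('sum')
orders['percent_of_total'] = orders.item_price / orders.total_price
```

A quick way to verify the result: within each order, the `percent_of_total` values should sum to 1.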
## 20. Select a slice of rows and columns

Let's take a look at another dataset:

```python
titanic.head()
```
This is the famous Titanic dataset, which shows information about passengers on the Titanic and whether or not they survived. If you wanted a numerical summary of the dataset, you would use the `describe()` method:

```python
titanic.describe()
```
However, the resulting DataFrame might be displaying more information than you need. If you wanted to filter it to only show the "five-number summary", you can use the `loc` accessor and pass it a slice of the "min" through the "max" row labels:

```python
titanic.describe().loc['min':'max']
```
And if you're not interested in all of the columns, you can also pass it a slice of column labels:

```python
titanic.describe().loc['min':'max', 'Pclass':'Parch']
```
## 21. Reshape a MultiIndexed Series

The Titanic dataset has a "Survived" column made up of ones and zeros, so you can calculate the overall survival rate by taking a mean of that column:

```python
titanic.Survived.mean()
```
If you wanted to calculate the survival rate by a single category such as "Sex", you would use a `groupby()`:

```python
titanic.groupby('Sex').Survived.mean()
```
And if you wanted to calculate the survival rate across two different categories at once, you would `groupby()` both of those categories:

```python
titanic.groupby(['Sex', 'Pclass']).Survived.mean()
```
This shows the survival rate for every combination of Sex and Passenger Class. It's stored as a MultiIndexed Series, meaning that it has multiple index levels to the left of the actual data. It can be hard to read and interact with data in this format, so it's often more convenient to reshape a MultiIndexed Series into a DataFrame by using the `unstack()` method:

```python
titanic.groupby(['Sex', 'Pclass']).Survived.mean().unstack()
```
This DataFrame contains the exact same data as the MultiIndexed Series, except that now you can interact with it using familiar DataFrame methods.

## 22. Create a pivot table

If you often create DataFrames like the one above, you might find it more convenient to use the `pivot_table()` method instead:

```python
titanic.pivot_table(index='Sex', columns='Pclass', values='Survived', aggfunc='mean')
```
With a pivot table, you directly specify the index, the columns, the values, and the aggregation function. An added benefit of a pivot table is that you can easily add row and column totals by setting `margins=True`:

```python
titanic.pivot_table(index='Sex', columns='Pclass', values='Survived', aggfunc='mean',
                    margins=True)
```
This shows the overall survival rate as well as the survival rate by Sex and Passenger Class. Finally, you can create a cross-tabulation just by changing the aggregation function from "mean" to "count":

```python
titanic.pivot_table(index='Sex', columns='Pclass', values='Survived', aggfunc='count',
                    margins=True)
```
This shows the number of records that appear in each combination of categories.

## 23. Convert continuous data into categorical data

Let's take a look at the Age column from the Titanic dataset:

```python
titanic.Age.head(10)
```
It's currently continuous data, but what if you wanted to convert it into categorical data? One solution would be to label the age ranges, such as "child", "young adult", and "adult". The best way to do this is by using the `cut()` function:

```python
pd.cut(titanic.Age, bins=[0, 18, 25, 99], labels=['child', 'young adult', 'adult']).head(10)
```
This assigned each value to a bin with a label. Ages 0 to 18 were assigned the label "child", ages 18 to 25 were assigned the label "young adult", and ages 25 to 99 were assigned the label "adult". Notice that the data type is now "category", and the categories are automatically ordered.

## 24. Change display options

Let's take another look at the Titanic dataset:

```python
titanic.head()
```
Notice that the Age column has 1 decimal place and the Fare column has 4 decimal places. What if you wanted to standardize the display to use 2 decimal places? You can use the `set_option()` function:

```python
pd.set_option('display.float_format', '{:.2f}'.format)
```
The first argument is the name of the option, and the second argument is a Python format string.

```python
titanic.head()
```
You can see that Age and Fare are now using 2 decimal places. Note that this did not change the underlying data, only the display of the data. You can also reset any option back to its default:

```python
pd.reset_option('display.float_format')
```
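As an aside, `pd.get_option()` can be used to inspect the current value of an option. Here's a small sketch of the set/inspect/reset round trip; it assumes the default value of `'display.float_format'` is `None`, which is the pandas default:

```python
import pandas as pd

# Set the option, confirm it took effect, then reset it.
pd.set_option('display.float_format', '{:.2f}'.format)
fmt = pd.get_option('display.float_format')
assert fmt(3.14159) == '3.14'   # the stored formatter rounds to 2 places

pd.reset_option('display.float_format')
assert pd.get_option('display.float_format') is None
```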
There are many more options you can specify in a similar way.

## 25. Style a DataFrame

The previous trick is useful if you want to change the display of your entire notebook. However, a more flexible and powerful approach is to define the style of a particular DataFrame. Let's return to the stocks DataFrame:

```python
stocks
```
We can create a dictionary of format strings that specifies how each column should be formatted:

```python
format_dict = {'Date':'{:%m/%d/%y}', 'Close':'${:.2f}', 'Volume':'{:,}'}
```
And then we can pass it to the DataFrame's `style.format()` method:

```python
stocks.style.format(format_dict)
```
Notice that the Date is now in month-day-year format, the closing price has a dollar sign, and the Volume has commas. We can apply more styling by chaining additional methods:

```python
(stocks.style.format(format_dict)
 .hide_index()
 .highlight_min('Close', color='red')
 .highlight_max('Close', color='lightgreen')
)
```
We've now hidden the index, highlighted the minimum Close value in red, and highlighted the maximum Close value in green. Here's another example of DataFrame styling:

```python
(stocks.style.format(format_dict)
 .hide_index()
 .background_gradient(subset='Volume', cmap='Blues')
)
```
The Volume column now has a background gradient to help you easily identify high and low values. And here's one final example:

```python
(stocks.style.format(format_dict)
 .hide_index()
 .bar('Volume', color='lightblue', align='zero')
 .set_caption('Stock Prices from October 2016')
)
```
There's now a bar chart within the Volume column and a caption above the DataFrame. Note that there are many more options for how you can style your DataFrame.

## Bonus: Profile a DataFrame

Let's say that you've got a new dataset, and you want to quickly explore it without too much work. There's a separate package called [pandas-profiling](https://github.com/pandas-profiling/pandas-profiling) that is designed for this purpose. First you have to install it using conda or pip. Once that's done, you import `pandas_profiling`:

```python
import pandas_profiling
```
Then, simply run the `ProfileReport()` function and pass it any DataFrame. It returns an interactive HTML report:

- The first section is an overview of the dataset and a list of possible issues with the data.
- The next section gives a summary of each column. You can click "toggle details" for even more information.
- The third section shows a heatmap of the correlation between columns.
- And the fourth section shows the head of the dataset.

```python
pandas_profiling.ProfileReport(titanic)
```
# Implementing a Route Planner

In this project you will use A\* search to implement a "Google-maps" style route planning algorithm.

## The Map

```python
# Run this cell first!

from helpers import Map, load_map_10, load_map_40, show_map
import math

%load_ext autoreload
%autoreload 2
```
## Map Basics

```python
map_10 = load_map_10()
show_map(map_10)
```
The map above (run the code cell if you don't see it) shows a disconnected network of 10 intersections. The two intersections on the left are connected to each other, but they are not connected to the rest of the road network. This map is quite literal in its expression of distance and connectivity. On the graph above, the edge between 2 nodes (intersections) represents a literal straight road, not just an abstract connection of 2 cities.

These `Map` objects have two properties you will want to use to implement A\* search: `intersections` and `roads`.

**Intersections**

The `intersections` are represented as a dictionary. In this example, there are 10 intersections, each identified by an x,y coordinate. The coordinates are listed below. You can hover over each dot in the map above to see the intersection number.

```python
map_10.intersections
#type(len(map_10.intersections))
```
**Roads**

The `roads` property is a list where `roads[i]` contains a list of the intersections that intersection `i` connects to.

```python
# this shows that intersection 0 connects to intersections 7, 6, and 5
map_10.roads[0]
```

```python
# This shows the full connectivity of the map
map_10.roads
```

```python
# map_40 is a bigger map than map_10
map_40 = load_map_40()
show_map(map_40)
```
## Advanced Visualizations

The map above shows a network of roads which spans 40 different intersections (labeled 0 through 39). The `show_map` function which generated this map also takes a few optional parameters which might be useful for visualizing the output of the search algorithm you will write.

* `start` - The "start" node for the search algorithm.
* `goal` - The "goal" node.
* `path` - An array of integers which corresponds to a valid sequence of intersection visits on the map.

```python
# run this code, note the effect of including the optional
# parameters in the function call.
show_map(map_40, start=5, goal=34, path=[5,16,37,12,34])
```
The Algorithm Writing your algorithmThe algorithm written will be responsible for generating a `path` like the one passed into `show_map` above. In fact, when called with the same map, start and goal, as above you algorithm should produce the path `[5, 16, 37, 12, 34]`. However you must complete several methods before it will work.```bash> PathPlanner(map_40, 5, 34).path[5, 16, 37, 12, 34]``` PathPlanner classThe below class is already partly implemented for you - you will implement additional functions that will also get included within this class further below.Let's very briefly walk through each part below.`__init__` - We initialize our path planner with a map, M, and typically a start and goal node. If either of these are `None`, the rest of the variables here are also set to none. If you don't have both a start and a goal, there's no path to plan! The rest of these variables come from functions you will soon implement. - `closedSet` includes any explored/visited nodes. - `openSet` are any nodes on our frontier for potential future exploration. - `cameFrom` will hold the previous node that best reaches a given node- `gScore` is the `g` in our `f = g + h` equation, or the actual cost to reach our current node- `fScore` is the combination of `g` and `h`, i.e. the `gScore` plus a heuristic; total cost to reach the goal- `path` comes from the `run_search` function, which is already built for you.`reconstruct_path` - This function just rebuilds the path after search is run, going from the goal node backwards using each node's `cameFrom` information.`_reset` - Resets *most* of our initialized variables for PathPlanner. This *does not* reset the map, start or goal variables, for reasons which you may notice later, depending on your implementation.`run_search` - This does a lot of the legwork to run search once you've implemented everything else below. First, it checks whether the map, goal and start have been added to the class. 
Then, it will also check if the other variables, other than `path` are initialized (note that these are only needed to be re-run if the goal or start were not originally given when initializing the class, based on what we discussed above for `__init__`.From here, we use a function you will implement, `is_open_empty`, to check that there are still nodes to explore (you'll need to make sure to feed `openSet` the start node to make sure the algorithm doesn't immediately think there is nothing to open!). If we're at our goal, we reconstruct the path. If not, we move our current node from the frontier (`openSet`) and into explored (`closedSet`). Then, we check out the neighbors of the current node, check out their costs, and plan our next move.This is the main idea behind A*, but none of it is going to work until you implement all the relevant parts, which will be included below after the class code. | # Do not change this cell
# When you write your methods correctly this cell will execute
# without problems
class PathPlanner():
"""Construct a PathPlanner Object"""
def __init__(self, M, start=None, goal=None):
""" """
self.map = M
self.start= start
self.goal = goal
self.closedSet = self.create_closedSet() if goal != None and start != None else None
self.openSet = self.create_openSet() if goal != None and start != None else None
self.cameFrom = self.create_cameFrom() if goal != None and start != None else None
self.gScore = self.create_gScore() if goal != None and start != None else None
self.fScore = self.create_fScore() if goal != None and start != None else None
self.path = self.run_search() if self.map and self.start != None and self.goal != None else None
def reconstruct_path(self, current):
""" Reconstructs path after search """
total_path = [current]
while current in self.cameFrom.keys():
current = self.cameFrom[current]
total_path.append(current)
return total_path
def _reset(self):
"""Private method used to reset the closedSet, openSet, cameFrom, gScore, fScore, and path attributes"""
self.closedSet = None
self.openSet = None
self.cameFrom = None
self.gScore = None
self.fScore = None
self.path = self.run_search() if self.map and self.start and self.goal else None
def run_search(self):
""" """
if self.map == None:
raise(ValueError, "Must create map before running search. Try running PathPlanner.set_map(start_node)")
if self.goal == None:
raise(ValueError, "Must create goal node before running search. Try running PathPlanner.set_goal(start_node)")
if self.start == None:
raise(ValueError, "Must create start node before running search. Try running PathPlanner.set_start(start_node)")
self.closedSet = self.closedSet if self.closedSet != None else self.create_closedSet()
self.openSet = self.openSet if self.openSet != None else self.create_openSet()
self.cameFrom = self.cameFrom if self.cameFrom != None else self.create_cameFrom()
self.gScore = self.gScore if self.gScore != None else self.create_gScore()
self.fScore = self.fScore if self.fScore != None else self.create_fScore()
while not self.is_open_empty():
current = self.get_current_node()
if current == self.goal:
self.path = [x for x in reversed(self.reconstruct_path(current))]
return self.path
else:
self.openSet.remove(current)
self.closedSet.add(current)
for neighbor in self.get_neighbors(current):
if neighbor in self.closedSet:
continue # Ignore the neighbor which is already evaluated.
if not neighbor in self.openSet: # Discover a new node
self.openSet.add(neighbor)
# The distance from start to a neighbor
#the "dist_between" function may vary as per the solution requirements.
if self.get_tentative_gScore(current, neighbor) >= self.get_gScore(neighbor):
continue # This is not a better path.
# This path is the best until now. Record it!
self.record_best_path_to(current, neighbor)
print("No Path Found")
self.path = None
return False | _____no_output_____ | MIT | MainProjectNotebook.ipynb | sarandara/Udacity_ISDC_Implement_Route_Planner |
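The control flow of `run_search` above can be sketched as a standalone function, shown here on a tiny made-up graph (the dict of edge costs below is hypothetical, not the project's `map_40`), with a `heapq`-based frontier instead of the class's set:

```python
import heapq

def a_star(graph, h, start, goal):
    """Minimal A* over a dict-of-dicts graph where graph[u][v] is the edge cost.
    h(node) is the heuristic estimate of the remaining cost to the goal."""
    open_heap = [(h(start), start)]      # frontier, keyed by f = g + h
    came_from = {}
    g = {start: 0}
    closed = set()
    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:              # goal reached: walk back to the start
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return list(reversed(path))
        if current in closed:
            continue
        closed.add(current)
        for neighbor, cost in graph[current].items():
            tentative_g = g[current] + cost
            if tentative_g < g.get(neighbor, float("inf")):
                came_from[neighbor] = current         # record the best path so far
                g[neighbor] = tentative_g
                heapq.heappush(open_heap, (tentative_g + h(neighbor), neighbor))
    return None  # open set exhausted: no path found

# Hypothetical graph: the route 0 -> 1 -> 3 (cost 2) beats the detour via 2.
graph = {0: {1: 1, 2: 5}, 1: {0: 1, 3: 1}, 2: {0: 5, 3: 1}, 3: {1: 1, 2: 1}}
print(a_star(graph, lambda n: 0, 0, 3))  # -> [0, 1, 3]
```

With the heuristic fixed at zero this degenerates to Dijkstra's algorithm; the project's version plugs in the Euclidean straight-line distance instead.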
Your Turn. Implement the following functions to get your search algorithm running smoothly! Data Structures. The next few functions require you to decide on data structures to use - lists, sets, dictionaries, etc. Make sure to think about what would work most efficiently for each of these. Some can be returned as just an empty data structure (see `create_closedSet()` for an example), while others should be initialized with one or more values within. | def create_closedSet(self):
""" Creates and returns a data structure suitable to hold the set of nodes already evaluated"""
# EXAMPLE: return a data structure suitable to hold the set of nodes already evaluated
return set()
def create_openSet(self):
""" Creates and returns a data structure suitable to hold the set of currently discovered nodes
that are not evaluated yet. Initially, only the start node is known."""
if self.start != None:
# TODO: return a data structure suitable to hold the set of currently discovered nodes
# that are not evaluated yet. Make sure to include the start node.
return set([self.start])
raise ValueError("Must create start node before creating an open set. Try running PathPlanner.set_start(start_node)")
def create_cameFrom(self):
"""Creates and returns a data structure that shows which node can most efficiently be reached from another,
for each node."""
# TODO: return a data structure that shows which node can most efficiently be reached from another,
# for each node.
#cameFrom_Register = {}
#cameFrom_Register[self.start] = None
cameFrom_Register = {}
return cameFrom_Register
def create_gScore(self):
"""Creates and returns a data structure that holds the cost of getting from the start node to that node,
for each node. The cost of going from start to start is zero."""
# TODO: return a data structure that holds the cost of getting from the start node to that node, for each node.
# for each node. The cost of going from start to start is zero. The rest of the node's values should
# be set to infinity.
the_map = self.map
node_data = the_map.intersections
start = self.start
goal = self.goal
gScore_register = {}
g0 = 0
for node, cood in node_data.items():
if node == start:
gScore_register[node] = g0
else:
gScore_register[node] = float('Inf')
#print(gScore_register)
return gScore_register
def create_fScore(self):
"""Creates and returns a data structure that holds the total cost of getting from the start node to the goal
by passing by that node, for each node. That value is partly known, partly heuristic.
For the first node, that value is completely heuristic."""
# TODO: return a data structure that holds the total cost of getting from the start node to the goal
# by passing by that node, for each node. That value is partly known, partly heuristic.
# For the first node, that value is completely heuristic. The rest of the node's value should be
# set to infinity.
#fScore_register = {node:fscore_for_the_node}
#fScore_register = {node_0:h, node_n:g+h}
#fScore_register[0] = distance(start,goal)
# for nodes in range(0,len(map_10.intersections))
the_map = self.map
node_data = the_map.intersections
start = self.start
#print("start = ",start)
goal = self.goal
#print("goal = ",goal)
fScore_register = {}
h0 = self.distance(node_1=start,node_2=goal)
#h0 = 10
for node, cood in node_data.items():
#if node == 0:
if node == start:
fScore_register[node] = h0
else:
fScore_register[node] = float('Inf')
#print(fScore_register)
return fScore_register
| _____no_output_____ | MIT | MainProjectNotebook.ipynb | sarandara/Udacity_ISDC_Implement_Route_Planner |
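One reason a `set` is a good fit for `closedSet`: the main loop tests `neighbor in self.closedSet` on every expansion, and membership is an O(1) hash lookup in a set but an O(n) scan in a list. A rough timing sketch (absolute numbers will vary by machine):

```python
import timeit

items = list(range(10_000))
as_list, as_set = items, set(items)

# Worst case for the list: the element we probe for sits at the very end.
t_list = timeit.timeit(lambda: 9_999 in as_list, number=1_000)
t_set = timeit.timeit(lambda: 9_999 in as_set, number=1_000)
print(f"list: {t_list:.4f}s  set: {t_set:.4f}s")  # the set is typically far faster
```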
Set certain variables. The functions below help set certain variables if they weren't part of initializing our `PathPlanner` class, or if they need to be changed for another reason. | def set_map(self, M):
"""Method used to set map attribute """
self._reset()
self.start = None
self.goal = None
# TODO: Set map to new value.
self.map = M
def set_start(self, start):
"""Method used to set start attribute """
self._reset()
# TODO: Set start value. Remember to remove goal, closedSet, openSet, cameFrom, gScore, fScore,
# and path attributes' values.
self.start = start
def set_goal(self, goal):
"""Method used to set goal attribute """
self._reset()
# TODO: Set goal value.
self.goal = goal
| _____no_output_____ | MIT | MainProjectNotebook.ipynb | sarandara/Udacity_ISDC_Implement_Route_Planner |
Get node information. The functions below concern grabbing certain node information. In `is_open_empty`, you are checking whether there are still nodes on the frontier to explore. In `get_current_node()`, you'll want to come up with a way to find the lowest `fScore` of the nodes on the frontier. In `get_neighbors`, you'll need to gather information from the map to find the neighbors of the current node. | def is_open_empty(self):
"""returns True if the open set is empty. False otherwise. """
# TODO: Return True if the open set is empty. False otherwise.
return len(self.openSet) == 0
def get_current_node(self):
""" Returns the node in the open set with the lowest value of f(node)."""
# TODO: Return the node in the open set with the lowest value of f(node).
openSet = self.openSet
fScore = self.fScore
#print(fScore)
openSet_fScore = {}
for node in openSet:
openSet_fScore[node] = fScore[node]
#the assumption here is that fScore is a dictionary that is like {node:fScore}
#print("openSet_fScore = ", openSet_fScore)
min_openSet_fScore_node = min(openSet_fScore,key=openSet_fScore.get)
#print("Min fScore of node ", min_openSet_fScore_node," = ",openSet_fScore[min_openSet_fScore_node])
return min_openSet_fScore_node
def get_neighbors(self, node):
"""Returns the neighbors of a node"""
# TODO: Return the neighbors of a node
the_map = self.map
return the_map.roads[node] | _____no_output_____ | MIT | MainProjectNotebook.ipynb | sarandara/Udacity_ISDC_Implement_Route_Planner |
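The heart of `get_current_node` above is the idiom `min(collection, key=score.get)`, which returns the *key* whose *value* is smallest. A tiny demonstration with made-up f-scores:

```python
# Hypothetical f-scores for three frontier nodes: min() with key=f_score.get
# compares the values but returns the node itself.
f_score = {"A": 7.2, "B": 3.1, "C": float("inf")}
frontier = {"A", "B", "C"}

best = min(frontier, key=f_score.get)
print(best)  # -> B
```

For large frontiers, a `heapq` priority queue avoids re-scanning the whole open set on every iteration; the linear `min()` is fine at this map's size.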
Scores and Costs. Below, you'll get into the main part of the calculation for determining the best path: calculating the various parts of the `fScore`. | def get_gScore(self, node):
"""Returns the g Score of a node"""
# TODO: Return the g Score of a node
# I need to go to some dictionary and enter the node number (and maybe point to the g-score) and get the g-score
# I will call the "some dictionary" node_data
gScore_data = self.gScore
gScore = gScore_data[node]
return gScore
def distance(self, node_1, node_2):
""" Computes the Euclidean L2 Distance"""
# TODO: Compute and return the Euclidean L2 Distance
#print('here')
nodal_data = self.map.intersections
#print(nodal_data)
node_1_x = nodal_data[node_1][0]
node_1_y = nodal_data[node_1][1]
node_2_x = nodal_data[node_2][0]
node_2_y = nodal_data[node_2][1]
euc_dis = ((node_2_x - node_1_x)**2 + (node_2_y - node_1_y)**2)**0.5
return euc_dis
def get_tentative_gScore(self, current, neighbor):
"""Returns the tentative g Score of a node"""
# TODO: Return the g Score of the current node
# plus distance from the current node to its neighbors
#print("current node is ",current)
current_node_g_score = self.get_gScore(current)
#print("current node gscore = ", current_node_g_score)
distance_from_current_to_neighbor = self.distance(current,neighbor)
#print("distance from current to neighbor", distance_from_current_to_neighbor)
tentative_g_score = current_node_g_score + distance_from_current_to_neighbor
return tentative_g_score
def heuristic_cost_estimate(self, node):
""" Returns the heuristic cost estimate of a node """
# TODO: Return the heuristic cost estimate of a node
goal = self.goal
distance_from_node_to_goal = self.distance(node,goal)
return distance_from_node_to_goal
def calculate_fscore(self, node):
"""Calculate the f score of a node. """
# TODO: Calculate and returns the f score of a node.
# REMEMBER F = G + H
fscore = self.get_gScore(node) + self.heuristic_cost_estimate(node)
return fscore
| _____no_output_____ | MIT | MainProjectNotebook.ipynb | sarandara/Udacity_ISDC_Implement_Route_Planner |
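The two ingredients of the f-score -- the known cost `g` and the straight-line heuristic `h` -- can be checked by hand on toy coordinates (the `intersections` dict below is hypothetical, chosen so the distances form 3-4-5 triangles):

```python
import math

intersections = {0: (0.0, 0.0), 1: (3.0, 4.0), 2: (6.0, 8.0)}  # hypothetical map

def euclidean(a, b):
    (x1, y1), (x2, y2) = intersections[a], intersections[b]
    return math.hypot(x2 - x1, y2 - y1)  # same as ((dx)**2 + (dy)**2) ** 0.5

g = euclidean(0, 1)  # known cost of the path travelled so far: 5.0
h = euclidean(1, 2)  # straight-line estimate from node 1 to the goal 2: 5.0
f = g + h            # REMEMBER F = G + H
print(g, h, f)       # -> 5.0 5.0 10.0
```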
Recording the best path. Now that you've implemented the various scoring functions, you can record the best path to a given neighbor node from the current node! | def record_best_path_to(self, current, neighbor):
"""Record the best path to a node """
# TODO: Record the best path to a node, by updating cameFrom, gScore, and fScore
#self.cameFrom[neighbor] = current
#self.gScore[neighbor] = self.distance(current,neighbor)
#print(gScore)
#fScore_register = self.fScore
self.cameFrom[neighbor] = current
self.gScore[neighbor] = self.get_tentative_gScore(current,neighbor)
self.fScore[neighbor] = self.gScore[neighbor] + self.heuristic_cost_estimate(neighbor)
| _____no_output_____ | MIT | MainProjectNotebook.ipynb | sarandara/Udacity_ISDC_Implement_Route_Planner |
Associating your functions with the `PathPlanner` class. To check your implementations, we want to associate all of the above functions back with the `PathPlanner` class. Python makes this easy using dot notation (i.e. `PathPlanner.myFunction`) and setting it equal to your function implementation. Run the code cell below for this to occur. *Note*: If you need to make further updates to your functions above, you'll need to re-run this code cell to associate the newly updated function with the `PathPlanner` class again! | # Associates implemented functions with PathPlanner class
PathPlanner.create_closedSet = create_closedSet
PathPlanner.create_openSet = create_openSet
PathPlanner.create_cameFrom = create_cameFrom
PathPlanner.create_gScore = create_gScore
PathPlanner.create_fScore = create_fScore
PathPlanner.set_map = set_map
PathPlanner.set_start = set_start
PathPlanner.set_goal = set_goal
PathPlanner.is_open_empty = is_open_empty
PathPlanner.get_current_node = get_current_node
PathPlanner.get_neighbors = get_neighbors
PathPlanner.get_gScore = get_gScore
PathPlanner.distance = distance
PathPlanner.get_tentative_gScore = get_tentative_gScore
PathPlanner.heuristic_cost_estimate = heuristic_cost_estimate
PathPlanner.calculate_fscore = calculate_fscore
PathPlanner.record_best_path_to = record_best_path_to | _____no_output_____ | MIT | MainProjectNotebook.ipynb | sarandara/Udacity_ISDC_Implement_Route_Planner |
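The cell above leans on the fact that Python lets you attach module-level functions to a class after it is defined. The same pattern on a toy class, for illustration:

```python
class Greeter:
    def __init__(self, name):
        self.name = name

def greet(self):
    return "Hello, " + self.name

# Same idea as PathPlanner.create_closedSet = create_closedSet above:
# the plain function becomes a bound method once assigned to the class.
Greeter.greet = greet

print(Greeter("Ada").greet())  # -> Hello, Ada
```

Because methods are looked up on the class at call time, re-running the assignment also updates instances created earlier -- which is exactly why the note above tells you to re-run the cell after editing a function.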
Preliminary Test. The cell below is the first test case, based on just one set of inputs. If some of the functions above aren't implemented yet, or are implemented incorrectly, you will likely get an error from running this cell. Try debugging the error to help you figure out what needs further revision! | planner = PathPlanner(map_40, 5, 34)
path = planner.path
if path == [5, 16, 37, 12, 34]:
print("great! Your code works for these inputs!")
else:
print("something is off, your code produced the following:")
print(path) | great! Your code works for these inputs!
| MIT | MainProjectNotebook.ipynb | sarandara/Udacity_ISDC_Implement_Route_Planner |
Visualize. Once the above code works for you, let's visualize the results of your algorithm! | # Visualize the result of the above test! You can also change start and goal here to check other paths
start = 5
goal = 34
show_map(map_40, start=start, goal=goal, path=PathPlanner(map_40, start, goal).path) | _____no_output_____ | MIT | MainProjectNotebook.ipynb | sarandara/Udacity_ISDC_Implement_Route_Planner |
Testing your Code. If the code below produces no errors, your algorithm is behaving correctly. You are almost ready to submit! Before you submit, go through the following submission checklist: **Submission Checklist** 1. Does my code pass all tests? 2. Does my code implement `A*` search and not some other search algorithm? 3. Do I use an **admissible heuristic** to direct search efforts towards the goal? 4. Do I use data structures which avoid unnecessarily slow lookups? When you can answer "yes" to all of these questions, and have also answered the written questions below, submit by pressing the Submit button in the lower right! | from test import test
test(PathPlanner) | All tests pass! Congratulations!
| MIT | MainProjectNotebook.ipynb | sarandara/Udacity_ISDC_Implement_Route_Planner |
U6 m6A IP analysis | u6_ip_data = pd.read_csv('ext_data/u6_m6a_ip.tsv', sep='\t')
u6_ip_data = u6_ip_data.pivot_table(index=['genotype', 'bio_rep', 'gene_id'], columns='treatment', values=['ct'])
u6_ip_data.columns = u6_ip_data.columns.droplevel(0)
u6_ip_data = u6_ip_data.reset_index()
u6_ip_data['dct'] = ((u6_ip_data['input'] - np.log2(2)) - u6_ip_data['IP'])
u6_u2_exprs_data = u6_ip_data.pivot_table(index=['genotype', 'bio_rep'], columns='gene_id', values=['input'])
u6_u2_exprs_data.columns = u6_u2_exprs_data.columns.droplevel(0)
u6_u2_exprs_data = u6_u2_exprs_data.reset_index()
u6_u2_exprs_data['dct'] = ((u6_u2_exprs_data['U2']) - u6_u2_exprs_data['U6'])
display_formatted_markdown(MD_TEXT[3])
fig, axes = plt.subplots(figsize=(12, 5), ncols=2)
sns.pointplot(
x='genotype',
y='dct',
data=u6_u2_exprs_data,
order=['col0', 'fio1'],
color='#777777',
join=False,
errwidth=2,
capsize=0.1,
ci='sd',
ax=axes[0]
)
sns.stripplot(
x='genotype',
y='dct',
data=u6_u2_exprs_data,
jitter=0.2,
size=8,
order=['col0', 'fio1'],
ax=axes[0]
)
axes[0].set_xticklabels(['Col-0', 'fio1-1'])
axes[0].set_ylabel('Expression relative to U2 (-ΔCt)')
axes[0].set_xlabel('Genotype')
axes[0].set_title('U6 expression')
sns.pointplot(
hue='genotype',
y='dct',
x='gene_id',
data=u6_ip_data,
hue_order=['col0', 'fio1'],
order=['U6', 'U2'],
palette=['#777777', '#777777'],
dodge=0.5,
join=False,
errwidth=2,
capsize=0.1,
ci='sd',
ax=axes[1]
)
sns.stripplot(
hue='genotype',
y='dct',
x='gene_id',
data=u6_ip_data,
jitter=0.2,
dodge=0.5,
size=8,
hue_order=['col0', 'fio1'],
order=['U6', 'U2'],
ax=axes[1]
)
axes[1].set_xticklabels(['U6', 'U2'])
axes[1].set_ylabel('Enrichment over input (-ΔCt)')
axes[1].set_xlabel('Template')
axes[1].set_title('m6A-IP')
axes[1].axvline(0.5, ls='--', color='#252525')
axes[1].legend_.remove()
h1 = axes[1].scatter([], [], color=pal[0], label='Col-0')
h2 = axes[1].scatter([], [], color=pal[1], label='fio1-1')
axes[1].legend([h1, h2], ['Col-0', 'fio1-1'], loc=4)
plt.savefig('figures/u6_m6a_ip_qpcr.svg')
plt.show() | _____no_output_____ | MIT | fiona_nanopore/pipeline/notebook_processed/yanocomp_upsetplots.py.ipynb | bartongroup/Simpson_Davies_Barton_U6_methylation |
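The ΔCt arithmetic in the cells above can be checked by hand on toy Ct values (the numbers below are made up, not the study's data). Note that `np.log2(2)` is simply 1 cycle -- presumably a correction for a 2-fold difference between input and IP:

```python
import math

# Hypothetical Ct values for one sample
ct = {"input": 22.0, "IP": 18.5}

# Same formula as the pandas column above: dct = (input - log2(2)) - IP
dct = (ct["input"] - math.log2(2)) - ct["IP"]
print(dct)  # -> 2.5  (lower IP Ct than input => positive enrichment)
```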
* print("Welcome 2021!") | # Welcome!
hello="Welcome 2021!!!"
print(hello)
type(hello)  # "what is hello" -- hello is a str | _____no_output_____ | Unlicense | code/my first python code.ipynb | caarrick/DataAnalytics |
Broadcasting of NumPy arrays: | # create arrays for broadcasting and display both of them
arr1= np.array([[1],[2],[3]])
arr2= np.array([1, 2, 3])
arr1, arr2
# this is broadcasting; put simply, it is a reshape (stretching) of the smaller array up to the larger one
# both arrays can also be stretched; note that this is NOT the matrix multiplication operator
# matrix multiplication is defined by a dedicated numpy function
arr1*arr2 | _____no_output_____ | MIT | Dzien01/03-numpy-oper.ipynb | gitgeoman/PAD2022 |
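The shape of `arr1*arr2` above follows NumPy's broadcasting rule: align shapes from the right, and any dimension of size 1 is stretched to match. Here is that rule written out in plain Python, for illustration:

```python
def broadcast_shape(shape_a, shape_b):
    """Compute the broadcast result shape, mimicking NumPy's rule."""
    n = max(len(shape_a), len(shape_b))
    a = (1,) * (n - len(shape_a)) + tuple(shape_a)  # left-pad with 1s
    b = (1,) * (n - len(shape_b)) + tuple(shape_b)
    out = []
    for da, db in zip(a, b):
        if da != db and 1 not in (da, db):
            raise ValueError(f"shapes {shape_a} and {shape_b} are not broadcastable")
        out.append(max(da, db))
    return tuple(out)

# (3,1) * (3,) -> (3,3): exactly how arr1 * arr2 above gets its shape
print(broadcast_shape((3, 1), (3,)))  # -> (3, 3)
```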
Selecting data from an array | np.random.seed(0)
arr = np.random.randint(-10,11,(7,8))
arr
# last row
arr[6],arr[-1]
# rows 2 and 3, every 3rd column (a slice of the data)
arr[1:3, ::3]
# only the first and last row
arr[[0,-1]]
# the last row first, then the first row
arr[[-1,0]]
# last column (as a 1-D array)
arr[:,-1]
# last column (kept 2-D, as a column)
arr[:,[-1]]
# TRANSPOSITION
arr.transpose()[-1] # first method
arr.T[-1] # second method
# numpy - indexing/slicing - worth reading up on
# iteration - arrays have a built-in iterator, so we can iterate over them
for row in arr:
print (row)
print('='*50) | [ 2 5 -10 -7 -7 -3 -1 9]
==================================================
[ 8 -6 -4 2 -9 -4 -3 4]
==================================================
[ 7 -5 3 -2 -1 10 9 6]
==================================================
[ 9 -5 5 5 -10 8 -7 7]
==================================================
[ 9 9 9 4 -3 -10 -9 -1]
==================================================
[-10 0 10 -7 1 8 -8 -10]
==================================================
[-10 -6 -5 -4 -2 10 7 5]
==================================================
| MIT | Dzien01/03-numpy-oper.ipynb | gitgeoman/PAD2022 |
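For comparison, the transpose trick also exists in plain Python: `zip(*rows)` pairs up the i-th element of every row, i.e. it yields the columns (a toy illustration, not a replacement for `arr.T`):

```python
matrix = [[1, 2, 3],
          [4, 5, 6]]

transposed = [list(col) for col in zip(*matrix)]
print(transposed)      # -> [[1, 4], [2, 5], [3, 6]]
print(transposed[-1])  # last column of the original, analogous to arr.T[-1]
```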
Comparing performance between a Python list and a NumPy array | %%timeit -r 20 -n 500
[x**2 for x in range(1, 1001)]
%%timeit -r 20 -n 500
np.arange(1, 1001)**2
298/3.59 | _____no_output_____ | MIT | Dzien01/03-numpy-oper.ipynb | gitgeoman/PAD2022 |
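Outside Jupyter, the same comparison can be made with the stdlib `timeit` module (`%%timeit` is a notebook-only magic; absolute timings will differ from the outputs above):

```python
import timeit

py_time = timeit.timeit("[x**2 for x in range(1, 1001)]", number=500)
np_time = timeit.timeit("np.arange(1, 1001)**2",
                        setup="import numpy as np", number=500)
print(f"list comp: {py_time:.3f}s  numpy: {np_time:.3f}s  "
      f"speedup ~{py_time / np_time:.0f}x")
```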
Traveling Salesman Problem with Reinforcement Learning. Description of Problem. The travelling salesman problem (TSP) is a classic algorithmic problem in the field of computer science and operations research. Given a list of cities and the distances between each pair of cities, the problem is to find the shortest possible route that visits each city and returns to the origin city. The problem is NP-complete, as the number of combinations of cities grows larger as we add more cities. In the classic version of the problem, the salesman picks a city to start, travels through the remaining cities and returns to the original city. In this version, we have slightly modified the problem, presenting it as a restaurant order delivery problem on a 2D gridworld. The agent (driver) starts at the restaurant, a fixed point on the grid. Then, delivery orders appear elsewhere on the grid. The driver needs to visit the orders, and return to the restaurant, to obtain rewards. Rewards are proportional to the time taken to do this (equivalent to the distance, as each timestep moves one square on the grid). Why Reinforcement Learning? For canonical Operations problems like this one, we're very interested in RL's potential to push the state of the art. There are a few reasons we think RL offers some unique value for this type of problem: 1. RL seems to perform well in high-dimensional spaces, when an approximate solution to a complex problem may be of greater value than an exact/optimal solution to a simpler problem. 2. RL can do quite well in partially observed environments. When there are aspects of a problem we don't know about and therefore can't model, which is often the case in the real world (and we can pretend is the case with these problems), RL's ability to deal with the messiness is valuable. 3. RL may have things to teach us! We've seen this to be the case with Go, and Dota 2, where the RL agent came up with innovative strategies that have later been adopted by human players. 
What if there are clever strategies we can use to solve versions of TSP, Knapsack, Newsvendor, or extensions of any of those? RL might surprise us. Easy Version of TSPIn the Easy Version, we are on a 5x5 grid. All orders are generated at the start of the episode. Order locations are fixed, and are invariant (non-random) from episode to episode. The objective is to visit each order location, and return to the restaurant. We have a maximum time-limit of 50 steps. StatesAt each time step, our agent is aware of the following information:1. For the Restuarant: 1. Location (x,y coordinates) 2. For the Driver 1. Location (x,y coordinates) 2. Is driver at restaurant (yes/no)3. For each Order: 1. Location (x,y coordinates) 2. Status (Delivered or Not Delivered) 3. Time (Time taken to deliver reach order -- incrementing until delivered)4. Miscellaneous 1. Time since start of episode 2. Time remaining until end of episode (i.e. until max time) ActionsAt each time step, our agent can take the following steps:- Up - Move one step up in the map- Down - Move one step down in the map- Right - Move one step right in the map- Left - Move one step left in the map RewardsAgent gets a reward of -1 for each time step. If an order is delivered in that timestep, it gets a positive reward inversely proportional to the time taken to deliver. If all the orders are delivered and the agent is back to the restaurant, it gets an additional reward inversely proportional to time since start of episode. Using AWS SageMaker for RLAWS SageMaker allows you to train your RL agents in cloud machines using docker containers. You do not have to worry about setting up your machines with the RL toolkits and deep learning frameworks. You can easily switch between many different machines setup for you, including powerful GPU machines that give a big speedup. You can also choose to use multiple machines in a cluster to further speedup training, often necessary for production level loads. 
Prerequisites ImportsTo get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations. | import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import HTML
import time
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework | _____no_output_____ | Apache-2.0 | reinforcement_learning/rl_traveling_salesman_vehicle_routing_coach/rl_traveling_salesman_vehicle_routing_coach.ipynb | Amirosimani/amazon-sagemaker-examples |
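The init()/step()/reset() contract described above can be sketched as a tiny toy environment. This is NOT the real `TSP_env.py` from /src -- just a minimal, hypothetical illustration of the interface and the -1-per-timestep reward shape:

```python
# A minimal sketch of a Gym-style delivery environment on a 5x5 grid.
class ToyDeliveryEnv:
    MOVES = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}  # up, down, right, left

    def __init__(self, orders=((4, 4),), max_steps=50):
        self.restaurant = (0, 0)
        self.orders = set(orders)
        self.max_steps = max_steps
        self.reset()

    def reset(self):
        self.pos = self.restaurant
        self.pending = set(self.orders)
        self.t = 0
        return (self.pos, frozenset(self.pending), self.t)  # observation

    def step(self, action):
        dx, dy = self.MOVES[action]
        x, y = self.pos
        self.pos = (min(4, max(0, x + dx)), min(4, max(0, y + dy)))  # stay on grid
        self.t += 1
        reward = -1.0                        # -1 per timestep, as described above
        if self.pos in self.pending:
            self.pending.discard(self.pos)
            reward += 10.0 / self.t          # toy delivery bonus (made-up scale)
        done = (not self.pending and self.pos == self.restaurant) or self.t >= self.max_steps
        return (self.pos, frozenset(self.pending), self.t), reward, done, {}

env = ToyDeliveryEnv()
obs, r, done, _ = env.step(2)   # move right
print(obs[0], r, done)          # -> (1, 0) -1.0 False
```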
Settings. You can run this notebook from your local host or from a SageMaker notebook instance. In both of these scenarios, you can run the following in either local or SageMaker mode. Local mode uses the SageMaker Python SDK to run your code in a local container before deploying to SageMaker. This can speed up iterative testing and debugging while using the same familiar Python SDK interface. You just need to set local_mode = True. | # run in local mode?
local_mode = False
env_type = "tsp-easy"
# create unique job name
job_name_prefix = "rl-" + env_type
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = "s3://{}/".format(s3_bucket)
print("S3 bucket path: {}".format(s3_output_path)) | _____no_output_____ | Apache-2.0 | reinforcement_learning/rl_traveling_salesman_vehicle_routing_coach/rl_traveling_salesman_vehicle_routing_coach.ipynb | Amirosimani/amazon-sagemaker-examples |
Install docker for `local` mode. In order to work in `local` mode, you need to have docker installed. When running from your local machine, please make sure that you have docker or docker-compose (for local CPU machines) and nvidia-docker (for local GPU machines) installed. Alternatively, when running from a SageMaker notebook instance, you can simply run the following script to install the dependencies. Note: you can only run a single local notebook at a time. | # only run from SageMaker notebook instance
if local_mode:
!/bin/bash common/setup.sh | _____no_output_____ | Apache-2.0 | reinforcement_learning/rl_traveling_salesman_vehicle_routing_coach/rl_traveling_salesman_vehicle_routing_coach.ipynb | Amirosimani/amazon-sagemaker-examples |
Create an IAM role. Either get the execution role when running from a SageMaker notebook (`role = sagemaker.get_execution_role()`) or, when running from a local machine, use the utils method `role = get_execution_role('role_name')` to create an execution role. | try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role()
print("Using IAM role arn: {}".format(role)) | _____no_output_____ | Apache-2.0 | reinforcement_learning/rl_traveling_salesman_vehicle_routing_coach/rl_traveling_salesman_vehicle_routing_coach.ipynb | Amirosimani/amazon-sagemaker-examples |
Setup the environment. The environment is defined in a Python file called “TSP_env.py”, which is uploaded in the /src directory. The environment also implements the init(), step(), reset() and render() functions that describe how the environment behaves. This is consistent with the OpenAI Gym interface for defining an environment. 1. init() - initialize the environment in a pre-defined state 2. step() - take an action on the environment 3. reset() - restart the environment on a new episode 4. render() - get a rendered image of the environment in its current state Configure the presets for the RL algorithm. The presets that configure the RL training jobs are defined in the “preset-tsp-easy.py” file, which is also uploaded in the /src directory. Using the preset file, you can define agent parameters to select the specific agent algorithm. You can also set the environment parameters, define the schedule and visualization parameters, and define the graph manager. The schedule presets define the number of heat-up steps, periodic evaluation steps, and training steps between evaluations. These can be overridden at runtime by specifying the RLCOACH_PRESET hyperparameter. Additionally, it can be used to define custom hyperparameters. | !pygmentize src/preset-tsp-easy.py | _____no_output_____ | Apache-2.0 | reinforcement_learning/rl_traveling_salesman_vehicle_routing_coach/rl_traveling_salesman_vehicle_routing_coach.ipynb | Amirosimani/amazon-sagemaker-examples |
Write the Training Code The training code is written in the file “train-coach.py” which is uploaded in the /src directory. First import the environment files and the preset files, and then define the main() function. | !pygmentize src/train-coach.py | _____no_output_____ | Apache-2.0 | reinforcement_learning/rl_traveling_salesman_vehicle_routing_coach/rl_traveling_salesman_vehicle_routing_coach.ipynb | Amirosimani/amazon-sagemaker-examples |
Train the RL model using the Python SDK Script mode. If you are using local mode, the training will run on the notebook instance. When using SageMaker for training, you can select a GPU or CPU instance. The RLEstimator is used for training RL jobs. 1. Specify the source directory where the environment, presets and training code are uploaded. 2. Specify the entry point as the training code. 3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container. 4. Define the training parameters such as the instance count, S3 path for output, and job name. 5. Specify the hyperparameters for the RL agent algorithm. The RLCOACH_PRESET or the RLRAY_PRESET can be used to specify the RL agent algorithm you want to use. 6. Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks. | %%time
if local_mode:
instance_type = "local"
else:
instance_type = "ml.m4.4xlarge"
estimator = RLEstimator(
entry_point="train-coach.py",
source_dir="src",
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version="1.0.0",
framework=RLFramework.TENSORFLOW,
role=role,
instance_type=instance_type,
instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
hyperparameters={
# expected run time 12 mins for TSP Easy
"RLCOACH_PRESET": "preset-"
+ env_type,
},
)
estimator.fit(wait=local_mode) | _____no_output_____ | Apache-2.0 | reinforcement_learning/rl_traveling_salesman_vehicle_routing_coach/rl_traveling_salesman_vehicle_routing_coach.ipynb | Amirosimani/amazon-sagemaker-examples |
Store intermediate training output and model checkpoints The output from the training job above is stored on S3. The intermediate folder contains gifs and metadata of the training. | job_name = estimator._current_job_name
print("Job name: {}".format(job_name))
s3_url = "s3://{}/{}".format(s3_bucket, job_name)
if local_mode:
output_tar_key = "{}/output.tar.gz".format(job_name)
else:
output_tar_key = "{}/output/output.tar.gz".format(job_name)
intermediate_folder_key = "{}/output/intermediate".format(job_name)
output_url = "s3://{}/{}".format(s3_bucket, output_tar_key)
intermediate_url = "s3://{}/{}".format(s3_bucket, intermediate_folder_key)
print("S3 job path: {}".format(s3_url))
print("Output.tar.gz location: {}".format(output_url))
print("Intermediate folder path: {}".format(intermediate_url))
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir)) | _____no_output_____ | Apache-2.0 | reinforcement_learning/rl_traveling_salesman_vehicle_routing_coach/rl_traveling_salesman_vehicle_routing_coach.ipynb | Amirosimani/amazon-sagemaker-examples |
Visualization Comparing against a baseline policy | !pip install gym
!pip install pygame
os.chdir("src")
# Get baseline reward
from TSP_env import TSPEasyEnv
from TSP_baseline import get_mean_baseline_reward
baseline_mean, baseline_std_dev = get_mean_baseline_reward(env=TSPEasyEnv(), num_of_episodes=1)
print(baseline_mean, baseline_std_dev)
os.chdir("../") | _____no_output_____ | Apache-2.0 | reinforcement_learning/rl_traveling_salesman_vehicle_routing_coach/rl_traveling_salesman_vehicle_routing_coach.ipynb | Amirosimani/amazon-sagemaker-examples |
Plot metrics for the training job. We can pull the reward metric of the training and plot it to see the performance of the model over time. | import pandas as pd
import matplotlib
%matplotlib inline
# csv_file has all the RL training metrics
# csv_file = "{}/worker_0.simple_rl_graph.main_level.main_level.agent_0.csv".format(tmp_dir)
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = intermediate_folder_key + "/" + csv_file_name
wait_for_s3_object(s3_bucket, key, tmp_dir)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
x_axis = "Episode #"
y_axis_rl = "Training Reward"
y_axis_base = "Baseline Reward"
df[y_axis_rl] = df[y_axis_rl].rolling(5).mean()
df[y_axis_base] = baseline_mean
y_axes = [y_axis_rl]
ax = df.plot(
x=x_axis,
y=[y_axis_rl, y_axis_base],
figsize=(18, 6),
fontsize=18,
legend=True,
color=["b", "r"],
)
fig = ax.get_figure()
ax.set_xlabel(x_axis, fontsize=20)
# ax.set_ylabel(y_axis,fontsize=20)
# fig.savefig('training_reward_vs_wall_clock_time.pdf') | _____no_output_____ | Apache-2.0 | reinforcement_learning/rl_traveling_salesman_vehicle_routing_coach/rl_traveling_salesman_vehicle_routing_coach.ipynb | Amirosimani/amazon-sagemaker-examples |
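`df[y_axis_rl].rolling(5).mean()` above smooths the reward curve with a 5-point moving average; the first four positions have no full window, so pandas reports NaN there. The same computation in plain Python, for illustration:

```python
from collections import deque

def rolling_mean(values, window=5):
    """5-point moving average; None where the window is not yet full (pandas: NaN)."""
    buf, out = deque(maxlen=window), []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / window if len(buf) == window else None)
    return out

print(rolling_mean([1, 2, 3, 4, 5, 6], window=5))  # -> [None, None, None, None, 3.0, 4.0]
```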
Visualize the rendered gifs. The latest gif file found in the gifs directory is displayed. You can replace the tmp.gif file below to visualize other files generated. | key = intermediate_folder_key + "/gifs"
wait_for_s3_object(s3_bucket, key, tmp_dir)
print("Copied gifs files to {}".format(tmp_dir))
glob_pattern = os.path.join("{}/*.gif".format(tmp_dir))
gifs = [file for file in glob.iglob(glob_pattern, recursive=True)]
extract_episode = lambda string: int(
re.search(".*episode-(\d*)_.*", string, re.IGNORECASE).group(1)
)
gifs.sort(key=extract_episode)
print("GIFs found:\n{}".format("\n".join([os.path.basename(gif) for gif in gifs])))
# visualize a specific episode
gif_index = -1 # since we want last gif
gif_filepath = gifs[gif_index]
gif_filename = os.path.basename(gif_filepath)
print("Selected GIF: {}".format(gif_filename))
os.system(
"mkdir -p ./src/tmp_render/ && cp {} ./src/tmp_render/{}.gif".format(gif_filepath, gif_filename)
)
HTML('<img src="./src/tmp_render/{}.gif">'.format(gif_filename)) | _____no_output_____ | Apache-2.0 | reinforcement_learning/rl_traveling_salesman_vehicle_routing_coach/rl_traveling_salesman_vehicle_routing_coach.ipynb | Amirosimani/amazon-sagemaker-examples |
Evaluation of RL models. We use the last checkpointed model to run evaluation for the RL agent. Load checkpointed model. Checkpointed data from the previously trained models will be passed on for evaluation/inference in the checkpoint channel. In local mode, we can simply use the local directory, whereas in SageMaker mode, it needs to be moved to S3 first. | %%time
wait_for_s3_object(s3_bucket, output_tar_key, tmp_dir)
if not os.path.isfile("{}/output.tar.gz".format(tmp_dir)):
raise FileNotFoundError("File output.tar.gz not found")
os.system("tar -xvzf {}/output.tar.gz -C {}".format(tmp_dir, tmp_dir))
if local_mode:
checkpoint_dir = "{}/data/checkpoint".format(tmp_dir)
else:
checkpoint_dir = "{}/checkpoint".format(tmp_dir)
print("Checkpoint directory {}".format(checkpoint_dir))
%%time
if local_mode:
checkpoint_path = "file://{}".format(checkpoint_dir)
print("Local checkpoint file path: {}".format(checkpoint_path))
else:
checkpoint_path = "s3://{}/{}/checkpoint/".format(s3_bucket, job_name)
if not os.listdir(checkpoint_dir):
raise FileNotFoundError("Checkpoint files not found under the path")
os.system("aws s3 cp --recursive {} {}".format(checkpoint_dir, checkpoint_path))
print("S3 checkpoint file path: {}".format(checkpoint_path)) | _____no_output_____ | Apache-2.0 | reinforcement_learning/rl_traveling_salesman_vehicle_routing_coach/rl_traveling_salesman_vehicle_routing_coach.ipynb | Amirosimani/amazon-sagemaker-examples |