| markdown | code | path | repo_name | license |
|---|---|---|---|---|
We can then combine has_many_airports with small_cities to find small cities with a large number of airports.
To do this, we first need to use the & operator to combine the two Boolean Series:
|
small_but_flighty = has_many_airports & small_cities
small_but_flighty
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
We can use this Boolean index to select the rows from the original DataFrame that contain data about cities with fewer than one million residents and more than two airports.
|
airport_df[small_but_flighty]
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
In this example we broke the filter down into many steps. It could actually be performed in one expression as shown below.
|
airport_df[(airport_df['Airports'] > 2) & (airport_df['Population'] < 1000000)]
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Notice the need for parentheses around each Boolean expression. This is because & has a higher precedence than > and <; a quick illustration of what goes wrong without them is sketched below.
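A minimal sketch (with hypothetical values mirroring the airport data used in this lab):

```python
import pandas as pd

# Hypothetical values mirroring the airport data.
df = pd.DataFrame({'Airports': [2, 8, 3], 'Population': [964254, 491918, 8398748]})

# Correct: parentheses force each comparison to be evaluated before & combines them.
mask = (df['Airports'] > 2) & (df['Population'] < 1000000)
print(mask.tolist())  # [False, True, False]

# Incorrect: without parentheses, & binds tighter than the comparisons, so this
# parses as a chained comparison involving (2 & df['Population']) and raises a
# ValueError about the ambiguous truth value of a Series.
# df['Airports'] > 2 & df['Population'] < 1000000
```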
The term 'filtering' is typically used when talking about rows of data. However, it is also possible to filter out columns of a dataset. To filter columns, pass a list of the columns that you want to keep to the DataFrame selector:
|
population_df = airport_df[['City Name', 'Population']]
population_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
If a dataset has many columns, it might be easier to exclude a column using a list comprehension instead of explicitly listing many columns:
|
population_df = airport_df[
    [col for col in airport_df.columns if col != 'Airports']]
population_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
This also works for multiple columns:
|
population_df = airport_df[
    [col for col in airport_df.columns if col not in {'Airports', 'Population'}]]
population_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Exercise 3: SoCal
Using the California housing DataFrame from the previous unit, make a new DataFrame that only contains data from the southern part of California. What is 'southern'? For the purpose of this exercise, let's say that southern California includes everything below 36 degrees latitude.
Create a new DataFrame called socal_df containing only data points below 36 degrees latitude. Then print out the shape of that DataFrame.
Student Solution
|
url = "https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv"
cali_df = pd.read_csv(url)
# Your Code Goes here
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Grouping Data
We can also aggregate DataFrame objects by grouping rows of data together.
For our examples we will create a DataFrame containing the ages, heights, and weights of a sample of children:
|
import pandas as pd
body_measurement_df = pd.DataFrame.from_records((
(2, 83.82, 8.4),
(4, 99.31, 16.97),
(3, 96.52, 14.41),
(6, 114.3, 20.14),
(4, 101.6, 16.91),
(2, 86.36, 12.64),
(3, 92.71, 14.23),
(2, 85.09, 11.11),
(2, 85.85, 14.18),
(5, 106.68, 20.01),
(4, 99.06, 13.17),
(5, 109.22, 15.36),
(4, 100.84, 14.78),
(6, 115.06, 20.06),
(2, 84.07, 10.02),
(7, 121.67, 28.4),
(3, 94.49, 14.05),
(6, 116.59, 17.55),
(7, 121.92, 22.96),
), columns=("Age (yrs)", "Height (cm)", "Weight (kg)"))
body_measurement_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
As you can see, we have a fairly low-level dump of data. It is unsorted, and it is generally difficult to gain any insight from it. We could group the data by age and find metrics such as the count, max, min, mean, median, and more. This information might be easier to analyze.
In order to do this grouping, we use the groupby method on the DataFrame.
For instance, if we wanted to know the mean values for the columns for each year of age, we could run the following code:
|
body_measurement_df.groupby('Age (yrs)').mean()
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
We get a DataFrame sorted by the column that we chose to group by. The 'Height (cm)' and 'Weight (kg)' columns now represent the mean height and weight for each age represented in our dataset.
Looking at this data, you can now see a steady increase in height and weight as age increases, which is what you are likely to expect.
You might notice here that the 'Age (yrs)' column looks a little different. It is now not a regular column, but is instead an index column.
Let's see what this means by saving the grouped data into a new DataFrame:
|
mean_body_measurement_df = body_measurement_df.groupby('Age (yrs)').mean()
mean_body_measurement_df.columns
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
You'll notice that 'Age (yrs)' is no longer listed as a column. In order to access the age you instead have to use the .index property of the DataFrame.
Note that we get an Int64Index object back and not a Series as we would if we referenced a single column. (A sketch after the next cell shows how to turn the index back into a regular column.)
|
mean_body_measurement_df.index
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
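As a side note, if you would rather have 'Age (yrs)' back as a regular column, the standard pandas reset_index() method converts the index into a column; a minimal sketch using the DataFrame from above:

```python
# Turn the 'Age (yrs)' index back into a regular column.
flat_df = mean_body_measurement_df.reset_index()
print(flat_df.columns)  # now includes 'Age (yrs)' again
```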
We aren't restricted to just using mean(). There are many other aggregate functions that we could use, including max(), which gives us the largest sample in each grouping:
|
body_measurement_df.groupby('Age (yrs)').max()
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
And min() which gives the smallest value in each grouping:
|
body_measurement_df.groupby('Age (yrs)').min()
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
There are many other aggregate functions. You can see the entire list in the GroupBy documentation.
Sometimes performing a single aggregation across all columns is limiting. What if you want the mean of one column and the max of another? What if you want to perform multiple aggregations on one column?
You can perform different and multiple aggregations using the agg() function:
|
body_measurement_df.groupby('Age (yrs)').agg({
'Height (cm)': 'mean',
'Weight (kg)': ['max', 'min'],
})
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
As you can see, agg() accepts a dictionary. The keys are the columns that you want to aggregate. The values are either a single aggregation function name or lists of aggregation function names.
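For reference, agg() also accepts a plain list of function names (standard pandas), in which case each aggregation is applied to every non-grouped column; a minimal sketch:

```python
# Apply both aggregations to every non-grouped column.
body_measurement_df.groupby('Age (yrs)').agg(['mean', 'max'])
```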
Exercise 4: Grouping
Given the body measurement dataset in a DataFrame, group the data by 'Age (yrs)' and find the following aggregations using the agg() function:
'Age (yrs)' count
'Height (cm)' min
'Height (cm)' max
'Height (cm)' mean
'Height (cm)' standard deviation
'Weight (kg)' min
'Weight (kg)' max
'Weight (kg)' mean
'Weight (kg)' standard deviation
Student Solution
|
import pandas as pd
body_measurement_df = pd.DataFrame.from_records((
(2, 83.82, 8.4),
(4, 99.31, 16.97),
(3, 96.52, 14.41),
(6, 114.3, 20.14),
(4, 101.6, 16.91),
(2, 86.36, 12.64),
(3, 92.71, 14.23),
(2, 85.09, 11.11),
(2, 85.85, 14.18),
(5, 106.68, 20.01),
(4, 99.06, 13.17),
(5, 109.22, 15.36),
(4, 100.84, 14.78),
(6, 115.06, 20.06),
(2, 84.07, 10.02),
(7, 121.67, 28.4),
(3, 94.49, 14.05),
(6, 116.59, 17.55),
(7, 121.92, 22.96),
), columns=("Age (yrs)", "Height (cm)", "Weight (kg)"))
body_measurement_df
# Your Solution Goes Here
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Merging Data
It is common for related data to be stored in different locations. When this happens, you sometimes need to merge the data into a single DataFrame so that you can easily work with all of the data at once.
Let's take a look at some data about popular desserts. First, we have nutritional information:
|
import pandas as pd
nutrition_information_df = pd.DataFrame.from_records((
('Cupcake', 178, 5.26, 32.54, 1.37),
('Donut', 190, 10.51, 21.62, 2.62),
('Eclair', 267, 16.01, 24.68, 6.53),
('Froyo', 214, 2.94, 39.24, 9.4),
('Gingerbread', 130, 5, 19, 2),
('Honeycomb', 190, 13, 23, 2),
('Ice Cream Sandwich', 143, 5.6, 21.75, 2.61),
('Jelly Bean', 100, 0, 25, 0),
('KitKat', 210, 11, 27, 3),
('Lollipop', 110, 0, 28, 0),
('Marshmallow', 100, 0, 24, 1),
('Nougat', 56, 0.23, 12.93, 0.47),
('Oreo', 160, 7, 25, 1),
('Pie', 356, 16.5, 51, 2.85),
), columns=('Name', 'Calories', 'Fat (g)', 'Carbs (g)', 'Protein (g)'))
nutrition_information_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
We also have data about the manufacturing costs and the retail price of each of the desserts:
|
import pandas as pd
costs_df = pd.DataFrame.from_records((
('Cupcake', 1.24, 4.50),
('Donut', 0.17, 0.99),
('Eclair', 0.54, 2.50),
('Froyo', 0.78, 3.50),
('Gingerbread', 0.45, 0.99),
('Honeycomb', 1.25, 3.00),
('Ice Cream Sandwich', 1.21, 2.99),
('Jelly Bean', 0.04, 0.99),
('KitKat', 0.33, 1.50),
('Lollipop', 0.11, 1.10),
('Marshmallow', 0.03, 0.50),
('Nougat', 0.75, 1.50),
('Oreo', 0.78, 2.00),
('Pie', 0.66, 2.25),
), columns=('Name', 'Manufacturing (USD)', 'Retail (USD)'))
costs_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
If we want to combine the data into a single DataFrame, we can merge the data:
|
pd.merge(nutrition_information_df, costs_df)
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Pandas searches for columns with the same name and uses those columns to match rows of data. The result is a single DataFrame with columns from the merged DataFrame objects.
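Here the shared column is 'Name'. If you prefer to be explicit (or the automatic match ever picks up an unintended column), you can name the join column yourself with the standard on parameter; a minimal sketch assuming the two DataFrames defined above:

```python
# Same merge, with the join column spelled out explicitly.
pd.merge(nutrition_information_df, costs_df, on='Name')
```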
What if we have yet another DataFrame that contains the inventory of desserts that we have in stock:
|
import pandas as pd
inventory_df = pd.DataFrame.from_records((
('Marshmallow', 1004),
('Nougat', 563),
('Oreo', 789),
('Pie', 33),
), columns=('Name', '# In Stock'))
inventory_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
If we want to join our inventory with our cost data to see how much earning potential we have in stock, we can join the costs_df with the inventory_df:
|
pd.merge(costs_df, inventory_df)
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
If we wanted, we could then sum up our retail prices multiplied by inventory to see how much gross revenue potential we currently have; a quick sketch of that calculation follows.
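A minimal sketch of that calculation, using the merged result from the cell above (stock_df is just an illustrative name):

```python
# Multiply each treat's retail price by its stock level, then total it up.
stock_df = pd.merge(costs_df, inventory_df)
gross_revenue_potential = (stock_df['Retail (USD)'] * stock_df['# In Stock']).sum()
print(gross_revenue_potential)
```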
Notice that we only have four desserts. What happened?
By default when merging DataFrame objects only rows that match across DataFrame objects are returned. Non-matching rows are filtered out.
We can change this by telling merge to do an outer join. An outer join keeps all of the rows from both DataFrame objects passed to merge() and fills in any missing data with null values.
|
pd.merge(costs_df, inventory_df, how='outer')
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
There are many options for merging data. You have options available to keep rows in specific DataFrames, to use different columns to join on, and much more. Check out the merge documentation to learn more.
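For example, a left join (standard pandas, sketched here with the DataFrames from above) keeps every row of the first DataFrame regardless of whether it matches:

```python
# Keep every row of costs_df; unmatched inventory fields become NaN.
pd.merge(costs_df, inventory_df, how='left', on='Name')
```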
Exercise 5: Merging DataFrame Objects
In this exercise we will answer a few questions about our dessert-making operation. In order to answer these questions, you are provided with the costs_df DataFrame, which contains names of treats and costs related to them.
The columns are:
* Name: The name of the treat.
* Manufacturing (USD): The cost in United States dollars to create one saleable unit of the treat.
* Retail (USD): The price that one serving of the treat is sold for.
|
import pandas as pd
costs_df = pd.DataFrame.from_records((
('Cupcake', 1.24, 4.50),
('Donut', 0.17, 0.99),
('Eclair', 0.54, 2.50),
('Froyo', 0.78, 3.50),
('Gingerbread', 0.45, 0.99),
('Honeycomb', 1.25, 3.00),
('Ice Cream Sandwich', 1.21, 2.99),
('Jelly Bean', 0.04, 0.99),
('KitKat', 0.33, 1.50),
('Lollipop', 0.11, 1.10),
('Marshmallow', 0.03, 0.50),
('Nougat', 0.75, 1.50),
('Oreo', 0.78, 2.00),
('Pie', 0.66, 2.25),
), columns=('Name', 'Manufacturing (USD)', 'Retail (USD)'))
costs_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
The other DataFrame that we have at our disposal is the inventory_df. This DataFrame contains information about how many of each type of treat we have in stock and ready to sell.
The columns are:
* Name: The name of the treat.
* # In Stock: The number of saleable units of the treat that we have.
Any treats not in inventory are assumed to be out of stock.
|
inventory_df = pd.DataFrame.from_records((
('Marshmallow', 1004),
('Nougat', 563),
('Oreo', 789),
('Pie', 33),
), columns=('Name', '# In Stock'))
inventory_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Question 1: Potential Profit
For this question we want to determine the potential profit that we can make with the items that we have in stock.
$profit = \sum_{i=1}^{t} n_i \, (r_i - m_i)$
Where:
$t$ is the number of treat types in stock
$n_i$ is the number of units of treat $i$ in stock
$r_i$ is the retail price of treat $i$
$m_i$ is the manufacturing cost of treat $i$
Merge inventory_df and costs_df to calculate the potential_profit. Print out the potential profit.
Student Solution
|
# Merge the DataFrame objects
dessert_df = None
# Calculate potential profit
potential_profit = None
# Print the potential profit
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Question 2: Restocking Cost
There are only four different treats available for sale. We need to get some more inventory in this shop!
In this portion of the exercise we will calculate the total cost to get 100 units of each of the missing treats onto the shelves and ready to sell.
The cost is calculated with:
$cost = \sum_{i=1}^{t} 100 \, m_i$
Where:
$t$ is the number of treat types NOT in stock
$100$ is the number of units of each missing treat that we'd like to make
$m_i$ is the manufacturing cost of treat $i$
Merge inventory_df and costs_df to calculate the cost_to_make. Print out the cost.
Student Solution
|
# Merge the DataFrame objects
dessert_df = None
# Identify the missing desserts
missing_dessert_df = None
# Calculate the cost to make 100 of each of the missing treats
cost_to_make = None
# Print the cost
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Sorting
It is often important to sort data in order to visually examine the data for patterns and anomalies. Luckily this is easy to do in Pandas.
To start off, let's build a DataFrame to sort. For this example we will use a DataFrame containing information about cities, their populations, and the number of airports in and around the cities.
|
import pandas as pd
airport_df = pd.DataFrame.from_records((
('Atlanta', 498044, 2),
('Austin', 964254, 2),
('Kansas City', 491918, 8),
('New York City', 8398748, 3),
('Portland', 653115, 1),
('San Francisco', 883305, 3),
('Seattle', 744955, 2),
), columns=("City Name", "Population", "Airports"))
airport_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
The data seems to be sorted by City Name. If we want to sort the data by Population we can use the sort_values() method:
|
airport_df.sort_values('Population')
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
We can see that Kansas City is the smallest city in our dataset, and New York City is the largest.
If you were thinking, "Why does Kansas City have so many airports?", good for you!
This is one of the benefits we can get from viewing our data in different sorting orders. We can see that the smallest city by population has the largest number of airports. This doesn't seem right.
If we were going to be using this dataset for an actual data science project, we would want to investigate this further. We could:
Verify that Kansas City actually does have 8 airports
Verify that a few of the other cities, especially the larger ones, have so few airports
Look into how the data was collected to see if the count for Kansas City was collected differently:
Does it contain regional airports while others do not?
What counts as an airport for the city? Farm landing strips? Military bases?
How close to a city does an airport need to be to be considered an airport for that city?
You can probably think of many more questions to ask about the data and how it was collected.
When you see something that looks odd in your data, ask questions!
For now, let's get back to sorting. What if we wanted to sort by more than one column?
For instance, we can sort by the number of airports in a city and then by population:
|
airport_df.sort_values(['Airports', 'Population'])
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Using this we can now answer questions such as "What is the smallest city with two airports?"; one way to read the answer off the sorted data is sketched below.
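A minimal sketch of reading that answer off the sorted data:

```python
# Rows are sorted by airports, then population, so the first row
# with exactly two airports is the smallest such city.
sorted_df = airport_df.sort_values(['Airports', 'Population'])
print(sorted_df[sorted_df['Airports'] == 2].iloc[0]['City Name'])
```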
Notice that although we sorted the DataFrame, we didn't actually change the DataFrame itself:
|
airport_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
If we do want to save the sort order we can assign the return value of sort_values() to another variable:
|
sorted_airport_df = airport_df.sort_values(['Airports', 'Population'])
sorted_airport_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
But this doesn't modify the original DataFrame. To do that, use the inplace argument:
|
airport_df.sort_values(['Airports', 'Population'], inplace=True)
airport_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
References and Copies
Both Python and Pandas strive to hide lower-level programming details from you whenever they can. However, there are some cases where you do have to be aware of how your data is being managed.
One place where this often happens is when Pandas is working indirectly with a DataFrame.
We'll walk through some examples using the airport data we have seen many times in this lab.
|
import pandas as pd
airport_df = pd.DataFrame.from_records((
('Atlanta', 498044, 2),
('Austin', 964254, 2),
('Kansas City', 491918, 8),
('New York City', 8398748, 3),
('Portland', 653115, 1),
('San Francisco', 883305, 3),
('Seattle', 744955, 2),
), columns=("City Name", "Population", "Airports"))
airport_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
We'll start simple and assign the airport_df to another variable, airport_df2. We then try to double the number of airports in airport_df2.
What happens to airport_df and airport_df2?
|
airport_df2 = airport_df
airport_df2.loc[:, 'Airports'] *= 2
airport_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Yikes! When we modified airport_df2 we also modified airport_df.
This actually has nothing to do with Pandas, but instead is a case where Python creates a reference to our original DataFrame instead of a copy.
When we assign airport_df to airport_df2, Python just makes airport_df2 refer to the object that is in airport_df. Both refer to the same copy of the data, as the quick check below shows.
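A quick check of this, using Python's is operator (which tests whether two names refer to the same object):

```python
airport_df2 = airport_df
print(airport_df2 is airport_df)  # True: one object, two names
```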
This is desirable in many cases. Your data might be big. Having many copies can consume a lot of memory and take a lot of time.
But sometimes you need to actually copy data. Let's reset our airport DataFrame and do just that.
|
import pandas as pd
airport_df = pd.DataFrame.from_records((
('Atlanta', 498044, 2),
('Austin', 964254, 2),
('Kansas City', 491918, 8),
('New York City', 8398748, 3),
('Portland', 653115, 1),
('San Francisco', 883305, 3),
('Seattle', 744955, 2),
), columns=("City Name", "Population", "Airports"))
airport_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
To make a copy of a DataFrame use the copy() method.
|
airport_df2 = airport_df.copy()
airport_df2.loc[:, 'Airports'] *= 2
airport_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
As you can see, airport_df did not change.
But you can see below that airport_df2 did:
|
airport_df2
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Pandas adds an additional level of abstraction called views. Views are a way to look at the same data from a different perspective.
Let's work through an example using our airport dataset.
Say we wanted to filter to only rows with more than two airports:
|
many_airports_df = airport_df[airport_df['Airports'] > 2]
many_airports_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
What is many_airports_df? Is it a new DataFrame? Does it only contain three rows of data? Are the rows separate or the same as the rows in airport_df? If we modify many_airports_df, will airport_df be modified?
Let's try and see:
|
many_airports_df = airport_df[airport_df['Airports'] > 2]
many_airports_df['City Name'] = \
many_airports_df['City Name'].apply(lambda s: s.upper())
airport_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
We didn't modify airport_df, so we must be working with a copy.
We did get a warning though:
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
In this case Pandas created a copy of the data, but it was uncertain if we wanted to modify the copy or the original DataFrame.
Warnings are typically a bad sign. We can get rid of the warning by being explicit about what we want to do.
If we want to copy the data into a new DataFrame, we can use .copy():
|
many_airports_df = airport_df[airport_df['Airports'] > 2].copy()
many_airports_df['City Name'] = \
many_airports_df['City Name'].apply(lambda s: s.upper())
airport_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
And if we don't want to copy the data, but instead want to modify the original, we need to index into airport_df directly when making the modification:
|
has_many_airports = airport_df['Airports'] > 2
airport_df.loc[has_many_airports, 'City Name'] = \
airport_df.loc[has_many_airports, 'City Name'].apply(lambda s: s.upper())
airport_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Exercise 6: Updating Calories
We just learned that the calorie count for our candy shop's jelly beans and lollipops is 10% too low. We need to update the calorie count for these two treats.
Below you'll find the nutrition_information_df which contains nutritional information about our treats. Write some code to increase the calories for 'Jelly Bean' and 'Lollipop' by 10%. Be sure that the data stored in nutrition_information_df is updated.
Be sure that no warnings are issued!
Student Solution
|
import pandas as pd
nutrition_information_df = pd.DataFrame.from_records((
('Cupcake', 178, 5.26, 32.54, 1.37),
('Donut', 190, 10.51, 21.62, 2.62),
('Eclair', 267, 16.01, 24.68, 6.53),
('Froyo', 214, 2.94, 39.24, 9.4),
('Gingerbread', 130, 5, 19, 2),
('Honeycomb', 190, 13, 23, 2),
('Ice Cream Sandwich', 143, 5.6, 21.75, 2.61),
('Jelly Bean', 100, 0, 25, 0),
('KitKat', 210, 11, 27, 3),
('Lollipop', 110, 0, 28, 0),
('Marshmallow', 100, 0, 24, 1),
('Nougat', 56, 0.23, 12.93, 0.47),
('Oreo', 160, 7, 25, 1),
('Pie', 356, 16.5, 51, 2.85),
), columns=('Name', 'Calories', 'Fat (g)', 'Carbs (g)', 'Protein (g)'))
# Update 'Lollipop' and 'Jelly Bean' calories by 10%
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Additional Exercises
Exercise 7: Retail Data
You have been hired to organize a small-town retail chain's data and report to them which of their stores have the most effective marketing, measured by how many dollars of merchandise are sold per visitor.
To accomplish this you are given access to two tables of data.
The first table keeps track of the average daily traffic to each store. We store it in traffic_df:
|
import pandas as pd
traffic_df = pd.DataFrame.from_records((
('43 Crescent Way', 2036),
('1001 Main St.', 1399),
('235 Pear Lane', 1386),
('199 Forest Way', 1295),
('703 Grove St.', 1154),
('55 Orchard Blvd.', 1022),
('202 Pine Drive', 968),
('98 Mountain Circle', 730),
('2136 A St.', 729),
('3430 17th St.', 504),
('7766 Ocean Ave.', 452),
('1797 Albatross Ct.', 316),
), columns=('Location', 'Traffic'))
traffic_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
The second table contains the average revenue from each store. We store it in locations_df:
|
locations_df = pd.DataFrame.from_records((
('43 Crescent Way', 6832),
('55 Orchard Blvd.', 13985),
('98 Mountain Circle', 3956),
('199 Forest Way', 572),
('202 Pine Drive', 3963),
('235 Pear Lane', 25653),
('703 Grove St.', 496),
('1001 Main St.', 38532),
('1797 Albatross Ct.', 26445),
('2136 A St.', 34560),
('3430 17th St.', 1826),
('7766 Ocean Ave.', 5124),
), columns=('Location', 'Revenue'))
locations_df
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Given the two DataFrame objects mentioned above, perform the following tasks:
Merge the two dataframes to create a single dataframe with store names, average daily traffic, and average daily revenue. Call this new DataFrame performance_df.
Make a new column in performance_df, showing the average daily revenue per customer. Call the new column 'Revenue per Customer'. Revenue per customer is defined as rpc = revenue / traffic.
Print the 'Location' of the store that has the highest 'Revenue per Customer'.
|
# Part 1: Perform merge
performance_df = None # ...
# Part 2: Create column
# ...
# Part 3: Print location of store with the most revenue per customer
# ...
|
content/02_data/02_intermediate_pandas/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Okay, so we have a good idea about the type of information the tweet table contains, so let's return to the other tables in the database. We mentioned these above, but if you were just curious about what tables exist in the database, you could query Postgres for them like so...
|
statement = """SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'twitter'"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu'password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
tables = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
tables
|
twitter/twitter_1.ipynb
|
MissouriDSA/twitter-locale
|
mit
|
<span style="background-color: #FFFF00">YOUR TURN</span>
Above we ran a bit of code to return the column names of the tweet table. Pick one of the tables above (not tweet) and check out its columns.
|
# put your code here
# ------------------
statement = """SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_name = 'hashtag';"""
try:
    connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
    # use our connection values to establish a connection
    conn = psycopg2.connect(connect_str)
    cursor = conn.cursor()
    # execute the statement from above
    cursor.execute(statement)
    # fetch all of the rows associated with the query
    cols = cursor.fetchall()
except Exception as e:
    print("Uh oh, can't connect. Invalid dbname, user or password?")
    print(e)
cols
|
twitter/twitter_1.ipynb
|
MissouriDSA/twitter-locale
|
mit
|
Let's start looking at some of the data now. Most of our analysis revolves around the tweet table, so we will pick up from there. We can look at some of the data in the tweet table. The first 10 rows should suffice...
|
statement = """SELECT *
FROM twitter.tweet
LIMIT 10;"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu'password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
# execute the statement from above
cursor.execute(statement)
# fetch all of the rows associated with the query
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
rows
|
twitter/twitter_1.ipynb
|
MissouriDSA/twitter-locale
|
mit
|
This is kind of a messy way to view the data. Right now we have an object called rows, which stores each row in a tuple and all of the tuples are stored in a single list. How about we change this into a more readable format...
|
statement = """SELECT *
FROM twitter.tweet
LIMIT 10;"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu'password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
|
twitter/twitter_1.ipynb
|
MissouriDSA/twitter-locale
|
mit
|
We can easily turn this into a dictionary object, which can subsequently be turned into a pandas DataFrame object. For this dictionary, the field name (column name) will be the key and the values will be a list of corresponding values in the rows object.
For example, rows[<x>][0] are all values of tweet_id_str. Conveniently, column_names[0] is tweet_id_str. Then all we have to do is create an empty dictionary and begin to fill it with the contents of column_names and rows.
|
tweet_dict = {}
for i in list(range(len(column_names))):
    tweet_dict['{}'.format(column_names[i])] = [x[i] for x in rows]
|
twitter/twitter_1.ipynb
|
MissouriDSA/twitter-locale
|
mit
|
What's happening here? Well, we can assume that the number of values per row should equal the number of column names. Therefore, we iterate through a list ranging from 0 to the length of the column_names object and create a key from each column name item (tweet_dict['{}'.format(column_names[i])]). The values are the entries at the corresponding index in the rows object. We use a single-line list comprehension to build the list of values for each key. A multiline for loop that does the same thing would look like:
```python
list_o_lists = []  # create an empty list to store lists of values
for i in list(range(len(column_names))):
    vals = []  # create an empty list to store the values
    for x in rows:
        vals.append(x[i])
    list_o_lists.append(vals)
```
...and then turning this dict into a data frame is simple. Just pass the dictionary object you just created to pandas' DataFrame constructor.
|
pd.DataFrame(tweet_dict)
|
twitter/twitter_1.ipynb
|
MissouriDSA/twitter-locale
|
mit
|
Back to linguistic diversity...
So we know that one of the components of linguistic diversity is the number of unique languages. We can find the unique languages through a simple query of the database. Below we find the unique languages from 10,000 rows.
<span style="background-color: #E6E6FA">A Note about Limits: We limit the data returned to 10,000 rows because there are approximately 300 million tweets in the total dataset. If everyone queried the whole table without limits at the same time, we would need a much larger server! If you are curious about the results for all the tweets, drop us a note and we will run your analysis when the server load is low (i.e., while you sleep!).</span>
|
statement = """SELECT DISTINCT iso_language
FROM (SELECT iso_language FROM twitter.tweet LIMIT 10000) AS langs"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu'password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
langs = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
|
twitter/twitter_1.ipynb
|
MissouriDSA/twitter-locale
|
mit
|
Now the languages are in a variable called "langs". To view that variable, we simply type it in an output window like below.
|
langs
|
twitter/twitter_1.ipynb
|
MissouriDSA/twitter-locale
|
mit
|
<span style="background-color: #FFFF00">YOUR TURN</span>
How many unique languages are there in 10,000 rows? Feel free to use SQL or Python.
|
# put your code here
# ------------------
statement = """SELECT COUNT(*) FROM (SELECT DISTINCT iso_language
FROM (SELECT iso_language FROM twitter.tweet LIMIT 10000) AS langs) AS lang_count"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu'password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
num_langs = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
print(num_langs)
|
twitter/twitter_1.ipynb
|
MissouriDSA/twitter-locale
|
mit
|
The other component of diversity is the number of speakers per language (at least for our measure). Again, this can be done in SQL.
|
# use COUNT with a GROUP BY to count the number of speakers per language
statement = """SELECT DISTINCT iso_language, COUNT(*)
FROM (SELECT iso_language FROM twitter.tweet LIMIT 10000) AS langs GROUP BY iso_language;"""
try:
    connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
    # use our connection values to establish a connection
    conn = psycopg2.connect(connect_str)
    cursor = conn.cursor()
    cursor.execute(statement)
    column_names = [desc[0] for desc in cursor.description]
    num_langs = cursor.fetchall()
except Exception as e:
    print("Uh oh, can't connect. Invalid dbname, user or password?")
    print(e)
print(num_langs)
|
twitter/twitter_1.ipynb
|
MissouriDSA/twitter-locale
|
mit
|
<span style="background-color: #FFFF00">YOUR TURN</span>
Put the above counts in a data frame object where one column is the language and the other is the number of speakers (which are really represented as tweets by language) for that language.
|
# put your code here
# ------------------
lang_dict = {}
for i in list(range(len(column_names))):
    lang_dict['{}'.format(column_names[i])] = [x[i] for x in num_langs]
pd.DataFrame(lang_dict)
|
twitter/twitter_1.ipynb
|
MissouriDSA/twitter-locale
|
mit
|
Modifying data in-place
|
from __future__ import print_function
import mne
import os.path as op
import numpy as np
from matplotlib import pyplot as plt
|
0.12/_downloads/plot_modifying_data_inplace.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
It is often necessary to modify data once you have loaded it into memory.
Common examples of this are signal processing, feature extraction, and data
cleaning. Some functionality is pre-built into MNE-python, though it is also
possible to apply an arbitrary function to the data.
|
# Load an example dataset, the preload flag loads the data into memory now
data_path = op.join(mne.datasets.sample.data_path(), 'MEG',
'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(data_path, preload=True, verbose=False)
raw = raw.crop(0, 2)
print(raw)
|
0.12/_downloads/plot_modifying_data_inplace.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
Signal processing
Most MNE objects have in-built methods for filtering:
|
filt_bands = [(1, 3), (3, 10), (10, 20), (20, 60)]
f, (ax, ax2) = plt.subplots(2, 1, figsize=(15, 10))
_ = ax.plot(raw._data[0])
for fband in filt_bands:
    raw_filt = raw.copy()
    raw_filt.filter(*fband)
    _ = ax2.plot(raw_filt._data[0])
ax2.legend(filt_bands)
ax.set_title('Raw data')
ax2.set_title('Band-pass filtered data')
|
0.12/_downloads/plot_modifying_data_inplace.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
In addition, there are functions for applying the Hilbert transform, which is useful for calculating the phase / amplitude of your signal.
|
# Filter signal, then take hilbert transform
raw_band = raw.copy()
raw_band.filter(12, 18)
raw_hilb = raw_band.copy()
hilb_picks = mne.pick_types(raw_band.info, meg=False, eeg=True)
raw_hilb.apply_hilbert(hilb_picks)
print(raw_hilb._data.dtype)
|
0.12/_downloads/plot_modifying_data_inplace.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
Finally, it is possible to apply an arbitrary function to your data to do what you want. Here we will use this to take the amplitude and phase of the Hilbert-transformed data. (Note that you can use amplitude=True in the call to :func:mne.io.Raw.apply_hilbert to do this automatically.)
|
# Take the amplitude and phase
raw_amp = raw_hilb.copy()
raw_amp.apply_function(np.abs, hilb_picks, float, 1)
raw_phase = raw_hilb.copy()
raw_phase.apply_function(np.angle, hilb_picks, float, 1)
f, (a1, a2) = plt.subplots(2, 1, figsize=(15, 10))
a1.plot(raw_band._data[hilb_picks[0]])
a1.plot(raw_amp._data[hilb_picks[0]])
a2.plot(raw_phase._data[hilb_picks[0]])
a1.set_title('Amplitude of frequency band')
a2.set_title('Phase of frequency band')
|
0.12/_downloads/plot_modifying_data_inplace.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
This code loads a particular snapshot and a particular HOD model. In this case, 'redMagic' is the Zheng07 HOD with the f_c variable added in.
|
for boxno in xrange(40):
    cosmo_params = {'simname': 'trainingbox', 'boxno': boxno, 'scale_factors': [a]}
    cat = cat_dict[cosmo_params['simname']](**cosmo_params)  # construct the specified catalog!
    cat.load_catalog(a, particles=True, downsample_factor=1e-2)
    cat.halocat.halo_table.write('/u/ki/swmclau2/des/Trainbox_%02d_LSD.hdf5' % boxno, format='hdf5',
                                 path='Trainbox_%02d_LSD.hdf5' % boxno, overwrite=True)

np.sum(cat.halocat.halo_table['halo_local_density_10'] == 0) * 1.0 / len(cat.halocat.halo_table['halo_local_density_10'])
gtz = cat.halocat.halo_table['halo_local_density_1'] > 0
plt.hist(np.log10(cat.halocat.halo_table['halo_local_density_1'][gtz]), label='1')
gtz = cat.halocat.halo_table['halo_local_density_5'] > 0
plt.hist(np.log10(cat.halocat.halo_table['halo_local_density_5'][gtz]), label='5')
plt.hist(np.log10(cat.halocat.halo_table['halo_local_density_10']), label='10')
plt.legend(loc='best')
plt.yscale('log')

gtz = cat.halocat.halo_table['halo_local_density_1'] > 0
plt.scatter(cat.halocat.halo_table['halo_mvir'][gtz], cat.halocat.halo_table['halo_local_density_1'][gtz])
plt.loglog()

from astropy import units
cat.cosmology.critical_density(a).to((units.Msun) / (units.Mpc ** 3))
|
notebooks/Load Jeremey LSD Box.ipynb
|
mclaughlin6464/pearce
|
mit
|
<a id='Jupyter'></a>
2. Interacting with Jupyter Notebook
This interface (what you are reading now) is known as Jupyter Notebook, an interactive document which is a mix of Markdown and Python code executed by IPython:
|
print("Hello world!") # Modify me and push <SHIFT> + <RETURN>
|
01-hello_world/03-hello_world.ipynb
|
vicente-gonzalez-ruiz/YAPT
|
cc0-1.0
|
<a id='Python'></a>
3. Interacting with the Python interpreter
Run python in a shell and type: print("Hello world!") <enter> quit().
```
$ python
Python 2.7.10 (default, Oct 23 2015, 19:19:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
print("Hello world!")
Hello world!
quit()
$
```
Alternatively, instead of python we can use ipython, which provides dynamic object introspection, command completion, access to the system shell, etc.
```
$ ipython
Python 3.5.1rc1 (v3.5.1rc1:948ef16a6951, Nov 22 2015, 11:29:13)
Type "copyright", "credits" or "license" for more information.
IPython 5.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: print("Hello world!")
Hello world!
In [2]: help(print)
Help on built-in function print in module builtins:
print(...)
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
Prints the values to a stream, or to sys.stdout by default.
Optional keyword arguments:
file: a file-like object (stream); defaults to the current sys.stdout.
sep: string inserted between values, default a space.
end: string appended after the last value, default a newline.
flush: whether to forcibly flush the stream.
(type: <q> to exit)
In [3]: quit() # <ctrl> + <d> also works in Unixes
$
```
Interpreted?
Python is an interpreted programming language. When we run a Python program, we are executing the translation to bytecode of each Python statement of our program on the Python Virtual Machine (PVM). The .pyc files that appear after running a collection of modules as a script for the first time contain the bytecode of those modules. Python uses them to speed up future executions.
|
def hello():
    print('Hello world!')

import dis
dis.dis(hello)
|
01-hello_world/03-hello_world.ipynb
|
vicente-gonzalez-ruiz/YAPT
|
cc0-1.0
|
<a id='Scripts'></a>
4. Running Python programs (modules) as scripts
|
!cat hello_world.py
# Check the code (optional)
!pyflakes3 hello_world.py
!./hello_world.py  # Specific to Unix
!python hello_world.py
%run hello_world.py  # Specific to IPython
|
01-hello_world/03-hello_world.ipynb
|
vicente-gonzalez-ruiz/YAPT
|
cc0-1.0
|
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise (the one and only)</h1>
</div>
Let's use what we've learned about GPs in a real application: fitting an exoplanet transit model in the presence of correlated noise.
Below is a (fictitious) light curve for a star with a transiting planet. There is a transit visible to the eye at $t = 0$, which (say) is when you'd expect the planet to transit if its orbit were perfectly periodic. However, a recent paper claims that the planet shows transit timing variations, which are indicative of a second, perturbing planet in the system, and that a transit at $t = 0$ can be ruled out at 3 $\sigma$. Your task is to verify this claim.
Assume you have no prior information on the planet other than the transit occurs in the observation window, the depth of the transit is somewhere in the range $(0, 1)$, and the transit duration is somewhere between $0.1$ and $1$ day. You don't know the exact process generating the noise, but you are certain that there's correlated noise in the dataset, so you'll have to pick a reasonable kernel and estimate its hyperparameters.
Fit the transit with a simple inverted Gaussian with three free parameters:
```python
def transit_shape(depth, t0, dur):
    return -depth * np.exp(-0.5 * (t - t0) ** 2 / (0.2 * dur) ** 2)
```
HINT: I borrowed heavily from this tutorial in the celerite documentation, so you might want to take a look at it...
|
import numpy as np
import matplotlib.pyplot as plt
import celerite
from celerite import terms
from celerite.modeling import Model
import os
# Define the model
class MeanModel(Model):
    parameter_names = ("depth", "t0", "dur")

    def get_value(self, t):
        return -self.depth * np.exp(-0.5 * (t - self.t0) ** 2 / (0.2 * self.dur) ** 2)
mean_model = MeanModel(depth=0.5, t0=0.05, dur=0.7)
mean_model.parameter_bounds = [(0, 1.0), (-0.1, 0.4), (0.1, 1.0)]
true_params = mean_model.get_parameter_vector()
# Simulate the data
np.random.seed(71)
x = np.sort(np.random.uniform(-1, 1, 70))
yerr = np.random.uniform(0.075, 0.1, len(x))
K = 0.2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 10.5)
K[np.diag_indices(len(x))] += yerr ** 2
y = np.random.multivariate_normal(mean_model.get_value(x), K)
y -= np.nanmedian(y)
# Plot the data
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
t = np.linspace(-1, 1, 1000)
plt.plot(t, mean_model.get_value(t))
plt.ylabel(r"$y$")
plt.xlabel(r"$t$")
plt.xlim(-1, 1)
plt.gca().yaxis.set_major_locator(plt.MaxNLocator(5))
plt.title("simulated data");
# Save it
X = np.hstack((x.reshape(-1, 1), y.reshape(-1, 1), yerr.reshape(-1, 1)))
if not os.path.exists("data"):
    os.mkdir("data")
np.savetxt("data/sample_transit.txt", X)
import matplotlib.pyplot as plt
t, y, yerr = np.loadtxt("data/sample_transit.txt", unpack=True)
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.xlabel("time")
plt.ylabel("relative flux");
from celerite.modeling import Model
from scipy.optimize import minimize
# Define the transit model as a celerite `Model`
class MeanModel(Model):
    parameter_names = ("depth", "t0", "dur")

    def get_value(self, t):
        return -self.depth * np.exp(-0.5 * (t - self.t0) ** 2 / (0.2 * self.dur) ** 2)
# Instantiate it with some guesses (which are actually the true values in this case!)
mean_model = MeanModel(depth=0.5, t0=0.05, dur=0.7)
mean_model.parameter_bounds = [(0, 1.0), (-0.1, 0.4), (0.1, 1.0)]
true_params = mean_model.get_parameter_vector()
# Set up the GP model
kernel = terms.RealTerm(log_a=np.log(np.var(y)), log_c=0)
gp = celerite.GP(kernel, mean=mean_model, fit_mean=True)
gp.compute(x, yerr)
print("Initial log-likelihood: {0}".format(gp.log_likelihood(y)))
# Define a cost function
def neg_log_like(params, y, gp):
    gp.set_parameter_vector(params)
    return -gp.log_likelihood(y)

def grad_neg_log_like(params, y, gp):
    gp.set_parameter_vector(params)
    return -gp.grad_log_likelihood(y)[1]
# Fit for the maximum likelihood parameters
initial_params = gp.get_parameter_vector()
bounds = gp.get_parameter_bounds()
soln = minimize(neg_log_like, initial_params,
method="L-BFGS-B", bounds=bounds, args=(y, gp))
gp.set_parameter_vector(soln.x)
print("Final log-likelihood: {0}".format(-soln.fun))
# Make the maximum likelihood prediction
t = np.linspace(-1, 1, 500)
mu, var = gp.predict(y, t, return_var=True)
std = np.sqrt(var)
# Plot the data
color = "#ff7f0e"
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(t, mu, color=color)
plt.fill_between(t, mu+std, mu-std, color=color, alpha=0.3, edgecolor="none")
plt.ylabel(r"$y$")
plt.xlabel(r"$t$")
plt.xlim(-1, 1)
plt.gca().yaxis.set_major_locator(plt.MaxNLocator(5))
plt.title("maximum likelihood prediction");
def log_probability(params):
    gp.set_parameter_vector(params)
    lp = gp.log_prior()
    if not np.isfinite(lp):
        return -np.inf
    try:
        return gp.log_likelihood(y) + lp
    except celerite.solver.LinAlgError:
        return -np.inf
import emcee
initial = np.array(soln.x)
ndim, nwalkers = len(initial), 32
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability)
print("Running burn-in...")
p0 = initial + 1e-8 * np.random.randn(nwalkers, ndim)
p0, lp, _ = sampler.run_mcmc(p0, 1000)
print("Running production...")
sampler.reset()
sampler.run_mcmc(p0, 2000);
# Plot the data.
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
# Plot 24 posterior samples.
samples = sampler.flatchain
for s in samples[np.random.randint(len(samples), size=24)]:
    gp.set_parameter_vector(s)
    mu = gp.predict(y, t, return_cov=False)
    plt.plot(t, mu, color=color, alpha=0.3)
plt.ylabel(r"$y$")
plt.xlabel(r"$t$")
plt.xlim(-1, 1)
plt.gca().yaxis.set_major_locator(plt.MaxNLocator(5))
plt.title("posterior predictions");
import corner
names = gp.get_parameter_names()
cols = mean_model.get_parameter_names()
inds = np.array([names.index("mean:"+k) for k in cols])
corner.corner(sampler.flatchain[:, inds], truths=true_params,
labels=[r"depth", r"$t_0$", r"dur"]);
|
Sessions/Session13/Day2/answers/02-Fast-GPs.ipynb
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
mit
|
Importing modules and functions specific to OpenFisca
|
from openfisca_france_indirect_taxation.tests import base
from openfisca_france_indirect_taxation.examples.utils_example import graph_builder_line, save_dataframe_to_graph
|
openfisca_france_indirect_taxation/examples/notebooks/compute_cas_type_ticpe.ipynb
|
openfisca/openfisca-france-indirect-taxation
|
agpl-3.0
|
Defining a typical case and its exposure to the TICPE between 2000 and 2013
|
import datetime
import pandas as pd

index = range(2000, 2014)
columns = ['si une essence et une diesel', 'si seulement vehicules diesel', 'si seulement vehicules essence']
depenses_ticpe_pour_1000_euros_carbu = pd.DataFrame(index = index, columns = columns)
for element in columns:
    if element == 'si seulement vehicules essence':
        dies = 0
    else:
        dies = 1
    if element == 'si seulement vehicules diesel':
        ess = 0
    else:
        ess = 1
    for year in range(2000, 2014):
        simulation = base.tax_benefit_system.new_scenario().init_single_entity(
            period = year,
            personne_de_reference = dict(
                birth = datetime.date(year - 40, 1, 1),
                ),
            menage = dict(
                depenses_carburants = 1000,
                veh_essence = ess,
                veh_diesel = dies,
                ),
            ).new_simulation(debug = True)
        depenses_ticpe_pour_1000_euros_carbu.loc[depenses_ticpe_pour_1000_euros_carbu.index == year, element] = \
            simulation.calculate('ticpe_totale')
|
openfisca_france_indirect_taxation/examples/notebooks/compute_cas_type_ticpe.ipynb
|
openfisca/openfisca-france-indirect-taxation
|
agpl-3.0
|
Producing a graph
|
graph_builder_line(depenses_ticpe_pour_1000_euros_carbu)
|
openfisca_france_indirect_taxation/examples/notebooks/compute_cas_type_ticpe.ipynb
|
openfisca/openfisca-france-indirect-taxation
|
agpl-3.0
|
Testing pilon on PGM data
|
%%bash
java -Xmx8G -jar ../utilities/pilon-1.10.jar \
--genome ../data/RM8375/ref/CFSAN008157.HGAP.fasta \
--unpaired ../data/RM8375/PGM/bam/IonXpress_001_R_2014_03_23_18_22_09_user_SN2-17-8375_Orthogonal_Measurement_1_Run_2_PacBioRef2.bam \
--changes --vcf --tracks \
--fix "all" --debug #note --fix "all" default
|
dev/.ipynb_checkpoints/notebook_2014_12_12-checkpoint.ipynb
|
nate-d-olson/micro_rm_dev
|
gpl-2.0
|
Notice that when we send a Point to the console we get:
|
p
|
Classes.ipynb
|
crowd-course/python
|
mit
|
Which is not useful, so we will define how Point is represented in the console using __repr__.
|
class Point():
    """Holds a point (x,y) in the plane"""

    def __init__(self, x=0, y=0):
        assert isinstance(x, (int, float)) and isinstance(y, (int, float))
        self.x = float(x)
        self.y = float(y)

    def __repr__(self):
        return "Point(" + str(self.x) + ", " + str(self.y) + ")"

Point(1,2)
|
Classes.ipynb
|
crowd-course/python
|
mit
|
Next up we define a method to add two points. Addition is element-wise: $(x_1, y_1) + (x_2, y_2) = (x_1+x_2, y_1+y_2)$.
We also allow adding an int, in which case we add the point to another point with both coordinates equal to the argument value.
|
class Point():
    """Holds a point (x,y) in the plane"""

    def __init__(self, x=0, y=0):
        assert isinstance(x, (int, float)) and isinstance(y, (int, float))
        self.x = float(x)
        self.y = float(y)

    def __repr__(self):
        return "Point(" + str(self.x) + ", " + str(self.y) + ")"

    def add(self, other):
        assert isinstance(other, (int, Point))
        if isinstance(other, Point):
            return Point(self.x + other.x, self.y + other.y)
        else:  # other is int, taken as (int, int)
            return Point(self.x + other, self.y + other)

Point(1,1).add(Point(2,2))
Point(1,1).add(2)
|
Classes.ipynb
|
crowd-course/python
|
mit
|
A nicer way to do it is to overload the addition operator + by defining the addition method under the name Python reserves for addition, __add__ (those are double underscores):
|
class Point():
    """Holds a point (x,y) in the plane"""

    def __init__(self, x=0, y=0):
        assert isinstance(x, (int, float)) and isinstance(y, (int, float))
        self.x = float(x)
        self.y = float(y)

    def __repr__(self):
        return "Point(" + str(self.x) + ", " + str(self.y) + ")"

    def __add__(self, other):
        assert isinstance(other, (int, Point))
        if isinstance(other, Point):
            return Point(self.x + other.x, self.y + other.y)
        else:  # other is int, taken as (int, int)
            return Point(self.x + other, self.y + other)

Point(1,1) + Point(2,2)
Point(1,1) + 2
|
Classes.ipynb
|
crowd-course/python
|
mit
|
We want to be able to compare Points:
|
Point(1,2) == Point(2,1)
Point(1,2) == Point(1,2)
p = Point()
p == p
Point(1,2) > Point(2,1)
|
Classes.ipynb
|
crowd-course/python
|
mit
|
So == checks by identity and > is not defined. Let us overload both these operators:
|
class Point():
    """Holds a point (x,y) in the plane"""

    def __init__(self, x=0, y=0):
        assert isinstance(x, (int, float)) and isinstance(y, (int, float))
        self.x = float(x)
        self.y = float(y)

    def __repr__(self):
        return "Point(" + str(self.x) + ", " + str(self.y) + ")"

    def __add__(self, other):
        assert isinstance(other, (int, Point))
        if isinstance(other, Point):
            return Point(self.x + other.x, self.y + other.y)
        else:  # other is int, taken as (int, int)
            return Point(self.x + other, self.y + other)

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

    def __gt__(self, other):
        return (self.x > other.x and self.y > other.y)
|
Classes.ipynb
|
crowd-course/python
|
mit
|
First we check if two points are equal:
|
Point(1,0) == Point(1,2)
Point(1,0) == Point(1,0)
|
Classes.ipynb
|
crowd-course/python
|
mit
|
Then if one is strictly smaller than the other:
|
Point(1,0) > Point(1,2)
|
Classes.ipynb
|
crowd-course/python
|
mit
|
The addition operator + returns a new instance.
Next we will write a method that instead of returning a new instance, changes the current instance:
|
class Point():
    """Holds a point (x,y) in the plane"""

    def __init__(self, x=0, y=0):
        assert isinstance(x, (int, float)) and isinstance(y, (int, float))
        self.x = float(x)
        self.y = float(y)

    def __repr__(self):
        return "Point(" + str(self.x) + ", " + str(self.y) + ")"

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

    def __gt__(self, other):
        return (self.x > other.x and self.y > other.y)

    def __add__(self, other):
        assert isinstance(other, (int, Point))
        if isinstance(other, Point):
            return Point(self.x + other.x, self.y + other.y)
        else:  # other is int, taken as (int, int)
            return Point(self.x + other, self.y + other)

    def increment(self, other):
        '''this method changes self (add "inplace")'''
        assert isinstance(other, Point)
        self.x += other.x
        self.y += other.y

p = Point(6.5, 7)
p + Point(1,2)
print(p)
p.increment(Point(1,2))
print(p)
Point(5,6) > Point(1,2)
|
Classes.ipynb
|
crowd-course/python
|
mit
|
We now write a method that given many points, checks if the current point is more extreme than the other points.
Note that the argument *points means that more than one argument may be given.
|
class Point():
    """Holds a point (x,y) in the plane"""

    def __init__(self, x=0, y=0):
        assert isinstance(x, (int, float)) and isinstance(y, (int, float))
        self.x = float(x)
        self.y = float(y)

    def __repr__(self):
        return "Point(" + str(self.x) + ", " + str(self.y) + ")"

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

    def __lt__(self, other):
        return (self.x < other.x and self.y < other.y)

    def __add__(self, other):
        assert isinstance(other, (int, Point))
        if isinstance(other, Point):
            return Point(self.x + other.x, self.y + other.y)
        else:  # other is int, taken as (int, int)
            return Point(self.x + other, self.y + other)

    def increment(self, other):
        '''this method changes self (add "inplace")'''
        assert isinstance(other, Point)
        self.x += other.x
        self.y += other.y

    def is_extreme(self, *points):
        for point in points:
            if not self > point:
                return False
        return True

p = Point(5, 6)
p.is_extreme(Point(1,1))
p.is_extreme(Point(1,1), Point(2,5), Point(6,2))
|
Classes.ipynb
|
crowd-course/python
|
mit
|
We can also call the method via the class instead of the instance, passing the instance of interest (the one we want to test for being extreme) as the first argument, self. Much like this, we can do either 'hi'.upper() or str.upper('hi').
|
Point.is_extreme(Point(7,8), Point(1,1), Point(4,5), Point(2,3))
|
Classes.ipynb
|
crowd-course/python
|
mit
|
Rectangle class
We will implement two classes for rectangles, and compare the two implementations.
First implementation - two points
The first implementation defines a rectangle by its lower left and upper right vertices.
|
class Rectangle1():
    """
    Holds a parallel-axes rectangle by storing two points:
    lower left vertex - llv
    upper right vertex - urv
    """

    def __init__(self, lower_left_vertex, upper_right_vertex):
        assert isinstance(lower_left_vertex, Point)
        assert isinstance(upper_right_vertex, Point)
        assert lower_left_vertex < upper_right_vertex
        self.llv = lower_left_vertex
        self.urv = upper_right_vertex

    def __repr__(self):
        representation = "Rectangle with lower left {0} and upper right {1}"
        return representation.format(self.llv, self.urv)

    def dimensions(self):
        height = self.urv.y - self.llv.y
        width = self.urv.x - self.llv.x
        return height, width

    def area(self):
        height, width = self.dimensions()
        area = height * width
        return area

    def transpose(self):
        """
        Reflection with regard to the line passing through the lower left vertex at an angle of 315 (-45) degrees
        """
        height, width = self.dimensions()
        self.urv = self.llv
        self.llv = Point(self.urv.x - height, self.urv.y - width)

rec = Rectangle1(Point(), Point(2,1))
print(rec)
print("Area:", rec.area())
print("Dimensions:", rec.dimensions())
rec.transpose()
print("Transposed:", rec)
|
Classes.ipynb
|
crowd-course/python
|
mit
|
Second implementation - point and dimensions
The second implementation defines a rectangle by its lower left point, its height, and its width.
We define exactly the same methods as in Rectangle1, with the same input and output, but with a different internal representation/implementation.
|
class Rectangle2():
"""
Holds a parallel-axes rectangle by storing lower left point, height and width
"""
def __init__(self, point, height, width):
assert isinstance(point, Point)
assert isinstance(height, (int,float))
assert isinstance(width, (int,float))
assert height > 0
assert width > 0
self.point = point
self.height = float(height)
self.width = float(width)
def __repr__(self):
representation = "Rectangle with lower left {0} and upper right {1}"
return representation.format(self.point, Point(self.point.x + self.width, self.point.y + self.height))
def dimensions(self):
return self.height, self.width
def area(self):
area = self.height * self.width
return area
def transpose(self):
self.point = Point(self.point.x - self.height , self.point.y - self.width)
self.height, self.width = self.width, self.height
rec = Rectangle2(Point(), 1, 2)
print(rec)
print("Area:", rec.area())
print("Dimensions:", rec.dimensions())
rec.transpose()
print("Transposed:", rec)
|
Classes.ipynb
|
crowd-course/python
|
mit
|
Use pandas to read the monthly-milk-production.csv file and set index_col='Month'
|
import pandas as pd

data = pd.read_csv("./data/monthly-milk-production.csv", index_col = 'Month')
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/02-Time-Series-Exercise.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Check out the head of the dataframe
|
data.head()
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/02-Time-Series-Exercise.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Make the index a time series by using:
milk.index = pd.to_datetime(milk.index)
|
data.index = pd.to_datetime(data.index)
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/02-Time-Series-Exercise.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Plot out the time series data.
|
data.plot()
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/02-Time-Series-Exercise.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Train Test Split
Let's attempt to predict a year's worth of data (12 months, or 12 steps into the future).
Create a train test split using indexing (hint: use .head(), .tail(), or .iloc[]). We don't want a random train test split; we want the test set to be the last 12 months of data, with everything before it as the training set.
|
data.info()
training_set = data.head(156)
test_set = data.tail(12)
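# An equivalent split using .iloc, as the hint suggests (just an alternative
# sketch): all but the last 12 rows for training, the final 12 rows for testing.
training_set = data.iloc[:-12]
test_set = data.iloc[-12:]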
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/02-Time-Series-Exercise.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Scale the Data
Use sklearn.preprocessing to scale the data using the MinMaxScaler. Remember to only fit_transform on the training data, then transform the test data. You shouldn't fit on the test data as well, otherwise you are assuming you would know about future behavior!
|
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
training_set = scaler.fit_transform(training_set)
test_set_scaled = scaler.transform(test_set)
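# Note (added remark): fit_transform/transform return plain NumPy arrays, so
# training_set no longer carries its DatetimeIndex. This is why the generative
# session later slices the array instead of calling .tail(12).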
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/02-Time-Series-Exercise.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Batch Function
We'll need a function that can feed batches of the training data. We'll need to do several things that are listed out as steps in the comments of the function. Remember to reference the previous batch method from the lecture for hints. Try to fill out the function template below; this is a pretty hard step, so feel free to reference the solutions!
|
import numpy as np

def next_batch(training_data, batch_size, steps):
"""
INPUT: Data, Batch Size, Time Steps per batch
OUTPUT: A tuple of y time series results. y[:,:-1] and y[:,1:]
"""
# STEP 1: Use np.random.randint to set a random starting point index for the batch.
# Remember that each batch needs have the same number of steps in it.
# This means you should limit the starting point to len(data)-steps
random_start = np.random.randint(0, len(training_data) - steps)
# STEP 2: Now that you have a starting index you'll need to index the data from
# the random start to random start + steps + 1. Then reshape this data to be (1,steps+1)
# Create Y data for time series in the batches
y_batch = np.array(training_data[random_start : random_start + steps + 1]).reshape(1, steps+1)
# STEP 3: Return the batches. You'll have two batches to return y[:,:-1] and y[:,1:]
# You'll need to reshape these into tensors for the RNN to .reshape(-1,steps,1)
return y_batch[:, :-1].reshape(-1, steps, 1), y_batch[:, 1:].reshape(-1, steps, 1)
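# A quick shape check (illustrative only): with 12 time steps the batches come
# back as (1, 12, 1) -- one series, 12 steps, 1 feature.
X_example, y_example = next_batch(training_set, batch_size = 1, steps = 12)
print(X_example.shape, y_example.shape)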
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/02-Time-Series-Exercise.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
The Constants
Define the constants in a single cell. You'll need the following (in parentheses are the values I used in my solution, but you can play with some of these):
* Number of Inputs (1)
* Number of Time Steps (12)
* Number of Neurons per Layer (100)
* Number of Outputs (1)
* Learning Rate (0.03)
* Number of Iterations for Training (4000)
* Batch Size (1)
|
num_inputs = 1
num_time_steps = 12
num_neurons = 100
num_outputs = 1
learning_rate = 0.03
num_train_iter = 4000
batch_size = 1
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/02-Time-Series-Exercise.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Now create the RNN layer. You have complete freedom over this: use tf.contrib.rnn and choose anything you want (OutputProjectionWrapper, BasicRNNCell, BasicLSTMCell, MultiRNNCell, GRUCell, etc.). Keep in mind that not every combination will work well! (If in doubt, the solutions used an OutputProjectionWrapper around a basic LSTM cell with relu activation.)
|
import tensorflow as tf

cell = tf.contrib.rnn.OutputProjectionWrapper(tf.contrib.rnn.BasicLSTMCell(num_units = num_neurons, activation = tf.nn.relu), output_size = num_outputs)
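# The loss and session cells below also use X, y and outputs, which are defined
# in the placeholder / dynamic_rnn steps of the original exercise (not shown in
# this excerpt). A minimal sketch of those assumed definitions:
X = tf.placeholder(tf.float32, [None, num_time_steps, num_inputs])
y = tf.placeholder(tf.float32, [None, num_time_steps, num_outputs])
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype = tf.float32)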
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/02-Time-Series-Exercise.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Loss Function and Optimizer
Create a Mean Squared Error loss function and minimize it with an AdamOptimizer; remember to pass in your learning rate.
|
# MSE
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate)
train = optimizer.minimize(loss)
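# The training session below also expects a variable initializer and a Saver
# (assumed from the original exercise; a minimal sketch):
init = tf.global_variables_initializer()
saver = tf.train.Saver()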
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/02-Time-Series-Exercise.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Session
Run a tf.Session that trains on the batches created by your next_batch function. Also add a loss evaluation every 100 training iterations. Remember to save your model after you are done training.
|
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction = 0.75)
with tf.Session(config = tf.ConfigProto(gpu_options = gpu_options)) as sess:
# Run
sess.run(init)
for iteration in range(num_train_iter):
X_batch, Y_batch = next_batch(training_set, batch_size, num_time_steps)
sess.run(train, feed_dict = {X: X_batch, y: Y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict = {X: X_batch, y: Y_batch})
print(iteration, "\tMSE:", mse)
# Save Model for Later
saver.save(sess, "./checkpoints/ex_time_series_model")
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/02-Time-Series-Exercise.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Now we want to attempt to predict these 12 months of data using only the training data we had. To do this we will feed in a seed training instance consisting of the last 12 months of the training set, and predict 12 months into the future. Then we will be able to compare our generated 12 months to the actual true historical values from the test set!
Generative Session
NOTE: Recall that our model is really only trained to predict 1 time step ahead, asking it to generate 12 steps is a big ask, and technically not what it was trained to do! Think of this more as generating new values based off some previous pattern, rather than trying to directly predict the future. You would need to go back to the original model and train the model to predict 12 time steps ahead to really get a higher accuracy on the test data. (Which has its limits due to the smaller size of our data set)
Fill out the session code below to generate 12 months of data based off the last 12 months of data from the training set. The hardest part about this is adjusting the arrays with their shapes and sizes. Reference the lecture for hints.
|
with tf.Session() as sess:
# Use your Saver instance to restore your saved rnn time series model
saver.restore(sess, "./checkpoints/ex_time_series_model")
    # Create a numpy array for your generative seed from the last 12 months of the
    # training set data. (training_set is already a scaled numpy array here, so we slice it instead of using tail(12).)
train_seed = list(training_set[-12:])
    ## Now create a for loop that generates 12 predictions, one time step at a time, feeding each prediction back in as part of the next input window.
for iteration in range(12):
X_batch = np.array(train_seed[-num_time_steps:]).reshape(1, num_time_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
train_seed.append(y_pred[0, -1, 0])
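# To compare the generated values with the true test data (an added sketch,
# not part of the original exercise): undo the MinMax scaling on the 12
# generated points and plot them next to the real last 12 months.
results = scaler.inverse_transform(np.array(train_seed[12:]).reshape(12, 1))
comparison = test_set.copy()
comparison['Generated'] = results.ravel()
comparison.plot()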
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/02-Time-Series-Exercise.ipynb
|
arcyfelix/Courses
|
apache-2.0
|
Kernel Approximations for Large-Scale Non-Linear Learning
Predictions in a kernel SVM are made using the formula
$$
\hat{y} = \alpha_1 y_1 k(\mathbf{x^{(1)}}, \mathbf{x}) + ... + \alpha_n y_n k(\mathbf{x^{(n)}}, \mathbf{x})> 0
$$
$$
0 \leq \alpha_i \leq C
$$
Radial basis function (Gaussian) kernel:
$$k(\mathbf{x}, \mathbf{x'}) = \exp(-\gamma ||\mathbf{x} - \mathbf{x'}||^2)$$
Kernel approximation $\phi$:
$$\phi(\mathbf{x})\phi(\mathbf{x'}) \approx k(\mathbf{x}, \mathbf{x'})$$
$$\hat{y} \approx w^T \phi(\mathbf{x})> 0$$
|
import numpy as np
from helpers import Timer
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split
digits = load_digits()
X, y = digits.data / 16., digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
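# Note (added): sklearn.cross_validation and sklearn.grid_search were later
# deprecated; on recent scikit-learn versions the equivalent import is
# from sklearn.model_selection import train_test_split, GridSearchCV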
|
scikit/Chapter 8/Kernel Approximations.ipynb
|
obulpathi/datascience
|
apache-2.0
|
Linear SVM
|
from sklearn.svm import LinearSVC
from sklearn.grid_search import GridSearchCV
grid = GridSearchCV(LinearSVC(random_state=0),
param_grid={'C': np.logspace(-3, 2, 6)}, cv=5)
with Timer():
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
|
scikit/Chapter 8/Kernel Approximations.ipynb
|
obulpathi/datascience
|
apache-2.0
|
Kernel SVM
|
from sklearn.svm import SVC
from sklearn.grid_search import GridSearchCV
grid = GridSearchCV(SVC(), param_grid={'C': np.logspace(-3, 2, 6),
'gamma': np.logspace(-3, 2, 6)}, cv=5)
with Timer():
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
|
scikit/Chapter 8/Kernel Approximations.ipynb
|
obulpathi/datascience
|
apache-2.0
|
Kernel Approximation + Linear SVM
|
from sklearn.kernel_approximation import RBFSampler
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(RBFSampler(random_state=0),
LinearSVC(dual=False, random_state=0))
grid = GridSearchCV(pipe, param_grid={'linearsvc__C': np.logspace(-3, 2, 6),
'rbfsampler__gamma': np.logspace(-3, 2, 6)}, cv=5)
with Timer():
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
|
scikit/Chapter 8/Kernel Approximations.ipynb
|
obulpathi/datascience
|
apache-2.0
|
Out-of-Core Kernel Approximation
|
import cPickle  # Python 2; on Python 3 use `import pickle` instead
from sklearn.linear_model import SGDClassifier
sgd = SGDClassifier(random_state=0)
for iteration in range(30):
for i in range(9):
X_batch, y_batch = cPickle.load(open("data/batch_%02d.pickle" % i))
sgd.partial_fit(X_batch, y_batch, classes=range(10))
X_test, y_test = cPickle.load(open("data/batch_09.pickle"))
sgd.score(X_test, y_test)
sgd = SGDClassifier(random_state=0)
rbf_sampler = RBFSampler(gamma=.2, random_state=0).fit(np.ones((1, 64)))
for iteration in range(30):
for i in range(9):
X_batch, y_batch = cPickle.load(open("data/batch_%02d.pickle" % i))
X_kernel = rbf_sampler.transform(X_batch)
sgd.partial_fit(X_kernel, y_batch, classes=range(10))
sgd.score(rbf_sampler.transform(X_test), y_test)
|
scikit/Chapter 8/Kernel Approximations.ipynb
|
obulpathi/datascience
|
apache-2.0
|
If we want, we can get a quick summary of the columns using the describe function.
This function shows some mathematical statistics of the dataset, so it will not show anything for the columns that contain string data (i.e. character strings).
You will also notice that it computes statistics over the Gene_Id column, which is clearly wrong.
We are not interested in that variable in that form, but Pandas cannot tell which variables are relevant for our analysis and which are not.
That is where your work begins.
|
disgenet_data.describe()
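# Since the statistics for Gene_Id are meaningless here, one option
# (an illustrative sketch, assuming the column is named 'Gene_Id')
# is to drop it before describing:
disgenet_data.drop(columns=['Gene_Id']).describe()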
|
python/Extras/Pandas_DataFrames/braindiseases.ipynb
|
fifabsas/talleresfifabsas
|
mit
|