Dataset columns: markdown · code · output · license · path · repo_name
https://twitter.com/iitenki_moruten/status/1467683477474930688
# Original: https://github.com/moruten/julia-code/blob/1dafca3e1a4e3b36445c2263e440f6e4056b90aa/2021-12-6-test-no1.ipynb
using Plots
using Random
using Distributions
using QuadGK
#============ Function definitions ============#
function action(x)
    0.5*x*x
end
function deriv_action(x)
    x
end
func...
P(accept) = 0.8680966666666666 <x> = -0.004413428035654437 <xx> = 0.8885940396252682
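The truncated Julia cell above implements a Metropolis sampler for the Gaussian action S(x) = x²/2, producing the acceptance rate and moments printed here. A minimal Python sketch of the same update (the step size, seed, and function names are my assumptions, not the notebook's):

```python
import random
import math

def action(x):
    # Gaussian action S(x) = x^2 / 2, as in the Julia cell above
    return 0.5 * x * x

def metropolis_chain(n_steps, step=1.0, seed=0):
    """Run a 1-D Metropolis chain; return the samples and acceptance rate."""
    rng = random.Random(seed)
    x, n_accept, samples = 0.0, 0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Accept with probability min(1, exp(-dS))
        if rng.random() < math.exp(-(action(x_new) - action(x))):
            x, n_accept = x_new, n_accept + 1
        samples.append(x)
    return samples, n_accept / n_steps

samples, p_accept = metropolis_chain(100_000)
mean = sum(samples) / len(samples)
mean_sq = sum(s * s for s in samples) / len(samples)
```

For the standard Gaussian target, `mean` should be near 0 and `mean_sq` near 1, matching the `<x>` and `<xx>` estimates above up to Monte Carlo error.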
MIT
0025/HMC revised.ipynb
genkuroki/public
Analysis - exp51 - DQN with a conv net. First tuning attempt.
import os
import csv
import numpy as np
import torch as th
from glob import glob
from pprint import pprint

import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set(font_scale=1.5)
sns.set_style('ticks')
matplotlib.rcParams.upd...
MIT
notebooks/wythoff_exp51.ipynb
CoAxLab/azad
Load data
path = "/Users/qualia/Code/azad/data/wythoff/exp51/"
exp_51 = load_data(path, run_index=(0, 99))
print(len(exp_51))
pprint(exp_51[1].keys())
pprint(exp_51[1]['score'][:20])
dict_keys(['file', 'episode', 'loss', 'score']) [0.0851063829787234, 0.07476635514018691, 0.07692307692307693, 0.06923076923076923, 0.06896551724137931, 0.06535947712418301, 0.07317073170731707, 0.06976744186046512, 0.06593406593406594, 0.06030150753768844, 0.06944444444444445, 0.07234042553191489, 0.069387...
Plots. All-parameter summary: how does it look overall? Timecourse.
plt.figure(figsize=(6, 3))
for r, mon in enumerate(exp_51):
    if mon is not None:
        _ = plt.plot(mon['episode'], mon['score'], color='black')
_ = plt.ylim(0, 1)
_ = plt.ylabel("Optimal score")
_ = plt.tight_layout()
_ = plt.xlabel("Episode")
Histograms of final values
data = []
plt.figure(figsize=(6, 3))
for r, mon in enumerate(exp_51):
    if mon is not None:
        data.append(np.max(mon['score']))
_ = plt.hist(data, bins=5, range=(0,1), color='black')
_ = plt.xlabel("Max score")
_ = plt.ylabel("Count")
_ = plt.tight_layout()

data = []
plt.figure(figsize=(6...
03: Generate counts

This script takes a directory of `.csv` files containing entity counts by month in the following format:

```csv
,2012-01,2012-02
meat,1011.0,873.0
salt,805.0,897.0
chicken,694.0,713.0
```

It sums the counts from all files, only keeps the `N` most common records and calculates the variance, scaled by the av...
INPUT_DIR = "./counts"               # directory of counts file(s) created in the previous step
OUTPUT_FILE = "./output_counts.csv"  # path to output file
MOST_COMMON = 10_000                 # number of most common entities to keep
DROP_MOST_FREQUENT = 10              # number of most frequent entities to drop
N_TOTAL...
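A hedged sketch of the summing step the text describes, under the CSV layout shown above; `sum_counts` and the in-memory example files are illustrative helpers, not the script's actual code:

```python
import csv
import io
from collections import Counter

def sum_counts(csv_texts):
    """Sum each entity's counts across all months and all files."""
    totals = Counter()
    for text in csv_texts:
        for row in csv.reader(io.StringIO(text)):
            if not row or row[0] == "":  # skip the header row (",2012-01,...")
                continue
            entity, counts = row[0], row[1:]
            totals[entity] += sum(float(c) for c in counts)
    return totals

example = ",2012-01,2012-02\nmeat,1011.0,873.0\nsalt,805.0,897.0\nchicken,694.0,713.0\n"
totals = sum_counts([example, example])  # two files with identical counts
```

`totals.most_common(MOST_COMMON)` would then give the `N` most common entities to keep.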
MIT
ner-food-ingredients/03_Generate_counts.ipynb
hertelm/projects
Downloads the odc-colab Python module and runs it to setup ODC.
!wget -nc https://raw.githubusercontent.com/ceos-seo/odc-colab/master/odc_colab.py
from odc_colab import odc_colab_init
odc_colab_init(install_odc_gee=True)
Apache-2.0
notebooks/02.09.Colab_Mission_Coincidences.ipynb
gamedaygeorge/odc-colab
Downloads an existing index and populates the new ODC environment with it.
from odc_colab import populate_db
populate_db()
Mission Coincidences

This notebook finds coincident acquisition regions for three missions: Landsat-8, Sentinel-2, and Sentinel-1. Each of these missions has a different orbit and revisit rate, so coincident pairs (two missions at the same location and day) are not that common and coincident triplets (all 3 missions at ...
# Load Data Cube Configuration
from odc_gee import earthengine
dc = earthengine.Datacube(app='Mission_Coincidences')

# Import Utilities
from IPython.display import display_html
from utils.data_cube_utilities.clean_mask import landsat_qa_clean_mask
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
...
Create new functions to display data output and find coincidences

* `display_side_by_side`: A method [found here](https://stackoverflow.com/a/44923103) for displaying Pandas DataFrames next to each other in one output.
* `find_coincidences`: A helper method to return the various intersections of dates for the three prod...
def display_side_by_side(*args, index=True):
    html_str = ''
    for df in args:
        if index:
            html_str += df.to_html()
        else:
            html_str += df.to_html(index=False)
    display_html(html_str.replace('table', 'table style="display:inline"'), raw=True)

def find_coincidence(ls_index, s1_ind...
Analysis parameters

* `latitude`: The latitude extents for the analysis area.
* `longitude`: The longitude extents for the analysis area.
* `time`: The time window for the analysis (Year-Month)
# MODIFY HERE

# Select the center of an analysis region (lat_long)
# Adjust the surrounding box size (box_size) around the center (in degrees)
# Remove the comment tags (#) below to change the sample location

# Barekese Dam, Ghana, Africa
lat_long = (6.846, -1.709)
box_size_deg = 0.05

# Calculate the latitude and l...
Load partial datasets

Load only the dates, coordinates, and scene classification values (if available) for determining cloud coverage.
# Define the product details to load in the next code block
platforms = {'LANDSAT_8': dict(product=f'ls8_google', latitude=latitude, longitude=longitude),
             'SENTINEL-1': dict(product=f's1_google', group_by='solar_day'),
             'SENTINEL-2': dict(product=f's2_google', group_by='solar_day')}

# Load Landsat ...
Cloud Masking

Create cloud masks for the optical data (Landsat-8 and Sentinel-2).
ls_clean_mask = landsat_qa_clean_mask(ls_dataset, platform='LANDSAT_8')
s2_clean_mask = (s2_dataset.scl != 0) & (s2_dataset.scl != 1) & \
               (s2_dataset.scl != 3) & (s2_dataset.scl != 8) & \
               (s2_dataset.scl != 9) & (s2_dataset.scl != 10)
/content/utils/data_cube_utilities/clean_mask.py:278: UserWarning: Please specify a value for `collection`. Assuming data is collection 1. warnings.warn('Please specify a value for `collection`. Assuming data is collection 1.') /content/utils/data_cube_utilities/clean_mask.py:283: UserWarning: Please specify a value ...
Display a table of scenes

Filter optical data by cloud cover.
# MODIFY HERE

# Percent of clean pixels in the optical images.
# The default is 80% which will yield mostly clear scenes
percent_clean = 80

# Display the dates and cloud information for the available scenes
ls_df = pd.DataFrame(list(zip(ls_dataset.time.values.astype('datetime64[D]'), [r...
Coincidences

Find the coincidence dates for the datasets using the filtered data from the previous section.
ls_index = pd.Index(ls_df['Landsat 8 Date'].values)
s2_index = pd.Index(s2_df['Sentinel-2 Date'].values)
s1_index = pd.Index(s1_df['Sentinel-1 Date'].values)

# List the double and triple coincidences
args = [pd.DataFrame(val, columns=[key]) for key, val in find_coincidence(ls_index, s1_index, s2_index).items()]
display...
Plot a single time selection to view the scene details

Select and plot a time from the coincidence results listed above.
# MODIFY HERE

# Select a time from the table above.
time_selection = '2019-01-22'

# Define the plotting bands for each image on the specified date
s1 = s2 = ls = None
if ls_dataset.time.dt.floor('D').isin(np.datetime64(time_selection)).sum():
    ls = dc.load(measurements=['red', 'green', 'blue'], ti...
Discretization

In this notebook, you will deal with continuous state and action spaces by discretizing them. This will enable you to apply reinforcement learning algorithms that are only designed to work with discrete spaces.

1. Import the Necessary Packages
import sys
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Set plotting options
%matplotlib inline
plt.style.use('ggplot')
np.set_printoptions(precision=3, linewidth=120)
MIT
discretization/Discretization_Solution.ipynb
Jeromeschmidt/Udacity-Deep-Reinforcement-Nanodegree
2. Specify the Environment, and Explore the State and Action Spaces

We'll use [OpenAI Gym](https://gym.openai.com/) environments to test and develop our algorithms. These simulate a variety of classic as well as contemporary reinforcement learning tasks. Let's use an environment that has a continuous state space, but ...
# Create an environment and set random seed
env = gym.make('MountainCar-v0')
env.seed(505);
Run the next code cell to watch a random agent.
state = env.reset()
score = 0
for t in range(200):
    action = env.action_space.sample()
    env.render()
    state, reward, done, _ = env.step(action)
    score += reward
    if done:
        break
print('Final score:', score)
env.close()
Final score: -200.0
In this notebook, you will train an agent to perform much better! For now, we can explore the state and action spaces, as well as sample them.
# Explore state (observation) space
print("State space:", env.observation_space)
print("- low:", env.observation_space.low)
print("- high:", env.observation_space.high)

# Generate some samples from the state space
print("State space samples:")
print(np.array([env.observation_space.sample() for i in range(10)]))

# Expl...
Action space: Discrete(3) Action space samples: [1 0 1 2 0 2 0 1 1 2]
3. Discretize the State Space with a Uniform Grid

We will discretize the space using a uniformly-spaced grid. Implement the following function to create such a grid, given the lower bounds (`low`), upper bounds (`high`), and number of desired `bins` along each dimension. It should return the split points for each dimen...
def create_uniform_grid(low, high, bins=(10, 10)):
    """Define a uniformly-spaced grid that can be used to discretize a space.

    Parameters
    ----------
    low : array_like
        Lower bounds for each dimension of the continuous space.
    high : array_like
        Upper bounds for each dimension of the c...
Uniform grid: [<low>, <high>] / <bins> => <splits> [-1.0, 1.0] / 10 => [-0.8 -0.6 -0.4 -0.2 0. 0.2 0.4 0.6 0.8] [-5.0, 5.0] / 10 => [-4. -3. -2. -1. 0. 1. 2. 3. 4.]
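One implementation consistent with the split points printed above: build `bins + 1` uniform edges with `np.linspace` and drop the two endpoints.

```python
import numpy as np

def create_uniform_grid(low, high, bins=(10, 10)):
    """Return the (bins[d] - 1) interior split points of a uniform grid, per dimension."""
    return [np.linspace(low[d], high[d], bins[d] + 1)[1:-1]
            for d in range(len(bins))]

grid = create_uniform_grid([-1.0, -5.0], [1.0, 5.0])
# grid[1] is [-4. -3. -2. -1.  0.  1.  2.  3.  4.], matching the output above
```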
Now write a function that can convert samples from a continuous space into its equivalent discretized representation, given a grid like the one you created above. You can use the [`numpy.digitize()`](https://docs.scipy.org/doc/numpy-1.9.3/reference/generated/numpy.digitize.html) function for this purpose.

Assume the gri...
def discretize(sample, grid):
    """Discretize a sample as per given grid.

    Parameters
    ----------
    sample : array_like
        A single sample from the (original) continuous space.
    grid : list of array_like
        A list of arrays containing split points for each dimension.

    Returns
    ---...
Uniform grid: [<low>, <high>] / <bins> => <splits> [-1.0, 1.0] / 10 => [-0.8 -0.6 -0.4 -0.2 0. 0.2 0.4 0.6 0.8] [-5.0, 5.0] / 10 => [-4. -3. -2. -1. 0. 1. 2. 3. 4.] Samples: array([[-1. , -5. ], [-0.81, -4.1 ], [-0.8 , -4. ], [-0.5 , 0. ], [ 0.2 , -1.9 ], [ 0....
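A matching `discretize` sketch using `numpy.digitize`, assuming the convention that a value below the first split point maps to bin 0:

```python
import numpy as np

def discretize(sample, grid):
    """Map a continuous sample to a list of integer bin indices, one per dimension."""
    return [int(np.digitize(s, g)) for s, g in zip(sample, grid)]

# Same grid as above: interior split points of a 10-bin uniform grid per dimension
grid = [np.linspace(-1.0, 1.0, 11)[1:-1], np.linspace(-5.0, 5.0, 11)[1:-1]]
low_corner = discretize([-1.0, -5.0], grid)   # -> [0, 0]
high_corner = discretize([0.99, 4.99], grid)  # -> [9, 9]
```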
4. Visualization

It might be helpful to visualize the original and discretized samples to get a sense of how much error you are introducing.
import matplotlib.collections as mc

def visualize_samples(samples, discretized_samples, grid, low=None, high=None):
    """Visualize original and discretized samples on a given 2-dimensional grid."""
    fig, ax = plt.subplots(figsize=(10, 10))

    # Show grid
    ax.xaxis.set_major_locator(plt.FixedLocator(grid...
/usr/local/Cellar/jupyterlab/1.2.4/libexec/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3319: FutureWarning: arrays to stack must be passed as a "sequence" type such as list or tuple. Support for non-sequence iterables such as generators is deprecated as of NumPy 1.16 and will raise an error in the futu...
Now that we have a way to discretize a state space, let's apply it to our reinforcement learning environment.
# Create a grid to discretize the state space
state_grid = create_uniform_grid(env.observation_space.low, env.observation_space.high, bins=(10, 10))
state_grid

# Obtain some samples from the space, discretize them, and then visualize them
state_samples = np.array([env.observation_space.sample() for i in range(10)])
dis...
/usr/local/Cellar/jupyterlab/1.2.4/libexec/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3319: FutureWarning: arrays to stack must be passed as a "sequence" type such as list or tuple. Support for non-sequence iterables such as generators is deprecated as of NumPy 1.16 and will raise an error in the futu...
You might notice that if you have enough bins, the discretization doesn't introduce too much error into your representation. So we may be able to now apply a reinforcement learning algorithm (like Q-Learning) that operates on discrete spaces. Give it a shot to see how well it works!

5. Q-Learning

Provided below is a s...
class QLearningAgent:
    """Q-Learning agent that can act on a continuous state space by discretizing it."""

    def __init__(self, env, state_grid, alpha=0.02, gamma=0.99,
                 epsilon=1.0, epsilon_decay_rate=0.9995, min_epsilon=.01, seed=505):
        """Initialize variables, create grid for discretizat...
Environment: <TimeLimit<MountainCarEnv<MountainCar-v0>>> State space size: (10, 10) Action space size: 3 Q table size: (10, 10, 3)
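The core of the (truncated) agent class above is the tabular Q-learning update applied to the discretized (10, 10) state grid. A minimal sketch with the constructor's default `alpha` and `gamma`; the helper name `q_update` is mine, not the class's method name:

```python
import numpy as np

alpha, gamma = 0.02, 0.99
q_table = np.zeros((10, 10, 3))  # (state grid dims) x actions, as printed above

def q_update(state, action, reward, next_state, done):
    """One Q-learning step: move Q(s, a) toward the TD target r + gamma * max_a' Q(s', a')."""
    target = reward if done else reward + gamma * q_table[next_state].max()
    q_table[state][action] += alpha * (target - q_table[state][action])

# One step with reward -1 (as MountainCar gives every step):
q_update((4, 5), 2, -1.0, (4, 6), False)
```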
Let's also define a convenience function to run an agent on a given environment. When calling this function, you can pass in `mode='test'` to tell the agent not to learn.
def run(agent, env, num_episodes=20000, mode='train'):
    """Run agent in given reinforcement learning environment and return scores."""
    scores = []
    max_avg_score = -np.inf
    for i_episode in range(1, num_episodes+1):
        # Initialize episode
        state = env.reset()
        action = agent.reset_episo...
Episode 13900/20000 | Max Average Score: -137.36
The best way to check whether your agent is learning the task is to plot the scores. They should generally increase as the agent goes through more episodes.
# Plot scores obtained per episode
plt.plot(scores);
plt.title("Scores");
If the scores are noisy, it might be difficult to tell whether your agent is actually learning. To find the underlying trend, you may want to plot a rolling mean of the scores. Let's write a convenience function to plot both raw scores as well as a rolling mean.
def plot_scores(scores, rolling_window=100):
    """Plot scores and optional rolling mean using specified window."""
    plt.plot(scores); plt.title("Scores");
    rolling_mean = pd.Series(scores).rolling(rolling_window).mean()
    plt.plot(rolling_mean);
    return rolling_mean

rolling_mean = plot_scores(scores)
You should observe the mean episode scores go up over time. Next, you can freeze learning and run the agent in test mode to see how well it performs.
# Run in test mode and analyze scores obtained
test_scores = run(q_agent, env, num_episodes=100, mode='test')
print("[TEST] Completed {} episodes with avg. score = {}".format(len(test_scores), np.mean(test_scores)))
_ = plot_scores(test_scores)
It's also interesting to look at the final Q-table that is learned by the agent. Note that the Q-table is of size MxNxA, where (M, N) is the size of the state space, and A is the size of the action space. We are interested in the maximum Q-value for each state, and the corresponding (best) action associated with that v...
def plot_q_table(q_table):
    """Visualize max Q-value for each state and corresponding action."""
    q_image = np.max(q_table, axis=2)        # max Q-value for each state
    q_actions = np.argmax(q_table, axis=2)  # best action for each state

    fig, ax = plt.subplots(figsize=(10, 10))
    cax = ax.imshow(q_image,...
6. Modify the Grid

Now it's your turn to play with the grid definition and see what gives you optimal results. Your agent's final performance is likely to get better if you use a finer grid, with more bins per dimension, at the cost of higher model complexity (more parameters to learn).
# TODO: Create a new agent with a different state space grid
state_grid_new = create_uniform_grid(env.observation_space.low, env.observation_space.high, bins=(20, 20))
q_agent_new = QLearningAgent(env, state_grid_new)
q_agent_new.scores = []  # initialize a list to store scores for this agent

# Train it over a desired ...
7. Watch a Smart Agent
state = env.reset()
score = 0
for t in range(200):
    action = q_agent_new.act(state, mode='test')
    env.render()
    state, reward, done, _ = env.step(action)
    score += reward
    if done:
        break
print('Final score:', score)
env.close()
Lecture 08: Basic data analysis

[Download on GitHub](https://github.com/NumEconCopenhagen/lectures-2021) · [Run on Binder](https://mybinder.org/v2/gh/NumEconCopenhagen/lectures-2021/master?urlpath=lab/tree/08/Basic_data_analysis.ipynb)

1. [Combining datasets (merging and concatenating)](Combining-datasets-(merging-and-concatenating))...
import numpy as np
import pandas as pd
import datetime
import pandas_datareader  # install with `pip install pandas-datareader`
import pydst  # install with `pip install git+https://github.com/elben10/pydst`
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
from matplotlib_venn import venn2  # `pip inst...
C:\Users\gmf123\Anaconda3\envs\new\lib\site-packages\pandas_datareader\compat\__init__.py:7: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead. from pandas.util.testing import assert_frame_equal
MIT
web/08/Basic_data_analysis.ipynb
Jovansam/lectures-2021
1. Combining datasets (merging and concatenating)

When **combining datasets** there are a few crucial concepts:

1. **Concatenate (append)**: "stack" rows (observations) on top of each other. This works if the datasets have the same columns (variables).
2. **Merge**: the two datasets have different variables, but may or...
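The two concepts can be illustrated with toy frames (the frames and values here are made up for illustration):

```python
import pandas as pd

A = pd.DataFrame({'municipality': ['Lejre', 'Roskilde'], 'e': [0.74, 0.71]})
B = pd.DataFrame({'municipality': ['Aarhus'], 'e': [0.70]})
stacked = pd.concat([A, B])  # same columns: stack the rows

inc = pd.DataFrame({'municipality': ['Lejre', 'Aarhus'], 'inc': [310, 295]})
# different variables: match rows on the merge key(s)
merged = pd.merge(A, inc, on='municipality', how='inner')
```

`stacked` has 3 rows (concatenation); `merged` keeps only the key values present in both frames.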
empl = pd.read_csv('../07/data/RAS200_long.csv')  # .. -> means one folder up
inc = pd.read_csv('../07/data/INDKP107_long.csv')
area = pd.read_csv('../07/data/area.csv')
1.1 Concatenating datasets

Suppose we have two datasets that have the same variables and we just want to concatenate them.
empl.head(5)

N = empl.shape[0]
A = empl.loc[empl.index < N/2, :]   # first half of observations
B = empl.loc[empl.index >= N/2, :]  # second half of observations
print(f'A has shape {A.shape} ')
print(f'B has shape {B.shape} ')
A has shape (495, 3) B has shape (495, 3)
**Concatenation** is done using the command `pd.concat([df1, df2])`.
C = pd.concat([A,B])
print(f'C has shape {C.shape} (same as the original empl, {empl.shape})')
C has shape (990, 3) (same as the original empl, (990, 3))
1.2 Merging datasets

Two datasets with **different variables**: `empl` and `inc`.

**Central command:** `pd.merge(empl, inc, on=['municipality','year'], how=METHOD)`.

1. The keyword `on` specifies the **merge key(s)**. They uniquely identify observations in both datasets (for sure in at least one of them).
2. The keywor...
print(f'Years in empl: {empl.year.unique()}')
print(f'Municipalities in empl = {len(empl.municipality.unique())}')
print(f'Years in inc: {inc.year.unique()}')
print(f'Municipalities in inc = {len(inc.municipality.unique())}')
Years in empl: [2008 2009 2010 2011 2012 2013 2014 2015 2016 2017] Municipalities in empl = 99 Years in inc: [2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017] Municipalities in inc = 98
**Find differences:**
diff_y = [y for y in inc.year.unique() if y not in empl.year.unique()]
print(f'years in inc data, but not in empl data: {diff_y}')

diff_m = [m for m in empl.municipality.unique() if m not in inc.municipality.unique()]
print(f'municipalities in empl data, but not in inc data: {diff_m}')
years in inc data, but not in empl data: [2004, 2005, 2006, 2007] municipalities in empl data, but not in inc data: ['Christiansø']
**Conclusion:** `inc` has more years than `empl`, but `empl` has one municipality that is not in `inc`.
plt.figure()
v = venn2(subsets = (4, 4, 10), set_labels = ('empl', 'inc'))
v.get_label_by_id('100').set_text('Christiansø')
v.get_label_by_id('010').set_text('2004-07')
v.get_label_by_id('110').set_text('common observations')
plt.show()
Outer join: union
plt.figure()
v = venn2(subsets = (4, 4, 10), set_labels = ('empl', 'inc'))
v.get_label_by_id('100').set_text('included')
v.get_label_by_id('010').set_text('included')
v.get_label_by_id('110').set_text('included')
plt.title('outer join')
plt.show()

outer = pd.merge(empl,inc,on=['municipality','year'],how='outer')
print...
Number of municipalities = 99 Number of years = 14
We see that the **outer join** includes rows that exist in either dataframe and therefore includes missing values:
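A toy version of this point, using the `indicator=True` option (not used in the lecture itself) to show which frame each row came from; the data are made up:

```python
import pandas as pd

empl = pd.DataFrame({'municipality': ['Lejre'], 'year': [2008], 'e': [0.74]})
inc = pd.DataFrame({'municipality': ['Lejre', 'Lejre'],
                    'year': [2007, 2008], 'inc': [300, 310]})
outer = pd.merge(empl, inc, on=['municipality', 'year'], how='outer', indicator=True)
# the 2007 row exists only in inc, so its 'e' value is missing (NaN)
```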
I = (outer.year.isin(diff_y)) | (outer.municipality.isin(diff_m))
outer.loc[I, :].head(15)
Inner join
plt.figure()
v = venn2(subsets = (4, 4, 10), set_labels = ('empl', 'inc'))
v.get_label_by_id('100').set_text('dropped'); v.get_patch_by_id('100').set_alpha(0.05)
v.get_label_by_id('010').set_text('dropped'); v.get_patch_by_id('010').set_alpha(0.05)
v.get_label_by_id('110').set_text('included')
plt.title('inner join')
p...
Number of municipalities = 98 Number of years = 10
We see that the **inner join** does not contain any rows that are not in both datasets.
I = (inner.year.isin(diff_y)) | (inner.municipality.isin(diff_m))
inner.loc[I, :].head(15)
Left join

In my work, I most frequently use the **left join**. It is also known as a *many-to-one* join.

* **Left dataset:** `inner`, many observations of a given municipality (one per year).
* **Right dataset:** `area`, at most one observation per municipality and a new variable (km2).
inner_with_area = pd.merge(inner, area, on='municipality', how='left')
inner_with_area.head(5)
print(f'inner has shape {inner.shape}')
print(f'area has shape {area.shape}')
print(f'merge result has shape {inner_with_area.shape}')

plt.figure()
v = venn2(subsets = (4, 4, 10), set_labels = ('inner', 'area'))
v.get_label_b...
**Intermezzo:** Finding the non-overlapping observations
not_in_area = [m for m in inner.municipality.unique() if m not in area.municipality.unique()]
not_in_inner = [m for m in area.municipality.unique() if m not in inner.municipality.unique()]

print(f'There are {len(not_in_area)} municipalities in inner that are not in area. They are:')
print(not_in_area)
print('')
print...
There are 0 municipalities in inner that are not in area. They are: [] There is 1 municipalities in area that are not in inner. They are: ['Christiansø']
**Check that km2 is never missing:**
inner_with_area.km2.isnull().mean()
Alternative function for left joins: `df.join()`

To use the left join function `df.join()`, we must first set the index. Technically, we do not need to here, but if you ever need to join on more than one variable, `df.join()` requires you to work with indices, so we might as well learn it now.
inner.set_index('municipality', inplace=True)
area.set_index('municipality', inplace=True)
final = inner.join(area)
print(f'final has shape: {final.shape}')
final.head(5)
final has shape: (980, 4)
1.3 Other programming languages

**SQL** (including SAS *proc sql*)

SQL is one of the most powerful database languages and many other programming languages embed a version of it. For example, SAS has `proc sql`, where you can use SQL syntax. SQL is written in statements such as:

* **left join**: `select * from em...
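The SQL left-join statement form can be tried directly with Python's built-in `sqlite3`; the tables and values here are illustrative, echoing the pandas example above:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE empl (municipality TEXT, e REAL)')
con.execute('CREATE TABLE area (municipality TEXT, km2 REAL)')
con.executemany('INSERT INTO empl VALUES (?, ?)',
                [('Lejre', 0.74), ('Roskilde', 0.71)])
con.execute('INSERT INTO area VALUES (?, ?)', ('Lejre', 240.0))

# left join: keep every empl row, attach km2 where a match exists
rows = con.execute('SELECT empl.*, area.km2 FROM empl '
                   'LEFT JOIN area USING (municipality)').fetchall()
# Roskilde has no match in area, so its km2 comes back as NULL (None)
```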
Dst = pydst.Dst(lang='en') # setup data loader with the language 'english'
Data from DST are organized into:

1. **Subjects:** indexed by numbers. Use `Dst.get_subjects()` to see the list.
2. **Tables:** with names like "INDKP107". Use `Dst.get_tables(subjects=['X'])` to see all tables in a subject.

**Data is extracted** with `Dst.get_data(table_id = 'NAME', variables = DICT)`.

**Subjects:**...
Dst.get_subjects()
**Tables:** With `get_tables()`, we can list all tables under a subject.
tables = Dst.get_tables(subjects=['04'])
tables
**Variable in a dataset:**
tables[tables.id == 'INDKP107']
indk_vars = Dst.get_variables(table_id='INDKP107')
indk_vars
**Values of variable in a dataset:**
indk_vars = Dst.get_variables(table_id='INDKP107')
for id in ['ENHED','KOEN','UDDNIV','INDKOMSTTYPE']:
    print(id)
    values = indk_vars.loc[indk_vars.id == id, ['values']].values[0,0]
    for value in values:
        print(f' id = {value["id"]}, text = {value["text"]}')
ENHED id = 101, text = People with type of income (number) id = 110, text = Amount of income (DKK 1.000) id = 116, text = Average income for all people (DKK) id = 121, text = Average income for people with type of income (DKK) KOEN id = MOK, text = Men and women, total id = M, text = Men id = K, text = Women UDD...
**Get data:**
variables = {'OMRÅDE':['*'],'ENHED':['110'],'KOEN':['M','K'],'TID':['*'],'UDDNIV':['65'],'INDKOMSTTYPE':['100']}
inc_api = Dst.get_data(table_id = 'INDKP107', variables=variables)
inc_api.head(5)
2.2 FRED (Federal Reserve Economic Data)

**GDP data** for the US:
start = datetime.datetime(2005,1,1)
end = datetime.datetime(2017,1,1)
gdp = pandas_datareader.data.DataReader('GDP', 'fred', start, end)
gdp.head(10)
**Finding data:**

1. go to https://fred.stlouisfed.org
2. search for employment
3. click first link
4. table name is next to header

**Fetch:**
empl_us = pandas_datareader.data.DataReader('PAYEMS', 'fred', datetime.datetime(1939,1,1), datetime.datetime(2018,12,1))
**Plot:**
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
empl_us.plot(ax=ax)
ax.legend(frameon=True)
ax.set_xlabel('')
ax.set_ylabel('employment (US)');
2.3 World Bank indicators: `wb`

**Finding data:**

1. go to https://data.worldbank.org/indicator/
2. search for GDP
3. variable name ("NY.GDP.PCAP.KD") is in the URL

**Fetch GDP:**
from pandas_datareader import wb
wb_gdp = wb.download(indicator='NY.GDP.PCAP.KD', country=['SE','DK','NO'], start=1990, end=2017)
wb_gdp = wb_gdp.rename(columns = {'NY.GDP.PCAP.KD':'GDP'})
wb_gdp = wb_gdp.reset_index()
wb_gdp.head(5)
wb_gdp.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 84 entries, 0 to 83 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 country 84 non-null object 1 year 84 non-null object 2 GDP 84 non-null float64 dtypes: float64(1), object(2...
**Problem:** Unfortunately, the dataframe has stored the variable year as an "object", which in practice means it is a string. Country is an object because it is a string, and that cannot be helped. Fortunately, GDP is a float (i.e. a number). Let's convert year to an integer:
wb_gdp.year = wb_gdp.year.astype(int)
wb_gdp.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 84 entries, 0 to 83 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 country 84 non-null object 1 year 84 non-null int32 2 GDP 84 non-null float64 dtypes: float64(1), int32(1)...
**Fetch employment-to-population ratio:**
wb_empl = wb.download(indicator='SL.EMP.TOTL.SP.ZS', country=['SE','DK','NO'], start=1990, end=2017)
wb_empl.rename(columns = {'SL.EMP.TOTL.SP.ZS':'employment_to_pop'}, inplace=True)
wb_empl.reset_index(inplace = True)
wb_empl.year = wb_empl.year.astype(int)
wb_empl.head(3)
**Merge:**
wb = pd.merge(wb_gdp, wb_empl, how='outer', on = ['country','year']);
wb.head(5)
3. Split-apply-combine

One of the most useful skills to learn is **the split-apply-combine process**. For example, we may want to compute the average employment rate within a municipality over time and calculate whether the employment rate in each year is above or below the average. We calculate this variable using a ...
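The within-mean variable described here can also be built in one pass with `groupby().transform`; the toy data below are made up, and the lecture itself constructs the variable step by step in section 3.2:

```python
import pandas as pd

df = pd.DataFrame({'municipality': ['A', 'A', 'B', 'B'],
                   'year': [2016, 2017, 2016, 2017],
                   'e': [0.70, 0.74, 0.60, 0.64]})
# split by municipality, apply the mean, combine it back onto every row
df['e_mean'] = df.groupby('municipality')['e'].transform('mean')
df['diff'] = df['e'] - df['e_mean']
```

`transform` returns a result aligned with the original index, so no join step is needed.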
empl = empl.sort_values(['municipality','year'])  # sort by municipality first, then year
empl.head(5)
_____no_output_____
MIT
web/08/Basic_data_analysis.ipynb
Jovansam/lectures-2021
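The split-apply-combine steps just described can be sketched on a tiny hypothetical frame: split on municipality, apply the group mean, and combine the result back onto the original rows with `transform`.

```python
import pandas as pd

# Hypothetical employment rates for two municipalities over two years.
df = pd.DataFrame({'municipality': ['A', 'A', 'B', 'B'],
                   'year': [2010, 2011, 2010, 2011],
                   'e': [0.70, 0.80, 0.60, 0.70]})

# Split on municipality, apply the mean within each group,
# combine the result back row-aligned with the original frame.
df['e_mean'] = df.groupby('municipality')['e'].transform('mean')
df['above'] = df['e'] > df['e_mean']  # above/below the within-municipality average
```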
Use **groupby** to calculate **within means**:
empl.groupby('municipality')['e'].mean().head(5)
_____no_output_____
MIT
web/08/Basic_data_analysis.ipynb
Jovansam/lectures-2021
**Custom functions** can be specified by using the `lambda` notation. E.g., average change:
empl.groupby('municipality')['e'].apply(lambda x: x.diff(1).mean()).head(5)
_____no_output_____
MIT
web/08/Basic_data_analysis.ipynb
Jovansam/lectures-2021
Or:
myfun = lambda x: np.mean(x[1:]-x[:-1]) empl.groupby('municipality')['e'].apply(lambda x: myfun(x.values)).head(5)
_____no_output_____
MIT
web/08/Basic_data_analysis.ipynb
Jovansam/lectures-2021
**Plot statistics**: Dispersion in employment rate across Danish municipalities over time.
fig = plt.figure() ax = fig.add_subplot(1,1,1) empl.groupby('year')['e'].std().plot(ax=ax,style='-o') ax.set_ylabel('std. dev.') ax.set_title('std. dev. across municipalities in the employment rate');
_____no_output_____
MIT
web/08/Basic_data_analysis.ipynb
Jovansam/lectures-2021
3.2 Split-Apply-Combine **Goal:** Calculate within municipality difference to mean employment rate. **1. Split**:
e_grouped = empl.groupby('municipality')['e']
_____no_output_____
MIT
web/08/Basic_data_analysis.ipynb
Jovansam/lectures-2021
**2. Apply:**
e_mean = e_grouped.mean() # mean employment rate e_mean.head(10)
_____no_output_____
MIT
web/08/Basic_data_analysis.ipynb
Jovansam/lectures-2021
Change name of series:
e_mean.name = 'e_mean' # necessary for join
_____no_output_____
MIT
web/08/Basic_data_analysis.ipynb
Jovansam/lectures-2021
**3. Combine:**
empl_ = empl.set_index('municipality').join(e_mean, how='left') empl_['diff'] = empl_.e - empl_.e_mean empl_.xs('Copenhagen')
_____no_output_____
MIT
web/08/Basic_data_analysis.ipynb
Jovansam/lectures-2021
**Plot:**
municipalities = ['Copenhagen','Roskilde','Lejre'] fig = plt.figure() ax = fig.add_subplot(1,1,1) for m in municipalities: empl_.xs(m).plot(x='year',y='diff',ax=ax,label=m) ax.legend(frameon=True) ax.set_ylabel('difference to mean')
_____no_output_____
MIT
web/08/Basic_data_analysis.ipynb
Jovansam/lectures-2021
with `agg()` **Agg:** returns the same value for all observations in a group.
empl_ = empl.copy() # a. split-apply e_mean = empl_.groupby('municipality')['e'].agg(lambda x: x.mean()) e_mean.name = 'e_mean' # b. combine empl_ = empl_.set_index('municipality').join(e_mean, how='left') empl_['diff'] = empl_.e - empl_.e_mean empl_.xs('Copenhagen')
_____no_output_____
MIT
web/08/Basic_data_analysis.ipynb
Jovansam/lectures-2021
**Note:** Same result! with `transform()` **Transform:** returns different values across observations in a group.
empl_ = empl.copy() empl_['diff'] = empl_.groupby('municipality')['e'].transform(lambda x: x - x.mean()) empl_.set_index('municipality').xs('Copenhagen')
_____no_output_____
MIT
web/08/Basic_data_analysis.ipynb
Jovansam/lectures-2021
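The contrast between the two can be sketched on a toy frame: `agg` returns one value per group (indexed by the group labels), while `transform` returns one value per original row, which is why its result can be assigned back to the frame directly.

```python
import pandas as pd

df = pd.DataFrame({'g': ['A', 'A', 'B'], 'x': [1.0, 3.0, 5.0]})

# agg: one value per group, index = group labels
means = df.groupby('g')['x'].agg('mean')  # A -> 2.0, B -> 5.0

# transform: one value per row, index = original index
demeaned = df.groupby('g')['x'].transform(lambda s: s - s.mean())
```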
Advanced agromanagement with PCSE/WOFOSTThis notebook will demonstrate how to implement advanced agromanagement options with PCSE/WOFOST.Allard de Wit, April 2018For the example we will assume that data files are in the data directory within the directory where this notebook is located. This will be the case if you d...
%matplotlib inline import os, sys import matplotlib matplotlib.style.use("ggplot") import matplotlib.pyplot as plt import pandas as pd import yaml import pcse from pcse.models import Wofost71_WLP_FD from pcse.fileinput import CABOFileReader, YAMLCropDataProvider from pcse.db import NASAPowerWeatherDataProvider from p...
This notebook was built with: python version: 3.7.5 (default, Oct 31 2019, 15:18:51) [MSC v.1916 64 bit (AMD64)] PCSE version: 5.4.2
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
Input requirementsFor running the PCSE/WOFOST (and PCSE models in general), you need three types of inputs:1. Model parameters that parameterize the different model components. These parameters usually consist of a set of crop parameters (or multiple sets in case of crop rotations), a set of soil parameters and a ...
crop = YAMLCropDataProvider() soil = CABOFileReader(os.path.join(data_dir, "soil", "ec3.soil")) site = WOFOST71SiteDataProvider(WAV=100,CO2=360) parameterprovider = ParameterProvider(soildata=soil, cropdata=crop, sitedata=site)
_____no_output_____
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
Reading weather dataFor reading weather data we will use the NASAPowerWeatherDataProvider.
from pcse.fileinput import ExcelWeatherDataProvider weatherfile = os.path.join(data_dir, 'meteo', 'nl1.xlsx') weatherdataprovider = ExcelWeatherDataProvider(weatherfile)
_____no_output_____
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
Defining agromanagement with timed eventsDefining agromanagement needs a bit more explanation because agromanagement is a relativelycomplex piece of PCSE. The agromanagement definition for PCSE is written in a format called `YAML` and for a thorough discusion have a look at the [Section on Agromanagement](https://pcse...
yaml_agro = """ - 2006-01-01: CropCalendar: crop_name: sugarbeet variety_name: Sugarbeet_603 crop_start_date: 2006-03-31 crop_start_type: emergence crop_end_date: 2006-10-20 crop_end_type: harvest max_duration: 300 TimedEvents: - event_signal: irriga...
_____no_output_____
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
Starting and running the WOFOSTWe have now all parameters, weather data and agromanagement information available to start WOFOST and make a simulation.
wofost = Wofost71_WLP_FD(parameterprovider, weatherdataprovider, agromanagement) wofost.run_till_terminate()
_____no_output_____
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
Getting and visualizing resultsNext, we can easily get the output from the model using the get_output() method and turn it into a pandas DataFrame:
output = wofost.get_output() df = pd.DataFrame(output).set_index("day") df.tail()
_____no_output_____
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
Finally, we can visualize the results from the pandas DataFrame with a few commands:
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(16,8)) df['LAI'].plot(ax=axes[0], title="Leaf Area Index") df['SM'].plot(ax=axes[1], title="Root zone soil moisture") fig.autofmt_xdate()
_____no_output_____
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
Defining agromanagement with state events Connecting events to development stagesIt is also possible to connect irrigation events to state variables instead of dates. A logical approach is to connect an irrigation event to a development stage instead of a date; in this way, changes in the sowing date will be automatical...
yaml_agro = """ - 2006-01-01: CropCalendar: crop_name: sugarbeet variety_name: Sugarbeet_603 crop_start_date: 2006-03-31 crop_start_type: emergence crop_end_date: 2006-10-20 crop_end_type: harvest max_duration: 300 TimedEvents: null StateEvents: - ...
_____no_output_____
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
Again we run the model with all inputs but a changed agromanagement and plot the results
wofost2 = Wofost71_WLP_FD(parameterprovider, weatherdataprovider, agromanagement) wofost2.run_till_terminate() output2 = wofost2.get_output() df2 = pd.DataFrame(output2).set_index("day") fig2, axes2 = plt.subplots(nrows=1, ncols=2, figsize=(16,8)) df2['LAI'].plot(ax=axes2[0], title="Leaf Area Index") df2['SM'].plot(ax=...
_____no_output_____
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
Connecting events to soil moisture levelsThe logical approach is to connect irrigation events to stress levels that are experienced by the crop. In this case we connect the irrigation event to the state variable soil moisture (SM) and define the agromanagement like this: Version: 1.0 AgroManagement: - 2006-...
yaml_agro = """ - 2006-01-01: CropCalendar: crop_name: sugarbeet variety_name: Sugarbeet_603 crop_start_date: 2006-03-31 crop_start_type: emergence crop_end_date: 2006-10-20 crop_end_type: harvest max_duration: 300 TimedEvents: null StateEvents: - ...
_____no_output_____
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
Showing the differences in irrigation events We combine the `SM` column from the different data frames in a new dataframe and plot the results to see the effect of the differences in agromanagement.
df_all = pd.DataFrame({"by_date": df.SM, "by_DVS": df2.SM, "by_SM": df3.SM}, index=df.index) fig4, axes4 = plt.subplots(nrows=1, ncols=1, figsize=(14,12)) df_all.plot(ax=axes4, title="Differences in irrigation approach") axes4.set_ylabel("soil moisture") fig4.au...
_____no_output_____
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
Adjusting the sowing date with the AgroManager and making multiple runs The most straightforward way of adjusting the sowing date is by editing the crop management definition in YAML format directly. Here we put a placeholder `{crop_start_date}` at the point where the crop s...
agromanagement_yaml = """ - 2006-01-01: CropCalendar: crop_name: sugarbeet variety_name: Sugarbeet_603 crop_start_date: {crop_start_date} crop_start_type: emergence crop_end_date: 2006-10-20 crop_end_type: harvest max_duration: 300 TimedEvents: null St...
_____no_output_____
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
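The placeholder mechanism itself is just Python string formatting; a minimal sketch (with the template reduced to the one line that varies) looks like this — `str.format` converts the `datetime.date` to its ISO string when substituting.

```python
import datetime as dt

# Reduced hypothetical template: only the line that varies between runs.
template = "crop_start_date: {crop_start_date}"

start = dt.date(2006, 3, 1) + dt.timedelta(days=10)
filled = template.format(crop_start_date=start)
# filled == "crop_start_date: 2006-03-11"
```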
The main loop for making several WOFOST runs
import datetime as dt sdate = dt.date(2006,3,1) step = 10 # Loop over six different start dates results = [] for i in range(6): # get new start date csdate = sdate + dt.timedelta(days=i*step) # update agromanagement with new start date and load it with yaml.load tmp = agromanagement_yaml.format(crop_st...
_____no_output_____
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
Plot the results for the different runs and variables
colors = ['k','r','g','b','m','y'] fig5, axes5 = plt.subplots(nrows=6, ncols=2, figsize=(16,30)) for c, df in zip(colors, results): for key, axis in zip(df.columns, axes5.flatten()): df[key].plot(ax=axis, title=key, color=c) fig5.autofmt_xdate()
_____no_output_____
MIT
06_advanced_agromanagement_with_PCSE.ipynb
albertdaniell/wofost_kalro
Wind Turbine combined with Heat Pump and Water TankIn this example the heat demand is supplied by a wind turbine in combination with a heat pump and a water tank that stores hot water with a standing loss.
import pypsa import pandas as pd from pyomo.environ import Constraint network = pypsa.Network() network.set_snapshots(pd.date_range("2016-01-01 00:00","2016-01-01 03:00", freq="H")) network.add("Bus", "0", carrier="AC") network.add("Bus", "0 heat", carrier="heat") network.add("Carrier", "wind") network.add("Carrier",...
_____no_output_____
MIT
examples/notebooks/power-to-heat-water-tank.ipynb
martacki/PyPSA
Replication of Carneiro, Heckman, & Vytlacil's (2011) *Local Instrumental Variables* approach In this notebook, I reproduce the semiparametric results from> Carneiro, P., Heckman, J. J., & Vytlacil, E. J. (2011). [Estimating marginal returns to education.](https://pubs.aeaweb.org/doi/pdfplus/10.1257/aer.101.6.2754) *A...
import numpy as np from tutorial_semipar_auxiliary import plot_semipar_mte from grmpy.estimate.estimate import fit import warnings warnings.filterwarnings('ignore') %load_ext autoreload %autoreload 2
_____no_output_____
MIT
promotion/grmpy_tutorial_notebook/tutorial_semipar_notebook.ipynb
OpenSourceEconomics/grmpy
1) The LIV Framework The method of Local Instrumental Variables (LIV) is based on the generalized Roy model, which is characterized by the following equations: \begin{align*} &\textbf{Potential Outcomes} & & \textbf{Choice} &\\ & Y_1 = \beta_1 X + U_{1} & & I = Z \gamma - V &\\ & Y_0 = \beta_0 X + U_{0} & & D_i = \...
%%file files/tutorial_semipar.yml --- ESTIMATION: file: data/aer-replication-mock.pkl dependent: wage indicator: state semipar: True show_output: True logit: True nbins: 30 bandwidth: 0.322 gridsize: 500 trim_support: True reestimate_p: False rbandwidth: 0.05 derivati...
Overwriting files/tutorial_semipar.yml
MIT
promotion/grmpy_tutorial_notebook/tutorial_semipar_notebook.ipynb
OpenSourceEconomics/grmpy
Note that I do not include a constant in the __TREATED, UNTREATED__ section. The reason for this is that in the semiparametric setup, $\beta_1$ and $\beta_0$ are determined by running a Double Residual Regression without an intercept: $$ e_Y =e_X \beta_0 \ + \ e_{X \ \times \ p} (\beta_1 - \beta_0) \ + \ \epsilon $$ ...
rslt = fit('files/tutorial_semipar.yml', semipar=True)
Logit Regression Results ============================================================================== Dep. Variable: y No. Observations: 1747 Model: Logit Df Residuals: 1717 Meth...
MIT
promotion/grmpy_tutorial_notebook/tutorial_semipar_notebook.ipynb
OpenSourceEconomics/grmpy
The rslt dictionary contains information on the estimated parameters and the final MTE.
list(rslt)
_____no_output_____
MIT
promotion/grmpy_tutorial_notebook/tutorial_semipar_notebook.ipynb
OpenSourceEconomics/grmpy
Before plotting the MTE, let's see what else we can learn.For instance, we can account for the variation in $X$. Note that we divide the MTE by 4 to investigate the effect of one additional year of college education.
np.min(rslt['mte_min']) / 4, np.max(rslt['mte_max']) / 4
_____no_output_____
MIT
promotion/grmpy_tutorial_notebook/tutorial_semipar_notebook.ipynb
OpenSourceEconomics/grmpy
Next we plot the MTE based on the estimation results. As shown in the figure below, the replicated MTE gets very close to the original, but its 90 percent confidence bands are wider. This is due to the use of a mock data set which merges basic and local variables randomly. The bootstrap method, which is used to estimate...
mte, quantiles = plot_semipar_mte(rslt, 'files/tutorial_semipar.yml', nbootstraps=250)
_____no_output_____
MIT
promotion/grmpy_tutorial_notebook/tutorial_semipar_notebook.ipynb
OpenSourceEconomics/grmpy
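The bootstrap idea behind those confidence bands can be sketched generically (this is not the grmpy implementation): resample the data with replacement many times, recompute the statistic on each resample, and take percentiles of the resulting distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=1.0, scale=0.5, size=200)  # stand-in data

# Recompute the statistic of interest (here: the mean) on each resample.
boot_means = [rng.choice(sample, size=sample.size, replace=True).mean()
              for _ in range(250)]

# 90 percent confidence band from the 5th and 95th percentiles.
lo, hi = np.percentile(boot_means, [5, 95])
```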
Multivariate Logistic Regression Demo_Source: 🤖[Homemade Machine Learning](https://github.com/trekhleb/homemade-machine-learning) repository_> ☝Before moving on with this demo you might want to take a look at:> - 📗[Math behind the Logistic Regression](https://github.com/trekhleb/homemade-machine-learning/tree/master...
# To make debugging of logistic_regression module easier we enable imported modules autoreloading feature. # By doing this you may change the code of logistic_regression library and all these changes will be available here. %load_ext autoreload %autoreload 2 # Add project root folder to module loading paths. import sy...
_____no_output_____
MIT
notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb
pugnator-12/homemade-machine-learning
Import Dependencies- [pandas](https://pandas.pydata.org/) - library that we will use for loading and displaying the data in a table- [numpy](http://www.numpy.org/) - library that we will use for linear algebra operations- [matplotlib](https://matplotlib.org/) - library that we will use for plotting the data- [math](ht...
# Import 3rd party dependencies. import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.image as mpimg import math # Import custom logistic regression implementation. from homemade.logistic_regression import LogisticRegression
_____no_output_____
MIT
notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb
pugnator-12/homemade-machine-learning
Load the DataIn this demo we will be using a sample of the [MNIST dataset in CSV format](https://www.kaggle.com/oddrationale/mnist-in-csv/home). Instead of using the full dataset with 60000 training examples, we will use a reduced dataset of just 10000 examples, which we will also split into training and testing sets. Each row in the...
# Load the data. data = pd.read_csv('../../data/mnist-demo.csv') # Print the data table. data.head(10)
_____no_output_____
MIT
notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb
pugnator-12/homemade-machine-learning
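The row layout described above (first column the label, remaining 784 columns the pixels) means each image can be recovered with a single reshape; a sketch on a hypothetical row:

```python
import numpy as np

# Hypothetical row mimicking the CSV layout: label followed by 784 pixel values.
row = np.zeros(785)
row[0] = 7                       # label
row[1:] = np.arange(784) % 256   # fake pixel intensities in [0, 255]

label = int(row[0])
image = row[1:].reshape(28, 28)  # 28x28 grayscale image
```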
Plot the DataLet's peek first 25 rows of the dataset and display them as an images to have an example of digits we will be working with.
# How many numbers to display. numbers_to_display = 25 # Calculate the number of cells that will hold all the numbers. num_cells = math.ceil(math.sqrt(numbers_to_display)) # Make the plot a little bit bigger than default one. plt.figure(figsize=(10, 10)) # Go through the first numbers in a training set and plot them...
_____no_output_____
MIT
notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb
pugnator-12/homemade-machine-learning
Split the Data Into Training and Test SetsIn this step we will split our dataset into _training_ and _testing_ subsets (in proportion 80/20%). The training dataset will be used for training the model. The testing dataset will be used for validating the model. All data in the testing dataset will be new to the model, and we may...
# Split data set on training and test sets with proportions 80/20. # Function sample() returns a random sample of items. pd_train_data = data.sample(frac=0.8) pd_test_data = data.drop(pd_train_data.index) # Convert training and testing data from Pandas to NumPy format. train_data = pd_train_data.values test_data = pd_...
_____no_output_____
MIT
notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb
pugnator-12/homemade-machine-learning
Init and Train Logistic Regression Model> ☝🏻This is the place where you might want to play with model configuration.- `polynomial_degree` - this parameter will allow you to add additional polynomial features of a certain degree. The more features, the more curved the decision boundary will be.- `max_iterations` - this is the maximum numbe...
# Set up linear regression parameters. max_iterations = 10000 # Max number of gradient descent iterations. regularization_param = 10 # Helps to fight model overfitting. polynomial_degree = 0 # The degree of additional polynomial features. sinusoid_degree = 0 # The degree of sinusoid parameter multipliers of additio...
_____no_output_____
MIT
notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb
pugnator-12/homemade-machine-learning
Print Training ResultsLet's see what the model parameters (thetas) look like. For each digit class (from 0 to 9) we've just trained a set of 784 parameters (one theta per image pixel). These parameters represent the importance of each pixel for recognizing the specific digit.
# Print thetas table. pd.DataFrame(thetas) # How many numbers to display. numbers_to_display = 9 # Calculate the number of cells that will hold all the numbers. num_cells = math.ceil(math.sqrt(numbers_to_display)) # Make the plot a little bit bigger than default one. plt.figure(figsize=(10, 10)) # Go through the the...
_____no_output_____
MIT
notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb
pugnator-12/homemade-machine-learning