markdown stringlengths 0 1.02M | code stringlengths 0 832k | output stringlengths 0 1.02M | license stringlengths 3 36 | path stringlengths 6 265 | repo_name stringlengths 6 127 |
|---|---|---|---|---|---|
Check how to create it:
* `pd.DataFrame.from_records()`
* `pd.DataFrame.from_dict()` | pd.DataFrame.from_records(some_data)
pd.DataFrame.from_dict() | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
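Both constructors are classmethods, so they are called on `pd.DataFrame` itself rather than on an instance. A minimal sketch with hypothetical data standing in for `some_data`:

```python
import pandas as pd

# Hypothetical data standing in for `some_data`
records = [(1, "a"), (2, "b")]
df_rec = pd.DataFrame.from_records(records, columns=["num", "letter"])

# The same table built from a dict of column -> values
df_dict = pd.DataFrame.from_dict({"num": [1, 2], "letter": ["a", "b"]})

print(df_rec.equals(df_dict))  # True
```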
This data set is too big for github, download it from [here](https://www.kaggle.com/START-UMD/gtd). You will need to register on Kaggle first. | df = pd.read_csv('globalterrorismdb_0718dist.csv', encoding='ISO-8859-1') | /opt/anaconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py:3145: DtypeWarning: Columns (4,6,31,33,61,62,63,76,79,90,92,94,96,114,115,121) have mixed types.Specify dtype option on import or set low_memory=False.
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
| MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
Let's explore the second set of data. How many rows and columns are there? | df.shape | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
General information on this data set: | df.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 181691 entries, 0 to 181690
Columns: 135 entries, eventid to related
dtypes: float64(55), int64(22), object(58)
memory usage: 187.1+ MB
| MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
Let's take a look at the dataset information. In `.info()`, you can pass additional parameters, including:
* **verbose**: whether to print information about the DataFrame in full (if the table is very large, some information may be lost);
* **memory_usage**: whether to print memory consumption (the default is True, but you can pass False, which removes the memory report, or 'deep', which calculates memory consumption more accurately);
* **null_counts**: whether to count the number of empty elements (default is True). | df.describe()
df.describe(include=['object', 'int']) | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
The describe method shows the basic statistical characteristics of the data for each numeric feature (int64 and float64 types): the number of non-missing values, mean, standard deviation, range, median, and the 0.25 and 0.75 quantiles. How to look at just the column names and the index: | df.columns
df.index | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
How to look at the first 10 lines? | df.head(10) | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
How to look at the last 15 lines? | df.tail(15) | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
How to request particular rows by counting them (by position)? | df.head(4)
# the first 3 rows
df.iloc[:3]  # rows selected by position | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
How to request rows by their index label? | # the first rows, up to and including the row with index 3
df.loc[:3] # 3 is treated as an index | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
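The difference between the two cells above is worth spelling out: `.iloc` slices by position and is end-exclusive, while `.loc` slices by index label and is end-inclusive. A toy frame makes this visible:

```python
import pandas as pd

toy = pd.DataFrame({"x": [10, 20, 30, 40, 50]})  # default RangeIndex 0..4

by_position = toy.iloc[:3]  # positions 0, 1, 2 -> 3 rows
by_label = toy.loc[:3]      # labels 0 through 3 inclusive -> 4 rows

print(len(by_position), len(by_label))  # 3 4
```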
Look only at the unique values of some columns. | list(df['city'].unique()) | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
How many unique values are there in the ```city``` column? In other words, for how many cities does this data set hold information on terrorist attacks? | df['city'].nunique()
In what years did the largest number of terrorist attacks occur (according only to this data set)? | df['iyear'].value_counts().head(5)
df['iyear'].value_counts()[:5] | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
How can we sort all the data by year in descending order? | df['iyear'].sort_values()
df.sort_values(by='iyear', ascending=False) | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
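Note that the first cell sorts only the extracted column (ascending by default), while `df.sort_values(by=...)` reorders whole rows. A sketch on toy data:

```python
import pandas as pd

toy = pd.DataFrame({"iyear": [2014, 2016, 2012], "city": ["a", "b", "c"]})

# Sorting the frame keeps each row together; ascending=False gives descending order
out = toy.sort_values(by="iyear", ascending=False)
print(out["iyear"].tolist(), out["city"].tolist())  # [2016, 2014, 2012] ['b', 'a', 'c']
```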
Which data types we have in each column? | dict(df.dtypes) | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
How to check for missing values? | df
df.isna()
dict(df.isna().sum())
df.dropna(axis=1)
df.head(5)
df['attacktype2'].min()
df['attacktype2'].max()
df['attacktype2'].mode()
df['attacktype2'].median()
df['attacktype2'].mean()
df['attacktype2'].fillna(df['attacktype2'].mode()[0]) | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
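Note that `.mode()` returns a Series (ties are possible), so `.mode()[0]` is needed to fill with a scalar value. A toy example:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 2.0, None])

# s.mode() is the Series [2.0]; take element 0 to get the scalar mode
filled = s.fillna(s.mode()[0])
print(filled.tolist())  # [1.0, 2.0, 2.0, 2.0]
```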
Let's delete a column ```approxdate``` from this data set, because it contains a lot of missing values: | df.drop(['approxdate'], axis=1, inplace=True) | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
Create a new variable ```casualties``` by summing the values in ```nkill``` and ```nwound```. | set(df.columns)
df['casualties'] = df['nwound'] + df['nkill']
df.head() | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
Rename a column ```iyear``` to ```Year```: | df.rename({'iyear' : 'Year'}, axis='columns', inplace=True)
df | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
How to drop all missing values? Replace these missing values with others? | df.dropna(inplace=True) | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
**Task!** Use a function to replace NaNs (= missing values) with the string 'None' in the ```related``` column | # TODO
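One possible solution sketch for the task, shown on a toy column since the full data set is not loaded here (the same call works on `df['related']`):

```python
import pandas as pd

toy = pd.DataFrame({"related": [None, "201701, 201702", None]})

# fillna with a string literal replaces every NaN in the column
toy["related"] = toy["related"].fillna("None")
print(toy["related"].tolist())  # ['None', '201701, 201702', 'None']
```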
For the selected columns, show their mean, median (and/or mode). | df['Year'].mean()
Min, max and sum: | df['Year'].sum()
sum(df['Year'])
max('word') | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
Filter the dataset to look only at the attacks after 2015 year | df[df.Year > 2015] | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
What if we have several conditions? Try it out | df[(df.Year > 2015) & (df.extended == 1)] | _____no_output_____ | MIT | week8/in_class_notebooks/week8-192.ipynb | anamarina/Data_Analysis_in_Python |
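When combining conditions, each one must be wrapped in parentheses, and the bitwise operators `&` (and), `|` (or) and `~` (not) are used instead of the Python keywords. A toy illustration:

```python
import pandas as pd

toy = pd.DataFrame({"Year": [2014, 2016, 2017], "extended": [1, 0, 1]})

both = toy[(toy.Year > 2015) & (toy.extended == 1)]    # both conditions hold
either = toy[(toy.Year > 2015) | (toy.extended == 1)]  # at least one holds
print(len(both), len(either))  # 1 3
```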
Tips Introduction: This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html). The dataset being used is the tips dataset from Seaborn. Step 1. Import the necessary libraries: | import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns | _____no_output_____ | BSD-3-Clause | 07_Visualization/Tips/Exercises.ipynb | alexkataev/pandas_exercises |
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips | url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url, index_col=0)
tips.reset_index()
tips | _____no_output_____ | BSD-3-Clause | 07_Visualization/Tips/Exercises.ipynb | alexkataev/pandas_exercises |
Step 4. Delete the Unnamed 0 column | # already done | _____no_output_____ | BSD-3-Clause | 07_Visualization/Tips/Exercises.ipynb | alexkataev/pandas_exercises |
Step 5. Plot the total_bill column histogram | sns.set(style='white')
sns.set_context(rc = {'patch.linewidth': 2.0})
ax = sns.histplot(tips['total_bill'], kde=True, stat='density')
ax.set(xlabel='Value', ylabel='Frequency')
ax.set_title('Total Bill', size=15)
sns.despine();
# Original solution:
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set lables and titles
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine() | /anaconda3/lib/python3.7/site-packages/seaborn/distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
| BSD-3-Clause | 07_Visualization/Tips/Exercises.ipynb | alexkataev/pandas_exercises |
Step 6. Create a scatter plot presenting the relationship between total_bill and tip | sns.jointplot(x=tips['total_bill'], y=tips['tip'], xlim=(0, 60), ylim=(0, 12)); | _____no_output_____ | BSD-3-Clause | 07_Visualization/Tips/Exercises.ipynb | alexkataev/pandas_exercises |
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function. | sns.pairplot(data=tips[['total_bill', 'tip', 'size']]);
# Original solution:
#
# sns.pairplot(tips) | _____no_output_____ | BSD-3-Clause | 07_Visualization/Tips/Exercises.ipynb | alexkataev/pandas_exercises |
Step 8. Present the relationship between days and total_bill value | sns.set_style('whitegrid')
plt.figure(figsize=(8, 6))
ax = sns.stripplot(x=tips['day'], y=tips['total_bill'])
ax.set_ylim(0, 60);
# Original solution:
#
# sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
# What a "jitter" is (for demonstration purposes):
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = 0.4); | _____no_output_____ | BSD-3-Clause | 07_Visualization/Tips/Exercises.ipynb | alexkataev/pandas_exercises |
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differing the dots by sex | sns.set_style("whitegrid")
plt.figure(figsize=(8, 6))
ax = sns.scatterplot(data=tips, x='tip', y='day', hue='sex');
ax.yaxis.grid(False)
ax.legend(title='Sex', framealpha = 1, edgecolor='w');
# Original solution:
#
# sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True); | _____no_output_____ | BSD-3-Clause | 07_Visualization/Tips/Exercises.ipynb | alexkataev/pandas_exercises |
Step 10. Create a box plot presenting the total_bill per day, differentiating by time (Dinner or Lunch) | plt.figure(figsize=(12, 6))
sns.boxplot(data=tips, x='day', y='total_bill', hue='time'); | _____no_output_____ | BSD-3-Clause | 07_Visualization/Tips/Exercises.ipynb | alexkataev/pandas_exercises |
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side. | sns.set_style('ticks')
g = sns.FacetGrid(data=tips, col='time')
g.map(sns.histplot, 'tip', bins=10)
g.set(xlim=(0, 12), ylim=(0, 60), xticks=range(0, 13, 2), yticks=range(0, 61, 10));
sns.despine();
# Original solution:
#
# # better seaborn style
# sns.set(style = "ticks")
# # creates FacetGrid
# g = sns.FacetGrid(tips, col = "time")
# g.map(plt.hist, "tip"); | _____no_output_____ | BSD-3-Clause | 07_Visualization/Tips/Exercises.ipynb | alexkataev/pandas_exercises |
Step 12. Create two scatter plots, one for Male and another for Female, presenting the total_bill and tip relationship, differing by smoker or non-smoker. They must be side by side. | g = sns.FacetGrid(data=tips, col='sex')
g.map_dataframe(sns.scatterplot, x='total_bill', y='tip', hue='smoker')
g.add_legend(title='Smoker')
g.set_axis_labels('Total bill', 'Tip')
g.set(xlim=(0, 60), ylim=(0, 12), xticks=range(0, 61, 10), yticks=range(0, 13, 2));
# Original solution:
#
# g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
# g.map(plt.scatter, "total_bill", "tip", alpha =.7)
# g.add_legend(); | _____no_output_____ | BSD-3-Clause | 07_Visualization/Tips/Exercises.ipynb | alexkataev/pandas_exercises |
BONUS: Create your own question and answer it using a graph. | g = sns.FacetGrid(data=tips, col='sex')
g.map(sns.kdeplot, 'total_bill');
sns.kdeplot(tips['total_bill'], hue=tips['sex']);
sns.histplot(data=tips, x='total_bill', hue='sex');
tips.groupby('sex')[['total_bill']].sum()
tips.groupby('sex')[['total_bill']].count()
males = tips[tips['sex'] == 'Male'].sample(87)
males.head()
females = tips[tips['sex'] == 'Female']
females.head()
new_tips = pd.concat([males, females]).reset_index()
new_tips.head()
sns.kdeplot(data=new_tips, x='total_bill', hue='sex');
sns.histplot(data=new_tips, x='total_bill', hue='sex');
g = sns.FacetGrid(data=new_tips, col='sex')
g.map(sns.scatterplot, 'total_bill', 'tip'); | _____no_output_____ | BSD-3-Clause | 07_Visualization/Tips/Exercises.ipynb | alexkataev/pandas_exercises |
Model_2_Logistic_Regression | # Update scikit-learn to prevent version mismatches
!pip install scikit-learn --upgrade
import pandas as pd | _____no_output_____ | ADSL | starter_code/.ipynb_checkpoints/Model_2_Logistic_Regression-checkpoint.ipynb | bandipara/machine-learning-challenge |
Read the CSV and Perform Basic Data Cleaning | df = pd.read_csv("exoplanet_data.csv")
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
df.head() | _____no_output_____ | ADSL | starter_code/.ipynb_checkpoints/Model_2_Logistic_Regression-checkpoint.ipynb | bandipara/machine-learning-challenge |
Create a Train Test Split | from sklearn.model_selection import train_test_split
y=df['koi_disposition']
X=df.drop(columns=['koi_disposition'])
X_train, X_test, y_train, y_test = train_test_split(X,y, random_state=1, stratify=y)
X_train.head() | _____no_output_____ | ADSL | starter_code/.ipynb_checkpoints/Model_2_Logistic_Regression-checkpoint.ipynb | bandipara/machine-learning-challenge |
Preprocessing | from sklearn.preprocessing import MinMaxScaler
X_scaler = MinMaxScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test) | _____no_output_____ | ADSL | starter_code/.ipynb_checkpoints/Model_2_Logistic_Regression-checkpoint.ipynb | bandipara/machine-learning-challenge |
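The scaler learns the minimum and maximum from the training data only, which is why the test set is transformed with the same fitted object; values outside the training range can then fall outside [0, 1]. A small sketch:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

train = np.array([[0.0], [5.0], [10.0]])
test = np.array([[2.5], [12.0]])  # 12.0 lies outside the training range

scaler = MinMaxScaler().fit(train)   # learns min=0, max=10 from train only
scaled = scaler.transform(test).ravel()
print(scaled)  # [0.25 1.2]
```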
Logistic Regression | from sklearn.linear_model import LogisticRegression
model1 = LogisticRegression()
model1.fit(X_train_scaled, y_train)
print(f'Train = {model1.score(X_train_scaled, y_train)}')
print(f'Test = {model1.score(X_test_scaled, y_test)}') | Train = 0.8411214953271028
Test = 0.8409610983981693
| ADSL | starter_code/.ipynb_checkpoints/Model_2_Logistic_Regression-checkpoint.ipynb | bandipara/machine-learning-challenge |
GridSearch | # Create the GridSearchCV model
from sklearn.model_selection import GridSearchCV
param_grid= {'C':[1,5,10], 'penalty': ['l1','l2']}
grid1 = GridSearchCV(model1, param_grid)
grid1.fit(X_train_scaled, y_train)
print(grid1.best_params_)
print(grid1.best_score_) | {'C': 5, 'penalty': 'l1'}
0.8798397863818425
| ADSL | starter_code/.ipynb_checkpoints/Model_2_Logistic_Regression-checkpoint.ipynb | bandipara/machine-learning-challenge |
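After `fit`, GridSearchCV refits the best parameter combination on the full training data, and that refit model is available as `best_estimator_`. A self-contained sketch on synthetic data (the parameter grid here is illustrative, not the one from the cell above):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=1)
grid = GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.1, 1, 10]}, cv=3)
grid.fit(X, y)

best = grid.best_estimator_  # the best model, refit on all of X, y
print(grid.best_params_["C"], best.score(X, y) > 0.5)
```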
Save the model |
# save your model by updating "your_name" with your name
# and "your_model" with your model variable
# be sure to turn this in to BCS
# if joblib fails to import, try running the command to install in terminal/git-bash
import joblib
filename = 'Model_2_Logistic_Regression'
joblib.dump(grid1, filename) | _____no_output_____ | ADSL | starter_code/.ipynb_checkpoints/Model_2_Logistic_Regression-checkpoint.ipynb | bandipara/machine-learning-challenge |
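To use the saved model later, load it back with `joblib.load`; the round trip is sketched below with a plain dict standing in for the fitted estimator:

```python
import os
import tempfile

import joblib

obj = {"C": 5, "penalty": "l1"}  # stand-in for a fitted model
path = os.path.join(tempfile.mkdtemp(), "model.joblib")

joblib.dump(obj, path)
restored = joblib.load(path)  # the same call works for sklearn estimators
print(restored == obj)  # True
```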
Matplotlib

Matplotlib is a powerful tool for generating scientific charts of various sorts. This presentation only touches on some features of matplotlib. Please see https://jakevdp.github.io/PythonDataScienceHandbook/index.html or many other resources for a more detailed discussion.

The following notebook shows how to use matplotlib to examine a simple univariate function. Please refer to the quick reference notebook for introductions to some of the methods used.

Note there are some FILL_IN_THE_BLANK placeholders where you are expected to change the notebook to make it work. There may also be bugs purposefully introduced in the code samples which you will need to fix.

Consider the function

$$f(x) = 0.1 \cdot x^2 + \sin(x+1) - 0.5$$

What does it look like between -2 and 2? | # Import numpy and matplotlib modules
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
# Get x values between -2 and 2
xs = np.linspace(-2, 2, 21)
xs
# Compute array of f values for x values
fs = 0.2 * xs * xs + np.sin(xs + 1) - 0.5
fs
# Make a figure and plot x values against f values
fig = plt.figure()
ax = plt.axes()
ax.plot(xs, fs); | _____no_output_____ | BSD-2-Clause | notebooks/matplotlib teaser.ipynb | flatironinstitute/mfa_jupyter_for_programming |
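`np.linspace` includes both endpoints, so 21 samples over [-2, 2] give a spacing of exactly 0.2:

```python
import numpy as np

xs = np.linspace(-2, 2, 21)
print(xs[0], xs[-1], len(xs))    # -2.0 2.0 21
print(round(xs[1] - xs[0], 10))  # 0.2
```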
Solving an equationAt what value of $x$ in $[-2, 2]$ does $f(x) = 0$?Let's look at different plots for $f$ using functions to automate things. | def f(x):
return 0.2 * x ** 2 + np.sin(x + 1) - 0.5
def plot_f(low_x=-2, high_x=2, number_of_samples=30):
# Get an array of x values between low_x and high_x of length number_of_samples
xs = FILL_IN_THE_BLANK
fs = f(xs)
fig = plt.figure()
ax = plt.axes()
ax.plot(xs, fs);
plot_f()
plot_f(-1.5, 0.5) | _____no_output_____ | BSD-2-Clause | notebooks/matplotlib teaser.ipynb | flatironinstitute/mfa_jupyter_for_programming |
Interactive plotsWe can make an interactive figure where we can try to locate the crossing point visually | from ipywidgets import interact
interact(plot_f, low_x=(-2.,2), high_x=(-2.,2))
# But we really should do it using an algorithm like binary search:
def find_x_at_zero(some_function, x_below_zero, x_above_zero, iteration_limit=10):
"""
Given f(x_below_zero)<=0 and f(x_above_zero) >= 0 iteratively use the
midpoint between the current boundary points to approximate f(x) == 0.
"""
for count in range(iteration_limit):
# check arguments
y_below_zero = some_function(x_below_zero)
assert y_below_zero < 0, "y_below_zero should stay at or below zero"
y_above_zero = some_function(x_above_zero)
assert y_above_zero < 0, "y_above_zero should stay at or above zero"
# get x in the middle of x_below and x_above
x_middle = 0.5 * (x_below_zero + x_above_zero)
f_middle = some_function(x_middle)
print(" at ", count, "looking at x=", x_middle, "with f(x)", f_middle)
if f_middle < 0:
FILL_IN_THE_BLANK
else:
FILL_IN_THE_BLANK
print ("final estimate after", iteration_limit, "iterations:")
print ("x at zero is between", x_below_zero, x_above_zero)
print ("with current f(x) at", f_middle)
find_x_at_zero(f, -2, 2)
# Exercise: For the following function:
def g(x):
return np.sqrt(x) + np.cos(x + 1) - 1
# Part1: Make a figure and plot x values against g(x) values
# Part 2: find an approximate value of x where g(x) is near 0.
# Part 3: Use LaTeX math notation to display the function g nicely formatted in a Markdown cell. | _____no_output_____ | BSD-2-Clause | notebooks/matplotlib teaser.ipynb | flatironinstitute/mfa_jupyter_for_programming |
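Once the blanks in `find_x_at_zero` are filled in, the core bisection loop looks like the compact sketch below, demonstrated on a different function, x² − 2, whose zero on [0, 2] is √2:

```python
def bisect_zero(f, lo, hi, iters=40):
    """Halve [lo, hi] around a sign change of f; assumes f(lo) < 0 < f(hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid   # the zero crossing lies in the upper half
        else:
            hi = mid   # the zero crossing lies in the lower half
    return 0.5 * (lo + hi)

root = bisect_zero(lambda x: x**2 - 2, 0.0, 2.0)
print(round(root, 6))  # 1.414214
```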
Baseline Model We build the baseline model according to our first hypothesis: lower school attendance leads to a higher target. Import and Setup | import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
import matplotlib.pyplot as plt
%matplotlib inline
RSEED = 42
# import dataset
df = pd.read_csv('Train.csv')
df.shape | _____no_output_____ | MIT | baseline_model.ipynb | JanaConradi/Zindi_Data_female_households_RSA |
`psa_00`: "Percentage listing present school attendance as: Yes" `target`: "Percentage of women head households with income under R19.6k out of total number of households" | sns.lmplot(y='target', x='psa_00', data=df, line_kws={'color': 'red'})
plt.title('Trend between school attendance and percentage of low income')
# define feature and target
X = df[["psa_00"]]
y = df.target | _____no_output_____ | MIT | baseline_model.ipynb | JanaConradi/Zindi_Data_female_households_RSA |
We have a Test and a Train dataset in this notebook, but since the Test dataset doesn't contain the target, we will only use the Train dataset and make our own train and test splits from it. (The target was to be predicted and submitted to the Zindi competition.) | # train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=RSEED)
# Fit a basic linear regression model on the train data
lm = LinearRegression()
lm.fit(X_train, y_train)
# make predictions on test data
y_pred = lm.predict(X_test)
# evaluation metrics test
print(f"R2: {r2_score(y_test, y_pred)}")
print(f"RMSE: {mean_squared_error(y_test, y_pred, squared=False)}") | R2: 0.6123813749418046
RMSE: 6.227259786482336
| MIT | baseline_model.ipynb | JanaConradi/Zindi_Data_female_households_RSA |
Rice Leaf Disease Detection using ResNet50V2 Architecture  Taking Dataset from Drive | from google.colab import drive
drive.mount('/content/drive') | Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
| MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Importing Libraries | import keras
from keras import Sequential
from keras.applications import MobileNetV2
from keras.layers import Dense
from keras.preprocessing import image
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics import classification_report, log_loss, accuracy_score
from sklearn.model_selection import train_test_split
directory = '/content/drive/MyDrive/rice' | _____no_output_____ | MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Target Class | Class=[]
for file in os.listdir(directory):
Class+=[file]
print(Class)
print(len(Class)) | ['blast', 'blight', 'tungro']
3
| MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Mapping the Images | Map=[]
for i in range(len(Class)):
Map = Map+[i]
normal_mapping=dict(zip(Class,Map))
reverse_mapping=dict(zip(Map,Class))
def mapper(value):
return reverse_mapping[value]
set1=[]
set2=[]
count=0
for i in Class:
path=os.path.join(directory,i)
t=0
for image in os.listdir(path):
if image[-4:]=='.jpg':
imagee=load_img(os.path.join(path,image), grayscale=False, color_mode='rgb', target_size=(100,100))
imagee=img_to_array(imagee)
imagee=imagee/255.0
if t<60:
set1.append([imagee,count])
else:
set2.append([imagee,count])
t+=1
count=count+1 | _____no_output_____ | MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
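The two dictionaries built with `zip` above are inverses of each other: one maps a class name to its integer label, the other maps the label back. A tiny check with the same class list:

```python
classes = ["blast", "blight", "tungro"]
ids = list(range(len(classes)))

normal_mapping = dict(zip(classes, ids))   # name -> integer label
reverse_mapping = dict(zip(ids, classes))  # integer label -> name

print(reverse_mapping[normal_mapping["blight"]])  # blight
```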
Dividing Data and Test | data, dataa=zip(*set1)
test, test_test=zip(*set2)
label=to_categorical(dataa)
X=np.array(data)
y=np.array(label)
labell=to_categorical(test_test)
test=np.array(test)
labell=np.array(labell)
print(len(y))
print(len(labell)) | 180
60
| MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Train Test Split | X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print(X_train.shape,X_test.shape)
print(y_train.shape,y_test.shape) | (144, 100, 100, 3) (36, 100, 100, 3)
(144, 3) (36, 3)
| MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Image Generator | generator = ImageDataGenerator(horizontal_flip=True,vertical_flip=True,rotation_range=20,zoom_range=0.2,
width_shift_range=0.2,height_shift_range=0.2,shear_range=0.1,fill_mode="nearest") | _____no_output_____ | MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Calling Resnet50V2 Model | from tensorflow.keras.applications import ResNet50V2
resnet50v2 = ResNet50V2(input_shape=(100,100,3), include_top=False, weights='imagenet', pooling='avg')
resnet50v2.trainable = False | _____no_output_____ | MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Making Deep CNN Model | model_input = resnet50v2.input
classifier = tf.keras.layers.Dense(128, activation='relu')(resnet50v2.output)
classifier = tf.keras.layers.Dense(64, activation='relu')(resnet50v2.output)
classifier = tf.keras.layers.Dense(512, activation='relu')(resnet50v2.output)
classifier = tf.keras.layers.Dense(128, activation='relu')(resnet50v2.output)
classifier = tf.keras.layers.Dense(256, activation='relu')(resnet50v2.output)
model_output = tf.keras.layers.Dense(3, activation='sigmoid')(classifier)
model = tf.keras.Model(inputs=model_input, outputs=model_output) | _____no_output_____ | MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Compiling with ADAM Optimizer and Binary Crossentropy Loss Function | model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy']) | _____no_output_____ | MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Fitting the Dataset into Model | history=model.fit(generator.flow(X_train,y_train,batch_size=32),validation_data=(X_test,y_test),epochs=50) | Epoch 1/50
5/5 [==============================] - 19s 1s/step - loss: 0.8226 - accuracy: 0.4309 - val_loss: 0.5457 - val_accuracy: 0.5556
Epoch 2/50
5/5 [==============================] - 1s 99ms/step - loss: 0.4008 - accuracy: 0.8104 - val_loss: 0.3669 - val_accuracy: 0.6667
Epoch 3/50
5/5 [==============================] - 1s 98ms/step - loss: 0.3141 - accuracy: 0.8288 - val_loss: 0.4243 - val_accuracy: 0.7500
Epoch 4/50
5/5 [==============================] - 1s 97ms/step - loss: 0.2641 - accuracy: 0.8968 - val_loss: 0.3114 - val_accuracy: 0.7500
Epoch 5/50
5/5 [==============================] - 1s 97ms/step - loss: 0.2143 - accuracy: 0.8530 - val_loss: 0.3521 - val_accuracy: 0.8333
Epoch 6/50
5/5 [==============================] - 1s 102ms/step - loss: 0.1752 - accuracy: 0.9215 - val_loss: 0.3219 - val_accuracy: 0.7778
Epoch 7/50
5/5 [==============================] - 1s 107ms/step - loss: 0.2260 - accuracy: 0.8718 - val_loss: 0.2696 - val_accuracy: 0.8611
Epoch 8/50
5/5 [==============================] - 1s 105ms/step - loss: 0.1460 - accuracy: 0.9580 - val_loss: 0.2882 - val_accuracy: 0.8611
Epoch 9/50
5/5 [==============================] - 1s 96ms/step - loss: 0.1257 - accuracy: 0.9189 - val_loss: 0.1928 - val_accuracy: 0.8889
Epoch 10/50
5/5 [==============================] - 1s 98ms/step - loss: 0.1109 - accuracy: 0.9438 - val_loss: 0.1757 - val_accuracy: 0.8889
Epoch 11/50
5/5 [==============================] - 1s 98ms/step - loss: 0.0989 - accuracy: 0.9725 - val_loss: 0.1808 - val_accuracy: 0.9167
Epoch 12/50
5/5 [==============================] - 1s 100ms/step - loss: 0.1049 - accuracy: 0.9589 - val_loss: 0.1725 - val_accuracy: 0.8889
Epoch 13/50
5/5 [==============================] - 1s 101ms/step - loss: 0.1308 - accuracy: 0.9201 - val_loss: 0.1796 - val_accuracy: 0.9167
Epoch 14/50
5/5 [==============================] - 1s 100ms/step - loss: 0.1348 - accuracy: 0.9333 - val_loss: 0.2048 - val_accuracy: 0.8889
Epoch 15/50
5/5 [==============================] - 1s 102ms/step - loss: 0.0789 - accuracy: 0.9718 - val_loss: 0.1606 - val_accuracy: 0.9167
Epoch 16/50
5/5 [==============================] - 1s 102ms/step - loss: 0.0987 - accuracy: 0.9622 - val_loss: 0.1764 - val_accuracy: 0.9444
Epoch 17/50
5/5 [==============================] - 1s 111ms/step - loss: 0.1024 - accuracy: 0.9632 - val_loss: 0.1827 - val_accuracy: 0.8889
Epoch 18/50
5/5 [==============================] - 1s 99ms/step - loss: 0.0888 - accuracy: 0.9735 - val_loss: 0.1735 - val_accuracy: 0.9444
Epoch 19/50
5/5 [==============================] - 1s 101ms/step - loss: 0.0764 - accuracy: 0.9863 - val_loss: 0.1698 - val_accuracy: 0.9167
Epoch 20/50
5/5 [==============================] - 1s 100ms/step - loss: 0.0947 - accuracy: 0.9815 - val_loss: 0.1836 - val_accuracy: 0.8889
Epoch 21/50
5/5 [==============================] - 1s 102ms/step - loss: 0.0720 - accuracy: 0.9739 - val_loss: 0.2145 - val_accuracy: 0.8889
Epoch 22/50
5/5 [==============================] - 1s 99ms/step - loss: 0.0621 - accuracy: 0.9866 - val_loss: 0.1684 - val_accuracy: 0.8889
Epoch 23/50
5/5 [==============================] - 1s 113ms/step - loss: 0.0556 - accuracy: 0.9836 - val_loss: 0.1426 - val_accuracy: 0.9167
Epoch 24/50
5/5 [==============================] - 1s 102ms/step - loss: 0.0497 - accuracy: 0.9811 - val_loss: 0.1330 - val_accuracy: 0.9167
Epoch 25/50
5/5 [==============================] - 1s 97ms/step - loss: 0.0644 - accuracy: 0.9757 - val_loss: 0.1414 - val_accuracy: 0.9167
Epoch 26/50
5/5 [==============================] - 1s 100ms/step - loss: 0.0448 - accuracy: 0.9895 - val_loss: 0.1488 - val_accuracy: 0.9167
Epoch 27/50
5/5 [==============================] - 1s 100ms/step - loss: 0.0807 - accuracy: 0.9544 - val_loss: 0.1443 - val_accuracy: 0.9167
Epoch 28/50
5/5 [==============================] - 1s 101ms/step - loss: 0.0575 - accuracy: 0.9939 - val_loss: 0.1195 - val_accuracy: 0.9167
Epoch 29/50
5/5 [==============================] - 1s 98ms/step - loss: 0.0536 - accuracy: 0.9977 - val_loss: 0.1169 - val_accuracy: 0.9167
Epoch 30/50
5/5 [==============================] - 1s 99ms/step - loss: 0.0758 - accuracy: 0.9800 - val_loss: 0.1313 - val_accuracy: 0.9444
Epoch 31/50
5/5 [==============================] - 1s 98ms/step - loss: 0.0758 - accuracy: 0.9576 - val_loss: 0.1346 - val_accuracy: 0.9167
Epoch 32/50
5/5 [==============================] - 1s 103ms/step - loss: 0.0427 - accuracy: 0.9892 - val_loss: 0.1515 - val_accuracy: 0.9444
Epoch 33/50
5/5 [==============================] - 1s 106ms/step - loss: 0.0826 - accuracy: 0.9845 - val_loss: 0.1218 - val_accuracy: 0.9444
Epoch 34/50
5/5 [==============================] - 1s 100ms/step - loss: 0.0403 - accuracy: 1.0000 - val_loss: 0.1956 - val_accuracy: 0.8611
Epoch 35/50
5/5 [==============================] - 1s 106ms/step - loss: 0.0540 - accuracy: 0.9789 - val_loss: 0.1408 - val_accuracy: 0.9167
Epoch 36/50
5/5 [==============================] - 1s 113ms/step - loss: 0.0853 - accuracy: 0.9460 - val_loss: 0.1613 - val_accuracy: 0.9444
Epoch 37/50
5/5 [==============================] - 1s 103ms/step - loss: 0.0607 - accuracy: 0.9860 - val_loss: 0.1365 - val_accuracy: 0.9167
Epoch 38/50
5/5 [==============================] - 1s 116ms/step - loss: 0.0301 - accuracy: 0.9962 - val_loss: 0.1342 - val_accuracy: 0.9167
Epoch 39/50
5/5 [==============================] - 1s 102ms/step - loss: 0.0821 - accuracy: 0.9641 - val_loss: 0.1585 - val_accuracy: 0.9444
Epoch 40/50
5/5 [==============================] - 1s 98ms/step - loss: 0.0586 - accuracy: 0.9830 - val_loss: 0.1175 - val_accuracy: 0.9444
Epoch 41/50
5/5 [==============================] - 1s 101ms/step - loss: 0.0577 - accuracy: 0.9806 - val_loss: 0.1145 - val_accuracy: 0.9444
Epoch 42/50
5/5 [==============================] - 1s 108ms/step - loss: 0.0361 - accuracy: 0.9941 - val_loss: 0.1282 - val_accuracy: 0.9444
Epoch 43/50
5/5 [==============================] - 1s 113ms/step - loss: 0.0343 - accuracy: 0.9962 - val_loss: 0.1450 - val_accuracy: 0.9444
Epoch 44/50
5/5 [==============================] - 1s 113ms/step - loss: 0.0505 - accuracy: 0.9764 - val_loss: 0.1238 - val_accuracy: 0.9167
Epoch 45/50
5/5 [==============================] - 1s 99ms/step - loss: 0.0712 - accuracy: 0.9766 - val_loss: 0.1298 - val_accuracy: 0.9444
Epoch 46/50
5/5 [==============================] - 1s 108ms/step - loss: 0.0644 - accuracy: 0.9674 - val_loss: 0.1087 - val_accuracy: 0.9444
Epoch 47/50
5/5 [==============================] - 1s 102ms/step - loss: 0.0386 - accuracy: 0.9928 - val_loss: 0.1123 - val_accuracy: 0.9167
Epoch 48/50
5/5 [==============================] - 1s 110ms/step - loss: 0.0313 - accuracy: 0.9918 - val_loss: 0.1224 - val_accuracy: 0.9444
Epoch 49/50
5/5 [==============================] - 1s 100ms/step - loss: 0.0288 - accuracy: 0.9863 - val_loss: 0.0908 - val_accuracy: 0.9444
Epoch 50/50
5/5 [==============================] - 1s 95ms/step - loss: 0.0391 - accuracy: 0.9804 - val_loss: 0.1016 - val_accuracy: 0.9444
| MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Prediction on Test Set | y_pred=model.predict(X_test)
y_pred=np.argmax(y_pred,axis=1)
y_test = np.argmax(y_test,axis=1) | _____no_output_____ | MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
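A quick aside on the `np.argmax(..., axis=1)` calls above: they collapse each row of per-class probabilities (or one-hot labels) into a single class index. A tiny self-contained illustration with made-up probabilities:

```python
import numpy as np

# Hypothetical three-class probabilities for two samples
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])
labels = np.argmax(probs, axis=1)  # one label index per row
print(labels)  # [1 0]
```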
Confusion Matrix | from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test,y_pred)
print(cm)
plt.subplots(figsize=(15,7))
sns.heatmap(cm, annot= True, linewidth=1, cmap="autumn_r") | _____no_output_____ | MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Accuracy | print("Accuracy : ",accuracy_score(y_test,y_pred)) | Accuracy : 0.9444444444444444
| MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Classification Report | print(classification_report(y_test,y_pred)) | precision recall f1-score support
0 1.00 1.00 1.00 10
1 0.94 0.94 0.94 16
2 0.90 0.90 0.90 10
accuracy 0.94 36
macro avg 0.95 0.95 0.95 36
weighted avg 0.94 0.94 0.94 36
| MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
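The per-class precision and recall printed above can also be derived directly from a confusion matrix (rows as true labels, columns as predictions). A small sketch with a hypothetical 3-class matrix, not the exact one printed above:

```python
import numpy as np

def per_class_metrics(cm):
    """Precision and recall per class from a confusion matrix
    (rows = true labels, columns = predicted labels)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                  # correct predictions per class
    precision = tp / cm.sum(axis=0)   # column sums = predicted counts
    recall = tp / cm.sum(axis=1)      # row sums = true counts
    return precision, recall

# Hypothetical confusion matrix for illustration
precision, recall = per_class_metrics([[10, 0, 0],
                                       [0, 15, 1],
                                       [0, 1, 9]])
print(precision)
print(recall)
```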
Loss vs Validation Loss Plot | import plotly.graph_objects as go
fig = go.Figure()
fig.add_trace(go.Scatter(y=history.history['loss'], name='Loss',
line=dict(color='royalblue', width=3)))
fig.add_trace(go.Scatter(y=history.history['val_loss'], name='Validation Loss',
line=dict(color='firebrick', width=2))) | _____no_output_____ | MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Accuracy vs Validation Accuracy Plot | fig = go.Figure()
fig.add_trace(go.Scatter(y=history.history['accuracy'], name='Accuracy',
line=dict(color='royalblue', width=3)))
fig.add_trace(go.Scatter(y=history.history['val_accuracy'], name='Validation Accuracy',
line=dict(color='firebrick', width=3))) | _____no_output_____ | MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Testing on some Random Images | image=load_img("/content/drive/MyDrive/rice/tungro/IMG_0852.jpg",target_size=(100,100))
imagee=load_img("/content/drive/MyDrive/rice/blight/IMG_0936.jpg",target_size=(100,100))
imageee=load_img("/content/drive/MyDrive/rice/blast/IMG_0560.jpg",target_size=(100,100))
imageeee=load_img("/content/drive/MyDrive/rice/blight/IMG_1063.jpg",target_size=(100,100))
imageeeee=load_img("/content/drive/MyDrive/rice/tungro/IMG_0898.jpg",target_size=(100,100))
image=img_to_array(image)
image=image/255.0
prediction_image=np.array(image)
prediction_image= np.expand_dims(image, axis=0)
imagee=img_to_array(imagee)
imagee=imagee/255.0
prediction_imagee=np.array(imagee)
prediction_imagee= np.expand_dims(imagee, axis=0)
imageee=img_to_array(imageee)
imageee=imageee/255.0
prediction_imageee=np.array(imageee)
prediction_imageee= np.expand_dims(imageee, axis=0)
imageeee=img_to_array(imageeee)
imageeee=imageeee/255.0
prediction_imageeee=np.array(imageeee)
prediction_imageeee= np.expand_dims(imageeee, axis=0)
imageeeee=img_to_array(imageeeee)
imageeeee=image/255.0
prediction_imageeeee=np.array(imageeeee)
prediction_imageeeee= np.expand_dims(imageeeee, axis=0)
prediction=model.predict(prediction_image)
value=np.argmax(prediction)
move_name=mapper(value)
print("This Rice Belongs to", move_name + " class")
prediction=model.predict(prediction_imagee)
value=np.argmax(prediction)
move_name=mapper(value)
print("This Rice Belongs to", move_name + " class")
prediction=model.predict(prediction_imageee)
value=np.argmax(prediction)
move_name=mapper(value)
print("This Rice Belongs to", move_name + " class")
prediction=model.predict(prediction_imageeee)
value=np.argmax(prediction)
move_name=mapper(value)
print("This Rice Belongs to", move_name + " class")
prediction=model.predict(prediction_imageeeee)
value=np.argmax(prediction)
move_name=mapper(value)
print("This Rice Belongs to", move_name + " class") | This Rice Belongs to tungro class
This Rice Belongs to blight class
This Rice Belongs to blast class
This Rice Belongs to blight class
This Rice Belongs to tungro class
| MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Prediction on Different Test Set | print(test.shape)
predictionn=model.predict(test)
print(predictionn.shape)
test_pred=[]
for item in predictionn:
    value = np.argmax(item)
    test_pred = test_pred + [value] | _____no_output_____ | MIT | Stock Prediction/.ipynb_checkpoints/ADS_Stock_Prediction_Prophet_MSFT-checkpoint.ipynb | Chowry000/Stock-Prediction-Using-LSTM |
Confusion Matrix | from sklearn.metrics import confusion_matrix
cm = confusion_matrix(test_test,test_pred)
print(cm)
plt.subplots(figsize=(15,7))
sns.heatmap(cm, annot= True, linewidth=1, cmap="CMRmap") | _____no_output_____ | MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Accuracy | accuracy=accuracy_score(test_test,test_pred)
print("Model Accuracy : ",accuracy) | Model Accuracy : 0.9706666666666667
| MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Classification Report | print(classification_report(test_test,test_pred)) | precision recall f1-score support
0 0.95 0.95 0.95 20
1 0.95 1.00 0.98 20
2 1.00 0.95 0.97 20
accuracy 0.97 60
macro avg 0.97 0.97 0.97 60
weighted avg 0.97 0.97 0.97 60
| MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
This model successfully detects the disease of a rice leaf with an accuracy of 97% | _____no_output_____ | MIT | rice_leaf_disease_detection_with_resnet50v2.ipynb | Chandramouli-Das/Rice-Leaf-Disease-Detection-using-ResNet50V2-Architecture |
Stock Prediction using fb Prophet

Prophet is a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It works best with time series that have strong seasonal effects and several seasons of historical data. Prophet is robust to missing data and shifts in the trend, and typically handles outliers well. | import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
from alpha_vantage.timeseries import TimeSeries
from fbprophet import Prophet
os.chdir(r'N:\STOCK ADVISOR BOT')
ALPHA_VANTAGE_API_KEY = 'XAGC5LBB1SI9RDLW'
ts = TimeSeries(key= ALPHA_VANTAGE_API_KEY, output_format='pandas')
df_Stock, Stock_info = ts.get_daily('MSFT', outputsize='full')
df_Stock = df_Stock.rename(columns={'1. open' : 'Open', '2. high': 'High', '3. low':'Low', '4. close': 'Close', '5. volume': 'Volume' })
df_Stock = df_Stock.rename_axis(['Date'])
Stock = df_Stock.sort_index(ascending=True, axis=0)
#slicing the data for 15 years from '2004-01-02' to today
Stock = Stock.loc['2004-01-02':]
Stock
Stock = Stock.drop(columns=['Open', 'High', 'Low', 'Volume'])
Stock.index = pd.to_datetime(Stock.index)
Stock.info()
#NFLX.resample('D').ffill()
Stock = Stock.reset_index()
Stock
Stock.columns = ['ds', 'y']
prophet_model = Prophet(yearly_seasonality=True, daily_seasonality=True)
prophet_model.add_country_holidays(country_name='US')
prophet_model.add_seasonality(name='monthly', period=30.5, fourier_order=5)
prophet_model.fit(Stock)
future = prophet_model.make_future_dataframe(periods=30)
future.tail()
forcast = prophet_model.predict(future)
forcast.tail()
prophet_model.plot(forcast); | _____no_output_____ | MIT | Stock Prediction/.ipynb_checkpoints/ADS_Stock_Prediction_Prophet_MSFT-checkpoint.ipynb | Chowry000/Stock-Prediction-Using-LSTM |
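Before decomposing the forecast, it may help to recall the additive structure Prophet assumes, roughly y(t) = trend + seasonal terms (+ holidays + noise). A Prophet-free toy sketch with made-up components:

```python
import numpy as np

# y(t) = trend(t) + seasonality(t): a noiseless toy additive series
t = np.arange(365)
trend = 0.05 * t                               # slow linear growth
seasonality = 2.0 * np.sin(2 * np.pi * t / 7)  # weekly cycle
y = trend + seasonality
print(round(float(y[0]), 2), round(float(y[7]), 2))
```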
If you want to visualize the individual forecast components, we can use Prophet’s built-in plot_components method like below | prophet_model.plot_components(forcast);
forcast.shape
forcast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail() | _____no_output_____ | MIT | Stock Prediction/.ipynb_checkpoints/ADS_Stock_Prediction_Prophet_MSFT-checkpoint.ipynb | Chowry000/Stock-Prediction-Using-LSTM |
Prediction Performance

The performance_metrics utility can be used to compute some useful statistics of the prediction performance (yhat, yhat_lower, and yhat_upper compared to y), as a function of the distance from the cutoff (how far into the future the prediction was). The statistics computed are mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), mean absolute percent error (MAPE), and coverage of the yhat_lower and yhat_upper estimates. | from fbprophet.diagnostics import cross_validation, performance_metrics
df_cv = cross_validation(prophet_model, horizon='180 days')
df_cv.head()
df_cv
df_p = performance_metrics(df_cv)
df_p.head()
df_p
from fbprophet.plot import plot_cross_validation_metric
fig = plot_cross_validation_metric(df_cv, metric='mape') | _____no_output_____ | MIT | Stock Prediction/.ipynb_checkpoints/ADS_Stock_Prediction_Prophet_MSFT-checkpoint.ipynb | Chowry000/Stock-Prediction-Using-LSTM |
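The MSE, RMSE, MAE and MAPE columns reported by `performance_metrics` can be reproduced by hand. A small sketch with made-up actuals and predictions (not the Prophet output above):

```python
import numpy as np

def forecast_errors(y, yhat):
    """MSE, RMSE, MAE and MAPE between actuals and predictions."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    err = y - yhat
    mse = np.mean(err ** 2)
    return {
        "mse": mse,
        "rmse": np.sqrt(mse),
        "mae": np.mean(np.abs(err)),
        "mape": np.mean(np.abs(err / y)),
    }

print(forecast_errors([100, 200, 400], [110, 190, 400]))
```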
PROJECT 1 Pseudocode For Project 1 AVOIDS INPUT word and letters IF the letters or words entered are not forbidden RETURN "TRUE" END | def avoids():
    forbidden = list(str(input("Input forbidden letters:")))
    sentence = str(input("Input a sentence to be inspected:")).split()
    li = []
    no = 0
    for i in sentence:
        boo = True
        for j in forbidden:
            if j in i:
                boo = False
        if boo == False:
            li.append(i)
        else:
            no += 1
    if len(li) == 0:
        print("no forbidden words present")
    else:
        print(no, "words do not use forbidden letters")
        print(li, "use forbidden letters")

avoids()
| Input forbidden letters:rtre
Input a sentence to be inspected: oj is a GOAT
| MIT | WEEK5/WEEK5_PROJECTS.ipynb | ayomideoj/ayomideojikutuCSC102 |
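The same check can also be factored into a small reusable predicate, which makes it easy to test; this is a sketch, not part of the original submission:

```python
def avoids_word(word, forbidden):
    """Return True if `word` contains none of the forbidden letters."""
    return not any(letter in word for letter in forbidden)

print(avoids_word("oj", "rtre"), avoids_word("goat", "rtre"))  # True False
```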
PROJECT 2 Pseudocode for Project 2 USES ALL INPUT word and strings of letters IF word uses all required letters at least once RETURN "TRUE" ELSE "FALSE" END | def uses_all(word, req):
    newReq = list(req)
    li = []
    for i in newReq:
        if i in word:
            continue
        else:
            li.append(i)
    if len(li) > 0:
        print(False)
    else:
        print(True)
    print("the required letters", li, "not found", word)

uses_all("elegaont", "eguon") | False
the required letters ['u'] not found elegaont
| MIT | WEEK5/WEEK5_PROJECTS.ipynb | ayomideoj/ayomideojikutuCSC102 |
PROJECT 3 Pseudocode for Project 3 INPUT Jane = "odd" INPUT Jack = "even" IF Jack + Jane = odd number PRINT "Jane wins" ELIF Jack + Jane = even number PRINT "Jack wins" END | a = str(input("Player1, enter guess"))
b = str(input("Player2, enter guess"))
num1 = int(input("Player1, what number did you choose"))
num2 = int(input("Player2, what number did you choose"))
total = num1 + num2
if total % 2 == 0:
    nature = "even"
else:
    nature = "odd"
if a == nature and b == nature:
    print("Both players win")
elif a == nature:
    print("Player1 wins, the answer is", a)
elif b == nature:
    print("Player2 wins, the answer is", b)
else:
    print("Nobody wins")
| Player1, enter guess3
Player2, enter guess4
Player1, what number did you choose3
Player2, what number did you choose4
Nobody wins
| MIT | WEEK5/WEEK5_PROJECTS.ipynb | ayomideoj/ayomideojikutuCSC102 |
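The same game logic, factored into a testable helper function (a sketch, not part of the original submission):

```python
def winner(guess1, guess2, num1, num2):
    """Return which player(s) correctly guessed the parity of num1 + num2."""
    nature = "even" if (num1 + num2) % 2 == 0 else "odd"
    p1, p2 = guess1 == nature, guess2 == nature
    if p1 and p2:
        return "Both players win"
    if p1:
        return "Player1 wins"
    if p2:
        return "Player2 wins"
    return "Nobody wins"

print(winner("odd", "even", 3, 4))  # 3 + 4 = 7 is odd, so Player1 wins
```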
PROJECT 4 Pseudocode for Project 4 Velocity, V = D/T Distance, D = 50 Time, T = 2 CALCULATE V END | d = int(input("Enter value of distance"))
t = int(input("Enter value of time"))
v = d/t
print("the car is moving at", v, "miles per hour") | Enter value of distance400
Enter value of time2
the car is moving at 200.0 miles per hour
| MIT | WEEK5/WEEK5_PROJECTS.ipynb | ayomideoj/ayomideojikutuCSC102 |
Linear Models: Multiple variables with interactions

Contents
1 Introduction
1.1 Chapter aims
1.2 Formulae with interactions in R
2 Model 1: Mammalian genome size
3 Model 2 (ANCOVA): Body Weight in Odonata

Introduction

Here you will build on your skills in fitting linear models with multiple explanatory variables to data. You will learn about another commonly used Linear Model fitting technique: ANCOVA.

We will build two models in this chapter:

* **Model 1**: Is mammalian genome size predicted by interactions between trophic level and whether species are ground dwelling?
* **ANCOVA**: Is body size in Odonata predicted by interactions between genome size and taxonomic suborder?

So far, we have only looked at the independent effects of variables. For example, in the trophic level and ground dwelling model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), we only looked for specific differences for being an omnivore *or* being ground dwelling, not for being specifically a *ground dwelling omnivore*. These independent effects of a variable are known as *main effects* and the effects of combinations of variables acting together are known as *interactions* — they describe how the variables *interact*.

Chapter aims

The aims of this chapter are[$^{[1]}$](fn1):

* Creating more complex Linear Models with multiple explanatory variables
* Including the effects of interactions between multiple variables in a linear model
* Plotting predictions from more complex (multiple explanatory variables) linear models

Formulae with interactions in R

We've already seen a number of different model formulae in R. They all use this syntax:

`response variable ~ explanatory variable(s)`

But we are now going to see two extra pieces of syntax:

* `y ~ a + b + a:b`: The `a:b` means the interaction between `a` and `b` — do combinations of these variables lead to different outcomes?
* `y ~ a * b`: This is shorthand for the model above.
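Concretely, `a * b` expands the model's design matrix into the two main-effect columns plus a product (interaction) column. A toy dummy-coded sketch (shown in Python purely for illustration, since the expansion logic is language-agnostic):

```python
# Dummy-coded levels for two two-level factors, one row per observation
a = [0, 0, 1, 1]
b = [0, 1, 0, 1]
# The a:b interaction column is the elementwise product of the two
interaction = [ai * bi for ai, bi in zip(a, b)]
print(interaction)  # non-zero only where both factors are at the second level
```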
This means fit `a` and `b` as main effects and their interaction `a:b`.

Model 1: Mammalian genome size

$\star$ Make sure you have changed the working directory to `Code` in your stats coursework directory.

$\star$ Create a new blank script called 'Interactions.R' and add some introductory comments.

$\star$ Load the data: | load('../data/mammals.Rdata') | _____no_output_____ | MIT | notebooks/17-MulExplInter.ipynb | mathemage/TheMulQuaBio |
If `mammals.Rdata` is missing, just import the data again using `read.csv`. You will then have to add the log C Value column to the imported data frame again.

Let's refit the model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), but including the interaction between trophic level and ground dwelling. We'll immediately check the model is appropriate: | model <- lm(logCvalue ~ TrophicLevel * GroundDwelling, data = mammals)
par(mfrow=c(2,2), mar=c(3,3,1,1), mgp=c(2, 0.8,0))
plot(model) | _____no_output_____ | MIT | notebooks/17-MulExplInter.ipynb | mathemage/TheMulQuaBio |
Now, examine the `anova` and `summary` outputs for the model: | anova(model) | _____no_output_____ | MIT | notebooks/17-MulExplInter.ipynb | mathemage/TheMulQuaBio |
Compared to the model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), there is an extra line at the bottom. The top two are the same and show that trophic level and ground dwelling both have independent main effects. The extra line shows that there is also an interaction between the two. It doesn't explain a huge amount of variation, about half as much as trophic level, but it is significant.

Again, we can calculate the $r^2$ for the model: $\frac{0.81 + 2.75 + 0.43}{0.81+2.75+0.43+12.77} = 0.238$. The model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb) without the interaction had an $r^2 = 0.212$ — our new model explains 2.6% more of the variation in the data.

The summary table is as follows: | summary(model)
The lines in this output are:

1. The reference level (intercept) for non ground dwelling carnivores. (The reference level is decided just by the alphabetic order of the levels.)
2. Two differences for being in different trophic levels.
3. One difference for being ground dwelling.
4. Two new differences that give specific differences for ground dwelling herbivores and omnivores.

The first four lines, as in the model from the [ANOVA chapter](15-anova.ipynb), would allow us to find the predicted values for each group *if the size of the differences did not vary between levels because of the interactions*. That is, this part of the model only includes a single difference between ground and non-ground species, which has to be the same for each trophic group because it ignores interactions between trophic level and ground / non-ground identity of each species. The last two lines then give the estimated coefficients associated with the interaction terms, and allow the size of differences to vary between levels because of the further effects of interactions.

The table below shows how these combine to give the predictions for each group combination, with those two new terms shown in red:

$\begin{array}{|r|r|r|}\hline & \textrm{Not ground} & \textrm{Ground} \\\hline\textrm{Carnivore} & 0.96 = 0.96 & 0.96+0.25=1.21 \\\textrm{Herbivore} & 0.96 + 0.05 = 1.01 & 0.96+0.05+0.25{\color{red}+0.03}=1.29\\\textrm{Omnivore} & 0.96 + 0.23 = 1.19 & 0.96+0.23+0.25{\color{red}-0.15}=1.29\\\hline\end{array}$

So why are there two new coefficients? For interactions between two factors, there are always $(n-1)\times(m-1)$ new coefficients, where $n$ and $m$ are the number of levels in the two factors (ground dwelling or not: 2 levels; trophic level: 3 levels, in our current example). So in this model, $(3-1) \times (2-1) = 2$. 
It is easier to understand why
It is easier to understand whygraphically: the prediction for the white boxes below can be found by adding the main effects together but for the grey boxes we need to find specific differences and so there are $(n-1)\times(m-1)$ interaction coefficients to add. Figure 2 If we put this together, what is the model telling us?* Herbivores have the same genome sizes as carnivores, but omnivores have larger genomes.* Ground dwelling mammals have larger genomes.These two findings suggest that ground dwelling omnivores should have extra big genomes. However, the interaction shows they are smaller than expected and are, in fact, similar to ground dwelling herbivores.Note that although the interaction term in the `anova` output is significant, neither of the two coefficients in the `summary` has a $p<0.05$. There are two weak differences (onevery weak, one nearly significant) that together explain significantvariance in the data.$\star$ Copy the code above into your script and run the model.Make sure you understand the output!Just to make sure the sums above are correct, we'll use the same code asin [the first multiple explanatory variables chapter](16-MulExpl.ipynb) to get R to calculate predictions for us, similar to the way we did [before](16-MulExpl.ipynb): | # a data frame of combinations of variables
gd <- rep(levels(mammals$GroundDwelling), times = 3)
print(gd)
tl <- rep(levels(mammals$TrophicLevel), each = 2)
print(tl)
# New data frame
predVals <- data.frame(GroundDwelling = gd, TrophicLevel = tl)
# predict using the new data frame
predVals$predict <- predict(model, newdata = predVals)
print(predVals) | GroundDwelling TrophicLevel predict
1 No Carnivore 0.9589465
2 Yes Carnivore 1.2138170
3 No Herbivore 1.0124594
4 Yes Herbivore 1.2976624
5 No Omnivore 1.1917603
6 Yes Omnivore 1.2990165
| MIT | notebooks/17-MulExplInter.ipynb | mathemage/TheMulQuaBio |
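As a quick arithmetic cross-check of the table above, the six group predictions can be rebuilt from the rounded coefficients in the summary (done here in Python; small rounding differences from R's `predict` are expected):

```python
# Rounded coefficients from the summary table above
intercept = 0.96               # non-ground-dwelling carnivore (reference)
herb, omni = 0.05, 0.23        # trophic-level main effects
ground = 0.25                  # ground-dwelling main effect
herb_gd, omni_gd = 0.03, -0.15 # interaction terms

preds = {
    ("Carnivore", "No"):  intercept,
    ("Carnivore", "Yes"): intercept + ground,
    ("Herbivore", "No"):  intercept + herb,
    ("Herbivore", "Yes"): intercept + herb + ground + herb_gd,
    ("Omnivore",  "No"):  intercept + omni,
    ("Omnivore",  "Yes"): intercept + omni + ground + omni_gd,
}
for group, value in preds.items():
    print(group, round(value, 2))
```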
$\star$ Include and run the code for gererating these predictions in your script.If we plot these data points onto the barplot from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), they now lie exactly on the mean values, because we've allowed for interactions. The triangle on this plot shows the predictions for ground dwelling omnivores from the main effects ($0.96 + 0.23 + 0.25 = 1.44$), the interaction of $-0.15$ pushes the prediction back down. Model 2 (ANCOVA): Body Weight in OdonataWe'll go all the way back to the regression analyses from the [Regression chapter](14-regress.ipynb). Remember that we fitted two separate regression lines to the data for damselflies and dragonflies. We'll now use an interaction to fit these in a single model. This kind of linear model — with a mixture of continuous variables and factors — is often called an *analysis of covariance*, or ANCOVA. That is, ANCOVA is a type of linear model that blends ANOVA and regression. ANCOVA evaluates whether population means of a dependent variable are equal across levels of a categorical independent variable, while statistically controlling for the effects of other continuous variables that are not of primary interest, known as covariates.*Thus, ANCOVA is a linear model with one categorical and one or more continuous predictors*.We will use the odonates data that we have worked with [before](12-ExpDesign.ipynb).$\star$ First load the data: | odonata <- read.csv('../data/GenomeSize.csv') | _____no_output_____ | MIT | notebooks/17-MulExplInter.ipynb | mathemage/TheMulQuaBio |
$\star$ Now create two new variables in the `odonata` data set called `logGS` and `logBW` containing log genome size and log body weight: | odonata$logGS <- log(odonata$GenomeSize)
odonata$logBW <- log(odonata$BodyWeight) | _____no_output_____ | MIT | notebooks/17-MulExplInter.ipynb | mathemage/TheMulQuaBio |
The models we fitted [before](12-ExpDesign.ipynb) looked like this: We can now fit the model of body weight as a function of both genome size and suborder: | odonModel <- lm(logBW ~ logGS * Suborder, data = odonata) | _____no_output_____ | MIT | notebooks/17-MulExplInter.ipynb | mathemage/TheMulQuaBio |
Again, we'll look at the anova table first: | anova(odonModel) | _____no_output_____ | MIT | notebooks/17-MulExplInter.ipynb | mathemage/TheMulQuaBio |
Interpreting this:

* There is no significant main effect of log genome size. The *main* effect is the important thing here — genome size is hugely important but does very different things for the two different suborders. If we ignored `Suborder`, there isn't an overall relationship: the average of those two lines is pretty much flat.
* There is a very strong main effect of Suborder: the mean body weights in the two groups are very different.
* There is a strong interaction between suborder and genome size. This is an interaction between a factor and a continuous variable and shows that the *slopes* are different for the different factor levels.

Now for the summary table: | summary(odonModel)
* The first thing to note is that the $r^2$ value is really high. The model explains three quarters (0.752) of the variation in the data.
* Next, there are four coefficients:
    * The intercept is for the first level of `Suborder`, which is Anisoptera (dragonflies).
    * The next line, for `log genome size`, is the slope for Anisoptera.
    * We then have a coefficient for the second level of `Suborder`, which is Zygoptera (damselflies). As with the first model, this difference in factor levels is a difference in mean values and shows the difference in the intercept for Zygoptera.
    * The last line is the interaction between `Suborder` and `logGS`. This shows how the slope for Zygoptera differs from the slope for Anisoptera.

How do these hang together to give the two lines shown in the model? We can calculate these by hand:

$\begin{aligned} \textrm{Body Weight} &= -2.40 + 1.01 \times \textrm{logGS} & \textrm{[Anisoptera]}\\ \textrm{Body Weight} &= (-2.40 -2.25) + (1.01 - 2.15) \times \textrm{logGS} & \textrm{[Zygoptera]}\\ &= -4.65 - 1.14 \times \textrm{logGS} \\\end{aligned}$

$\star$ Add the above code into your script and check that you understand the outputs.

We'll use the `predict` function again to get the predicted values from the model and add lines to the plot above. First, we'll create a set of numbers spanning the range of genome size: | #get the range of the data:
rng <- range(odonata$logGS)
#get a sequence from the min to the max with 100 equally spaced values:
LogGSForFitting <- seq(rng[1], rng[2], length = 100) | _____no_output_____ | MIT | notebooks/17-MulExplInter.ipynb | mathemage/TheMulQuaBio |
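The by-hand combination of the ANCOVA coefficients shown above can also be double-checked numerically (a Python sketch using the rounded coefficients):

```python
# Rounded coefficients from the ANCOVA summary above
intercept_aniso, slope_aniso = -2.40, 1.01
diff_intercept, diff_slope = -2.25, -2.15   # Zygoptera offsets

intercept_zygo = intercept_aniso + diff_intercept
slope_zygo = slope_aniso + diff_slope

def bw_aniso(log_gs):
    return intercept_aniso + slope_aniso * log_gs

def bw_zygo(log_gs):
    return intercept_zygo + slope_zygo * log_gs

print(round(intercept_zygo, 2), round(slope_zygo, 2))  # -4.65 -1.14
```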
Have a look at these numbers: | print(LogGSForFitting) | [1] -0.891598119 -0.873918728 -0.856239337 -0.838559945 -0.820880554
[6] -0.803201163 -0.785521772 -0.767842380 -0.750162989 -0.732483598
[11] -0.714804206 -0.697124815 -0.679445424 -0.661766032 -0.644086641
[16] -0.626407250 -0.608727859 -0.591048467 -0.573369076 -0.555689685
[21] -0.538010293 -0.520330902 -0.502651511 -0.484972119 -0.467292728
[26] -0.449613337 -0.431933946 -0.414254554 -0.396575163 -0.378895772
[31] -0.361216380 -0.343536989 -0.325857598 -0.308178207 -0.290498815
[36] -0.272819424 -0.255140033 -0.237460641 -0.219781250 -0.202101859
[41] -0.184422467 -0.166743076 -0.149063685 -0.131384294 -0.113704902
[46] -0.096025511 -0.078346120 -0.060666728 -0.042987337 -0.025307946
[51] -0.007628554 0.010050837 0.027730228 0.045409619 0.063089011
[56] 0.080768402 0.098447793 0.116127185 0.133806576 0.151485967
[61] 0.169165358 0.186844750 0.204524141 0.222203532 0.239882924
[66] 0.257562315 0.275241706 0.292921098 0.310600489 0.328279880
[71] 0.345959271 0.363638663 0.381318054 0.398997445 0.416676837
[76] 0.434356228 0.452035619 0.469715011 0.487394402 0.505073793
[81] 0.522753184 0.540432576 0.558111967 0.575791358 0.593470750
[86] 0.611150141 0.628829532 0.646508923 0.664188315 0.681867706
[91] 0.699547097 0.717226489 0.734905880 0.752585271 0.770264663
[96] 0.787944054 0.805623445 0.823302836 0.840982228 0.858661619
| MIT | notebooks/17-MulExplInter.ipynb | mathemage/TheMulQuaBio |
We can now use the model to predict the values of body weight at each of those points for each of the two suborders: | #get a data frame of new data for the order
ZygoVals <- data.frame(logGS = LogGSForFitting, Suborder = "Zygoptera")
#get the predictions and standard error
ZygoPred <- predict(odonModel, newdata = ZygoVals, se.fit = TRUE)
#repeat for anisoptera
AnisoVals <- data.frame(logGS = LogGSForFitting, Suborder = "Anisoptera")
AnisoPred <- predict(odonModel, newdata = AnisoVals, se.fit = TRUE) | _____no_output_____ | MIT | notebooks/17-MulExplInter.ipynb | mathemage/TheMulQuaBio |
We've added `se.fit=TRUE` to the function to get the standard error around the regression lines. Both `AnisoPred` and `ZygoPred` contain predicted values (called `fit`) and standard error values (called `se.fit`) for each of the generated values in `LogGSForFitting`, for each of the two suborders.

We can add the predictions onto a plot like this: | # plot the scatterplot of the data
plot(logBW ~ logGS, data = odonata, col = Suborder)
# add the predicted lines
lines(AnisoPred$fit ~ LogGSForFitting, col = "black")
lines(AnisoPred$fit + AnisoPred$se.fit ~ LogGSForFitting, col = "black", lty = 2)
lines(AnisoPred$fit - AnisoPred$se.fit ~ LogGSForFitting, col = "black", lty = 2) | _____no_output_____ | MIT | notebooks/17-MulExplInter.ipynb | mathemage/TheMulQuaBio |
Optimization Methods

Until now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result.

Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this: **Figure 1** : **Minimizing the cost is like finding the lowest point in a hilly landscape** At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point.

**Notations**: As usual, $\frac{\partial J}{\partial a } = $ `da` for any variable `a`.

To get started, run the following code to import the libraries you will need. | import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray' | _____no_output_____ | Apache-2.0 | Week2/Optimization+methods.ipynb | softwarebrahma/Deep-Learning-Specialization-Improving-Deep-Neural-Network-Hyperparam-TuneRegularizationOptimization |
1 - Gradient Descent

A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent.

**Warm-up exercise**: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{1}$$$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{2}$$ where $L$ is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding. | # GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
    """
    Update parameters using one step of gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters to be updated:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients to update each parameters:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    learning_rate -- the learning rate, scalar.

    Returns:
    parameters -- python dictionary containing your updated parameters
    """

    L = len(parameters) // 2  # number of layers in the neural networks

    # Update rule for each parameter
    for l in range(L):
        ### START CODE HERE ### (approx. 2 lines)
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - (learning_rate * grads["dW" + str(l+1)])
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - (learning_rate * grads["db" + str(l+1)])
        ### END CODE HERE ###

    return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"])) | W1 = [[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]]
b1 = [[ 1.74604067]
[-0.75184921]]
W2 = [[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]]
b2 = [[-0.88020257]
[ 0.02561572]
[ 0.57539477]]
| Apache-2.0 | Week2/Optimization+methods.ipynb | softwarebrahma/Deep-Learning-Specialization-Improving-Deep-Neural-Network-Hyperparam-TuneRegularizationOptimization |
**Expected Output**:

**W1** [[ 1.63535156 -0.62320365 -0.53718766] [-1.07799357 0.85639907 -2.29470142]]
**b1** [[ 1.74604067] [-0.75184921]]
**W2** [[ 0.32171798 -0.25467393 1.46902454] [-2.05617317 -0.31554548 -0.3756023 ] [ 1.1404819 -1.09976462 -0.1612551 ]]
**b2** [[-0.88020257] [ 0.02561572] [ 0.57539477]]

A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent.

- **(Batch) Gradient Descent**:

```python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    # Forward propagation
    a, caches = forward_propagation(X, parameters)
    # Compute cost.
    cost = compute_cost(a, Y)
    # Backward propagation.
    grads = backward_propagation(a, caches, parameters)
    # Update parameters.
    parameters = update_parameters(parameters, grads)
```

- **Stochastic Gradient Descent**:

```python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    for j in range(0, m):
        # Forward propagation
        a, caches = forward_propagation(X[:,j], parameters)
        # Compute cost
        cost = compute_cost(a, Y[:,j])
        # Backward propagation
        grads = backward_propagation(a, caches, parameters)
        # Update parameters.
        parameters = update_parameters(parameters, grads)
```

In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this:

**Figure 1** : **SGD vs GD** "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence.
But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). **Note** also that implementing SGD requires 3 for-loops in total:
1. Over the number of iterations
2. Over the $m$ training examples
3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)

In practice, you'll often get faster results if you use neither the whole training set nor only one training example to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples.

**Figure 2** : **SGD vs Mini-Batch GD** "+" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization.

**What you should remember**:
- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.
- You have to tune a learning rate hyperparameter $\alpha$.
- With a well-tuned mini-batch size, it usually outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large).

2 - Mini-Batch Gradient Descent

Let's learn how to build mini-batches from the training set (X, Y). There are two steps:
- **Shuffle**: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y, such that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches.
- **Partition**: Partition the shuffled (X, Y) into mini-batches of size `mini_batch_size` (here 64).
Note that the number of training examples is not always divisible by `mini_batch_size`. The last mini-batch might be smaller, but you don't need to worry about this.

**Exercise**: Implement `random_mini_batches`. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:

```python
first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
...
```

Note that the last mini-batch might end up smaller than `mini_batch_size=64`. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is `math.floor(s)` in Python). If the total number of examples is not a multiple of `mini_batch_size=64` then there will be $\lfloor \frac{m}{mini\_batch\_size}\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be $m - mini\_batch\_size \times \lfloor \frac{m}{mini\_batch\_size}\rfloor$. | # GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
"""
Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer
Returns:
mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
"""
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitioning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:, k * mini_batch_size : (k + 1) * mini_batch_size]
mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k + 1) * mini_batch_size].reshape((1, mini_batch_size))
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : m]
mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m].reshape((1, m - num_complete_minibatches * mini_batch_size))
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3])) | shape of the 1st mini_batch_X: (12288, 64)
shape of the 2nd mini_batch_X: (12288, 64)
shape of the 3rd mini_batch_X: (12288, 20)
shape of the 1st mini_batch_Y: (1, 64)
shape of the 2nd mini_batch_Y: (1, 64)
shape of the 3rd mini_batch_Y: (1, 20)
mini batch sanity check: [ 0.90085595 -0.7612069 0.2344157 ]
| Apache-2.0 | Week2/Optimization+methods.ipynb | softwarebrahma/Deep-Learning-Specialization-Improving-Deep-Neural-Network-Hyperparam-TuneRegularizationOptimization |
**Expected Output**:

**shape of the 1st mini_batch_X** (12288, 64)
**shape of the 2nd mini_batch_X** (12288, 64)
**shape of the 3rd mini_batch_X** (12288, 20)
**shape of the 1st mini_batch_Y** (1, 64)
**shape of the 2nd mini_batch_Y** (1, 64)
**shape of the 3rd mini_batch_Y** (1, 20)
**mini batch sanity check** [ 0.90085595 -0.7612069 0.2344157 ]

**What you should remember**:
- Shuffling and Partitioning are the two steps required to build mini-batches
- Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128.

3 - Momentum

Because mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations. Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill.

**Figure 3**: The red arrows show the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$.

**Exercise**: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the `grads` dictionary, that is, for $l = 1, ..., L$:

```python
v["dW" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["W" + str(l+1)]
v["db" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["b" + str(l+1)]
```

**Note** that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" on the superscript). This is why we are shifting l to l+1 in the `for` loop. | # GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
"""
Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
Returns:
v -- python dictionary containing the current velocity.
v['dW' + str(l)] = velocity of dWl
v['db' + str(l)] = velocity of dbl
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"])) | v["dW1"] = [[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] = [[ 0.]
[ 0.]]
v["dW2"] = [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] = [[ 0.]
[ 0.]
[ 0.]]
| Apache-2.0 | Week2/Optimization+methods.ipynb | softwarebrahma/Deep-Learning-Specialization-Improving-Deep-Neural-Network-Hyperparam-TuneRegularizationOptimization |
**Expected Output**:

**v["dW1"]** [[ 0. 0. 0.] [ 0. 0. 0.]]
**v["db1"]** [[ 0.] [ 0.]]
**v["dW2"]** [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]]
**v["db2"]** [[ 0.] [ 0.] [ 0.]]

**Exercise**: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$:

$$ \begin{cases}v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}}\end{cases}\tag{3}$$

$$\begin{cases}v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}} \end{cases}\tag{4}$$

where L is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a "one" on the superscript). So you will need to shift `l` to `l+1` when coding. | # GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
"""
Update parameters using Momentum
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' +
str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- python dictionary containing the current velocity:
v['dW' + str(l)] = ...
v['db' + str(l)] = ...
beta -- the momentum hyperparameter, scalar
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
v -- python dictionary containing your updated velocities
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l+1)] = beta * v["dW" + str(l+1)] + (1 - beta) * grads["dW" + str(l+1)]
v["db" + str(l+1)] = beta * v["db" + str(l+1)] + (1 - beta) * grads["db" + str(l+1)]
# update parameters
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - (learning_rate * v["dW" + str(l+1)])
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - (learning_rate * v["db" + str(l+1)])
### END CODE HERE ###
return parameters, v
parameters, grads, v = update_parameters_with_momentum_test_case()
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"])) | W1 = [[ 1.62544598 -0.61290114 -0.52907334]
[-1.07347112 0.86450677 -2.30085497]]
b1 = [[ 1.74493465]
[-0.76027113]]
W2 = [[ 0.31930698 -0.24990073 1.4627996 ]
[-2.05974396 -0.32173003 -0.38320915]
[ 1.13444069 -1.0998786 -0.1713109 ]]
b2 = [[-0.87809283]
[ 0.04055394]
[ 0.58207317]]
v["dW1"] = [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] = [[-0.01228902]
[-0.09357694]]
v["dW2"] = [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] = [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]
| Apache-2.0 | Week2/Optimization+methods.ipynb | softwarebrahma/Deep-Learning-Specialization-Improving-Deep-Neural-Network-Hyperparam-TuneRegularizationOptimization |
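The exponentially weighted average at the core of the momentum update just implemented can be observed in isolation. The `ewa` helper below is an illustrative sketch (not part of the assignment): it applies the same $v = \beta v + (1-\beta) g$ recursion to a noisy sequence and shows how much the fluctuations shrink.

```python
import numpy as np

def ewa(values, beta):
    """Exponentially weighted average v = beta*v + (1-beta)*g, no bias correction."""
    v, out = 0.0, []
    for g in values:
        v = beta * v + (1 - beta) * g
        out.append(v)
    return out

np.random.seed(0)
noisy = 1.0 + 0.5 * np.random.randn(200)   # noisy "gradients" fluctuating around 1.0
smooth = ewa(noisy, beta=0.9)
# The smoothed sequence fluctuates far less than the raw one (skip the warm-up steps)
print(float(np.std(noisy)), float(np.std(smooth[20:])))
```

With $\beta = 0.9$ the standard deviation drops by roughly a factor of four, which is the smoothing that damps the oscillations of mini-batch gradient descent.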
**Expected Output**:

**W1** [[ 1.62544598 -0.61290114 -0.52907334] [-1.07347112 0.86450677 -2.30085497]]
**b1** [[ 1.74493465] [-0.76027113]]
**W2** [[ 0.31930698 -0.24990073 1.4627996 ] [-2.05974396 -0.32173003 -0.38320915] [ 1.13444069 -1.0998786 -0.1713109 ]]
**b2** [[-0.87809283] [ 0.04055394] [ 0.58207317]]
**v["dW1"]** [[-0.11006192 0.11447237 0.09015907] [ 0.05024943 0.09008559 -0.06837279]]
**v["db1"]** [[-0.01228902] [-0.09357694]]
**v["dW2"]** [[-0.02678881 0.05303555 -0.06916608] [-0.03967535 -0.06871727 -0.08452056] [-0.06712461 -0.00126646 -0.11173103]]
**v["db2"]** [[ 0.02344157] [ 0.16598022] [ 0.07420442]]

**Note** that:
- The velocity is initialized with zeros. So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps.
- If $\beta = 0$, then this just becomes standard gradient descent without momentum.

**How do you choose $\beta$?**
- The larger the momentum $\beta$ is, the smoother the update, because the more we take the past gradients into account. But if $\beta$ is too big, it could also smooth out the updates too much.
- Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default.
- Tuning the optimal $\beta$ for your model might need trying several values to see what works best in terms of reducing the value of the cost function $J$.

**What you should remember**:
- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.
- You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$.

4 - Adam

Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum.

**How does Adam work?**
1.
It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction).
2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction).
3. It updates parameters in a direction based on combining information from "1" and "2".

The update rule is, for $l = 1, ..., L$:

$$\begin{cases}v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) (\frac{\partial \mathcal{J} }{\partial W^{[l]} })^2 \\s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}\end{cases}$$

where:
- $t$ counts the number of Adam steps taken
- $L$ is the number of layers
- $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages.
- $\alpha$ is the learning rate
- $\varepsilon$ is a very small number to avoid dividing by zero

As usual, we will store all parameters in the `parameters` dictionary.

**Exercise**: Initialize the Adam variables $v, s$ which keep track of the past information.

**Instruction**: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for `grads`, that is, for $l = 1, ..., L$:

```python
v["dW" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["W" + str(l+1)]
v["db" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["b" + str(l+1)]
s["dW" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["W" + str(l+1)]
s["db" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["b" + str(l+1)]
```
 | # GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
"""
Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl
Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
s["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
s["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
| v["dW1"] = [[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] = [[ 0.]
[ 0.]]
v["dW2"] = [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] = [[ 0.]
[ 0.]
[ 0.]]
s["dW1"] = [[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db1"] = [[ 0.]
[ 0.]]
s["dW2"] = [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db2"] = [[ 0.]
[ 0.]
[ 0.]]
| Apache-2.0 | Week2/Optimization+methods.ipynb | softwarebrahma/Deep-Learning-Specialization-Improving-Deep-Neural-Network-Hyperparam-TuneRegularizationOptimization |
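The update step that consumes these $v$ and $s$ dictionaries is not shown in this excerpt. As an illustrative sketch following the formulas above (not the graded solution; the function name, defaults $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\varepsilon = 10^{-8}$ and signature are assumptions), one Adam step could look like:

```python
import numpy as np

def update_parameters_with_adam(parameters, grads, v, s, t,
                                learning_rate=0.01, beta1=0.9, beta2=0.999, epsilon=1e-8):
    """One Adam step (illustrative sketch following the update rule above)."""
    L = len(parameters) // 2          # number of layers
    for l in range(L):
        for p in ("W", "b"):
            k = "d" + p + str(l + 1)
            # moving averages of the gradients and of their squares
            v[k] = beta1 * v[k] + (1 - beta1) * grads[k]
            s[k] = beta2 * s[k] + (1 - beta2) * np.square(grads[k])
            # bias-corrected estimates (t = number of Adam steps taken so far)
            v_corr = v[k] / (1 - beta1 ** t)
            s_corr = s[k] / (1 - beta2 ** t)
            # parameter update
            parameters[p + str(l + 1)] -= learning_rate * v_corr / (np.sqrt(s_corr) + epsilon)
    return parameters, v, s

# Tiny one-layer example: on the first step (t=1) the update is ~learning_rate * sign(grad)
params = {"W1": np.array([[1.0]]), "b1": np.array([[0.0]])}
grads = {"dW1": np.array([[0.5]]), "db1": np.array([[-0.5]])}
v = {"dW1": np.zeros((1, 1)), "db1": np.zeros((1, 1))}
s = {"dW1": np.zeros((1, 1)), "db1": np.zeros((1, 1))}
params, v, s = update_parameters_with_adam(params, grads, v, s, t=1)
print(params["W1"], params["b1"])
```

Note the per-parameter scaling by $\sqrt{s^{corrected}}$: on the very first step the bias corrections make the update magnitude roughly $\alpha$ regardless of the gradient's scale.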