For Regression Task: Load the dataset.
cal_housing = fetch_california_housing() print(cal_housing.DESCR) X = cal_housing.data y = cal_housing.target cal_features = cal_housing.feature_names df = pd.concat((pd.DataFrame(X, columns=cal_features), pd.DataFrame({'MedianHouseVal': y})), axis=1) df.head()
_____no_output_____
Apache-2.0
03-tabular/treeinterpreters.ipynb
munnm/XAI-for-practitioners
Visualizing a Decision Tree. You will need to install the `pydotplus` library.
#!pip install pydotplus import pydotplus # Create dataset X_train, X_test, y_train, y_test = train_test_split(df[cal_features], y, test_size=0.2) dt_reg = DecisionTreeRegressor(max_depth=3) dt_reg.fit(X_train, y_train) dot_data = export_graphviz(dt_reg, out_file="ca_housing.dot", feature_nam...
_____no_output_____
Apache-2.0
03-tabular/treeinterpreters.ipynb
munnm/XAI-for-practitioners
Make a sample prediction.
X_test[cal_features].iloc[[0]].transpose() dt_reg.predict(X_test[cal_features].iloc[[0]])
_____no_output_____
Apache-2.0
03-tabular/treeinterpreters.ipynb
munnm/XAI-for-practitioners
The root node is the mean of the labels from the training data.
y_train.mean()
_____no_output_____
Apache-2.0
03-tabular/treeinterpreters.ipynb
munnm/XAI-for-practitioners
Train a simple Random Forest
rf_reg = RandomForestRegressor() rf_reg.fit(X_train, y_train) print(f'Instance 11 prediction: {rf_reg.predict(X_test.iloc[[11]])}') print(f'Instance 17 prediction: {rf_reg.predict(X_test.iloc[[17]])}') idx = 11 from treeinterpreter import treeinterpreter prediction, bias, contributions = treeinterpreter.predict(rf_reg,...
prediction: [4.8203671] bias + contributions: [4.8203671]
Apache-2.0
03-tabular/treeinterpreters.ipynb
munnm/XAI-for-practitioners
In fact, we can check that this holds for all elements of the test set:
predictions, biases, contributions = treeinterpreter.predict( rf_reg, X_test.values) assert(np.allclose(np.squeeze(predictions), biases + np.sum(contributions, axis=1))) assert(np.allclose(rf_reg.predict(X_test), biases + np.sum(contributions, axis=1)))
_____no_output_____
Apache-2.0
03-tabular/treeinterpreters.ipynb
munnm/XAI-for-practitioners
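The additive decomposition that treeinterpreter verifies above can be seen on a toy example. This is a hand-made sketch, not the library internals: for a one-split regression stump, the root-node mean plays the role of the bias and each leaf's deviation from it is the contribution of the split feature — the idea treeinterpreter generalizes to deep trees and forests.

```python
# Toy one-split regression stump (hypothetical data, not the housing set).
y = [1.0, 2.0, 3.0, 7.0, 8.0, 9.0]   # training targets
x = [0, 0, 0, 1, 1, 1]               # a single binary feature

bias = sum(y) / len(y)               # root-node mean (5.0)
left = [yi for xi, yi in zip(x, y) if xi == 0]
contribution = sum(left) / len(left) - bias  # leaf mean minus root mean
prediction = bias + contribution     # equals the leaf mean for x == 0
```

With these numbers the prediction for `x == 0` decomposes as 5.0 (bias) plus -3.0 (feature contribution), matching the leaf mean of 2.0.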
Comparing Contributions across data slices
X1_test = X_test[:X_test.shape[0]//2] X2_test = X_test[X_test.shape[0]//2:] predictions1, biases1, contributions1 = treeinterpreter.predict(rf_reg, X1_test.values) predictions2, biases2, contributions2 = treeinterpreter.predict(rf_reg, X2_test.values) total_contribs1 = np.mean(contributions1, axis=0) total_contribs2 = np.mean(contributions2, ...
_____no_output_____
Apache-2.0
03-tabular/treeinterpreters.ipynb
munnm/XAI-for-practitioners
TreeExplainer with SHAP
from sklearn.model_selection import train_test_split import xgboost as xgb import shap # print the JS visualization code to the notebook shap.initjs() xgb_reg = xgb.XGBClassifier(max_depth=3, n_estimators=300, learning_rate=0.05) xgb_reg.fit...
_____no_output_____
Apache-2.0
03-tabular/treeinterpreters.ipynb
munnm/XAI-for-practitioners
Bayesian Ridge Regression, Part 2: Multiple Features
import numpy as np import matplotlib.pyplot as plt import pandas as pd import warnings warnings.filterwarnings("ignore") # yahoo finance is used to fetch data import yfinance as yf yf.pdr_override() # input symbol = 'AMD' start = '2014-01-01' end = '2018-08-27' # Read data dataset = yf.download(symbol,start,end) ...
Bayesian Ridge Regression Score: 0.9996452147933678
MIT
Stock_Algorithms/Bayesian_Ridge_Regression_Part2.ipynb
clairvoyant/Deep-Learning-Machine-Learning-Stock
Application of Linear Algebra in Data Science Here is the Python code to calculate and plot the MSE
import matplotlib.pyplot as plt x = list(range(1,6)) #data points y = [1,1,2,2,4] #original values y_bar = [0.6,1.29,1.99,2.69,3.4] #predicted values summation = 0 n = len(y) for i in range(0, n): # finding the difference between observed and predicted value difference = y[i] - y_bar[i] squared_dif...
_____no_output_____
Apache-2.0
Linear_Algebra_in_Research.ipynb
adriangalarion/Lab-Activities-1.1
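The loop above can be written more compactly with NumPy. This is a vectorized sketch of the same MSE calculation, reusing the notebook's five data points:

```python
import numpy as np

y = np.array([1, 1, 2, 2, 4], dtype=float)        # observed values
y_bar = np.array([0.6, 1.29, 1.99, 2.69, 3.4])    # predicted values
mse = np.mean((y - y_bar) ** 2)                   # mean squared error
```

The elementwise difference and square replace the explicit `for` loop, and `np.mean` performs the summation and division in one step.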
Exercises: Electric Machinery Fundamentals, Chapter 2, Problem 2-15
%pylab notebook
Populating the interactive namespace from numpy and matplotlib
Unlicense
Chapman/Ch2-Problem_2-15.ipynb
dietmarw/EK5312
Description An autotransformer is used to connect a 12.6-kV distribution line to a 13.8-kV distribution line. It must be capable of handling 2000 kVA. There are three phases, connected Y-Y with their neutrals solidly grounded.
Vl = 12.6e3 # [V] Vh = 13.8e3 # [V] Sio = 2000e3 # [VA]
_____no_output_____
Unlicense
Chapman/Ch2-Problem_2-15.ipynb
dietmarw/EK5312
(a) * What must the $N_C / N _{SE}$ turns ratio be to accomplish this connection? (b) * How much apparent power must the windings of each autotransformer handle? (c) * What is the power advantage of this autotransformer system? (d) * If one of the autotransformers were reconnected as an ordinary transformer, what...
a = (Vh/sqrt(3)) / (Vl/sqrt(3)) n_a = 1 / (a-1) # n_a = Nc/Nse print(''' Nc/Nse = {:.1f} ============= '''.format(n_a))
Nc/Nse = 10.5 =============
Unlicense
Chapman/Ch2-Problem_2-15.ipynb
dietmarw/EK5312
(b) The power advantage of this autotransformer is: $$\frac{S_{IO}}{S_W} = \frac{N_C + N_{SE}}{N_{SE}}$$
n_b = n_a + 1 # n_b = Sio/Sw = (Nc + Nse)/Nse print('Sio/Sw = {:.1f}'.format(n_b))
Sio/Sw = 11.5
Unlicense
Chapman/Ch2-Problem_2-15.ipynb
dietmarw/EK5312
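The turns ratio and power advantage can be restated as one self-contained calculation. A minimal sketch using the problem's values (the `sqrt(3)` factors cancel in the per-phase ratio, but are kept for clarity):

```python
from math import sqrt

Vl, Vh = 12.6e3, 13.8e3              # line voltages [V]
a = (Vh / sqrt(3)) / (Vl / sqrt(3))  # per-phase voltage (turns) ratio
n_a = 1 / (a - 1)                    # Nc/Nse
power_advantage = n_a + 1            # Sio/Sw = (Nc + Nse)/Nse
```

Because the two windings see only the 1.2 kV difference between the lines, the turns ratio Nc/Nse comes out to 10.5 and the power advantage to 11.5.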
Since 1/3 of the total power is associated with each phase, **the windings in each autotransformer must handle:**
Sw = Sio / (3*n_b) print(''' Sw = {:.1f} kVA ============== '''.format(Sw/1000))
Sw = 58.0 kVA ==============
Unlicense
Chapman/Ch2-Problem_2-15.ipynb
dietmarw/EK5312
(c) As determined in (b), the power advantage of this autotransformer system is:
print(''' Sio/Sw = {:.1f} ============= '''.format(n_b))
Sio/Sw = 11.5 =============
Unlicense
Chapman/Ch2-Problem_2-15.ipynb
dietmarw/EK5312
(d) The voltages across each phase of the autotransformer are:
Vh_p = Vh / sqrt(3) Vl_p = Vl / sqrt(3) print(''' Vh_p = {:.0f} V Vl_p = {:.0f} V '''.format(Vh_p, Vl_p))
Vh_p = 7967 V Vl_p = 7275 V
Unlicense
Chapman/Ch2-Problem_2-15.ipynb
dietmarw/EK5312
The voltage across the common winding ( $N_C$ ) is:
Vnc = Vl_p print('Vnc = {:.0f} V'.format(Vnc))
Vnc = 7275 V
Unlicense
Chapman/Ch2-Problem_2-15.ipynb
dietmarw/EK5312
and the voltage across the series winding ( $N_{SE}$ ) is:
Vnse = Vh_p - Vl_p print('Vnse = {:.0f} V'.format(Vnse))
Vnse = 693 V
Unlicense
Chapman/Ch2-Problem_2-15.ipynb
dietmarw/EK5312
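Gathering the phase and winding voltages into one self-contained sketch (same numbers as the cells above):

```python
from math import sqrt

Vh_p = 13.8e3 / sqrt(3)   # high-side phase voltage [V]
Vl_p = 12.6e3 / sqrt(3)   # low-side phase voltage [V]
Vnc = Vl_p                # common-winding (Nc) voltage
Vnse = Vh_p - Vl_p        # series-winding (Nse) voltage
```

The series winding only has to stand off the difference of the two phase voltages, about 693 V, which is why the autotransformer connection is so much smaller than a conventional 2000 kVA transformer.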
Therefore, a single phase of the autotransformer connected as an ordinary transformer would be rated at:
print(''' Vnc/Vnse = {:.0f}/{:.0f} Sw = {:.1f} kVA =================== ============= '''.format(Vnc, Vnse, Sw/1000))
Vnc/Vnse = 7275/693 Sw = 58.0 kVA =================== =============
Unlicense
Chapman/Ch2-Problem_2-15.ipynb
dietmarw/EK5312
Analyze A/B Test ResultsYou may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way assure that your code passes the project [RUBRIC](https://review.udacity.com/!/projects/37e27304-ad47-4eb0-a1ab-8c12f60e43d0/rubric). **Please s...
import pandas as pd import numpy as np import random import matplotlib.pyplot as plt %matplotlib inline #We are setting the seed to assure you get the same answers on quizzes as we set up random.seed(42)
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
`1.` Now, read in the `ab_data.csv` data. Store it in `df`. **Use your dataframe to answer the questions in Quiz 1 of the classroom.**a. Read in the dataset and take a look at the top few rows here:
df = pd.read_csv('ab_data.csv') df.head()
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
b. Use the cell below to find the number of rows in the dataset.
df.shape[0]
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
c. The number of unique users in the dataset.
df.nunique()[0]
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
d. The proportion of users converted.
df['converted'].sum() / df.shape[0]
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
e. The number of times the `new_page` and `treatment` don't match.
df[((df['group'] == 'treatment') & (df['landing_page'] != 'new_page')) | ((df['group'] != 'treatment') & (df['landing_page'] == 'new_page'))].shape[0]
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
f. Do any of the rows have missing values?
df.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 294478 entries, 0 to 294477 Data columns (total 5 columns): user_id 294478 non-null int64 timestamp 294478 non-null object group 294478 non-null object landing_page 294478 non-null object converted 294478 non-null int64 dtypes: int64(2),...
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
`2.` For the rows where **treatment** does not match with **new_page** or **control** does not match with **old_page**, we cannot be sure if this row truly received the new or old page. Use **Quiz 2** in the classroom to figure out how we should handle these rows. a. Now use the answer to the quiz to create a new dat...
df2 = df[(((df['group'] == 'treatment') & (df['landing_page'] == 'new_page')) | ((df['group'] == 'control') & (df['landing_page'] == 'old_page')))] df2.head() # Double Check all of the correct rows were removed - this should be 0 df2[((df2['group'] == 'treatment') == (df2['landing_page'] == 'new_page')) == False].shape...
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
`3.` Use **df2** and the cells below to answer questions for **Quiz3** in the classroom. a. How many unique **user_id**s are in **df2**?
df2.nunique()[0]
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
b. There is one **user_id** repeated in **df2**. What is it?
uid = df2[df2['user_id'].duplicated()].index[0] # index of the duplicate row uid
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
c. What is the row information for the repeat **user_id**?
df2.loc[uid]
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
d. Remove **one** of the rows with a duplicate **user_id**, but keep your dataframe as **df2**.
df2.drop(2893, inplace=True) df2.shape[0]
/opt/conda/lib/python3.6/site-packages/pandas/core/frame.py:3697: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy errors=errors)
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
`4.` Use **df2** in the cells below to answer the quiz questions related to **Quiz 4** in the classroom.a. What is the probability of an individual converting regardless of the page they receive?
df2[df2['converted'] == 1].shape[0] / df2.shape[0]
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
b. Given that an individual was in the `control` group, what is the probability they converted?
df2[(df2['converted'] == 1) & ((df2['group'] == 'control'))].shape[0] / df2[(df2['group'] == 'control')].shape[0]
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
c. Given that an individual was in the `treatment` group, what is the probability they converted?
df2[(df2['converted'] == 1) & ((df2['group'] == 'treatment'))].shape[0] / df2[(df2['group'] == 'treatment')].shape[0]
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
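The conditional probabilities in (b) and (c) are just group-wise means. A toy sketch with made-up `(group, converted)` rows shows the computation the notebook performs on `df2`:

```python
# Hypothetical rows, not the ab_data.csv values.
rows = [('control', 0), ('control', 1), ('control', 0),
        ('treatment', 1), ('treatment', 1), ('treatment', 0)]

def conv_rate(rows, group):
    """P(converted | group): mean of the 0/1 outcomes within a group."""
    outcomes = [c for g, c in rows if g == group]
    return sum(outcomes) / len(outcomes)
```

Here `conv_rate(rows, 'control')` is 1/3 and `conv_rate(rows, 'treatment')` is 2/3, mirroring the filtered-`shape[0]` ratios used above.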
d. What is the probability that an individual received the new page?
df2[df2['landing_page'] == 'new_page'].shape[0] / df2.shape[0]
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
e. Consider your results from parts (a) through (d) above, and explain below whether you think there is sufficient evidence to conclude that the new treatment page leads to more conversions. **The probability of converting for an individual who received the control page is more than that who received the treatment page...
df2.head()
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
a. What is the **conversion rate** for $p_{new}$ under the null?
p_new = df2[(df2['converted'] == 1)].shape[0] / df2.shape[0] p_new
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
b. What is the **conversion rate** for $p_{old}$ under the null?
p_old = df2[(df2['converted'] == 1)].shape[0] / df2.shape[0] p_old
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
c. What is $n_{new}$, the number of individuals in the treatment group?
n_new = df2[(df2['landing_page'] == 'new_page') & (df2['group'] == 'treatment')].shape[0] n_new
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
d. What is $n_{old}$, the number of individuals in the control group?
n_old = df2[(df2['landing_page'] == 'old_page') & (df2['group'] == 'control')].shape[0] n_old
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
e. Simulate $n_{new}$ transactions with a conversion rate of $p_{new}$ under the null. Store these $n_{new}$ 1's and 0's in **new_page_converted**.
new_page_converted = np.random.choice([1,0],n_new, p=(p_new,1-p_new)) new_page_converted.mean()
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
f. Simulate $n_{old}$ transactions with a conversion rate of $p_{old}$ under the null. Store these $n_{old}$ 1's and 0's in **old_page_converted**.
old_page_converted = np.random.choice([1,0],n_old, p=(p_old,1-p_old)) old_page_converted.mean()
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
g. Find $p_{new}$ - $p_{old}$ for your simulated values from part (e) and (f).
# p_new - p_old new_page_converted.mean() - old_page_converted.mean()
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
h. Create 10,000 $p_{new}$ - $p_{old}$ values using the same simulation process you used in parts (a) through (g) above. Store all 10,000 values in a NumPy array called **p_diffs**.
p_diffs = [] for _ in range(10000): new_page_converted = np.random.choice([0, 1], size = n_new, p = [1-p_new, p_new], replace = True).sum() old_page_converted = np.random.choice([0, 1], size = n_old, p = [1-p_old, p_old], replace = True).sum() diff = new_page_converted/n_new - old_page_converted/n_old ...
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
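The 10,000-iteration loop above can also be vectorized: one binomial draw per replicate gives the whole sampling distribution at once. A sketch with illustrative sizes (the notebook's real `n_new`, `n_old`, and null rate come from `df2`):

```python
import numpy as np

rng = np.random.default_rng(42)
p_null, n_new, n_old, reps = 0.12, 1000, 1000, 2000  # assumed values
new_rates = rng.binomial(n_new, p_null, size=reps) / n_new
old_rates = rng.binomial(n_old, p_null, size=reps) / n_old
p_diffs = new_rates - old_rates   # centred near 0 under the null
```

Each `rng.binomial` call replaces an entire inner `np.random.choice(...).sum()` loop, so the simulation runs in a few milliseconds.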
i. Plot a histogram of the **p_diffs**. Does this plot look like what you expected? Use the matching problem in the classroom to assure you fully understand what was computed here.
plt.hist(p_diffs); plt.plot();
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
j. What proportion of the **p_diffs** are greater than the actual difference observed in **ab_data.csv**?
p_diffs = np.array(p_diffs) obs_diff = df2[df2['group'] == 'treatment']['converted'].mean() - df2[df2['group'] == 'control']['converted'].mean() # p-value: share of null diffs at least as large as the observed diff (p_diffs > obs_diff).mean()
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
k. Please explain using the vocabulary you've learned in this course what you just computed in part **j.** What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages? **Difference is not significant** l. We could also use a...
import statsmodels.api as sm convert_old = df2[(df2['landing_page'] == 'old_page') & (df2['group'] == 'control')] convert_new = df2[(df2['landing_page'] == 'new_page') & (df2['group'] == 'treatment')] n_old = convert_old.shape[0] n_new = convert_new.shape[0] n_old, n_new # df2.head()
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
m. Now use `stats.proportions_ztest` to compute your test statistic and p-value. [Here](http://knowledgetack.com/python/statsmodels/proportions_ztest/) is a helpful link on using the built in.
from statsmodels.stats.proportion import proportions_ztest count_old = df2[df2['group'] == 'control']['converted'].sum() count_new = df2[df2['group'] == 'treatment']['converted'].sum() stat, pval = proportions_ztest([count_new, count_old], [n_new, n_old], alternative='larger') stat, pval
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
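Under the hood, the two-proportion z test pools the conversion rates to estimate a common standard error. A hand-rolled sketch of that statistic (the counts and group sizes here are illustrative, not the notebook's):

```python
from math import sqrt

def two_prop_z(count1, n1, count2, n2):
    """Pooled two-proportion z statistic, the quantity a z test computes."""
    p1, p2 = count1 / n1, count2 / n2
    p_pool = (count1 + count2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(120, 1000, 100, 1000)  # hypothetical counts
```

With these assumed counts the statistic is about 1.43, well inside the usual +/-1.96 two-sided bounds, so such a difference would not be significant at the 5% level.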
n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts **j.** and **k.**? **p val = 0****No** Part III - A regression approach`1.` In this final part, you will see that the result you achieved in the A/B...
df2.head()
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
b. The goal is to use **statsmodels** to fit the regression model you specified in part **a.** to see if there is a significant difference in conversion based on which page a customer receives. However, you first need to create in df2 a column for the intercept, and create a dummy variable column for which page each us...
import statsmodels.api as sm df2[['control','ab_page']] = pd.get_dummies(df2['group']) df2.drop(['control','group'],axis=1, inplace=True) df2.head()
/opt/conda/lib/python3.6/site-packages/pandas/core/frame.py:3140: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-...
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
c. Use **statsmodels** to instantiate your regression model on the two columns you created in part b., then fit the model using the two columns you created in part **b.** to predict whether or not an individual converts.
df2['intercept'] = 1 logit_mod = sm.Logit(df2['converted'], df2[['intercept','ab_page']]) results = logit_mod.fit() np.exp(-0.0150) # odds ratio implied by the ab_page coefficient 1/np.exp(-0.0150) # reciprocal: odds advantage of the control page
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
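A logit coefficient is interpreted through its exponential. A small sketch, assuming the `ab_page` coefficient of -0.0150 read off the fitted summary:

```python
import numpy as np

coef = -0.0150                 # assumed ab_page coefficient from the fit
odds_ratio = np.exp(coef)      # multiplicative change in odds when ab_page = 1
inverse = 1 / odds_ratio       # odds advantage of the control page
```

An odds ratio of about 0.985 means the treatment page multiplies the conversion odds by roughly 0.985, i.e. the control page's odds are about 1.5% higher.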
d. Provide the summary of your model below, and use it as necessary to answer the following questions.
results.summary()
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
e. What is the p-value associated with **ab_page**? Why does it differ from the value you found in **Part II**? **Hint**: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in **Part II**? **P value = 0.190** f. Now, you ar...
df_countries = pd.read_csv('countries.csv') df_countries.head() df_merged = pd.merge(df2,df_countries, left_on='user_id', right_on='user_id') df_merged.head() df_merged[['US','UK','CA']] = pd.get_dummies(df_merged['country']) df_merged.drop(['country','CA'],axis=1, inplace=True) df_merged.head() df_merged['intercept']...
Optimization terminated successfully. Current function value: 0.366116 Iterations 6
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
**US has a negative coefficient, which means the conversion rate decreases if a person is from the US.** **UK has a positive coefficient, which means the conversion rate increases if a person is from the UK.** h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an intera...
final_df = df_merged[['user_id','timestamp','landing_page','converted','ab_page','US','UK']] final_df.head() final_df['intercept'] = 1 logit_mod = sm.Logit(final_df['ab_page'], final_df[['intercept','US','UK']]) results = logit_mod.fit() results.summary()
Optimization terminated successfully. Current function value: 0.760413 Iterations 3
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
**The `ab_page` column is 1 when an individual receives the treatment page and 0 for control.** **US has a positive coefficient, which means the chance of getting the treatment page increases.** **UK has a negative coefficient, which means the chance of getting the control page increases.** Finishing Up> Congratulations! You have reached the...
from subprocess import call call(['python', '-m', 'nbconvert', 'Analyze_ab_test_results_notebook.ipynb'])
_____no_output_____
MIT
Project 3 - Analyze AB Test Results/Analyze_ab_test_results_notebook.ipynb
geochri/Udacity-DAND
Eurocode 8 - Chapter 3 - seismic_action raw functions
from streng.codes.eurocodes.ec8.raw.ch3.seismic_action import spectra
_____no_output_____
MIT
codes/eurocodes/ec8/raw_ch3_seismic_action.ipynb
panagop/streng_jupyters
spectra αg
print(spectra.αg.__doc__) αg = spectra.αg(αgR=0.24, γI=1.20) print(f'αg = {αg}g')
αg = 0.288g
MIT
codes/eurocodes/ec8/raw_ch3_seismic_action.ipynb
panagop/streng_jupyters
S
print(spectra.S.__doc__) S = spectra.S(ground_type='B', spectrum_type=1) print(f'S = {S}')
S = 1.2
MIT
codes/eurocodes/ec8/raw_ch3_seismic_action.ipynb
panagop/streng_jupyters
TB
print(spectra.TB.__doc__) TB = spectra.TB(ground_type='B', spectrum_type=1) print(f'TB = {TB}')
TB = 0.15
MIT
codes/eurocodes/ec8/raw_ch3_seismic_action.ipynb
panagop/streng_jupyters
TC
print(spectra.TC.__doc__) TC = spectra.TC(ground_type='B', spectrum_type=1) print(f'TC = {TC}')
TC = 0.5
MIT
codes/eurocodes/ec8/raw_ch3_seismic_action.ipynb
panagop/streng_jupyters
TD
print(spectra.TD.__doc__) TD = spectra.TD(ground_type='B', spectrum_type=1) print(f'TD = {TD}')
TD = 2.0
MIT
codes/eurocodes/ec8/raw_ch3_seismic_action.ipynb
panagop/streng_jupyters
Se
print(spectra.Se.__doc__) Se = spectra.Se(T=0.50, αg = 0.24, S=1.20, TB=0.15, TC=0.50, TD=2.0, η=1.0) print(f'Se = {Se}g')
Se = 0.72g
MIT
codes/eurocodes/ec8/raw_ch3_seismic_action.ipynb
panagop/streng_jupyters
SDe
print(spectra.SDe.__doc__) Sde = spectra.SDe(T=0.5, Se=0.72*9.81) print(f'Sde = {Sde:.3f}m')
Sde = 0.045m
MIT
codes/eurocodes/ec8/raw_ch3_seismic_action.ipynb
panagop/streng_jupyters
dg
print(spectra.dg.__doc__) dg = spectra.dg(αg=0.24, S=1.20, TC=0.50, TD=2.0) print(f'dg = {dg:.4f}g')
dg = 0.0072g
MIT
codes/eurocodes/ec8/raw_ch3_seismic_action.ipynb
panagop/streng_jupyters
Sd
print(spectra.Sd.__doc__) Sd = spectra.Sd(T=0.50, αg = 0.24, S=1.20, TB=0.15, TC=0.50, TD=2.0, q=3.9, β=0.20) print(f'Sd = {Sd:.3f}g')
Sd = 0.185g
MIT
codes/eurocodes/ec8/raw_ch3_seismic_action.ipynb
panagop/streng_jupyters
η
print(spectra.η.__doc__) η_5 = spectra.η(5) print(f'η(5%) = {η_5:.2f}') η_7 = spectra.η(7) print(f'η(7%) = {η_7:.2f}')
η(5%) = 1.00 η(7%) = 0.91
MIT
codes/eurocodes/ec8/raw_ch3_seismic_action.ipynb
panagop/streng_jupyters
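The cells above exercise the library one function at a time. As a cross-check, here is a minimal reimplementation of the standard EC8 expressions these functions evaluate; this is a sketch of the code formulas (EC8 §3.2.2.2-3.2.2.4), not the `streng` source itself:

```python
from math import sqrt, pi

def eta(xi):
    """Damping correction: η = sqrt(10/(5+ξ)) >= 0.55 (EC8 eq. 3.6)."""
    return max(sqrt(10 / (5 + xi)), 0.55)

def Se(T, ag, S, TB, TC, TD, eta=1.0):
    """Horizontal elastic response spectrum, in g (EC8 eqs. 3.2-3.5)."""
    if T <= TB:
        return ag * S * (1 + T / TB * (eta * 2.5 - 1))
    if T <= TC:
        return ag * S * eta * 2.5          # constant-acceleration plateau
    if T <= TD:
        return ag * S * eta * 2.5 * TC / T
    return ag * S * eta * 2.5 * TC * TD / T ** 2

def SDe(T, Se_ms2):
    """Elastic displacement spectrum: SDe = Se (T/2π)², Se in m/s²."""
    return Se_ms2 * (T / (2 * pi)) ** 2

def dg(ag, S, TC, TD):
    """Design ground displacement: dg = 0.025 ag S TC TD (EC8 eq. 3.12)."""
    return 0.025 * ag * S * TC * TD
```

Plugging in the same inputs as the cells above reproduces their outputs: `Se(0.50, 0.24, 1.20, 0.15, 0.50, 2.0)` gives 0.72 g, `SDe(0.5, 0.72*9.81)` gives about 0.045 m, `dg(0.24, 1.20, 0.50, 2.0)` gives 0.0072 g, and `eta(7)` gives about 0.91.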
Standalone Convergence Checker for the numerical vKdV solver. Copied from the Standalone Convergence Checker for the numerical KdV solver, with bathymetry added. Does not save or require any input data.
import xarray as xr from iwaves.kdv.kdvimex import KdVImEx#from_netcdf from iwaves.kdv.vkdv import vKdV from iwaves.kdv.solve import solve_kdv #from iwaves.utils.plot import vKdV_plot import iwaves.utils.initial_conditions as ics import numpy as np from scipy.interpolate import PchipInterpolator as pchip import matpl...
_____no_output_____
BSD-2-Clause
sandpit/standalone_vkdv_convergence.ipynb
mrayson/iwaves
Random Forest Classification. The fundamental idea behind a random forest is to combine many decision trees into a single model. Individually, predictions made by decision trees (or humans) may not be accurate, but combined, the predictions will be closer to the mark on average. Pros - can hand...
### imports ### import pandas as pd import numpy as np import sklearn df = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/CS_Notes/main/Classification_Notes/bill_authentication.csv') # read in the file print('data frame shape:', df.shape) # show the data frame shape df.head() # show the data frame ### ins...
--- bar plots ---
MIT
Classification_Notes/SKlearn_RandomForest_Classification.ipynb
CVanchieri/CS_Notes
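The "closer to the mark on average" claim is just variance reduction by averaging. A toy sketch (pure Python, hypothetical noise model rather than real trees) comparing single noisy predictions against 50-way averages:

```python
import random

random.seed(0)
truth = 1.0
# 200 lone noisy predictors (stand-ins for individual trees)
single = [truth + random.gauss(0, 1) for _ in range(200)]
# 200 "forests", each averaging 50 independent noisy predictors
ensembles = [sum(truth + random.gauss(0, 1) for _ in range(50)) / 50
             for _ in range(200)]

def mse(preds):
    return sum((p - truth) ** 2 for p in preds) / len(preds)
```

Averaging 50 independent predictors divides the error variance by roughly 50, so `mse(ensembles)` comes out far below `mse(single)`. Real trees are correlated, which is why random forests also inject randomness via bootstrapping and feature subsampling.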
Encode + Clean + Organize
### encoding not necessary with this example, all are numericals ### ### check for outliers in the data ### import matplotlib.pyplot as plt # view each feature in a boxplot for column in df: plt.figure() # plot figure f, ax = plt.subplots(1, 1, figsize = (10, 7)) df.boxplot([column]) # set data ### funct...
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy after removing the cwd ...
MIT
Classification_Notes/SKlearn_RandomForest_Classification.ipynb
CVanchieri/CS_Notes
Random Forest Classification - GridSearch CV - RandomSearch CV
### copy the data frame ### df1 = df.copy() ### split the data into features & target sets ### X = df1.iloc[:, 0:4].values # set the features y = df1.iloc[:, 4].values # set the target print('--- data shapes --- ') print('X shape:', X.shape) print('y shape:', y.shape) ### set the train test split parameters ### fro...
--- distplot accuracy ---
MIT
Classification_Notes/SKlearn_RandomForest_Classification.ipynb
CVanchieri/CS_Notes
GridSearch CV
### copy the data frame ### df2 = df.copy() ### split the data into features & target sets ### # for single regression select 1 feature X = df2.iloc[:, 0:4].values # set the features y = df2.iloc[:, 4].values # set the target print('--- data shapes --- ') print('X shape:', X.shape) print('y shape:', y.shape) ### set...
--- distplot accuracy ---
MIT
Classification_Notes/SKlearn_RandomForest_Classification.ipynb
CVanchieri/CS_Notes
RandomSearch CV
### copy the data frame ### df3 = df.copy() ### split the data into features & target sets ### # for single regression select the 1 feature X = df3.iloc[:, 0:4].values # set the features y = df3.iloc[:, 4].values # set the target print('--- data shapes --- ') print('X shape:', X.shape) # show the shape print('y shape...
--- distplot accuracy ---
MIT
Classification_Notes/SKlearn_RandomForest_Classification.ipynb
CVanchieri/CS_Notes
.init setup keras-retinanet
!git clone https://github.com/fizyr/keras-retinanet.git %cd keras-retinanet/ !pip install . !python setup.py build_ext --inplace
Cloning into 'keras-retinanet'... remote: Enumerating objects: 4712, done. remote: Total 4712 (delta 0), reused 0 (delta 0), pack-reused 4712 Receiving objects: 100% (4712/4712), 14.43 MiB | 36.84 MiB/s, done. Resolving deltas: 100% (3128/3128), done. /content/keras-retinanet Processing /content/keras-retinanet R...
Apache-2.0
RetinaNet_Video_Object_Detection.ipynb
thingumajig/colab-experiments
download model
#!curl -LJO --output snapshots/pretrained.h5 https://github.com/fizyr/keras-retinanet/releases/download/0.5.0/resnet50_coco_best_v2.1.0.h5 import urllib PRETRAINED_MODEL = './snapshots/_pretrained_model.h5' URL_MODEL = 'https://github.com/fizyr/keras-retinanet/releases/download/0.5.0/resnet50_coco_best_v2.1.0.h5' url...
_____no_output_____
Apache-2.0
RetinaNet_Video_Object_Detection.ipynb
thingumajig/colab-experiments
inference modules
!pwd #import os, sys #sys.path.insert(0, 'keras-retinanet') # show images inline %matplotlib inline # automatically reload modules when they have changed %load_ext autoreload %autoreload 2 import os #os.environ['CUDA_VISIBLE_DEVICES'] = '0' # import keras import keras from keras_retinanet import models from ker...
/content/keras-retinanet
Apache-2.0
RetinaNet_Video_Object_Detection.ipynb
thingumajig/colab-experiments
load model
# %cd keras-retinanet/ model_path = os.path.join('snapshots', sorted(os.listdir('snapshots'), reverse=True)[0]) print(model_path) print(os.path.isfile(model_path)) # load retinanet model model = models.load_model(model_path, backbone_name='resnet50') # model = models.convert_model(model) # load label to names mappin...
snapshots/_pretrained_model.h5 True WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically b...
Apache-2.0
RetinaNet_Video_Object_Detection.ipynb
thingumajig/colab-experiments
detect objects
def img_inference(img_path, threshold_score = 0.8): image = read_image_bgr(img_path) # copy to draw on draw = image.copy() draw = cv2.cvtColor(draw, cv2.COLOR_BGR2RGB) # preprocess image for network image = preprocess_image(image) image, scale = resize_image(image) # process image start = time.ti...
_____no_output_____
Apache-2.0
RetinaNet_Video_Object_Detection.ipynb
thingumajig/colab-experiments
Project: Part of Speech Tagging with Hidden Markov Models --- IntroductionPart of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Ta...
# Jupyter "magic methods" -- only need to be run once per kernel restart %load_ext autoreload %aimport helpers, tests %autoreload 1 # import python modules -- this cell needs to be run again if you make changes to any of the files import matplotlib.pyplot as plt import numpy as np from IPython.core.display import HTML...
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Step 1: Read and preprocess the dataset---We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processe...
data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8) print("There are {} sentences in the corpus.".format(len(data))) print("There are {} sentences in the training set.".format(len(data.training_set))) print("There are {} sentences in the testing set.".format(len(data.testing_set))) asser...
There are 57340 sentences in the corpus. There are 45872 sentences in the training set. There are 11468 sentences in the testing set.
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
The Dataset InterfaceWe can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, to make sure you und...
key = 'b100-38532' print("Sentence: {}".format(key)) print("words:\n\t{!s}".format(data.sentences[key].words)) print("tags:\n\t{!s}".format(data.sentences[key].tags))
Sentence: b100-38532 words: ('Perhaps', 'it', 'was', 'right', ';', ';') tags: ('ADV', 'PRON', 'VERB', 'ADJ', '.', '.')
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data. Counting Unique ElementsW...
print("There are a total of {} samples of {} unique words in the corpus." .format(data.N, len(data.vocab))) print("There are {} samples of {} unique words in the training set." .format(data.training_set.N, len(data.training_set.vocab))) print("There are {} samples of {} unique words in the testing set." ...
There are a total of 1161192 samples of 56057 unique words in the corpus. There are 928458 samples of 50536 unique words in the training set. There are 232734 samples of 25112 unique words in the testing set. There are 5521 words in the test set that are missing in the training set.
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Accessing word and tag SequencesThe `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset.
# accessing words with Dataset.X and tags with Dataset.Y for i in range(2): print("Sentence {}:".format(i + 1), data.X[i]) print() print("Labels {}:".format(i + 1), data.Y[i]) print()
Sentence 1: ('Mr.', 'Podger', 'had', 'thanked', 'him', 'gravely', ',', 'and', 'now', 'he', 'made', 'use', 'of', 'the', 'advice', '.') Labels 1: ('NOUN', 'NOUN', 'VERB', 'VERB', 'PRON', 'ADV', '.', 'CONJ', 'ADV', 'PRON', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', '.') Sentence 2: ('But', 'there', 'seemed', 'to', 'be', 'som...
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Accessing (word, tag) SamplesThe `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus.
# use Dataset.stream() (word, tag) samples for the entire corpus print("\nStream (word, tag) pairs:\n") for i, pair in enumerate(data.stream()): print("\t", pair) if i > 5: break
Stream (word, tag) pairs: ('Mr.', 'NOUN') ('Podger', 'NOUN') ('had', 'VERB') ('thanked', 'VERB') ('him', 'PRON') ('gravely', 'ADV') (',', '.')
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the frequency counts of observations in the training corpus. The next several cells will complete functions to compute several sets of counts. Step 2: Build a Most Frequent Class tagger---P...
def pair_counts(sequences_A, sequences_B): """Return a dictionary keyed to each unique value in the first sequence list that counts the number of occurrences of the corresponding value from the second sequences list. For example, if sequences_A is tags and sequences_B is the corresponding words...
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
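As a rough illustration (not the project's required solution), a `pair_counts`-style helper can be sketched with nested `defaultdict`s; the toy tag/word sequences below are invented for the example:

```python
from collections import defaultdict

def pair_counts(sequences_A, sequences_B):
    """Count co-occurrences: counts[a][b] is how often value b in
    sequences_B appears aligned with value a in sequences_A."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq_a, seq_b in zip(sequences_A, sequences_B):
        for a, b in zip(seq_a, seq_b):
            counts[a][b] += 1
    return counts

# invented mini-corpus: two aligned (tag, word) "sentences"
tags = [('NOUN', 'VERB'), ('NOUN',)]
words = [('time', 'flies'), ('time',)]
counts = pair_counts(tags, words)  # counts['NOUN']['time'] -> 2
```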
IMPLEMENTATION: Most Frequent Class TaggerUse the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string. The `MFCTagger` class is ...
# Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word from collections import namedtuple FakeState = namedtuple("FakeState", "name") class MFCTagger: # NOTE: You should not need to modify this class or any of its methods missing = FakeState(name="...
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
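One way to populate the `mfc_table` is to count (word, tag) pairs with words as the outer key and then take the arg-max tag per word. This is only an illustrative sketch with a made-up mini-corpus, not the notebook's solution:

```python
from collections import defaultdict

def pair_counts(sequences_A, sequences_B):
    """counts[a][b] = co-occurrence count of aligned values."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq_a, seq_b in zip(sequences_A, sequences_B):
        for a, b in zip(seq_a, seq_b):
            counts[a][b] += 1
    return counts

# invented mini-corpus, words first so the table keys are words
words = [('the', 'flies', 'fly'), ('the', 'fly')]
tags  = [('DET', 'VERB', 'NOUN'), ('DET', 'NOUN')]
word_tag_counts = pair_counts(words, tags)

# most frequently assigned tag for each word
mfc_table = {word: max(tag_counts, key=tag_counts.get)
             for word, tag_counts in word_tag_counts.items()}
```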
Making Predictions with a ModelThe helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these function...
def replace_unknown(sequence): """Return a copy of the input sequence where each unknown word is replaced by the literal string value 'nan'. Pomegranate will ignore these values during computation. """ return [w if w in data.training_set.vocab else 'nan' for w in sequence] def simplify_decoding(X, ...
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Example Decoding Sequences with MFC Tagger
for key in data.testing_set.keys[:3]: print("Sentence Key: {}\n".format(key)) print("Predicted labels:\n-----------------") print(simplify_decoding(data.sentences[key].words, mfc_model)) print() print("Actual labels:\n--------------") print(data.sentences[key].tags) print("\n")
Sentence Key: b100-28144 Predicted labels: ----------------- ['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.'] Actual labels: -------------- ('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN...
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Evaluating Model AccuracyThe function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus.
def accuracy(X, Y, model): """Calculate the prediction accuracy by using the model to decode each sequence in the input X and comparing the prediction with the true labels in Y. The X should be an array whose first dimension is the number of sentences to test, and each element of the array should b...
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
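The accuracy computation described above amounts to a tags-correct over tags-total ratio. A minimal self-contained sketch, using a stand-in `predict` callable instead of a Pomegranate model (the notebook's version decodes with `simplify_decoding` instead):

```python
def accuracy(X, Y, predict):
    """Fraction of individual tags predicted correctly over all sentences.
    `predict` is any callable mapping a word sequence to a tag sequence."""
    correct = total = 0
    for observations, actual_tags in zip(X, Y):
        predicted = predict(observations)
        correct += sum(p == a for p, a in zip(predicted, actual_tags))
        total += len(actual_tags)
    return correct / total

# toy check with a fake "model" that tags everything NOUN
X = [('time', 'flies'), ('fly',)]
Y = [('NOUN', 'VERB'), ('NOUN',)]
acc = accuracy(X, Y, lambda obs: ['NOUN'] * len(obs))  # 2 of 3 tags correct
```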
Evaluate the accuracy of the MFC taggerRun the next cell to evaluate the accuracy of the tagger on the training and test corpus.
mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model) print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc)) mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model) print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc)) assert ...
training accuracy mfc_model: 95.72% testing accuracy mfc_model: 93.00%
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Step 3: Build an HMM tagger---The HMM tagger has one hidden state for each possible tag, and is parameterized by two distributions: the emission probabilities giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities giving the conditional probability of movi...
def unigram_counts(sequences): """Return a dictionary keyed to each unique value in the input sequence list that counts the number of occurrences of the value in the sequences list. The sequences collection should be a 2-dimensional array. For example, if the tag NOUN appears 275558 times over all ...
{'ADV': 44877, 'NOUN': 220632, '.': 117757, 'VERB': 146161, 'ADP': 115808, 'ADJ': 66754, 'CONJ': 30537, 'DET': 109671, 'PRT': 23906, 'NUM': 11878, 'PRON': 39383, 'X': 1094}
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
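The unigram counting step reduces to a flat `Counter` over every tag in every sequence; a small sketch on invented data:

```python
from collections import Counter
from itertools import chain

def unigram_counts(sequences):
    """Count how often each tag appears across a 2-D list of tag sequences."""
    return Counter(chain.from_iterable(sequences))

tag_seqs = [('NOUN', 'VERB', 'NOUN'), ('DET', 'NOUN')]
counts = unigram_counts(tag_seqs)  # counts['NOUN'] -> 3
```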
IMPLEMENTATION: Bigram CountsComplete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(t...
def bigram_counts(sequences): """Return a dictionary keyed to each unique PAIR of values in the input sequences list that counts the number of occurrences of pair in the sequences list. The input should be a 2-dimensional array. For example, if the pair of tags (NOUN, VERB) appear 61582 times, then...
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
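A compact sketch of `bigram_counts`, together with the conditional-probability arithmetic from the formula above, on toy data (illustrative only):

```python
from collections import Counter

def bigram_counts(sequences):
    """Count adjacent (tag_i, tag_{i+1}) pairs across all sequences."""
    return Counter(pair for seq in sequences
                   for pair in zip(seq, seq[1:]))

tag_seqs = [('DET', 'NOUN', 'VERB'), ('DET', 'NOUN')]
bigrams = bigram_counts(tag_seqs)
unigrams = Counter(t for seq in tag_seqs for t in seq)

# P(tag2 | tag1) = C(tag1, tag2) / C(tag1)
p_noun_given_det = bigrams[('DET', 'NOUN')] / unigrams['DET']  # 2 / 2
```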
IMPLEMENTATION: Sequence Starting CountsComplete the code below to estimate the bigram probabilities of a sequence starting with each tag.
def starting_counts(sequences): """Return a dictionary keyed to each unique value in the input sequences list that counts the number of occurrences where that value is at the beginning of a sequence. For example, if 8093 sequences start with NOUN, then you should return a dictionary such that y...
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
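`starting_counts` reduces to counting the first element of each sequence; one possible sketch:

```python
from collections import Counter

def starting_counts(sequences):
    """Count how often each tag appears at the start of a sequence."""
    return Counter(seq[0] for seq in sequences if seq)

tag_seqs = [('NOUN', 'VERB'), ('DET', 'NOUN'), ('NOUN',)]
starts = starting_counts(tag_seqs)  # starts['NOUN'] -> 2
```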
IMPLEMENTATION: Sequence Ending CountsComplete the function below to estimate the bigram probabilities of a sequence ending with each tag.
def ending_counts(sequences): """Return a dictionary keyed to each unique value in the input sequences list that counts the number of occurrences where that value is at the end of a sequence. For example, if 18 sequences end with DET, then you should return a dictionary such that your_ending_...
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
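Symmetrically, `ending_counts` counts the last element of each sequence; a minimal sketch on the same invented data:

```python
from collections import Counter

def ending_counts(sequences):
    """Count how often each tag appears at the end of a sequence."""
    return Counter(seq[-1] for seq in sequences if seq)

tag_seqs = [('NOUN', 'VERB'), ('DET', 'NOUN'), ('NOUN',)]
ends = ending_counts(tag_seqs)  # ends['NOUN'] -> 2
```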
IMPLEMENTATION: Basic HMM TaggerUse the tag unigrams and bigrams calculated above to construct a hidden Markov tagger.- Add one state per tag - The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$- Add an edge from the starting state `basic_model.start` to ea...
basic_model = HiddenMarkovModel(name="base-hmm-tagger") states = {} for tag in emission_counts: tag_count = tag_unigrams[tag] prob_distribution = {word : word_count/tag_count for word, word_count in emission_counts[tag].items() } state = State(DiscreteDistribution(prob_distribution), name=tag) state...
training accuracy basic hmm model: 97.54% testing accuracy basic hmm model: 96.18%
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
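The cell above is cut off before the transition edges are added. The probability arithmetic those edges need can be sketched in plain Python; the toy count dictionaries below are invented, and real code would pass these values to Pomegranate's `add_transition` rather than keep them in dicts:

```python
# toy counts standing in for tag_unigrams / tag_bigrams / tag_starts
tag_unigrams = {'DET': 2, 'NOUN': 3, 'VERB': 1}
tag_bigrams = {('DET', 'NOUN'): 2, ('NOUN', 'VERB'): 1}
tag_starts = {'DET': 2, 'NOUN': 1}
n_sentences = 3

# edge from the start state to each tag state: P(tag | <start>)
start_probs = {tag: tag_starts.get(tag, 0) / n_sentences
               for tag in tag_unigrams}

# edge between tag states: P(tag2 | tag1) = C(tag1, tag2) / C(tag1)
transition_probs = {(t1, t2): c / tag_unigrams[t1]
                    for (t1, t2), c in tag_bigrams.items()}
```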
Example Decoding Sequences with the HMM Tagger
for key in data.testing_set.keys[:3]: print("Sentence Key: {}\n".format(key)) print("Predicted labels:\n-----------------") print(simplify_decoding(data.sentences[key].words, basic_model)) print() print("Actual labels:\n--------------") print(data.sentences[key].tags) print("\n")
Sentence Key: b100-28144 Predicted labels: ----------------- ['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.'] Actual labels: -------------- ('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN...
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Step 4: [Optional] Improving model performance---There are additional enhancements that can be incorporated into your tagger that improve performance on larger tagsets where the data sparsity problem is more significant. The data sparsity problem arises because the same amount of data split over more tags means there ...
import nltk from nltk import pos_tag, word_tokenize from nltk.corpus import brown nltk.download('brown') training_corpus = nltk.corpus.brown training_corpus.tagged_sents()[0]
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
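One common remedy for the data sparsity problem mentioned above is add-k (Laplace) smoothing of the emission counts, so unseen words keep a small non-zero probability. A hedged sketch (the function name, `k`, and the toy counts are illustrative, not from the notebook):

```python
def laplace_emission(count_tw, count_t, vocab_size, k=1):
    """P(word | tag) with add-k smoothing: an unseen word gets a small,
    non-zero probability instead of zero."""
    return (count_tw + k) / (count_t + k * vocab_size)

# an unseen word (count 0) still receives probability mass
p_seen = laplace_emission(count_tw=10, count_t=100, vocab_size=50)    # 11/150
p_unseen = laplace_emission(count_tw=0, count_t=100, vocab_size=50)   # 1/150
```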
[`for` loops](https://docs.python.org/3/tutorial/controlflow.html#for-statements) Iterating over lists
mi_lista = [1, 2, 3, 4, 'Python', 'es', 'piola'] for item in mi_lista: print(item)
_____no_output_____
MIT
notebooks/beginner/notebooks/for_loops.ipynb
mateodif/learn-python3
`break` Stop the execution of the loop.
for item in mi_lista: if item == 'Python': break print(item)
_____no_output_____
MIT
notebooks/beginner/notebooks/for_loops.ipynb
mateodif/learn-python3
`continue` Continue to the next item without executing the lines after `continue` inside the loop.
for item in mi_lista: if item == 1: continue print(item)
_____no_output_____
MIT
notebooks/beginner/notebooks/for_loops.ipynb
mateodif/learn-python3