| markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
Implementation: Data ExplorationA cursory investigation of the dataset will determine how many individuals fit into either group, and will tell us about the percentage of these individuals making more than \$50,000. In the code cell below, you will need to compute the following:- The total number of records, `'n_recor... | # Total number of records
n_records = data.shape[0]
# Number of records where individual's income is more than $50,000
n_greater_50k = data[data['income'] == '>50K'].shape[0]
# Number of records where individual's income is at most $50,000
n_at_most_50k = data[data['income'] == '<=50K'].shape[0]
# Percentage of indi... | Total number of records: 45222
Individuals making more than $50,000: 11208
Individuals making at most $50,000: 34014
Percentage of individuals making more than $50,000: 24.78%
| Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
---- Preparing the DataBefore data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured — this is typically known as **preprocessing**. Fortunately, for this dataset, there are no invalid or missing entries we must deal with; however, there are some qualities about... | # Split the data into features and target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# Visualize skewed continuous features of original data
vs.distribution(data) | _____no_output_____ | Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
For highly-skewed feature distributions such as `'capital-gain'` and `'capital-loss'`, it is common practice to apply a logarithmic transformation on the data so that the very large and very small values do not negatively affect the performance of a learning algorithm. Using a logarithmic transformation significantly r... | # Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log distributions
vs.distribution(features_raw, transformed = True) | _____no_output_____ | Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
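The `x + 1` shift in the cell above guards against `log(0)` for the many zero-valued capital entries. NumPy ships the same transform as `log1p`, which is also more accurate for small inputs; a one-line equivalent (a sketch, reusing `skewed`, `data`, and `features_raw` from that cell):

```python
# Equivalent, numerically safer spelling of np.log(x + 1)
features_raw[skewed] = data[skewed].apply(np.log1p)
```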
Normalizing Numerical FeaturesIn addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as `'capital-gain'` or `'capital-los... | # Import sklearn.preprocessing.MinMaxScaler
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(data[numerical]... | _____no_output_____ | Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
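A quick sanity check of the scaling: after `MinMaxScaler`, every column listed in `numerical` should span exactly [0, 1], while the shape of each distribution is unchanged. A sketch, reusing the names from the cell above:

```python
# Each scaled column should now have min 0.0 and max 1.0
print(features_raw[numerical].min())
print(features_raw[numerical].max())
```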
Implementation: Data PreprocessingFrom the table in **Exploring the Data** above, we can see there are several features for each record that are non-numeric. Typically, learning algorithms expect input to be numeric, which requires that non-numeric features (called *categorical variables*) be converted. One popular wa... | # One-hot encode the 'features_raw' data using pandas.get_dummies()
features = pd.get_dummies(features_raw)
# Encode the 'income_raw' data to numerical values
income = income_raw.apply(lambda x: 1 if x == '>50K' else 0)
# Print the number of features after one-hot encoding
encoded = list(features.columns)
print "{} t... | 103 total features after one-hot encoding.
['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week', 'workclass_ Federal-gov', 'workclass_ Local-gov', 'workclass_ Private', 'workclass_ Self-emp-inc', 'workclass_ Self-emp-not-inc', 'workclass_ State-gov', 'workclass_ Without-pay', 'education_level_ 10th... | Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
Shuffle and Split DataNow all _categorical variables_ have been converted into numerical features, and all numerical features have been normalized. As always, we will now split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing.Run the c... | # Import train_test_split
from sklearn.cross_validation import train_test_split
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0)
# Show the results of the split
print "Training set has {} sa... | Training set has 36177 samples.
Testing set has 9045 samples.
| Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
---- Evaluating Model PerformanceIn this section, we will investigate four different algorithms, and determine which is best at modeling the data. Three of these algorithms will be supervised learners of your choice, and the fourth algorithm is known as a *naive predictor*. Metrics and the Naive Predictor*CharityML*, ... | # Calculate accuracy
accuracy = 1.0 * n_greater_50k / n_records
# Calculate F-score using the formula above for beta = 0.5
recall = 1.0
fscore = (
(1 + 0.5**2) * accuracy * recall
) / (
0.5**2 * accuracy + recall
)
# Print the results
print "Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(... | Naive Predictor: [Accuracy score: 0.2478, F-score: 0.2917]
| Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
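Because the naive predictor labels everyone as earning more than \$50,000, its precision equals the accuracy computed above and its recall is 1, which is why those quantities stand in for precision and recall in the F-score formula. The same numbers can be cross-checked against scikit-learn's own metric (a sketch, assuming the encoded `income` series from the one-hot encoding step):

```python
# Cross-check the hand-computed naive scores with sklearn's fbeta_score
from sklearn.metrics import accuracy_score, fbeta_score

naive_pred = [1] * len(income)                      # predict '>50K' for everyone
print(accuracy_score(income, naive_pred))           # ~0.2478
print(fbeta_score(income, naive_pred, beta=0.5))    # ~0.2917
```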
Supervised Learning Models**The following supervised learning models are currently available in** [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html) **that you may choose from:**- Gaussian Naive Bayes (GaussianNB)- Decision Trees- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boo... | # Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the si... | _____no_output_____ | Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
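The docstring above is cut off before the body. A typical body for such a helper times the fit on the first `sample_size` rows and scores accuracy and F0.5 on the test set and on a small training subsample. The sketch below is one plausible shape, not the notebook's exact code; the 300-row subsample is an assumption borrowed from the project template, and `accuracy_score`/`fbeta_score` come from the import in the cell above:

```python
from time import time

def train_predict_sketch(learner, sample_size, X_train, y_train, X_test, y_test):
    results = {}

    # Fit on the first `sample_size` rows and record the training time
    start = time()
    learner = learner.fit(X_train[:sample_size], y_train[:sample_size])
    results['train_time'] = time() - start

    # Score on the full test set and on a small training subsample
    predictions_test = learner.predict(X_test)
    predictions_train = learner.predict(X_train[:300])
    results['acc_train'] = accuracy_score(y_train[:300], predictions_train)
    results['acc_test'] = accuracy_score(y_test, predictions_test)
    results['f_test'] = fbeta_score(y_test, predictions_test, beta=0.5)
    return results
```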
Implementation: Initial Model EvaluationIn the code cell, you will need to implement the following:
- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in `'clf_A'`, `'clf_B'`, and `'clf_C'`.
- Use a `'random_state'` for each model you use... | # Import the three supervised learning models from sklearn
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
# Initialize the three models
clf_A = LinearSVC(random_state=42)
clf_B = LogisticRegression(random_state=42)
clf_C = KNeigh... | LinearSVC trained on 362 samples.
LinearSVC trained on 3618 samples.
LinearSVC trained on 36177 samples.
LogisticRegression trained on 362 samples.
LogisticRegression trained on 3618 samples.
LogisticRegression trained on 36177 samples.
KNeighborsClassifier trained on 362 samples.
KNeighborsClassifier trained on 3618 s... | Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
---- Improving ResultsIn this final section, you will choose from the three supervised learning models the *best* model to use on the data. You will then perform a grid search optimization for the model over the entire training set (`X_train` and `y_train`) by tuning at least one parameter to improve upon the u... | # Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer
# Initialize the classifier
clf = LinearSVC(random_state=42)
# Create the parameters list you wish to tune
parameters = {
'C': [.1, .5, 1.0, 5.0, 10.0],
... | Optimized params for Linear SVM: {'loss': 'squared_hinge', 'C': 10.0, 'random_state': 0, 'tol': 0.001}
| Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
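The cell is truncated before the search itself runs. With the pieces already imported above, the usual remaining steps are roughly as follows (a sketch; the F0.5 scorer mirrors the project's metric, and `parameters` is the dict begun above):

```python
# Wrap the project's F0.5 metric, run the grid search, keep the best model
scorer = make_scorer(fbeta_score, beta=0.5)
grid_obj = GridSearchCV(clf, parameters, scoring=scorer)
grid_fit = grid_obj.fit(X_train, y_train)
best_clf = grid_fit.best_estimator_
print(grid_fit.best_params_)
```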
Question 5 - Final Model Evaluation_What is your optimized model's accuracy and F-score on the testing data? Are these scores better or worse than the unoptimized model? How do the results from your optimized model compare to the naive predictor benchmarks you found earlier in **Question 1**?_ **Note:** Fill in the t... | # Import a supervised learning model that has 'feature_importances_'
from sklearn.ensemble import AdaBoostClassifier
# Train the supervised model on the training set
model = AdaBoostClassifier(random_state=42).fit(X_train, y_train)
# Extract the feature importances
importances = model.feature_importances_
# Plot
vs... | _____no_output_____ | Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
Question 7 - Extracting Feature ImportanceObserve the visualization created above which displays the five most relevant features for predicting if an individual makes at most or above \$50,000. _How do these five features compare to the five features you discussed in **Question 6**? If you were close to the same answ... | # print top 10 features importances
def rank_features(features, scores, descending=True, n=10):
"""
sorts and cuts features by scores.
:return: array of [feature name, score] tuples
"""
return sorted(
[[f, s] for f, s in zip(features, scores) if s],
key=lambda x: x[1],
revers... | _____no_output_____ | Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
**Answer:**From the top 5 features selected by *AdaBoostClassifier* we got 4 hits (*age*, *capital-gain*, *hours-per-week* and *education-level*). That *capital-loss* has such a big influence is really surprising, and by looking at the cell above, *income* and *capital-loss* are even positively correlated. Our top one g... | # Import functionality for cloning a model
from sklearn.base import clone
# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]
# Train on the "best" model found from grid se... | Relative Diff. of training times: 94.68%
| Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
Question 8 - Effects of Feature Selection*How does the final model's F-score and accuracy score on the reduced data using only five features compare to those same scores when all features are used?* *If training time was a factor, would you consider using the reduced data as your training set?* **Answer:** Both the a... | import IPython
print IPython.sys_info()
!pip freeze | alabaster==0.7.9
anaconda-client==1.6.0
anaconda-navigator==1.4.3
argcomplete==1.0.0
astroid==1.4.9
astropy==1.3
Babel==2.3.4
backports-abc==0.5
backports.shutil-get-terminal-size==1.0.0
backports.ssl-match-hostname==3.4.0.2
beautifulsoup4==4.5.3
bitarray==0.8.1
blaze==0.10.1
bokeh==0.12.4
boto==2.45.0
Bottleneck==1.2.... | Apache-2.0 | p2_sl_finding_donors/p2_sl_finding_donors.ipynb | superkley/udacity-mlnd |
Describing continuous variables using Probability Density Functions | import numpy as np
import matplotlib.pyplot as plt
data = np.random.normal(0.5, 0.1, 1000)
histogram = plt.hist(data, bins=10, range=(0.1, 1.5))
histogram = plt.hist(data, bins=20, range=(0.1, 1.5), density=True)
height = histogram[0][6].round(4)
x1 = histogram[1][6].round(4)
x2 = histogram[1][7].round(4)
3.24 * 0.07 | _____no_output_____ | MIT | module_9_statistics_probability/probability_density_function_test.ipynb | wiplane/foundations-of-datascience-ml |
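The product above is a single bar's area: its height (about 3.24) times the bin width (1.4 / 20 = 0.07), i.e. the approximate probability of a sample landing in that bin. The same idea applied to all bars is a useful check that a density histogram integrates to one (a sketch, reusing the `histogram` object from the cell above):

```python
# With density=True, the bar areas are probabilities and should sum to ~1
heights, bin_edges = histogram[0], histogram[1]
bin_width = bin_edges[1] - bin_edges[0]   # 0.07 here
print(np.sum(heights * bin_width))        # ~1.0
```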
__General basic approach for applying Data Science.__
- Collect Data.
- Extract features.
- Extract the target (label).
- Select the Estimator for learning.
- Tune the parameters.
- Fit the train data set.
- Test against testing_data_set.
- Check accuracy.
- Deploy to production.
- Write unit test cases for model.
- w... | #Import Seaborn
import seaborn as sns | _____no_output_____ | Apache-2.0 | Iris/Iris.ipynb | sachin032/Supervised-Machin-Learning |
__Seaborn comes with the iris data set, all we need is to load it. After loading, we can do some spy things over the data__ | #Load iris data set from Seaborn
iris = sns.load_dataset("iris")
iris.head(4)
%matplotlib inline
import seaborn as sns;
sns.set()
sns.pairplot(iris, hue='species', size=3.5); | _____no_output_____ | Apache-2.0 | Iris/Iris.ipynb | sachin032/Supervised-Machin-Learning |
__Drop 'Species' feature from feature matrix, and look at the shape.__ | iris.shape
#Perform basic EDA
iris.describe()
#Spy over how many outcomes are present in the Dataset
iris.species.unique() | _____no_output_____ | Apache-2.0 | Iris/Iris.ipynb | sachin032/Supervised-Machin-Learning |
__Time to split the iris dataset into Training:Testing datasets. Remember there is no standard approach for this division; even though we divide, based on suggestions from ML/Data science leaders the 70:30 approach is good.__ | #Import train_test_split
from sklearn.model_selection import train_test_split
#Split iris dataset into training and testing datset
trainIris , testIris = train_test_split(iris,test_size = 0.3)
#Look over training set
trainIris.head()
#Look over testing set
testIris.head() | _____no_output_____ | Apache-2.0 | Iris/Iris.ipynb | sachin032/Supervised-Machin-Learning |
__Testing dataset must not hold the target variable/outcomes, so that we can predict the outcome using our trained regression model from the training dataset__ | #Drop Species from testing dataset
testIris = testIris.drop(['species'],axis=1)
#Test set after dropping target/outcome column
testIris.head() | _____no_output_____ | Apache-2.0 | Iris/Iris.ipynb | sachin032/Supervised-Machin-Learning |
Structural Transformation NotesBelow are some brief notes on general equilibrium modeling of structural transformation. Some of the presentation illustrates and expands upon this short useful survey:> Matsuyama, K., 2008. Structural change. In Durlauf and Blume, eds. *The New Palgrave Dictionary of Economics* 2, pp.The no... | import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import fsolve
def F(n, a):
return n ** a
def Fprime(n, a):
return a* n ** (a-1)
def PPF(A1=1, A2=1, a1=0.5, a2=0.5, ax=None):
if ax is None:
ax = plt.gca()
n = np.linspace(0,1,50)
plt.plot( A1*F(n, a1), A2*F(1-n, a... | _____no_output_____ | MIT | notebooks/StructuralT1.ipynb | jhconning/DevII |
Push and/or PullThe large literature on structural transformation often distinguishes between forces that 'Push' or 'Pull' labor out of agriculture. 'Pull' could come about, for example, via an increase over time of the relative price of manufactures $p$, or an increase in relative TFP $A_2/A_1$. These have the effect... | def weq(A1=1, A2=1, a1=0.5, a2=0.5, p=1):
def foc(n):
return p * A2 * Fprime(1-n, a2) - A1 * Fprime(n, a1)
n = 0.75 # guess
ne = fsolve(foc, n)[0]
we = A1 * Fprime(ne, a1)
return ne, we
def sfm(A1=1, A2=1, a1=0.5, a2=0.5, p=1, ax=None):
if ax is None:
ax = plt.gca()... | _____no_output_____ | MIT | notebooks/StructuralT1.ipynb | jhconning/DevII |
**Pull: Impact of increase in relative price of manufactures in open Economy**Exactly like a specific factors model diagram. A very similar diagram would depict effect of increase in sector 2 relative TFP $A_2/A_1$ | sfm(p=1)
sfm(p=1.5) | _____no_output_____ | MIT | notebooks/StructuralT1.ipynb | jhconning/DevII |
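To quantify the pull effect rather than only plot it, the interior equilibria can be compared directly. A small usage sketch of the `weq` solver defined above (default technology parameters assumed):

```python
# Labor in sector 1 and the wage, before and after the price increase
n0, w0 = weq(p=1.0)
n1, w1 = weq(p=1.5)
print(f"p=1.0: n = {n0:.3f}, w = {w0:.3f}")
print(f"p=1.5: n = {n1:.3f}, w = {w1:.3f}")
# A higher relative price of manufactures pulls labor out of sector 1, so n falls
```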
Exogenously driven increases in the relative productivity of manufactures drive this 'pull' effect. As Matsuyama explains, this is the sort of mechanism envisioned by Lewis (1954), although the Lewis model also has a form of dualism not captured here. In particular, we can see (from the diagram above) that in these mo... | c1 = np.linspace(0,4,100)
def c2(c1, gam, beta, p):
return (c1 - gam)/(beta * p)
plt.plot(c1, c2(c1, 0, 0.5, 1))
plt.plot(c1, c2(c1, 1, 0.5, 1))
plt.ylim(0, 4), plt.xlim(0, 4)
plt.xlabel(r'$C_1$'), plt.ylabel(r'$C_2$')
plt.grid()
plt.gca().set_aspect('equal') | _____no_output_____ | MIT | notebooks/StructuralT1.ipynb | jhconning/DevII |
Closed Economy We're looking for a tangency between the PPF and the representative agent's indifference curve, equal to the common price ratio. This $MRS = p= MPT$ condition can be written:$$\frac{1}{\beta} \frac{C_1 - \gamma}{C_2} = p = \frac{A_1 F_1^\prime (n)}{A_2 F_2^\prime (1-n)} $$Using the fact that a closed e... | def lhs(n, a1, a2, beta):
F1 = F(n, a1)
dF1 = Fprime(n, a1)
F2 = F(1-n, a2)
dF2 = Fprime(1-n, a2)
return F1 - (beta*F2*dF1)/dF2
n = np.linspace(0.2,0.7,50)
plt.plot(n, lhs(n, 0.5, 0.5, 0.5), color='r')
plt.axhline(0);
plt.axhline(0.5, linestyle='--')
plt.xlabel(r'$n$'); | _____no_output_____ | MIT | notebooks/StructuralT1.ipynb | jhconning/DevII |
We can solve for the closed economy equilibrium and plot things on a PPF diagram. | def neq(A1=1, a1=0.5, a2=0.5, beta=0.5, gamma= 0.5):
'''Closed economy eqn from MRS=MPT'''
def foc(n):
return lhs(n, a1, a2, beta) - gamma/A1
n = 0.7 # guess
ne = fsolve(foc, n)[0]
return ne
def plot_opt(A1, A2, a1, a2, beta, gamma):
ne = neq(A1, a1, a2, beta, gamma)
Y1 = A1 * ... | _____no_output_____ | MIT | notebooks/StructuralT1.ipynb | jhconning/DevII |
Here we see structural transformation and a rise in the relative price of manufactures as TFP in agriculture increases: | plot_opt(1, 1, 0.5, 0.75, 1, 0.4)
plot_opt(2, 1, 0.5, 0.75, 1, 0.4)
plt.xlim(left=0)
plt.ylim(bottom=0); | A1=1, n=0.58, p=0.70
A1=2, n=0.48, p=1.63
| MIT | notebooks/StructuralT1.ipynb | jhconning/DevII |
Feature ExtractionIn machine learning, feature extraction aims to compute values (features) from images, intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps, and in some cases leading to better human interpretations. These features may be handcrafted (manually co... | ## data: vendor; magnetic field; age; gender; feats (65300)
# vendor: ge -> 10; philips -> 11; siemens -> 12
# gender: female -> 10; male -> 11
# feats: fs1 - histogram (8); fs2 - gradient (10); fs3 - lbp (10); fs4 - haar (8); fs5 - convolutional (75264)
import numpy as np
data = np.load('../Data/feats_cc... | #samples, #info: (359, 75304)
patients age: [ 55. 56. 63. 67. 62. 63. 62. 60. 69. 69. 49. 43. 66. 62. 44.
55. 50. 41. 57. 65. 48. 43. 43. 65. 51. 65. 41. 63. 51. 42.
65. 44. 67. 43. 49. 49. 41. 41. 41. 55. 61. 67. 58. 36. 49.
42. 54. 53. 43. 45. 44. 51. 39. 46. ... | MIT | JNotebooks/feats-CC-hand-conv.ipynb | rmsouza01/ML101 |
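Since vendor and gender arrive as numeric codes, a small decoding step makes the metadata readable before any analysis. A sketch under the encodings listed in the cell's comments (the column order vendor, field, age, gender is an assumption based on that comment):

```python
# Hypothetical decoding of the metadata columns described above
vendor_map = {10: "ge", 11: "philips", 12: "siemens"}
gender_map = {10: "female", 11: "male"}

vendors = [vendor_map[int(v)] for v in data[:, 0]]
genders = [gender_map[int(g)] for g in data[:, 3]]
print(vendors[:5], genders[:5])
```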
Imports | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import ensemble
from sklearn import metrics
from io import StringIO
from csv import writer | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
Read in csv files | matches = pd.read_csv('../csv/matches.csv')
players = pd.read_csv('../csv/players.csv')
hero_names = pd.read_json('../json/heroes.json')
cluster_regions = pd.read_csv('./Data/cluster_regions.csv')
matches
players.head()
hero_names.head() | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
Data info Hero InfoMost and least popular heroes | num_heroes = len(hero_names)
plt.hist(players['hero_id'], num_heroes)
plt.show()
hero_counts = players['hero_id'].value_counts().rename_axis('hero_id').reset_index(name='num_matches')
pd.merge(hero_counts, hero_names, left_on='hero_id', right_on='id') | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
Server InfoWhere the most and least games are played | plt.hist(matches['cluster'], bins=np.arange(matches['cluster'].min(), matches['cluster'].max()+1))
plt.show()
cluster_counts = matches['cluster'].value_counts().rename_axis('cluster').reset_index(name='num_matches')
pd.merge(cluster_counts, cluster_regions, on='cluster')
short_players = players.iloc[:, :11]
short_playe... | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
Data cleaningWe start with an empty list of DataFrames and add to it as we create DataFrames of bad match ids. In the end we combine all the DataFrames and remove their match ids from the Matches DataFrame. | dfs_bad_matches = [] | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
Abandonsremove games were a player has abandoned the match | abandoned_matches = players[players.leaver_status > 1][['match_id']]
abandoned_matches = abandoned_matches.drop_duplicates().reset_index(drop=True)
dfs_bad_matches.append(abandoned_matches)
abandoned_matches | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
Missing Hero idremove games where a player is not assigned a hero id, but didn't get flagged for an abandon | player_no_hero = players[players.hero_id == 0][['match_id']].reset_index(drop=True)
dfs_bad_matches.append(player_no_hero)
player_no_hero | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
Wrong Game Moderemove games not played in "Ranked All Pick" (22) | wrong_mode = matches[matches.game_mode != 22].reset_index()[['match_id']]
dfs_bad_matches.append(wrong_mode)
wrong_mode | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
Game length (short)remove games we deem too short (< 15 min) | short_length = 15 * 60
short_matches = matches[matches.duration < short_length].reset_index()[['match_id']]
dfs_bad_matches.append(short_matches)
short_matches | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
Game length (long)Next we want to get matches with a too long duration (>90 min) | long_length = 90 * 60
long_matches = matches[matches.duration > long_length].reset_index()[['match_id']]
dfs_bad_matches.append(long_matches)
long_matches | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
Combine all our lists of bad matchescombine matches and create a filtered match dataframe with only good matches | bad_match_ids = pd.concat(dfs_bad_matches, ignore_index=True).drop_duplicates()
bad_match_ids | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
Remove bad matches | filtered_matches = matches[~matches['match_id'].isin(bad_match_ids['match_id'])]
filtered_matches.info() | <class 'pandas.core.frame.DataFrame'>
Int64Index: 115823 entries, 0 to 145324
Data columns (total 22 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 radiant_win 115823 non-null bool
1 duration 115823 non-null int... | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
Remove duplicate matches | filtered_matches = filtered_matches.drop_duplicates(subset=['match_id'])
filtered_matches.info()
filtered_players = players[~players['match_id'].isin(bad_match_ids['match_id'])]
filtered_players.info()
filtered_players = filtered_players.drop_duplicates(subset=['match_id', 'player_slot'])
filtered_players.info()
filte... | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
Convert our match listConvert our match list to the form: r_1, r_2, r_3, r_4, r_5, d_1, d_2, d_3, d_4, d_5, r_win | r_names = []
d_names = []
for slot in range(1, 6):
r_name = 'r_' + str(slot)
d_name = 'd_' + str(slot)
r_names.append(r_name)
d_names.append(d_name)
columns = (r_names + d_names + ['r_win'])
new_row = [-1] * (5 + 5 + 1)
# test_players = players.iloc[:500, :]
# test_matches = matches.iloc[:50, :]
col... | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
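The truncated cell above only sets up the column names; the core of the conversion is a per-match pivot of hero ids by player slot. A minimal sketch of one way to finish it (assuming, as in the usual Dota 2 data layout, radiant players occupy slots 0-4 and dire players slots 128-132; this is not necessarily the notebook's exact code):

```python
# Build one row per match: five radiant hero ids, five dire hero ids, r_win
rows = []
for match_id, group in filtered_players.groupby('match_id'):
    group = group.sort_values('player_slot')
    radiant = group[group.player_slot < 128]['hero_id'].tolist()
    dire = group[group.player_slot >= 128]['hero_id'].tolist()
    r_win = filtered_matches.loc[filtered_matches.match_id == match_id,
                                 'radiant_win'].iloc[0]
    rows.append(radiant + dire + [int(r_win)])

match_rows = pd.DataFrame(rows, columns=columns)
```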
Stats | players
player_stats = players.drop(columns=['account_id', 'match_id', 'leaver_status'])
player_stats_short = player_stats.drop(columns=['item_0','item_1','item_2','item_3','item_4','item_5','backpack_0','backpack_1','backpack_2','item_neutral', 'player_slot']).groupby(['hero_id']).mean()
player_stats_short
player_stat... | _____no_output_____ | MIT | jupyter notebook/Dota2 new data.ipynb | alykkehoy/Dota-2-winning-team-predictor |
K-NN regression algorithm | from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors = 10, weights = "distance")
regressor.fit(X_train.drop(columns='player_name'), y_train)
y_pred = regressor.predict(X_test.drop(columns='player_name'))
y_pred_train = regressor.predict(X_train.drop(columns='player_name'))
res... | _____no_output_____ | MIT | 0.Project/3. Machine Learning Practice/2. Football/2. K-NN Regression parctice.ipynb | jskim0406/Study |
K-NN regression algorithm -> fit on the entire dataset.. | from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors = 10, weights = "distance")
regressor.fit(data.drop(columns=['player_name','value']), data.value)
y_pred = regressor.predict(data.drop(columns=['player_name','value']))
result = []
for i in range(len(y_pred)):
if data.v... | _____no_output_____ | MIT | 0.Project/3. Machine Learning Practice/2. Football/2. K-NN Regression parctice.ipynb | jskim0406/Study |
Lambda School Data Science*Unit 2, Sprint 3, Module 1*--- Define ML problemsYou will use your portfolio project dataset for all assignments this sprint. AssignmentComplete these tasks for your project, and document your decisions.
- [x] Choose your target. Which column in your tabular dataset will you predict?
- [x] Is ... | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
url = 'https://raw.githubusercontent.com/Skantastico/DS-Unit-2-Applied-Modeling/master/data/Anime.csv'
df = pd.read_csv(url) | _____no_output_____ | MIT | LS_DSPT3_231_Updated_assignment_applied_modeling_1.ipynb | Skantastico/DS-Unit-2-Applied-Modeling |
My DatasetAnime Ratings from the 'iMDB" of Anime, called myanimelist.net | df.head(7) | _____no_output_____ | MIT | LS_DSPT3_231_Updated_assignment_applied_modeling_1.ipynb | Skantastico/DS-Unit-2-Applied-Modeling |
Summary of numeric and non-numeric columns at a glance | df.describe().T
df.describe(exclude='number').T
col_list = df.columns.values.tolist()
col_list | _____no_output_____ | MIT | LS_DSPT3_231_Updated_assignment_applied_modeling_1.ipynb | Skantastico/DS-Unit-2-Applied-Modeling |
I was running into trouble during data exploration; there seems to be a space after every column name | ## I found this piece of code on medium that seems like a catch-all for fixing columns
df.columns = df.columns.str.strip().str.lower().str.replace(' ', '_').str.replace('(', '').str.replace(')', '')
df
col_list = df.columns.values.tolist()
col_list
df.columns
df.columns.map(lambda x: x.strip())
df.columns
df.genre.va... | _____no_output_____ | MIT | LS_DSPT3_231_Updated_assignment_applied_modeling_1.ipynb | Skantastico/DS-Unit-2-Applied-Modeling |
Ok, that seems to have fixed it. There seem to be at least around 900 'adult themed' anime, which I will probably remove from the dataset, or at least from any public portions just to be safe. If it affects the model accuracy at all or is relevant, I will include it for calculations and just make a note. Choose Your Targ... | # My target will involve the 'score' column
df.score.value_counts(ascending=False) | _____no_output_____ | MIT | LS_DSPT3_231_Updated_assignment_applied_modeling_1.ipynb | Skantastico/DS-Unit-2-Applied-Modeling |
As I will be using the entire spectrum of score, this will be a regression. How is my target distributed? | # The mean seems to be around 6.3, with only 25% of the dataset above a 7.05
df.score.describe()
df['mean'] = df['score'] >= 6.2845
df['mean'].value_counts(normalize=True) | _____no_output_____ | MIT | LS_DSPT3_231_Updated_assignment_applied_modeling_1.ipynb | Skantastico/DS-Unit-2-Applied-Modeling |
So there are about 51% of anime that are above average (before cleaning). Which observations will I use to train? There are lots of options, but at the very least these look interesting:

Numeric:
* Episodes
* Airing
* Aired
* Duration
* Score
* Popularity
* Rank

Non-numeric:
* Type
* Source
* Producer
* Genre
* Studio
* Rating

On my old dataset,... | _____no_output_____ | MIT | LS_DSPT3_231_Updated_assignment_applied_modeling_1.ipynb | Skantastico/DS-Unit-2-Applied-Modeling |
Day 5: Optimal Mind ControlWelcome to Day 5! Now that we can simulate a model network of conductance-based neurons, we discuss the limitations of our approach and attempts to work around these issues. Memory ManagementUsing Python and TensorFlow allowed us to write code that is readable, parallelizable and scalable acr... | import numpy as np
import tf_integrator as tf_int
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
## OR ##
# import tensorflow.compat.v1 as tf
# tf.disable_v2_behavior() | _____no_output_____ | MIT | Tutorial/Supplementary: Jupyter Notebooks/Day 5: Optimal Mind Control/.ipynb_checkpoints/Day 5-checkpoint.ipynb | matpalm/PSST |
Recall the ModelFor implementing a Batch system, we do not need to change how we construct our model only how we execute it. Step 1: Initialize Parameters and Dynamical Equations; Define Input | n_n = 3 # number of simultaneous neurons to simulate
sim_res = 0.01 # Time Resolution of the Simulation
sim_time = 700 # Length of the Simulation
t = np.arange(0,sim_time,sim_res)
# Acetylcholine
ach_mat = np.zeros((n_n,n_n)) # Ach Synapse Connectivity Matrix
ach_mat[1... | _____no_output_____ | MIT | Tutorial/Supplementary: Jupyter Notebooks/Day 5: Optimal Mind Control/.ipynb_checkpoints/Day 5-checkpoint.ipynb | matpalm/PSST |
Step 2: Define the Initial Condition of the Network and Add some Noise to the initial conditions | # Initializing the State Vector and adding 1% noise
state_vector = [-71]*n_n+[0,0,0]*n_n+[0]*n_ach+[0]*n_gaba+[-9999999]*n_n
state_vector = np.array(state_vector)
state_vector = state_vector + 0.01*state_vector*np.random.normal(size=state_vector.shape) | _____no_output_____ | MIT | Tutorial/Supplementary: Jupyter Notebooks/Day 5: Optimal Mind Control/.ipynb_checkpoints/Day 5-checkpoint.ipynb | matpalm/PSST |
Step 3: Splitting Time Series into independent batches and Run Each Batch SequentiallySince we will be dividing the computation into batches, we have to split the time array such that for each new call, the final state vector of the last batch will be the initial condition for the current batch. The function $np.array... | # Define the Number of Batches
n_batch = 2
# Split t array into batches using numpy
t_batch = np.array_split(t,n_batch)
# Iterate over the batches of time array
for n,i in enumerate(t_batch):
# Inform start of Batch Computation
print("Batch",(n+1),"Running...",end="")
# In np.array_split(), the ... | Batch 1 Running...Finished
Batch 2 Running...Finished
| MIT | Tutorial/Supplementary: Jupyter Notebooks/Day 5: Optimal Mind Control/.ipynb_checkpoints/Day 5-checkpoint.ipynb | matpalm/PSST |
Putting the Output TogetherThe output from our batch implementation is a set of binary files that store parts of our total simulation. To get the overall output we have to stitch them back together. | overall_state = []
# Iterate over the generated output files
for n,i in enumerate(["part_"+str(n+1)+".npy" for n in range(n_batch)]):
# Since the first element in the series was the last output, we remove them
if n>0:
overall_state.append(np.load(i)[1:,:])
else:
overall_state.append(np... | _____no_output_____ | MIT | Tutorial/Supplementary: Jupyter Notebooks/Day 5: Optimal Mind Control/.ipynb_checkpoints/Day 5-checkpoint.ipynb | matpalm/PSST |
Visualizing the Overall DataFinally, we plot the voltage traces of the 3 neurons as a Voltage vs Time heatmap. | plt.figure(figsize=(12,6))
sns.heatmap(overall_state[::100,:3].T,xticklabels=100,yticklabels=5,cmap='RdBu_r')
plt.xlabel("Time (in ms)")
plt.ylabel("Neuron Number")
plt.title("Voltage vs Time Heatmap for Projection Neurons (PNs)")
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | Tutorial/Supplementary: Jupyter Notebooks/Day 5: Optimal Mind Control/.ipynb_checkpoints/Day 5-checkpoint.ipynb | matpalm/PSST |
By this method, we have maximized the usage of our available memory but we can go further and develop a method to allow indefinitely long simulation. The issue behind this entire algorithm is that the memory is not cleared until the python kernel finishes. One way to overcome this is to save the parameters of the model... | from subprocess import call
import numpy as np
total_time = 700
n_splits = 2
time = np.split(np.arange(0,total_time,0.01),n_splits)
# Append the last time point to the beginning of the next batch
for n,i in enumerate(time):
if n>0:
time[n] = np.append(i[0]-0.01,i)
np.save("time",time)
# call successive ... | _____no_output_____ | MIT | Tutorial/Supplementary: Jupyter Notebooks/Day 5: Optimal Mind Control/.ipynb_checkpoints/Day 5-checkpoint.ipynb | matpalm/PSST |
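The truncated last line launches one fresh interpreter per batch, so each run's memory is released when its process exits. A sketch of what that loop plausibly looks like (the `run.py` script is described in the next section; `call` was imported above):

```python
# Run each batch in its own Python process; memory is freed on exit
for n in range(n_splits):
    call(["python", "run.py", str(n)])
```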
Implementing the Runner code"run.py" is essentially identical to the batch-implemented model we developed above with the changes described below: | # Additional Imports #
import sys
# Duration of Simulation #
# t = np.arange(0,sim_time,sim_res)
t = np.load("time.npy")[int(sys.argv[1])] # get first argument to run.py
# Connectivity Matrix Definitions #
if sys.argv[1] == '0':
ach_mat = np.zeros((n_n,n_n)) # Ach Synapse Connectivity Matrix
ach_mat[... | _____no_output_____ | MIT | Tutorial/Supplementary: Jupyter Notebooks/Day 5: Optimal Mind Control/.ipynb_checkpoints/Day 5-checkpoint.ipynb | matpalm/PSST |
Combining all DataJust like we merged all the batches, we merge all the sub-batches and batches. | overall_state = []
# Iterate over the generated output files
for n,i in enumerate(["batch"+str(x+1) for x in range(n_splits)]):
for m,j in enumerate(["_part_"+str(x+1)+".npy" for x in range(n_batch)]):
# Since the first element in the series was the last output, we remove them
        if n>0 or m>0:  # every file except the very first repeats the previous part's last state as its first row
... | _____no_output_____ | MIT | Tutorial/Supplementary: Jupyter Notebooks/Day 5: Optimal Mind Control/.ipynb_checkpoints/Day 5-checkpoint.ipynb | matpalm/PSST |
Using $L_0$ regularization in predicting genetic risk====================================The main aim of this document is to outline the code and theory of using the $L_0$ norm in a regularized regression with the objective to predict disease risk from genetic data.This document contains my thought process and understa... | import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
def l0(x):
return(np.sum(x!=0))
def l1(x):
return(np.sum(np.abs(x)))
def l2(x):
return(np.sum(np.power(x, 2)))
x = np.linspace(-2, 2, 50)
x = np.append(x, 0)
x = np.sort(x)
fig, (axs0, axs1, axs2) = plt.subplots(1, 3, sha... | _____no_output_____ | MIT | notebooks/L0_norm.ipynb | rmporsch/ML_genetic_risk |
The plot above nicely demonstrates the penalty for different norms. As one can see, both $p=1$ and $p=2$ allow shrinkage for large values of $\theta$, while for $p=0$ the penalty is constant. Minimizing $L_0$ norm for parametric modelsOptimization under the $L_0$ penalty is computationally difficult due to the non-differentia... | def hard_sigmoid(x):
return np.min([1, np.max([0, x])])
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def hard_concrete_dist(loc, temp, gamma, zeta):
u = np.random.random()
s = sigmoid((np.log(u) - np.log(1 - u) + np.log(loc)) / temp)
shat = s*(zeta - gamma) + gamma
return hard_sigmoid(shat)
de... | _____no_output_____ | MIT | notebooks/L0_norm.ipynb | rmporsch/ML_genetic_risk |
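To see what this gate distribution looks like, one can draw many samples and measure how much probability mass sits exactly at 0 and 1; the mass at 0 is what delivers exact sparsity. A small usage sketch of `hard_concrete_dist` (the parameter values are illustrative; the stretched interval with gamma < 0 and zeta > 1 follows the $L_0$ literature):

```python
# Sample gates; exact zeros correspond to pruned parameters
samples = np.array([hard_concrete_dist(loc=0.5, temp=0.5, gamma=-0.1, zeta=1.1)
                    for _ in range(10000)])
print("P(z = 0) ~", np.mean(samples == 0))
print("P(z = 1) ~", np.mean(samples == 1))
```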
Implementation of the $L_0$ normThe next step is to put the theory into practice. I will therefore make use of Google's TensorFlow to implement the $L_0$ norm. Here it's good to know that this has been implemented before in PyTorch. I will compare my implementation with theirs to ensure I have done it correctly. The re... | import tensorflow as tf
from sklearn.model_selection import train_test_split
from pyplink import PyPlink
import sys
import os
DATAFOLDER = os.path.realpath(filename='../data')
PLINKDATA = '1kgb'
FILEPATH = os.path.join(DATAFOLDER, PLINKDATA)
def count_lines(filepath, header=False):
"""Count the number of rows in a... | _____no_output_____ | MIT | notebooks/L0_norm.ipynb | rmporsch/ML_genetic_risk |
Regular ExpressionsRegular expressions are `text matching patterns` described with a formal syntax. You'll often hear regular expressions referred to as 'regex' or 'regexp' in conversation. Regular expressions can include a variety of rules, from finding repetition to text-matching, and much more. As you advance in Py... | import re
# List of patterns to search for
patterns = [ 'term1', 'term2' ]
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
for p in patterns:
print ('Searching for "%s" in Sentence: \n"%s"' % (p, text))
#Check for match
if re.search(p, text):
print ... | Searching for "term1" in Sentence:
"This is a string with term1, but it does not have the other term."
Match was found.
Searching for "term2" in Sentence:
"This is a string with term1, but it does not have the other term."
No Match was found.
| MIT | Regular Expression/PY0101EN-Regular Expressions.ipynb | reddyprasade/PYTHON-BASIC-FOR-ALL |
Now we've seen that re.search() will take the pattern, scan the text, and then return a **Match** object. If no pattern is found, **None** is returned. To give a clearer picture of this match object, check out the cell below: | # List of patterns to search for
pattern = 'term1'
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
match = re.search(pattern, text)
type(match)
match | _____no_output_____ | MIT | Regular Expression/PY0101EN-Regular Expressions.ipynb | reddyprasade/PYTHON-BASIC-FOR-ALL |
This **Match** object returned by the search() method is more than just a Boolean or None; it contains information about the match, including the original input string, the regular expression that was used, and the location of the match. Let's see the methods we can use on the match object: | # Show start of match
match.start()
# Show end
match.end()
s = "abassabacdReddyceaabadjfvababaReddy"
r = re.compile("Reddy")
r
l = re.findall(r,s)
print(l)
import re
s = "abcdefg1234"
r = re.compile("^[a-z][0-9]$")
l = re.findall(r,s)
print(l)
s = "ABCDE1234a"
r = re.compile(r"^[A-Z]{5}[0-9]{4}[a-z]$")
l = re.findall(r... | _____no_output_____ | MIT | Regular Expression/PY0101EN-Regular Expressions.ipynb | reddyprasade/PYTHON-BASIC-FOR-ALL |
Split with regular expressionsLet's see how we can split with the re syntax. This should look similar to how you used the split() method with strings. | # Term to split on
split_term = '@'
phrase = 'What is the domain name of someone with the email: hello@gmail.com'
# Split the phrase
re.split(split_term,phrase) | _____no_output_____ | MIT | Regular Expression/PY0101EN-Regular Expressions.ipynb | reddyprasade/PYTHON-BASIC-FOR-ALL |
Note how re.split() returns a list with the term to split on removed, and the terms in the list are a split-up version of the string. Create a couple more examples for yourself to make sure you understand! Finding all instances of a patternYou can use re.findall() to find all the instances of a pattern in a string. Fo... | # Returns a list of all matches
re.findall('is','test phrase match is in middle')
a = " a list with the term to spit on removed and the terms in the list are a split up version of the string. Create a couple of more examples for yourself to make sure you understand!"
copy = re.findall("to",a)
copy
len(copy) | _____no_output_____ | MIT | Regular Expression/PY0101EN-Regular Expressions.ipynb | reddyprasade/PYTHON-BASIC-FOR-ALL |
Pattern re SyntaxThis will be the bulk of this lecture on using re with Python. Regular expressions supports a huge variety of patterns the just simply finding where a single string occurred. We can use *metacharacters* along with re to find specific types of patterns. Since we will be testing multiple re syntax forms... | def multi_re_find(patterns,phrase):
'''
Takes in a list of regex patterns
Prints a list of all matches
'''
for pattern in patterns:
print ('Searching the phrase using the re check: %r' %pattern)
print (re.findall(pattern,phrase)) | _____no_output_____ | MIT | Regular Expression/PY0101EN-Regular Expressions.ipynb | reddyprasade/PYTHON-BASIC-FOR-ALL |
Repetition Syntax
There are five ways to express repetition in a pattern:
1.) A pattern followed by the meta-character * is repeated zero or more times.
2.) Replace the * with + and the pattern must appear at least once.
3.) Using ? means the pattern appears zero or one time.
4.) For a specific number of... | test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = [ 'sd*', # s followed by zero or more d's
'sd+', # s followed by one or more d's
'sd?', # s followed by zero or one d's
'sd{3}', # s followed by three d's
... | Searching the phrase using the re check: 'sd*'
['sd', 'sd', 's', 's', 'sddd', 'sddd', 'sddd', 'sd', 's', 's', 's', 's', 's', 's', 'sdddd']
Searching the phrase using the re check: 'sd+'
['sd', 'sd', 'sddd', 'sddd', 'sddd', 'sd', 'sdddd']
Searching the phrase using the re check: 'sd?'
['sd', 'sd', 's', 's', 'sd', 'sd', ... | MIT | Regular Expression/PY0101EN-Regular Expressions.ipynb | reddyprasade/PYTHON-BASIC-FOR-ALL |
Character SetsCharacter sets are used when you wish to match any one of a group of characters at a point in the input. Brackets are used to construct character set inputs. For example: the input [ab] searches for occurrences of either a or b. Let's see some examples: | test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = [ '[sd]', # either s or d
's[sd]+'] # s followed by one or more s or d
multi_re_find(test_patterns,test_phrase) | Searching the phrase using the re check: '[sd]'
['s', 'd', 's', 'd', 's', 's', 's', 'd', 'd', 'd', 's', 'd', 'd', 'd', 's', 'd', 'd', 'd', 'd', 's', 'd', 's', 'd', 's', 's', 's', 's', 's', 's', 'd', 'd', 'd', 'd']
Searching the phrase using the re check: 's[sd]+'
['sdsd', 'sssddd', 'sdddsddd', 'sds', 'sssss', 'sdddd']
| MIT | Regular Expression/PY0101EN-Regular Expressions.ipynb | reddyprasade/PYTHON-BASIC-FOR-ALL |
It makes sense that the first [sd] returns every instance. Also, the second input will just return anything starting with an s in this particular case of the test phrase input. ExclusionWe can use ^ to exclude terms by incorporating it into the bracket syntax notation. For example: [^...] will match any single charact... | test_phrase = 'This is a string! But it has punctuation. How can we remove it?'
Use [^!.? ] to check for matches that are not a !,.,?, or space. Add the + to check that the match appears at least once, this basically translate into finding the words. | re.findall('[^!.? ]+',test_phrase) | _____no_output_____ | MIT | Regular Expression/PY0101EN-Regular Expressions.ipynb | reddyprasade/PYTHON-BASIC-FOR-ALL |
Character RangesAs character sets grow larger, typing every character that should (or should not) match could become very tedious. A more compact format using character ranges lets you define a character set to include all of the contiguous characters between a start and stop point. The format used is [start-end].Comm... |
test_phrase = 'This is an example sentence. Lets see if we can find some letters.'
test_patterns=[ '[a-z]+', # sequences of lower case letters
'[A-Z]+', # sequences of upper case letters
'[a-zA-Z]+', # sequences of lower or upper case letters
'[A-Z][a-z]+'] ... | Searching the phrase using the re check: '[a-z]+'
['his', 'is', 'an', 'example', 'sentence', 'ets', 'see', 'if', 'we', 'can', 'find', 'some', 'letters']
Searching the phrase using the re check: '[A-Z]+'
['T', 'L']
Searching the phrase using the re check: '[a-zA-Z]+'
['This', 'is', 'an', 'example', 'sentence', 'Lets', '... | MIT | Regular Expression/PY0101EN-Regular Expressions.ipynb | reddyprasade/PYTHON-BASIC-FOR-ALL |
Escape Codes
You can use special escape codes to find specific types of patterns in your data, such as digits, non-digits, whitespace, and more. For example:
- \d : a digit
- \D : a non-digit
- \s : whitespace (tab, space, newline, etc.)
- \S : non-whitespace
- \w : alphanumeric
- \W : non-alphanumeric
Escapes are indicated by prefixing the char... | test_phrase = 'This is a string with some numbers 1233 and a symbol #hashtag'
test_patterns=[ r'\d+', # sequence of digits
r'\D+', # sequence of non-digits
r'\s+', # sequence of whitespace
r'\S+', # sequence of non-whitespace
r'\w+', # alphanumeric charac... | Searching the phrase using the re check: '\\d+'
['1233']
Searching the phrase using the re check: '\\D+'
['This is a string with some numbers ', ' and a symbol #hashtag']
Searching the phrase using the re check: '\\s+'
[' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ']
Searching the phrase using the re check: '\\S... | MIT | Regular Expression/PY0101EN-Regular Expressions.ipynb | reddyprasade/PYTHON-BASIC-FOR-ALL |
You will scrape this mockup site that lists a few data points for addiction centers. | pip install icecream
## import library(ies)
import requests
from bs4 import BeautifulSoup
import pandas as pd
from icecream import ic
## capture the contents of the site in a response object
url = "https://sandeepmj.github.io/scrape-example-page/homework-site.html"
response = requests.get(url)
ic(response.status_code... | _____no_output_____ | MIT | homework/homework-for-week-5-SOLUTION.ipynb | jchapamalacara/fall21-students-practical-python |
Place all the registration data into a list, keeping only the numbers. It should look like this:```['4235', '4234', '4231']``` | ## for loop
regs = soup.find_all("p", class_="registration")
reg_list_fl = []
for item in regs:
reg_list_fl.append(item.get_text().replace("Registration# ", ""))
reg_list_fl
## do it here (create more cells if you need them)
## via list comprehension
regs = soup.find_all("p", class_="registration")
reg_list_lc = [i... | _____no_output_____ | MIT | homework/homework-for-week-5-SOLUTION.ipynb | jchapamalacara/fall21-students-practical-python |
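The list comprehension is cut off above; given the for-loop version it mirrors, a plausible completion is (a sketch with the same `get_text`/`replace` logic, not necessarily the notebook's exact code):

```python
## via list comprehension (hypothetical completion of the truncated cell)
reg_list_lc = [item.get_text().replace("Registration# ", "") for item in regs]
reg_list_lc
```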
Place all the company names into a list.It should look like this:```['Recovery Foundation','New Horizons','Renewable Light']``` | ## do it here (create more cells if you need them)
cos = soup.find_all("a")
cos
### lc
co_names_list = [item.get_text() for item in cos]
co_names_list | _____no_output_____ | MIT | homework/homework-for-week-5-SOLUTION.ipynb | jchapamalacara/fall21-students-practical-python |
Place all the URLS into a list. | ## do it here (create more cells if you need them)
co_urls = [item.get("href") for item in cos]
co_urls | _____no_output_____ | MIT | homework/homework-for-week-5-SOLUTION.ipynb | jchapamalacara/fall21-students-practical-python |
Place all the status into a list.It should look like this:```['Passed', 'Failed', 'Passed']``` | ## do it here (create more cells if you need them)
center_status = soup.find_all("p", class_="status")
center_status
status_list = [status.get_text().replace("Inspection: ", "") for status in center_status ]
status_list
| _____no_output_____ | MIT | homework/homework-for-week-5-SOLUTION.ipynb | jchapamalacara/fall21-students-practical-python |
Turn these lists into dataframes and export to a csv | ### use pandas DataFrame method to zip the lists into a dataframe
df = pd.DataFrame(list(zip(co_names_list, reg_list_lc, status_list, co_urls)),  # reg_list_lc from the list-comprehension cell above
columns =['center_name', "registration_number",'status', 'link'])
df
## export to csv
filename = "recovery_center_list.csv"
df.to_csv(filename, encoding='utf-8', ind... | _____no_output_____ | MIT | homework/homework-for-week-5-SOLUTION.ipynb | jchapamalacara/fall21-students-practical-python |
---_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._--- Assignment 2 - Pandas Int... | import pandas as pd
df = pd.read_csv('olympics.csv', index_col=0, skiprows=1)
for col in df.columns:
if col[:2]=='01':
df.rename(columns={col:'Gold'+col[4:]}, inplace=True)
if col[:2]=='02':
df.rename(columns={col:'Silver'+col[4:]}, inplace=True)
if col[:2]=='03':
df.rename(columns... | _____no_output_____ | MIT | 1_introduction/w2_pandas/4_assignment (ipynb)/Assignment 2.ipynb | shijiansu/coursera-applied-data-science-with-python |
Question 0 (Example)What is the first country in df?*This function should return a Series.* | # You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the row for Afghanistan, which is a Series object. The assignment
# question description will tel... | _____no_output_____ | MIT | 1_introduction/w2_pandas/4_assignment (ipynb)/Assignment 2.ipynb | shijiansu/coursera-applied-data-science-with-python |
Question 1Which country has won the most gold medals in summer games?*This function should return a single string value.* | def answer_one():
return "YOUR ANSWER HERE" | _____no_output_____ | MIT | 1_introduction/w2_pandas/4_assignment (ipynb)/Assignment 2.ipynb | shijiansu/coursera-applied-data-science-with-python |
Question 2Which country had the biggest difference between their summer and winter gold medal counts?*This function should return a single string value.* | def answer_two():
return "YOUR ANSWER HERE" | _____no_output_____ | MIT | 1_introduction/w2_pandas/4_assignment (ipynb)/Assignment 2.ipynb | shijiansu/coursera-applied-data-science-with-python |
Question 3Which country has the biggest difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count? $$\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$Only include countries that have won at least 1 gold in both summer and winter.*This function should return a ... | def answer_three():
return "YOUR ANSWER HERE" | _____no_output_____ | MIT | 1_introduction/w2_pandas/4_assignment (ipynb)/Assignment 2.ipynb | shijiansu/coursera-applied-data-science-with-python |
Question 4Write a function to update the dataframe to include a new column called "Points" which is a weighted value where each gold medal counts for 3 points, silver medals for 2 points, and bronze mdeals for 1 point. The function should return only the column (a Series object) which you created.*This function should... | def answer_four():
return "YOUR ANSWER HERE" | _____no_output_____ | MIT | 1_introduction/w2_pandas/4_assignment (ipynb)/Assignment 2.ipynb | shijiansu/coursera-applied-data-science-with-python |
Part 2For the next set of questions, we will be using census data from the [United States Census Bureau](http://www.census.gov/popest/data/counties/totals/2015/CO-EST2015-alldata.html). Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties... | census_df = pd.read_csv('census.csv')
census_df.head()
def answer_five():
return "YOUR ANSWER HERE" | _____no_output_____ | MIT | 1_introduction/w2_pandas/4_assignment (ipynb)/Assignment 2.ipynb | shijiansu/coursera-applied-data-science-with-python |
Question 6Only looking at the three most populous counties for each state, what are the three most populous states (in order of highest population to lowest population)?*This function should return a list of string values.* | def answer_six():
return "YOUR ANSWER HERE" | _____no_output_____ | MIT | 1_introduction/w2_pandas/4_assignment (ipynb)/Assignment 2.ipynb | shijiansu/coursera-applied-data-science-with-python |
Question 7Which county has had the largest absolute change in population within the period 2010-2015? (Hint: population values are stored in columns POPESTIMATE2010 through POPESTIMATE2015, you need to consider all six columns.)e.g. If County Population in the 5 year period is 100, 120, 80, 105, 100, 130, then its lar... | def answer_seven():
return "YOUR ANSWER HERE" | _____no_output_____ | MIT | 1_introduction/w2_pandas/4_assignment (ipynb)/Assignment 2.ipynb | shijiansu/coursera-applied-data-science-with-python |
Question 8In this datafile, the United States is broken up into four regions using the "REGION" column. Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE 2014.*This function should return a 5x2 DataFra... | def answer_eight():
return "YOUR ANSWER HERE" | _____no_output_____ | MIT | 1_introduction/w2_pandas/4_assignment (ipynb)/Assignment 2.ipynb | shijiansu/coursera-applied-data-science-with-python |
Visualize all the RGB channels | # imports assumed for this cell (cv2, numpy, matplotlib are used below)
import cv2
import numpy as np
import matplotlib.pyplot as plt
def visualize_RGB_Channels(imgArray=None, fig_size=(10,7)):
# spliting the RGB components
B,G,R=cv2.split(imgArray)
#zero matrix
Z=np.zeros(B.shape,dtype=B.dtype)
#initilize subplot
fig,ax=plt.subplots(2,2, figsize=fig_size)
[axi.set_axis_off() for axi in ax.ravel()]
ax[0,0].set_title("O... | _____no_output_____ | MIT | Image-Processing/image-understanding-in-Details.ipynb | TUCchkul/ComputerVision-ObjectDetection |
Filters | sobel=np.array([[1,0,-1],[2,0,-2],[1,0,-1]])
print(sobel)
sobel.T
example1=[[0,0,0,255,255,255],
[0,0,0,255,255,255],
[0,0,0,255,255,255],
[0,0,0,255,255,255],
[0,0,0,255,255,255],
[0,0,0,255,255,255]]
example1=np.array(example1)
plt.imshow(example1, cmap="gray") | _____no_output_____ | MIT | Image-Processing/image-understanding-in-Details.ipynb | TUCchkul/ComputerVision-ObjectDetection |
Apply filter on this image | def find_edges(imgFilter=None, picture=None):
# extract row and column of an input picture
p_row,p_col=picture.shape
k=imgFilter.shape[0]
temp=list()
strides=1
#resultant rows and columns
final_columns=(p_col -k)//strides +1
final_rows=(p_row -k)//strides +1
#take vertically dow... | _____no_output_____ | MIT | Image-Processing/image-understanding-in-Details.ipynb | TUCchkul/ComputerVision-ObjectDetection |
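The function body is cut off before the sliding-window loop. A plausible completion of a valid convolution with stride 1 is sketched below (variable names follow the truncated cell; the elementwise multiply-and-sum is the standard formulation, but this is an assumption, not the notebook's exact code):

```python
# Hypothetical completion: slide the k x k window and multiply-accumulate
def find_edges_sketch(imgFilter, picture, strides=1):
    p_row, p_col = picture.shape
    k = imgFilter.shape[0]
    final_rows = (p_row - k) // strides + 1
    final_columns = (p_col - k) // strides + 1
    temp = []
    for i in range(0, final_rows * strides, strides):
        for j in range(0, final_columns * strides, strides):
            window = picture[i:i + k, j:j + k]
            temp.append(np.sum(window * imgFilter))
    return np.array(temp).reshape(final_rows, final_columns)
```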
Let's now apply horizontal edges | result_car_hor=find_edges(sobel.T, car1_cv2_BGR_Gray)
plt.imshow(result_car_hor, cmap="gray")
example1
example1=[[255,0,0,0,255,255,255,255,0,0,0,255],
[0,0,0,0,255,255,255,255,0,0,0,0],
[0,0,0,0,255,255,255,255,255,255,255,255],
[0,0,0,0,255,255,255,255,255,255,255,255],
[0,0,0,0,25... | _____no_output_____ | MIT | Image-Processing/image-understanding-in-Details.ipynb | TUCchkul/ComputerVision-ObjectDetection |
Task 1 1. Stemming | import nltk  # needed for nltk.word_tokenize below
from nltk.stem.snowball import SnowballStemmer
# It's important to define the language
stemizador = SnowballStemmer('portuguese')
palavras_stemizadas = []
for palavra in nltk.word_tokenize(texto_formatado):
print(palavra, ' = ', stemizador.stem(palavra))
palavras_stemizadas.append(stemizador.stem(palavra))
print(palav... | _____no_output_____ | MIT | processamento-de-linguagem-natural/aula1.ipynb | andredarcie/my-data-science-notebooks |
2. Lemmatization | import spacy
!python -m spacy download pt_core_news_sm
pln = spacy.load('pt_core_news_sm')
pln
palavras = pln(texto_formatado)
# Spacy já separa as palavras em tokens
palavras_lematizadas = []
for palavra in palavras:
#print(palavra.text, ' = ', palavra.lemma_)
palavras_lematizadas.append(palavra.lemma_)
print... | _____no_output_____ | MIT | processamento-de-linguagem-natural/aula1.ipynb | andredarcie/my-data-science-notebooks |
End of Task 1 Using the Goose3 lib | from goose3 import Goose
g = Goose()
url = 'https://www.techtudo.com.br/noticias/2017/08/o-que-e-replika-app-usa-inteligencia-artificial-para-criar-um-clone-seu.ghtml'
materia = g.extract(url)
materia.title
materia.tags
materia.infos
materia.cleaned_text | _____no_output_____ | MIT | processamento-de-linguagem-natural/aula1.ipynb | andredarcie/my-data-science-notebooks |
Task 2 | frequencia_palavras.keys()
frequencia_palavras
frase = """Algoritmos de aprendizados supervisionados utilizam dados coletados""".split(' ')
frequencia_palavras_frase = []
for palavra in frase:
for freq_palavra in frequencia_palavras:
if palavra in freq_palavra:
frequencia_palavras_frase.append(... | correr
característico
dar
inteligente
aprendizado
coletados
partir
estruturar
estatístico
algoritmo
supervisionar
utilizar
conjuntar
extrair
poder
ser
corrido
estabelecer
relação
inteligência
enquanto
quantitativo
modelo
máquina
construir
reconhecimento
atividades
humano
| MIT | processamento-de-linguagem-natural/aula1.ipynb | andredarcie/my-data-science-notebooks |