Columns: markdown (string, 0–1.02M chars), code (string, 0–832k chars), output (string, 0–1.02M chars), license (string, 3–36 chars), path (string, 6–265 chars), repo_name (string, 6–127 chars)
Analyze Gradient Descent Progress. The plot below illustrates how the cost function value changes over each iteration. You should see it decreasing. If the cost function value increases instead, it may mean that gradient descent overshot the minimum and is moving further away from it with each step. From this plot y...
# Draw gradient descent progress for each label. labels = logistic_regression.unique_labels for index, label in enumerate(labels): plt.plot(range(len(costs[index])), costs[index], label=labels[index]) plt.xlabel('Gradient Steps') plt.ylabel('Cost') plt.legend() plt.show()
_____no_output_____
MIT
notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb
pugnator-12/homemade-machine-learning
Calculate Model Training Precision. Calculate how many of the training and test examples have been classified correctly. Normally we want test precision to be as high as possible. If training precision is high but test precision is low, it may mean that our model is overfitted (it works really well with the trainin...
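A compact, self-contained sketch of this check (assuming `logistic_regression.predict` returns one label per example and `y_train`/`y_test` are label arrays of the same shape, as in the cell below):

import numpy as np

# Predict labels for both splits.
y_train_pred = logistic_regression.predict(x_train)
y_test_pred = logistic_regression.predict(x_test)

# Share of correctly classified examples, in percent.
train_precision = np.mean(y_train_pred == y_train) * 100
test_precision = np.mean(y_test_pred == y_test) * 100

print('Training Precision: {:.4f}%'.format(train_precision))
print('Test Precision: {:.4f}%'.format(test_precision))
# A large gap (high training precision, much lower test precision) is a sign of overfitting.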
# Make training set predictions. y_train_predictions = logistic_regression.predict(x_train) y_test_predictions = logistic_regression.predict(x_test) # Check what percentage of them are actually correct. train_precision = np.sum(y_train_predictions == y_train) / y_train.shape[0] * 100 test_precision = np.sum(y_test_pre...
Training Precision: 96.6833% Test Precision: 90.4500%
MIT
notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb
pugnator-12/homemade-machine-learning
Plot Test Dataset Predictions. To illustrate how our model classifies unknown examples, let's plot the first 64 predictions for the test dataset. All green digits on the plot below have been recognized correctly, while the red digits have been misclassified by our classifier. On top of each digit image you...
# How many numbers to display. numbers_to_display = 64 # Calculate the number of cells that will hold all the numbers. num_cells = math.ceil(math.sqrt(numbers_to_display)) # Make the plot a little bit bigger than default one. plt.figure(figsize=(15, 15)) # Go through the first numbers in a test set and plot them. fo...
_____no_output_____
MIT
notebooks/logistic_regression/multivariate_logistic_regression_demo.ipynb
pugnator-12/homemade-machine-learning
Question: What is the safest type of intersection? Let's see how accidents are split based on the place of the event and find out where we can feel safest. The first step before any data analysis is to import the required libraries and data. Any information required to understand the columns is available here: https://www.k...
import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns caracteristics = pd.read_csv('data/caracteristics.csv', encoding='latin1') caracteristics.head()
_____no_output_____
CC0-1.0
Which intersection has the highest number of accidents.ipynb
danpeczek/france-accidents
Let's change the values in the intersection column from numbers to categorical values; below is the look-up used for development. * 1 - Out of intersection * 2 - Intersection in X * 3 - Intersection in T * 4 - Intersection in Y * 5 - Intersection with more than 4 branches * 6 - Giratory * 7 - Place * 8 - Level crossing * 9 - Other intersec...
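As a sketch of how such a look-up can be applied with pandas (a toy frame is used here; the notebook applies the same idea to the 'int' column of `caracteristics` a few cells below):

import pandas as pd

int_lookup = {
    1: 'Out of intersection',
    2: 'Intersection in X',
    3: 'Intersection in T',
    4: 'Intersection in Y',
    5: 'Intersection with more than 4 branches',
    6: 'Giratory',
    7: 'Place',
    8: 'Level crossing',
    9: 'Other intersection',
}

# Replace numeric codes with readable category names.
demo = pd.DataFrame({'int': [1, 2, 6, 9]})
demo['int'] = demo['int'].map(int_lookup)
print(demo)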
caracteristics.columns[caracteristics.isna().sum() != 0]
_____no_output_____
CC0-1.0
Which intersection has the highest number of accidents.ipynb
danpeczek/france-accidents
So it looks like the 'int' column has no missing values, which is great for us in this case. Let's go ahead and rename the values in the 'int' column.
int_dict = { '1': 'Out of intersection', '2': 'X intersection', '3': 'T intersection', '4': 'Y intersection', '5': 'More than 4 branches intersection', '6': 'Giratory', '7': 'Place', '8': 'Level crossing', '9': 'Other' } caracteristics['int'] = caracteristics['int'].astype(str) car...
_____no_output_____
CC0-1.0
Which intersection has the highest number of accidents.ipynb
danpeczek/france-accidents
Install the pymongo package - macOS: pip(3) install pymongo - Windows: conda install -c anaconda pymongo
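A minimal check that the package is importable after installation (nothing project-specific is assumed):

import pymongo

# Print the installed driver version to confirm the install worked.
print(pymongo.version)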
import pymongo, requests
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
1. Connect to the server (create a client)
client = pymongo.MongoClient('mongodb://13.125.237.246:27017') client
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
2. Select a database
db = client.dss db
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
3. List the collections in the database
db.collection_names()
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
4. Select a collection
collection = db.info collection
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
5. find
# find_one : fetches a single document document = collection.find_one({"subject" : "java"}) type(document), document # find : fetches multiple documents documents = collection.find({"subject": "java"}) documents datas = list(documents) len(datas) datas list(documents)
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
All gone: once the cursor returned by find() has been consumed by list(documents), iterating it again returns nothing.
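This happens because a PyMongo cursor is consumed as it is iterated; a small sketch of that behaviour, assuming `collection` is the collection selected above:

documents = collection.find({"subject": "java"})

first_pass = list(documents)   # the cursor is fully consumed here
second_pass = list(documents)  # nothing is left to iterate

print(len(first_pass), len(second_pass))  # the second count is 0
# If the results are needed again, re-run the query or reuse first_pass.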
# count - returns the number of documents documents = collection.find() documents.count() # sort documents = collection.find({"level":{"$lte":3}}).sort("level", pymongo.DESCENDING) list(documents)
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
6. insert
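A minimal sketch of the two insert calls with hypothetical documents (assuming `collection` is the one selected above):

# insert_one stores a single document and reports its generated _id.
one = collection.insert_one({"subject": "css", "level": 1})
print(one.inserted_id)

# insert_many stores a list of documents in one call.
many = collection.insert_many([
    {"subject": "webpack", "level": 2},
    {"subject": "gulp", "level": 3},
])
print(many.inserted_ids)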
# insert_one data = {"subject":"css", "level":1, "comments":[{"name":"peter", "msg":"easy"}]} result = collection.insert_one(data) result result.inserted_id # insert_many datas = [ {"subject":"webpack", "level":2, "comments":[{"name":"peter", "msg":"easy"}]}, {"subject":"gulp", "level":3, "comments":[{"name":...
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
Crawl Zigbang listing data and save it
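A hedged sketch of the overall flow (fetch JSON over HTTP, then bulk-insert into a collection). Here `url` is the endpoint built in the next cell, the "items" key is a hypothetical guess at the payload structure, and the 'crawling' database / 'zigbang' collection are the names dropped later in this notebook:

import requests

response = requests.get(url)       # call the API endpoint built below
payload = response.json()          # parse the JSON body

items = payload.get("items", [])   # hypothetical key; inspect the real payload first
if items:
    # Store the raw documents in database "crawling", collection "zigbang".
    client.crawling.zigbang.insert_many(items)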
url = "https://api.zigbang.com/v3/items?detail=true&item_ids=[12258942,12217921,12251354,12042761,12270198,12263778,12149733,12263079,12046500,12227516,12245261,12258364,11741210,11947081,12081429,12248641,12039772,12148952,12271001,12201879,12269163,12268373,12268568,12204018,12247416,12241201,12174611,12254380,122337...
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
Extract listings with a rent of 50 or less
query = {"rent":{"$lte":50}} documents = collection.find(query) documents datas = list(documents) len(datas) # pandas로 만들어보자 df = pd.DataFrame(datas) df.tail() filtered_df = df[['rent','options','size','deposit']] filtered_df.tail() query = {"rent":{"$lte":50}} documents = collection.find(query, {"_id":False,"deposit...
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
delete - database
client.drop_database("crawling")
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
delete - collection
client.crawling.drop_collection("zigbang")
_____no_output_____
MIT
database/pymongo.ipynb
Junhojuno/TIL
Synthetic data: Categorical variables
import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.model_selection import KFold from sklearn.utils import shuffle from sklearn.metrics import accuracy_score from synthesize_data import synthesize_data import expectation_reflection as ER from sklearn.linear_model im...
_____no_output_____
MIT
.ipynb_checkpoints/category-checkpoint.ipynb
danhtaihoang/expectation-reflection
DDPG - BipedalWalker-v2 - Xinyao Qian - Tianhao Liu. Get familiar with the BipedalWalker-v2 environment first. We find that BipedalWalker behaves embarrassingly badly when it follows a random walking strategy.
import tensorflow as tf import numpy as np import gym # Load Environment ENV_NAME = 'BipedalWalker-v2' env = gym.make(ENV_NAME) # Reproducible environment parameters env.seed(1) s=env.reset() episode=100 steps=5000 for i in range(episode): for j in range(steps): env.render() a=env.action_space.s...
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
MIT
ActorCritic/.ipynb_checkpoints/DDPG-Copy1-checkpoint.ipynb
bluemapleman/Maple-Reinforcement-Learning
Our solution **Since the action space of BipedalWalker is continuous, value-based models such as Q-Learning or DQN are not applicable**, because value-based models generally try to fit a value function that tells us how good it is to be at a certain state s (V(s)) or to take action a at the state s...
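One quick way to see why this matters is to inspect the action space directly: in Gym, a continuous action space is a `Box`, while a discrete one is a `Discrete`. A small check, assuming `env` is the BipedalWalker-v2 environment created above:

import gym

# Box => continuous-valued actions, so tabular/DQN-style value methods don't apply directly.
print(type(env.action_space))                        # gym.spaces.Box for BipedalWalker-v2
print(isinstance(env.action_space, gym.spaces.Box))  # True
print(env.action_space.low, env.action_space.high)   # per-dimension action bounds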
import gym import os import tensorflow as tf import numpy as np import shutil np.random.seed(1) tf.set_random_seed(1) # Load Environment ENV_NAME = 'BipedalWalker-v2' env = gym.make(ENV_NAME) # Reproducible environment parameters env.seed(1) STATE_DIM = env.observation_space.shape[0] # 24 environment variables ACT...
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype. Finished!
MIT
ActorCritic/.ipynb_checkpoints/DDPG-Copy1-checkpoint.ipynb
bluemapleman/Maple-Reinforcement-Learning
Main loop for training
######################################## Hyperparameters ######################################## MAX_EPISODES = 500 LR_A = 0.000005 # learning rate for actor LR_C = 0.000005 # learning rate for critic GAMMA = 0.999 # reward discount REPLACE_ITER_A = 1700 REPLACE_ITER_C = 1500 MEMORY_CAPACITY = 200000 BATCH_SIZE...
INFO:tensorflow:Restoring parameters from ./data/DDPG.ckpt-1200000 Episode: 0 | Achieve | Running_r: 271 | Epi_r: 271.74 | Exploration: 0.000 | Pos: 88 | LR_A: 0.000000 | LR_C: 0.000000 Episode: 1 | Achieve | Running_r: 271 | Epi_r: 269.24 | Exploration: 0.000 | Pos: 88 | LR_A: 0.000000 | LR_C: 0.000000 Episode: 2 | ...
MIT
ActorCritic/.ipynb_checkpoints/DDPG-Copy1-checkpoint.ipynb
bluemapleman/Maple-Reinforcement-Learning
ES Module 3. Welcome to Module 3! Last time, we went over: 1. Strings and Integers 2. Arrays 3. Tables. Today we will continue working with tables and introduce a new procedure called filtering. Before you start, run the following cell.
# Loading our libraries, i.e. tool box for our module import numpy as np from datascience import *
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Paired Programming. Today we want to introduce a new system of work called paired programming. Wikipedia defines pair programming in the following way: Pair programming is an agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, th...
# Calculating the total number of pets in my house. num_cats = 4 num_dogs = 10 total = num_cats + num_dogs total
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now, write a comment in the cell below explaining what it is doing, then run the cell to see if you're correct.
animals = make_array('Cat', 'Dog', 'Bird', 'Spider') num_legs = make_array(4, 4, 2, 8) my_table = Table().with_columns('Animal', animals, 'Number of Legs', num_legs) my_table
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
1. Tables (Continued) It is time to practice with tables again. We want to load the table files you uploaded last module. This time, you do it by yourself. Load the tables "inmates_by_year.csv" and "correctional_population.csv" and assign each to a variable. Remember, to load a table we use `Table.read_table()` and pass...
inmates_by_year = Table.read_table('inmates_by_year.csv') correctional_population = Table.read_table('correctional_population.csv')
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Good job! Now we have all the tables loaded. It is time to extract some information from these tables! In the next several cells, we will guide you through a quick manipulation that will allow us to extract information about the entire correctional population using both tables we have loaded above. In the correctional_...
# First, extract the column name "Number supervised per 100,000 U.S. adult residents/c" from # the correctional_population table and assign it to the variable provided. c_p = correctional_population.column('Number supervised per 100,000 U.S. adult residents/c') c_p
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Filtering. When you run the cell above, you may notice that the values in our array are actually strings (you can tell because each value has quotation marks around it). However, we can't do mathematical operations on strings, so we'll have to convert this array first so it holds integers instead of strings. This is calle...
# filtering def string_to_int(val): return int(val.replace(',', '')) c_p = correctional_population.apply(string_to_int, 'Number supervised per 100,000 U.S. adult residents/c')
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now, let's continue finding the real value of c_p.
# In this cell, multiply the correctional population column name "Number supervised per 100,000 U.S. adult residents/c" # by 100000 and assign it to a new variable (c_p stands for correctional population) real_c_p = c_p * 100000 real_c_p
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Next we want to assign the Total column from inmates_by_year to a variable in order to be able to operate on it.
total_inmates = inmates_by_year.column('Total') total_inmates
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Again, run the following line to convert the values in `total_inmates` to ints.
# filtering total_inmates = inmates_by_year.apply(string_to_int, 'Total') total_inmates
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Switch positions: the navigator now takes the wheel. Now that we have the variables holding all the information we want to manipulate, we can start digging into it. We want to come up with a scheme that will allow us to see the percentage of people that are incarcerated, out of the total supervised population, by year. Bef...
# filtering real_c_p = real_c_p.take(np.arange(1, real_c_p.size)) real_c_p
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now our arrays both correspond to data from the same years and we can do operations with both of them!
# Write a short code that stores the percentage of people incarcerated from the supervised population # (rel stands for relative, c_p stands for correctional population) inmates_rel_c_p = (total_inmates / real_c_p) * 100 inmates_rel_c_p
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now, this actually gives us useful information! Why not write it down? Please write down what this information tells you about the judicial infrastructure - we are looking for a more mathy/dry explanation (rather than an observation of how bad it is).
# A simple sentence will suffice, we want to see intuitive understanding. Please call a teacher when done to check! extract_information_shows = "The percentage of people, supervised by the US adult correctional system, who are incarcerated"
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
For a final touch, please sort inmates_rel_c_p in descending order in the next cell. We won't tell you how to sort; this time, please check the last lab module on how to sort a table. Being able to reuse code you already have is an important quality of a programmer. Hint: Remember that you can only use `sort` on tab...
# Please sort inmates_rel_c_p in descending order and print it out inmates_rel_c_p = Table().with_columns('Inmate_percentage', inmates_rel_c_p) inmates_rel_c_p.sort('Inmate_percentage',descending = True) inmates_rel_c_p
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Before starting, please switch positions. Filtering. Right now, we can't really get much extra information from our tables other than by sorting them. In this section, we'll learn how to filter our data so we can get more useful insights from it. This is especially useful when dealing with larger data sets! For example,...
inmates_by_year.where('Year', are.above(2012))
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Notice that `where` takes in two arguments: the name of the column, and the condition we are filtering by. Now, try it for yourself! In the cell below, filter `correctional_population` so it only includes years after 2008. If you run the following cell, you'll find a complete description of all such conditions (which ...
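For reference, a possible answer follows the same pattern as the cell above; this is a sketch assuming the 'Year' column of `correctional_population` is numeric:

# Keep only the rows whose Year is strictly greater than 2008.
correctional_population.where('Year', are.above(2008))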
functions = make_array('are.equal_to(Z)', 'are.above(x)', 'are.above_or_equal_to(x)', 'are.below(x)', 'are.below_or_equal_to(x)', 'are.between(x, y)', 'are.strictly_between(x, y)', 'are.between_or_equal_to(x, y)', 'are.containing(S)') descriptions = make_array('Equal to Z',...
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now, we'll be using filtering to gain more insights about our two tables. Before we start, be sure to run the following cell so we can ensure every column we're working with is numerical.
inmates_by_year = inmates_by_year.drop('Total').with_column('Total', total_inmates).select('Year', 'Total', 'Standard error/a') correctional_population = correctional_population.drop('Number supervised per 100,000 U.S. adult residents/c').with_column('Number supervised per 100,000 U.S. adult residents/c', c_p).select('...
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
First, find the mean of the total number of inmates. Hint: You can use the `np.mean()` function on arrays to calculate this.
avg_inmates = np.mean(inmates_by_year.column('Total')) avg_inmates
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now, filter `inmates_by_year` to find data for the years in which the number of total inmates was under the average.
filtered_inmates = inmates_by_year.where('Total', are.below(avg_inmates)) filtered_inmates
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
What does this tell you about the total inmate population? Write your answer in the cell below.
answer = "YOUR TEXT HERE"
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Before continuing, please switch positions. Now, similarly, find the average number of adults under correctional supervision, and filter the table to find the years in which the number of adults under correctional supervision was under the average.
avg = np.mean(correctional_population.column('Number supervised per 100,000 U.S. adult residents/c')) filtered_c_p = correctional_population.where('Number supervised per 100,000 U.S. adult residents/c', are.below(avg)) filtered_c_p
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Do the years match up? Does this make sense based on the proportions you calculated above in `inmates_rel_c_p`?
answer = "YOUR TEXT HERE"
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now, from `correctional_population`, filter the table so the value of U.S. adult residents under correctional supervision is 1 in 31. Remember, the values in this column are strings.
c_p_1_in_34 = correctional_population.where('U.S. adult residents under correctional supervision', are.containing('1 in 31')) c_p_1_in_34
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Now, we have one last challenge exercise. Before doing this, finish the challenge exercises from last module. We highly encourage you to work with your partner on this one. In the following cell, find the year with the max number of supervised adults for which the proportion of US adult residents under correctional supe...
one_in_32 = correctional_population.where('U.S. adult residents under correctional supervision', are.containing('1 in 32')) one_in_32_sorted = one_in_32.sort('Number supervised per 100,000 U.S. adult residents/c', descending = True) year = one_in_32_sorted.column('Year').item(0) year
_____no_output_____
MIT
ES Module 3 Soln.ipynb
ds-modules/ETHSTD-21AC-
Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func from sqlalchemy import create_engine, inspect engine = create_engine("sqlite:///Resources/hawaii.sqlite") # reflect an existin...
(1, 'WAIKIKI 717.2, HI US', 'USC00519397', -157.8168, 21.2716, 3.0) (2, 'KANEOHE 838.1, HI US', 'USC00513117', -157.8015, 21.4234, 14.6) (3, 'KUALOA RANCH HEADQUARTERS 886.9, HI US', 'USC00514830', -157.8374, 21.5213, 7.0) (4, 'PEARL CITY, HI US', 'USC00517948', -157.9751, 21.3934, 11.9) (5, 'UPPER WAHIAWA 874.3, HI US...
MIT
climate_starter.ipynb
RShailza/sqlalchemy-challenge
OR
# Create the inspector and connect it to the engine inspector = inspect(engine) # Collect the names of tables within the database inspector.get_table_names() # Using the inspector to print the column names within the 'measurements' table and their types columns1 = inspector.get_columns('measurements') for column in col...
id INTEGER station TEXT name TEXT latitude FLOAT longitude FLOAT elevation FLOAT
MIT
climate_starter.ipynb
RShailza/sqlalchemy-challenge
Exploratory Climate Analysis: Precipitation Analysis
# Design a query to retrieve the last 12 months of precipitation data and plot the results # Calculate the last date. session.query(Measurement.date).order_by(Measurement.date.desc()).first() # Calculate the date 1 year ago from the last data point in the database year_ago_date= dt.date(2017, 8, 23) - dt.timedelta(day...
_____no_output_____
MIT
climate_starter.ipynb
RShailza/sqlalchemy-challenge
Station Analysis
# Design a query to show how many stations are available in this dataset? number_of_stations = session.query(Station).count() number_of_stations # What are the most active stations? (i.e. what stations have the most rows)? # List the stations and the counts in descending order. active_stations = (session.query(Measurem...
_____no_output_____
MIT
climate_starter.ipynb
RShailza/sqlalchemy-challenge
Bonus Challenge Assignment
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d' # and return the minimum, average, and maximum temperatures for that range of dates def calc_temps(start_date, end_date): """TMIN, TAVG, and TMAX for a list of dates. Args: start_date (string): A date ...
_____no_output_____
MIT
climate_starter.ipynb
RShailza/sqlalchemy-challenge
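For reference, a hypothetical call to the `calc_temps` helper defined above, assuming the reflected session is available and the dates fall inside the dataset (which ends on 2017-08-23):

# Returns a list with one (TMIN, TAVG, TMAX) tuple for the date range.
print(calc_temps('2017-08-01', '2017-08-07'))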
ROUGH WORK FOR APP.PY
# from flask import Flask, jsonify # def precipitation(): # # Create session (link) from Python to the DB # session = Session(engine) # # Query Measurement # results = (session.query(Measurement.date, Measurement.prcp) # .order_by(Measurement.date)) # # Create a dictiona...
_____no_output_____
MIT
climate_starter.ipynb
RShailza/sqlalchemy-challenge
Introduction to NLTK. We have seen how to do [some basic text processing in Python](https://github.com/Mashimo/datascience/blob/master/03-NLP/helloworld-nlp.ipynb); now we introduce an open-source framework for natural language processing that can further help us work with human languages: [NLTK (Natural Language ToolK...
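Before running the tokenisation examples below, NLTK may need its tokeniser models downloaded once; 'punkt' is the standard resource for word and sentence tokenisation:

import nltk

# One-off download of the Punkt models used by word_tokenize() and sent_tokenize().
nltk.download('punkt')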
sampleText1 = "The Elephant's 4 legs: THE Pub! You can't believe it or can you, the believer?" sampleText2 = "Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29."
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Tokens. The basic atomic parts of a text are the tokens. A token is the NLP name for a sequence of characters that we want to treat as a group. We have seen how we can extract tokens by splitting the text at the blank spaces. NLTK has a function word_tokenize() for this:
import nltk s1Tokens = nltk.word_tokenize(sampleText1) s1Tokens len(s1Tokens)
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
21 tokens extracted, which include words and punctuation. Note that the tokens are different from what a split by blank spaces would obtain, e.g. "can't" is considered by NLTK to be TWO tokens: "can" and "n't" (= "not"), while a tokeniser that splits text by spaces would consider it a single token: "can't". Let's see anot...
s2Tokens = nltk.word_tokenize(sampleText2) s2Tokens
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
And we can apply it to an entire book, "The Prince" by Machiavelli that we used last time:
# If you would like to work with the raw text you can use 'bookRaw' with open('../datasets/ThePrince.txt', 'r') as f: bookRaw = f.read() bookTokens = nltk.word_tokenize(bookRaw) bookText = nltk.Text(bookTokens) # special format nBookTokens= len(bookTokens) # or alternatively len(bookText) print ("*** Analysing book...
*** Analysing book *** The book is 300814 chars long The book has 59792 tokens
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
As mentioned above, the NLTK tokeniser works in a more sophisticated way than just splitting by spaces, therefore this time we got more tokens. Sentences. NLTK has a function to tokenise a text not into words but into sentences.
text1 = "This is the first sentence. A liter of milk in the U.S. costs $0.99. Is this the third sentence? Yes, it is!" sentences = nltk.sent_tokenize(text1) len(sentences) sentences
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
As you see, it does not split just after each full stop but checks whether the stop is part of an acronym (U.S.) or a number (0.99). It also correctly splits sentences after question or exclamation marks, but not after commas.
sentences = nltk.sent_tokenize(bookRaw) # extract sentences nSent = len(sentences) print ("The book has {} sentences".format (nSent)) print ("and each sentence has in average {} tokens".format (nBookTokens / nSent))
The book has 1416 sentences and each sentence has in average 42.22598870056497 tokens
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Most common tokens. What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency? The NLTK FreqDist class is used to encode “frequency distributions”, which count the number of times that something occurs, for example a token. Its `most_common()` method then returns a list of tuples where...
def get_top_words(tokens): # Calculate frequency distribution fdist = nltk.FreqDist(tokens) return fdist.most_common() topBook = get_top_words(bookTokens) # Output top 20 words topBook[:20]
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
The comma is the most common: we need to remove the punctuation. Most common alphanumeric tokens. We can use `isalpha()` to check whether a token is a word and not punctuation.
topWords = [(freq, word) for (word,freq) in topBook if word.isalpha() and freq > 400] topWords
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
We can also remove any capital letters before tokenising:
def preprocessText(text, lowercase=True): if lowercase: tokens = nltk.word_tokenize(text.lower()) else: tokens = nltk.word_tokenize(text) return [word for word in tokens if word.isalpha()] bookWords = preprocessText(bookRaw) topBook = get_top_words(bookWords) # Output top 20 words topBook[:...
*** Analysing book *** The text has now 52202 words (tokens)
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Now we have removed the punctuation and the capital letters, but the most common token is "the", not a significant word ... As we saw last time, these are so-called **stop words** that are very common and are normally stripped from a text when doing this kind of analysis. Meaningful most common tokens. A simple appr...
meaningfulWords = [word for (word,freq) in topBook if len(word) > 5 and freq > 80] sorted(meaningfulWords)
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
This would work but would also leave out tokens such as `I` and `you`, which are actually significant. The better approach - which we have seen earlier - is to remove stop words using external files containing them. NLTK has a corpus of stop words in several languages:
from nltk.corpus import stopwords stopwordsEN = set(stopwords.words('english')) # english language betterWords = [w for w in bookWords if w not in stopwordsEN] topBook = get_top_words(betterWords) # Output top 20 words topBook[:20]
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Now we have excluded words such as `the`, but we can further improve the list by looking at semantically similar words, such as plural and singular versions.
'princes' in betterWords betterWords.count("prince") + betterWords.count("princes")
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Stemming. Above, in the list of words we have both `prince` and `princes`, which are respectively the singular and plural versions of the same word (the **stem**). The same happens with verb conjugation (`love` and `loving` are considered different words but are actually *inflections* of the same verb). **Stemmer**...
input1 = "List listed lists listing listings" words1 = input1.lower().split(' ') words1
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
And now we apply one of the NLTK stemmers, the Porter stemmer:
porter = nltk.PorterStemmer() [porter.stem(t) for t in words1]
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
As you see, all 5 different words have been reduced to the same stem and would now be the same lexical token.
stemmedWords = [porter.stem(w) for w in betterWords] topBook = get_top_words(stemmedWords) topBook[:20] # Output top 20 words
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Now the word `princ` is counted 281 times, exactly like the sum of prince and princes. A note here: Stemming usually refers to a crude heuristic process that chops off the ends of words in the hope of achieving this goal correctly most of the time, and often includes the removal of derivational affixes. `Prince` and ...
from nltk.stem.snowball import SnowballStemmer stemmerIT = SnowballStemmer("italian") inputIT = "Io ho tre mele gialle, tu hai una mela gialla e due pere verdi" wordsIT = inputIT.split(' ') [stemmerIT.stem(w) for w in wordsIT]
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Lemma. Lemmatization usually refers to doing things properly with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only and to return the **base or dictionary form of a word, which is known as the lemma**. While a stemmer operates on a single word without knowl...
from nltk.stem import WordNetLemmatizer lemmatizer = WordNetLemmatizer() words1 [lemmatizer.lemmatize(w, 'n') for w in words1] # n = nouns
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
We tell the lemmatiser that the words are nouns. In this case it maps words such as list (singular noun) and lists (plural noun) to the same lemma, but leaves the other words as they are.
[lemmatizer.lemmatize(w, 'v') for w in words1] # v = verbs
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
We get a different result if we say that the words are verbs. They all have the same lemma; in fact they could all be different inflections or conjugations of a verb. The word types that can be used are: 'n' = noun, 'v' = verb, 'a' = adjective, 'r' = adverb.
words2 = ['good', 'better'] [porter.stem(w) for w in words2] [lemmatizer.lemmatize(w, 'a') for w in words2]
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
It works with different adjectives; it doesn't look only at prefixes and suffixes. You might wonder why stemmers are used instead of always using lemmatisers: stemmers are much simpler, smaller and faster, and for many applications good enough. Now we lemmatise the book:
lemmatisedWords = [lemmatizer.lemmatize(w, 'n') for w in betterWords] topBook = get_top_words(lemmatisedWords) topBook[:20] # Output top 20 words
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Yes, the lemma now is `prince`. But note that we treat all words in the book as nouns, while the proper way would be to apply the correct type to each single word. Part of speech (PoS). In traditional grammar, a part of speech (abbreviated form: PoS or POS) is a category of words which have similar grammatica...
text1 = "Children shouldn't drink a sugary drink before bed." tokensT1 = nltk.word_tokenize(text1) nltk.pos_tag(tokensT1)
_____no_output_____
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
The NLTK function `pos_tag()` will tag each token with the estimated PoS. NLTK has 13 categories of PoS. You can check the acronym using the NLTK help function:
nltk.help.upenn_tagset('RB')
RB: adverb occasionally unabatingly maddeningly adventurously professedly stirringly prominently technologically magisterially predominately swiftly fiscally pitilessly ...
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Which are the most common PoS in The Prince book?
tokensAndPos = nltk.pos_tag(bookTokens) posList = [thePOS for (word, thePOS) in tokensAndPos] fdistPos = nltk.FreqDist(posList) fdistPos.most_common(5) nltk.help.upenn_tagset('IN')
IN: preposition or conjunction, subordinating astride among uppon whether out inside pro despite on by throughout below within for towards near behind atop around if like until below next into if beside ...
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
It's not nouns (NN) but prepositions or subordinating conjunctions (IN). Extra note: Parsing the grammar structure. Words can be ambiguous and sometimes it is not easy to tell which kind of PoS a word is; for example, in the sentence "visiting aunts can be a nuisance", is visiting a verb or an adjective? Taggin...
# Parsing sentence structure text2 = nltk.word_tokenize("Alice loves Bob") grammar = nltk.CFG.fromstring(""" S -> NP VP VP -> V NP NP -> 'Alice' | 'Bob' V -> 'loves' """) parser = nltk.ChartParser(grammar) trees = parser.parse_all(text2) for tree in trees: print(tree)
(S (NP Alice) (VP (V loves) (NP Bob)))
Apache-2.0
03-NLP/introNLTK.ipynb
holabayor/datascience
Optimizing Code: Holiday Gifts. In the last example, you learned that using vectorized operations and more efficient data structures can optimize your code. Let's use these tips for one more example. Say your online gift store has one million users who each listed a gift on a wish list. You have the prices for each of t...
import time import numpy as np with open('gift_costs.txt') as f: gift_costs = f.read().split('\n') gift_costs = np.array(gift_costs).astype(int) # convert string to int start = time.time() total_price = 0 for cost in gift_costs: if cost < 25: total_price += cost * 1.08 # add cost after tax prin...
32765421.24 Duration: 6.560739994049072 seconds
MIT
udacity_ml/software_engineering/holiday_gifts/optimizing_code_holiday_gifts.ipynb
issagaliyeva/machine_learning
Here you iterate through each cost in the list and check if it's less than 25. If so, you add the cost to the total price after tax. This works, but there is a much faster way to do this. Can you refactor this to run in under half a second? Refactor Code. **Hint:** Using numpy makes it very easy to select all the elements...
start = time.time() total_price = np.sum(gift_costs[gift_costs < 25] * 1.08) # compute the total price print(total_price) print('Duration: {} seconds'.format(time.time() - start))
32765421.24 Duration: 0.09631609916687012 seconds
MIT
udacity_ml/software_engineering/holiday_gifts/optimizing_code_holiday_gifts.ipynb
issagaliyeva/machine_learning
**Import Libraries and modules**
# https://keras.io/ !pip install -q keras import keras import numpy as np from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten, Add from keras.layers import Convolution2D, MaxPooling2D from keras.utils import np_utils from keras.datasets import mnist
Using TensorFlow backend.
MIT
1st_DNN.ipynb
joyjeni/-Learn-Artificial-Intelligence-with-TensorFlow
Load pre-shuffled MNIST data into train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data() print (X_train.shape) from matplotlib import pyplot as plt %matplotlib inline plt.imshow(X_train[0]) X_train = X_train.reshape(X_train.shape[0], 28, 28,1) X_test = X_test.reshape(X_test.shape[0], 28, 28,1) X_train = X_train.astype('float32') X_test = X_test.astyp...
_____no_output_____
MIT
1st_DNN.ipynb
joyjeni/-Learn-Artificial-Intelligence-with-TensorFlow
Result 1 Replicating results from Boise Pre-K Program Evaluation 2017 Page 7
print("Fall LSF No Vista Pre-k = ", getMean(nonprek.copy(), 'Fall_LSF')) print("Fall LSF Vista Pre-k = ", getMean(prekst.copy(), 'Fall_LSF')) print("Fall LNF No Vista Pre-k = ", getMean(nonprek.copy(), 'Fall_LNF')) print("Fall LNF Vista Pre-k = ", getMean(prekst.copy(), 'Fall_LNF')) print("Winter LSF No Vista Pre-k = "...
Spring LSF No Vista Pre-k = 40.38297872340426 Spring LSF Vista Pre-k = 45.0 Spring LNF No Vista Pre-k = 41.07446808510638 Spring LNF Vista Pre-k = 46.26086956521739
MIT
project.ipynb
gcaracas/ds_project
Result 1 Replicating results from Boise Pre-K Program Evaluation 2017 Page 9
# We need to arrange our StudentID from float to int, and then to string def convertToInt(df_t): strTbl=[] for a in df_t: strTbl.append(int(a)) return strTbl def getListValues(dataset, firstSelector, secondSelector): tbl = dataset.reset_index() data = tbl.groupby([firstSelector])[[secon...
_____no_output_____
MIT
project.ipynb
gcaracas/ds_project
Trending improvement in Get Ready To Read score. Question: Do students who start the pre-k program show improvement from fall to spring on their Get Ready To Read scores?
sns.pairplot(prekst, x_vars="Fall_GRTR_Score", y_vars="Spring_GRTR_Score",kind="reg")
_____no_output_____
MIT
project.ipynb
gcaracas/ds_project
Rate of improvement for pre-k and no pre-k students together. Question: In kindergarten, do we see any difference in improvement rate between kids with and without pre-k? Preliminary observation: Here we will use the slope of our regression to measure that, and we do see that kids with pre-k hav...
print("LNF Scores for pre-k Students") p1=sns.pairplot(prekst, x_vars=["Fall_LNF"],y_vars="Spring_LNF", kind='reg') axes = p1.axes axes[0,0].set_ylim(0,100) print("LNF Scores for pre-k Students") p2=sns.pairplot(prekst, x_vars=["Fall_LSF"],y_vars="Spring_LSF", kind='reg') axes = p2.axes axes[0,0].set_ylim(0,100) print(...
LNF Scores for no pre-k Students LSF Scores for no pre-k Students
MIT
project.ipynb
gcaracas/ds_project
Now let's get the real numbers on the rate of learning (our slope m from the regression).
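The idea behind the `getSlope` helper in the next cell can be sketched as fitting a simple linear regression and reading off its coefficient; the scores below are made up for illustration and are not the project data:

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical fall and spring scores for a handful of students.
fall = np.array([10, 20, 30, 40, 50]).reshape(-1, 1)
spring = np.array([15, 28, 39, 52, 63]).reshape(-1, 1)

linreg = LinearRegression()
linreg.fit(fall, spring)

# The slope (coefficient) is the estimated learning rate: spring points gained per fall point.
print(linreg.coef_)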
# Import SK Learn train test split from sklearn.linear_model import LinearRegression from sklearn.cross_validation import train_test_split def getSlope(X, y): # Assign variables to capture train test split output #X_train, X_test, y_train, y_test = train_test_split(X, y) # Instantiate linreg = LinearReg...
_____no_output_____
MIT
project.ipynb
gcaracas/ds_project
Question: What is the quantitative learning rate in LNF for students with and without pre-k?
toEval = nonprek.copy() X = toEval['Fall_LNF'] y = toEval['Winter_LNF'] X[0]=0 # Fix an issue because the first sample is Nan thus ffill is ineffective for first sample X=X.fillna(method='ffill') y=y.fillna(method='ffill') X=X.values.reshape(-1,1) y=y.values.reshape(-1,1) print("LNF Learning rate for studenst non pre-K...
LNF Learning rate for studenst pre-K From Fall to Winter = [1.01872332]
MIT
project.ipynb
gcaracas/ds_project
Question: What is the quantitative learning rate in LSF for students with and without pre-k?
toEval = nonprek.copy() X = toEval['Fall_LSF'] y = toEval['Winter_LSF'] X[0]=0 # Fix an issue because the first sample is Nan thus ffill is ineffective for first sample X=X.fillna(method='ffill') y=y.fillna(method='ffill') X=X.values.reshape(-1,1) y=y.values.reshape(-1,1) print("LSF Learning rate for studenst non pre-K...
LSF Learning rate for studenst pre-K From Fall to Winter = [1.194067]
MIT
project.ipynb
gcaracas/ds_project
Question: Is there a difference in learning rate between high performers from both groups? Observation: The following plots have the same scale
pkhp = prekst[prekst['Fall_Level'] == 3] npkhp = nonprek[nonprek['Fall_Level'] == 3] print("LNF Scores for pre-k Students") p1=sns.pairplot(pkhp, x_vars=["Fall_LNF"],y_vars="Spring_LNF", kind='reg') axes = p1.axes axes[0,0].set_ylim(0,100) type(p1) print("LSF Scores for no pre-k Students") p2=sns.pairplot(npkhp, x_vars...
LSF Scores for no pre-k Students
MIT
project.ipynb
gcaracas/ds_project
2.5 Expressions and statements **An expression** is a combination of values, variables, and operators. A value all by itself is considered an expression, and so is a variable, so the following are all legal expressions (assuming that the variable x has been assigned a value): 17, x, x + 17. **A statement** is a unit of code tha...
5
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
An assignment is a statement, so technically nothing gets printed by the Python shell in interpreter mode.
x = 5
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
The value of an expression gets printed out in the python shell interpreter mode
x + 1
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
2.7 Order of operations. When more than one operator appears in an expression, the order of evaluation depends on the rules of precedence. For mathematical operators, Python follows mathematical convention. The acronym **PEMDAS** is a useful way to remember the rules: • **Parentheses** have the highest precedence and can b...
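A few interpreter examples of these precedence rules (standard Python behaviour):

print(2 * (3 - 1))   # parentheses first: 4
print(2 ** 1 + 1)    # exponentiation before addition: 3
print(6 + 4 / 2)     # division before addition: 8.0
print(5 - 3 - 1)     # same-precedence operators evaluate left to right: 1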
width = 17 height = 12.0 delimiter = '.'
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
For each of the following expressions, write the value of the expression and the type (of the value of the expression): 1. width/2  2. width/2.0  3. height/3  4. 1 + 2 * 5  5. delimiter * 5
width / 2 # Type of value of expression is float width / 2.0 # Type of value of expression is float height / 3 # Type of value of expression is float 1+2 * 5 # Type of value of expression is int, value is 11 delimiter * 5 # Type of value of expression is string
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
**Exercise 2.3.** Practice using the Python interpreter as a calculator: 1. The volume of a sphere with radius r is ${4\over3}\pi r^3$. What is the volume of a sphere with radius 5? Hint: 392.7 is wrong! 2. Suppose the cover price of a book is $$24.95, but bookstores get a 40% discount. Shipping costs $3 for the first cop...
import math
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
**Quest 1.**
radius = 5 volume = (4/3*math.pi)*radius**3 volume
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
**Quest 2.**
cover_price = 24.95 book_stores_discount = 0.4
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
The total wholesale cost for each book will be the cover_price less the discount, plus the shipping cost. The first copy has shipping of $$3 and the rest $0.75 each. So add it up for 60 copies.
net_cover_price = cover_price - (cover_price * book_stores_discount) net_cover_price First_shipping_cost = 3 subsequent_shipping_cost = 0.75 first_book_cost = net_cover_price + First_shipping_cost fifty_nine_books_cost = (net_cover_price + subsequent_shipping_cost) * 59 total_wholesale_cost = first_book_cost + fifty_ni...
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
**Quest 3.**
min_sec = 60 hours_sec = 3600 start_time_secs = 6 * hours_sec + 52 * min_sec start_time_secs easy_pace_per_mile = 8 * min_sec + 15 tempo_space_per_mile = 7 * min_sec + 12
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
Now add 2 * easy-pace + 3 * tempo-pace to start-time
finish_time_secs = start_time_secs + (2 * easy_pace_per_mile) + (3 * tempo_space_per_mile) finish_time_secs
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
Now convert finish-time-secs to hours and minutes
import time def convert(seconds): return time.strftime("%H:%M:%S", time.gmtime(seconds)) # Now call it on the start_time to check, start-time is 06.52 convert(start_time_secs) # Now call it on the end_time to get the answer convert(finish_time_secs)
_____no_output_____
MIT
think_python/.ipynb_checkpoints/ch_2_variables-checkpoint.ipynb
Lawrence-Krukrubo/Effective_Python
Starbucks Capstone Challenge. Introduction. This data set contains simulated data that mimics customer behavior on the Starbucks rewards mobile app. Once every few days, Starbucks sends out an offer to users of the mobile app. An offer can be merely an advertisement for a drink or an actual offer such as a discount or BO...
# Import required libraries from datetime import datetime import json import math import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns from sklearn.ensemble import RandomForestClassifier from sklearn.linear_model import LogisticRegression from sklearn.metrics import fbeta_score...
_____no_output_____
CNRI-Python
Starbucks_Capstone_notebook.ipynb
amit-singh-rathore/Starbucks-Capstone