We can inspect this object and see that it's what we call a ```spaCy``` object.
type(nlp)
MIT
notebooks/session4_inclass_rdkm.ipynb
agnesbn/cds-language
We use this ```spaCy``` object to create annotated outputs, what we call a ```Doc``` object.
example = "This is a sentence written in English"
doc = nlp(example)
type(doc)
```Doc``` objects are sequences of tokens, meaning we can iterate over the tokens and output the specific annotations we want, such as POS tag or lemma.
for token in doc:
    print(token.text, token.pos_, token.tag_, token.lemma_)
__Reading data with ```pandas```__ ```pandas``` is the main library in Python for working with DataFrames. These are tabular objects of mixed data types, comprising rows and columns. In ```pandas``` vocabulary, a column is called a ```Series```, which is like a sophisticated list. I'll be using the names ```Series``` and column pretty interchangeably.
import os
import pandas as pd

in_file = os.path.join("..", "data", "labelled_data", "fake_or_real_news.csv")
data = pd.read_csv(in_file)
We can use ```.sample()``` to take random samples of the dataframe.
data.sample(5)
To delete unwanted columns, we can do the following:
del data["Unnamed: 0"]
type(data["label"])
We can count the distribution of possible values in our data using ```.value_counts()``` - e.g. how many REAL and FAKE news entries do we have in our DataFrame?
data["label"].value_counts()
__Filter on columns__ To filter on columns, we define a condition on which we want to filter and use that to filter our DataFrame. We use the square-bracket syntax, just as if we were slicing a list or string.
data["label"]=="FAKE"
data["label"]=="REAL"
Here we create two new dataframes, one with only fake news text, and one with only real news text.
fake_news_df = data[data["label"]=="FAKE"]
real_news_df = data[data["label"]=="REAL"]

fake_news_df["label"].value_counts()
real_news_df["label"].value_counts()
__Counters__ In the following cell, you can see how to use a 'counter' to count how many entries are in a list. The += operator adds 1 to the variable ```counter``` for every entry in the list.
counter = 0
test_list = range(0, 100)

for entry in test_list:
    counter += 1
__Counting features in data__ Using the same logic, we can count how often adjectives (```JJ```) appear in our data. This is useful from a linguistic perspective; we could now, for example, figure out how many of each part of speech can be found in our data.
# create counters
adj_count = 0

# process texts in batch
for doc in nlp.pipe(fake_news_df["title"], batch_size=500):
    for token in doc:
        if token.tag_ == "JJ":
            adj_count += 1
In this case, we're using ```nlp.pipe``` from ```spaCy``` to group the entries together into batches of 500 at a time. Why? Every time we execute ```nlp(text)```, it incurs a small computational overhead, which means that scaling becomes an issue. An overhead of 0.01s per document becomes a problem when dealing with 1,000,000 or 10,000,000 or 100,000,000 documents. If we batch, we can therefore be a bit more efficient. It also allows us to keep our ```spaCy``` logic compact and together, which becomes useful for more complex tasks.
print(adj_count)
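To make the batching idea concrete, here is a minimal pure-Python sketch of what grouping entries into batches looks like (the `batched` helper and the toy `titles` list are illustrative; this is not ```spaCy```'s actual implementation):

```python
from itertools import islice

def batched(texts, batch_size):
    """Yield successive lists of at most batch_size texts - roughly the grouping nlp.pipe performs."""
    it = iter(texts)
    while chunk := list(islice(it, batch_size)):
        yield chunk

# 1,250 toy titles split into batches of 500
titles = [f"headline {i}" for i in range(1250)]
sizes = [len(b) for b in batched(titles, 500)]
print(sizes)  # [500, 500, 250]
```

Each chunk can then be processed in one go, paying the per-call overhead once per batch instead of once per document.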
Sentiment with ```spaCy```

To work with spaCyTextBlob, we need to make sure that we are working with ```spacy==2.3.5```. Follow the separate instructions posted to Slack to make this work.
import os
import pandas as pd
import matplotlib.pyplot as plt
import spacy
from spacytextblob.spacytextblob import SpacyTextBlob

# initialise spacy
nlp = spacy.load("en_core_web_sm")
Here, we initialise spaCyTextBlob and add it as a new component to our ```spaCy``` nlp pipeline.
spacy_text_blob = SpacyTextBlob()
nlp.add_pipe(spacy_text_blob)
Let's test spaCyTextBlob on a single text, specifically Virginia Woolf's _To The Lighthouse_, published in 1927.
text_file = os.path.join("..", "data", "100_english_novels", "corpus", "Woolf_Lighthouse_1927.txt")

with open(text_file, "r", encoding="utf-8") as file:
    text = file.read()

print(text[:1000])
We use ```spaCy``` to create a ```Doc``` object for the entire text (how might you do this in batch?)
doc = nlp(text)
We can extract the polarity for each sentence in the novel and create a list of scores per sentence.
polarity = []

for sentence in doc.sents:
    score = sentence._.sentiment.polarity
    polarity.append(score)

polarity[:10]
We can create a quick and cheap plot using matplotlib - this is only fine in Jupyter Notebooks, don't do this in the wild!
plt.plot(polarity)
We can then use some fancy methods from ```pandas``` to calculate a rolling mean over a certain window length. For example, we group our polarity scores into a window of 100 sentences at a time and calculate an average on that window.
smoothed_sentiment = pd.Series(polarity).rolling(100).mean()
Plotting the rolling average gives a 'smoothed' output over time, helping to cut through the noise.
plt.plot(smoothed_sentiment)
05 - Exploring Utils

When working with trajectories, some conversions involving time and date, distance, and so on may be necessary, as well as other utilities. The modules present in the utils package are:
- constants
- conversions
- datetime
- distances
- math
- mem
- trajectories
- transformations

--- Imports
import pymove.utils as utils
import pymove
from pymove import MoveDataFrame
MIT
examples/#05 - Exploring Utils.ipynb
synapticarbors/PyMove
--- Load data
move_data = pymove.read_csv("geolife_sample.csv")
--- Conversions

To transform a latitude in degrees to meters, you can use the function **lat_meters**. For example, you can convert Fortaleza's latitude -3.8162973555:
utils.conversions.lat_meters(-3.8162973555)
To concatenate list elements, joining them with the separator specified by the "delimiter" parameter, you can use **list_to_str**.
utils.conversions.list_to_str(["a", "b", "c", "d"], "-")
To concatenate the elements of a list, joining them with ",", you can use **list_to_csv_str**.
utils.conversions.list_to_csv_str(["a", "b", "c", "d"])
To concatenate list elements in consecutive element pairs, you can use **list_to_svm_line**.
utils.conversions.list_to_svm_line(["a", "b", "c", "d"])
To convert longitude to X EPSG:3857 WGS 84/Pseudo-Mercator, you can use **lon_to_x_spherical**
utils.conversions.lon_to_x_spherical(-38.501597)
To convert latitude to Y EPSG:3857 WGS 84/Pseudo-Mercator, you can use **lat_to_y_spherical**
utils.conversions.lat_to_y_spherical(-3.797864)
To convert X EPSG:3857 WGS 84/Pseudo-Mercator to longitude, you can use **x_to_lon_spherical**
utils.conversions.x_to_lon_spherical(-4285978.172767829)
To convert Y EPSG:3857 WGS 84/Pseudo-Mercator to latitude, you can use **y_to_lat_spherical**
utils.conversions.y_to_lat_spherical(-423086.2213610324)
To convert values in the label_speed column from m/s to km/h, you can use **ms_to_kmh**.
utils.conversions.ms_to_kmh(move_data)
Creating or updating distance, time and speed features in meters by seconds ...
Sorting by id and datetime to increase performance ...
Set id as index to a higher performance
To convert values in the label_speed column from km/h to m/s, you can use **kmh_to_ms**.
utils.conversions.kmh_to_ms(move_data)
To convert values in the label_distance column from meters to kilometers, you can use **meters_to_kilometers**.
utils.conversions.meters_to_kilometers(move_data)
To convert values in the label_distance column from kilometers to meters, you can use **kilometers_to_meters**.
utils.conversions.kilometers_to_meters(move_data)
To convert values in the label_distance column from seconds to minutes, you can use **seconds_to_minutes**.
utils.conversions.seconds_to_minutes(move_data)
To convert values in the label_distance column from minutes to seconds, you can use **minute_to_seconds**.
utils.conversions.minute_to_seconds(move_data)
To convert values in the label_distance column from minutes to hours, you can use **minute_to_hours**.
utils.conversions.minute_to_hours(move_data)
To convert values in the label_distance column from hours to minutes, you can use **hours_to_minute**.
utils.conversions.hours_to_minute(move_data)
To convert values in the label_distance column from seconds to hours, you can use **seconds_to_hours**.
utils.conversions.seconds_to_hours(move_data)
To convert values in the label_distance column from hours to seconds, you can use **hours_to_seconds**.
utils.conversions.hours_to_seconds(move_data)
--- Datetime

To convert a datetime in string format "%Y-%m-%d" or "%Y-%m-%d %H:%M:%S" to datetime format, you can use **str_to_datetime**.
utils.datetime.str_to_datetime('2018-06-29 08:15:27')
To get a date, in string format, from a timestamp, you can use **date_to_str**.
utils.datetime.date_to_str(utils.datetime.str_to_datetime('2018-06-29 08:15:27'))
To convert a date in datetime format to string format, you can use **to_str**.
import datetime

utils.datetime.to_str(datetime.datetime(2018, 6, 29, 8, 15, 27))
To convert a datetime to an int representation in minutes, you can use **to_min**.
utils.datetime.to_min(datetime.datetime(2018, 6, 29, 8, 15, 27))
To do the reverse, use **min_to_datetime**.
utils.datetime.min_to_datetime(25504335)
To get the day of the week of a date, you can use **to_day_of_week_int**, where 0 represents Monday and 6 is Sunday.
utils.datetime.to_day_of_week_int(datetime.datetime(2018, 6, 29, 8, 15, 27))
To check whether a day specified by the user is a working day, you can use **working_day**.
utils.datetime.working_day(datetime.datetime(2018, 6, 29, 8, 15, 27), country='BR')
utils.datetime.working_day(datetime.datetime(2018, 4, 21, 8, 15, 27), country='BR')
To get the current datetime as a string, you can use **now_str**.
utils.datetime.now_str()
To convert an elapsed time into a readable time format, you can use **deltatime_str**.
utils.datetime.deltatime_str(1082.7180936336517)
To convert a local datetime to a POSIX timestamp in milliseconds, you can use **timestamp_to_millis**.
utils.datetime.timestamp_to_millis("2015-12-12 08:00:00.123000")
To convert milliseconds to a timestamp, you can use **millis_to_timestamp**.
utils.datetime.millis_to_timestamp(1449907200123)
To get a time, in string format, from a timestamp, you can use **time_to_str**.
utils.datetime.time_to_str(datetime.datetime(2018, 6, 29, 8, 15, 27))
To convert a time in string format "%H:%M:%S" to datetime format, you can use **str_to_time**.
utils.datetime.str_to_time("08:00:00")
To compute the elapsed time from a specific start time to the moment the function is called, you can use **elapsed_time_dt**.
utils.datetime.elapsed_time_dt(utils.datetime.str_to_time("08:00:00"))
To compute the elapsed time from a start time to an end time specified by the user, you can use **diff_time**.
utils.datetime.diff_time(utils.datetime.str_to_time("08:00:00"), utils.datetime.str_to_time("12:00:00"))
--- Distances

To calculate the great-circle distance between two points on the earth, you can use **haversine**.
utils.distances.haversine(-3.797864, -38.501597, -3.797890, -38.501681)
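For reference, the haversine formula itself is short to write down. Here is a minimal standalone sketch assuming a mean Earth radius of 6371 km (PyMove's implementation details, such as its default radius, may differ):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, earth_radius_km=6371.0):
    """Great-circle distance between two (lat, lon) points in degrees, returned in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    # haversine of the central angle
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(a))

# same pair of points as the call above; the two points are roughly 9.8 metres apart
d_m = haversine_km(-3.797864, -38.501597, -3.797890, -38.501681) * 1000
print(round(d_m, 2))
```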
--- Math

To compute standard deviation, you can use **std**.
utils.math.std(600, 20, 5)
To compute the average of the standard deviation, you can use **avg_std**.
# utils.math.avg_std(600, 600, 20)
To compute the standard deviation of a sample, you can use **std_sample**.
utils.math.std_sample(600, 20, 5)
To compute the average of the standard deviation of a sample, you can use **avg_std_sample**.
# utils.math.avg_std_sample(600, 20, 5)
To compute the sum of the elements of an array, you can use **array_sum**.
utils.math.array_sum([600, 20, 5])
To compute the sum of all the elements in an array, the sum of the square of each element, and the number of elements, you can use **array_stats**.
utils.math.array_stats([600, 20, 5])
To perform interpolation and extrapolation, you can use **interpolation**.
utils.math.interpolation(15, 20, 65, 86, 5)
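Linear interpolation and extrapolation both reduce to evaluating the line through two known points. A standalone sketch of the idea (the argument order here is this sketch's own choice, not necessarily PyMove's signature):

```python
def linear_interp(x0, x1, y0, y1, x):
    """Evaluate the line through (x0, y0) and (x1, y1) at x.
    Values of x outside [x0, x1] extrapolate along the same line."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# reading the numbers from the call above as x0=15, x1=20, y0=65, y1=86, x=5
print(linear_interp(15, 20, 65, 86, 5))  # 23.0 (x=5 lies left of x0, so this extrapolates)
```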
Initial Modelling notebook
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import warnings

import bay12_solution_eposts as solution
Apache-2.0
notebooks/1_initial_model.ipynb
NowanIlfideme/kaggle_ni_mafia_gametype
Load data
post, thread = solution.prepare.load_dfs('train')

post.head(2)
thread.head(2)
I will set the thread number to be the index, to simplify matching in the future:
thread = thread.set_index('thread_num')
thread.head(2)
We'll load the label map as well, which tells us which index goes to which label.
label_map = solution.prepare.load_label_map()
label_map
Create features from thread dataframe

We will fit a CountVectorizer, which is a simple transformation that counts the number of times each word was found. The parameter `min_df` sets the minimum number of occurrences in our set that will allow a word to join our vocabulary.
from sklearn.feature_extraction.text import CountVectorizer

cv = CountVectorizer(ngram_range=(1, 1), min_df=3)
word_vectors_raw = cv.fit_transform(thread['thread_name'])
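The `min_df` filter can be pictured as a document-frequency threshold. A pure-Python sketch of the idea on toy titles (not the actual dataset, and not sklearn's implementation):

```python
from collections import Counter

titles = ["big mafia game", "big game night", "mafia night", "chess puzzle"]

# document frequency: in how many titles each word appears at least once
df_counts = Counter(word for title in titles for word in set(title.split()))

# with min_df=2, words seen in fewer than 2 titles are excluded from the vocabulary
min_df = 2
vocabulary = sorted(w for w, n in df_counts.items() if n >= min_df)
print(vocabulary)  # ['big', 'game', 'mafia', 'night']
```

Here "chess" and "puzzle" appear in only one title each, so they never make it into the vocabulary.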
To save space, this outputs a sparse matrix:
word_vectors_raw
However, since we'll be using it with a DataFrame, we need to convert it into a Pandas DataFrame:
word_df = pd.DataFrame(word_vectors_raw.toarray(), columns=cv.get_feature_names(), index=thread.index)
word_df.head()
The only other feature we have from our thread data is the number of replies. Let's add one to get the number of posts. Also, let's use the logarithm of the post count as well, just for fun. We'll concatenate those into our X dataframe (note that I'm renaming the columns, to keep track more easily):
X = pd.concat([
    (thread['thread_replies'] + 1).rename('posts'),
    np.log(thread['thread_replies'] + 1).rename('log_posts'),
    word_df,
], axis='columns')

X.head()
Our target is the category number. Remember that this isn't a regression task - there is no actual order between these categories! Also, our Y is one-dimensional, so we'll keep it as a Series (even though it prints less prettily).
y = thread['thread_label_id']
y.head()
Split dataset into "training" and "validation"

In order to check the quality of our model in a more realistic setting, we will split all our input (training) data into a "training set" (which our model will see and learn from) and a "validation set" (where we see how well our model generalized). [Relevant link](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html).
from sklearn.model_selection import train_test_split

# NOTE: setting the `random_state` lets you get the same results with the pseudo-random generator
validation_pct = 0.25

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=validation_pct, random_state=99)

X_train.shape, y_train.shape
X_val.shape, y_val.shape
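Conceptually, `train_test_split` is just a shuffled index split. A small numpy sketch of the idea (not sklearn's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(99)  # fixed seed, analogous to random_state above

n = 12
validation_pct = 0.25

indices = rng.permutation(n)     # shuffle 0..n-1
n_val = int(n * validation_pct)

# first n_val shuffled indices go to validation, the rest to training
val_idx, train_idx = indices[:n_val], indices[n_val:]
print(len(train_idx), len(val_idx))  # 9 3
```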
Fit a model

Since we are fitting a multiclass model, [this scikit-learn link](https://scikit-learn.org/stable/modules/multiclass.html) is very relevant. To simplify things, we will be using an algorithm that is inherently multi-class.
from sklearn.tree import DecisionTreeClassifier

# Just using default parameters... what can go wrong?
cls = DecisionTreeClassifier(random_state=1337)

# Fit
cls.fit(X_train, y_train)

# In-sample and out-of-sample predictions
y_train_pred = pd.Series(
    cls.predict(X_train),
    index=X_train.index,
)
y_val_pred = pd.Series(
    cls.predict(X_val),
    index=X_val.index,
)
y_val_pred.head()
Score the model

To find out how well the model did, we'll use the [model evaluation functionality of sklearn](https://scikit-learn.org/stable/modules/model_evaluation.html); specifically, the [multiclass classification metrics](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-metrics).
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
The [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix) shows how our predictions differ from the actual values. It's important to note how strongly our in-sample (training) and out-of-sample (validation/test) metrics differ.
def confusion_df(y_actual, y_pred):
    res = pd.DataFrame(
        confusion_matrix(y_actual, y_pred, labels=label_map.values),
        index=label_map.index.rename('predicted'),
        columns=label_map.index.rename('actual'),
    )
    return res

confusion_df(y_train, y_train_pred).style.highlight_max()
confusion_df(y_val, y_val_pred).style.highlight_max()
Oh boy. That's pretty bad - we didn't predict anything for several columns! Let's look at the metrics to confirm that it is indeed bad.
print("Train accuracy:", accuracy_score(y_train, y_train_pred))
print("Validation accuracy:", accuracy_score(y_val, y_val_pred))

report = classification_report(y_val, y_val_pred, labels=label_map.values, target_names=label_map.index)
print(report)

                 precision    recall  f1-score   support

        bastard       1.00      0.40      0.57         5
beginners-mafia       0.67      0.50      0.57         4
           byor       0.00      0.00      0.00         2
        classic       0.40      0.25      0.31         8
   closed-setup       0.33      0.71      0.45         7
         cybrid       0.00      0.00      0.00         1
           kotm       0.00      0.00      0.00         1
 non-mafia-game       0.00      0.00      0.00         0
          other       0.88      0.84      0.86        55
     paranormal       0.60      1.00      0.75         3
   supernatural       0.00      0.00      0.00         0
        vanilla       0.00      0.00      0.00         1
       vengeful       0.67      0.67      0.67         3

      micro avg       0.69      0.69      0.69        90
      macro avg       0.35      0.34      0.32        90
   weighted avg       0.73      0.69      0.69        90
Well, that's pretty bad. We seriously overfit our training set... which is sort of what I expected. Oh well. By the way, the warnings at the bottom say that we have no real Precision or F-score to use, with no predictions for some classes.

Predict with the model

Here, we will predict on the test set (predictions to send in), then save the results and the model. **IMPORTANT NOTE**: In reality, you need to re-train your model on the entire set to predict! However, I'm just using the same model as before, as it will be bad anyway. ;)
post_test, thread_test = solution.prepare.load_dfs('test')
thread_test = thread_test.set_index('thread_num')
thread_test.head(2)
We need to attach a `thread_label_id` column, as given in the training set:
thread.head(2)
Use the fitted CountVectorizer and other features to make our X dataframe:
word_vectors_raw_test = cv.transform(thread_test['thread_name'])
word_df_test = pd.DataFrame(word_vectors_raw_test.toarray(), columns=cv.get_feature_names(), index=thread_test.index)
word_df_test.head()

X_test = pd.concat([
    (thread_test['thread_replies'] + 1).rename('posts'),
    np.log(thread_test['thread_replies'] + 1).rename('log_posts'),
    word_df_test,
], axis='columns')

X_test.head()
Now we predict with our model, then paste it to a copy of `thread_test` as column `thread_label_id`.
y_test_pred = pd.Series(
    cls.predict(X_test),
    index=X_test.index,
)
y_test_pred.head()

result = thread_test.copy()
result['thread_label_id'] = y_test_pred
result.head()
We need to reshape to conform to the submission format specified [here](https://www.kaggle.com/c/ni-mafia-gametype#evaluation).
result = result.reset_index()[['thread_num', 'thread_label_id']]
result.head()
Export predictions, model

Our model consists of the text vectorizer `cv` and classifier `cls`. We already formatted our results; we just need to make sure not to write an extra index column.
# NOTE: Exporting next to the notebooks - the files are small, but usually you don't want to do this.
out_dir = os.path.abspath('1_output')
os.makedirs(out_dir, exist_ok=True)

result.to_csv(
    os.path.join(out_dir, 'baseline_predict.csv'),
    index=False,
    header=True,
    encoding='utf-8',
)

import joblib
joblib.dump(cv, os.path.join(out_dir, 'cv.joblib'))
joblib.dump(cls, os.path.join(out_dir, 'cls.joblib'))

print("Done. :)")
Done. :)
Introduction Notebook

Here we will cover common python libraries.
1. [Numpy](numpy)
2. [Scipy](scipy)
3. [Matplotlib](matplotlib)
4. [PySCF](pyscf)
5. [Psi4](psi4)

Extra Practice

For a more hands-on introduction notebook, check out the notebook at [this link](https://hub.mybinder.org/user/amandadumi-nume-methods_release-une1joqv/tree/IPython_notebooks/01_Introduction). This will take you to a web-hosted Jupyter notebook on Binder. If you would prefer to clone the notebook to use locally, you can find it [here](https://github.com/amandadumi/numerical_methods_release/tree/master/IPython_notebooks).

Numpy

Fundamental package for scientific computing with Python.
import numpy as np

a = np.array((4, 5, 6, 6, 7, 8))
b = np.array((8, 9, 2, 4, 6, 7))
c = np.dot(a, b)
print(c)
BSD-3-Clause
01_Introduction/Introduction.ipynb
ABellesis/QMMM_study_group
Scipy

Provides many user-friendly and efficient numerical routines, such as routines for numerical integration and optimization.
import scipy as sp
import scipy.linalg as la

mat = np.random.rand(5, 5)
eig_val, eig_vec = la.eig(mat)

print('eigenvalues:\n {}\n'.format(eig_val))
print('eigenvectors:\n {}'.format(eig_vec))
Matplotlib

Python library for 2- and 3-D visualization. Pyplot provides convenient functions to generate plots.
import matplotlib.pyplot as plt

x = np.linspace(0, 5, 100)
y = np.sin(x)

plt.plot(x, y)
plt.show()
Psi4Numpy

Psi4 is an open source quantum chemistry package. It recently introduced [Psi4Numpy](https://github.com/psi4/psi4numpy), a collection of notebooks for teaching quantum chemistry. The cell below runs an SCF cycle for water with the cc-pvdz basis using Psi4Numpy.
import psi4

# read in geometry for water
h2o = psi4.geometry("""
O 0.0000000 0.0000000 0.0000000
H 0.7569685 0.0000000 -0.5858752
H -0.7569685 0.0000000 -0.5858752
""")

# set basis set
psi4.set_options({'basis': 'cc-pvdz'})

# run an scf calculation
scf_e, scf_wfn = psi4.energy('scf', return_wfn=True)
print('converged SCF energy: {}'.format(scf_e))
PySCF

Python-based quantum simulations. The cell below runs an SCF cycle for water with the cc-pvdz basis using PySCF.
from pyscf import gto, scf

# read in geometry
mol = gto.M(atom='O 0.0000000 0.0000000 0.0000000; H 0.7569685 0.0000000 -0.5858752; H -0.7569685 0.0000000 -0.5858752')
mol.basis = 'ccpvdz'

# run an scf calculation
mol_scf = scf.RHF(mol)
mol_scf.kernel()
3. How to Construct a Linear Model
import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt
%matplotlib inline
MIT
Week2/03. How to Construct a Linear Model_seungju.ipynb
Seungju182/pytorch-basic
3.1 Problem 1
X = torch.rand(100, 20)
Y = torch.rand(100, 1)

model = nn.Linear(20, 1)
model(X.view(100, 20)).shape == Y.shape
3.2 Problem 2
X = torch.rand(500, 30)
Y = torch.rand(500, 2)

model = nn.Linear(30, 2)
model(X.view(500, 30)).shape == Y.shape
3.3 Problem 3
X = torch.rand(500, 40)
Y = torch.rand(1000, 1)

model = nn.Linear(40, 1)
model(X.view(500, 40)).shape == Y.shape
3.4 Problem 4
X = torch.rand(1000, 200, 20)
Y = torch.rand(1000, 2)

model = nn.Linear(200*20, 2)
model(X.view(1000, -1)).shape == Y.shape
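Under the hood, `nn.Linear(in_features, out_features)` maps a `(batch, in_features)` input to `(batch, out_features)` via `x @ W.T + b`, with the weight stored as `(out_features, in_features)`. A numpy stand-in using Problem 4's shapes (this mirrors the shape arithmetic only, not PyTorch's initialisation):

```python
import numpy as np

batch, d1, d2, out_features = 1000, 200, 20, 2

X = np.zeros((batch, d1, d2))
W = np.zeros((out_features, d1 * d2))  # weight shape, as nn.Linear stores it
b = np.zeros(out_features)

flat = X.reshape(batch, -1)  # what X.view(1000, -1) does: (1000, 4000)
Y = flat @ W.T + b
print(flat.shape, Y.shape)
```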
Pandas Methods and Features

Important Topics for Data Analysis

Missing Data (Missing Value)
import numpy as np
import pandas as pd

data = {'Istanbul': [30, 29, np.nan], 'Ankara': [20, np.nan, 25], 'Izmir': [40, 39, 38], 'Antalya': [40, np.nan, np.nan]}
weather = pd.DataFrame(data, index=['pzt', 'sali', 'car'])
weather
CC0-1.0
pandas/3.0.pandas_methods_features.ipynb
enesonmez/data-science-tutorial-turkish
The **dropna** function is used to delete rows or columns that contain missing values.
weather.dropna()
weather.dropna(axis=1)

# deletes a column if it contains 2 or more NaN values (i.e. fewer than 2 non-NaN values)
weather.dropna(axis=1, thresh=2)
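To make the `thresh` semantics explicit: `dropna(thresh=n)` keeps a row or column only if it has at least `n` non-NaN values. A tiny demo on hypothetical data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Istanbul': [30, 29, np.nan],      # 2 non-NaN values -> kept with thresh=2
    'Antalya': [40, np.nan, np.nan],   # 1 non-NaN value  -> dropped
})

kept = df.dropna(axis=1, thresh=2)
print(list(kept.columns))  # ['Istanbul']
```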
We use the **fillna** function to fill in missing values.
weather.fillna(22)
Grouping (Group By)
data = {'Departman': ['Yazılım','Pazarlama','Yazılım','Pazarlama','Hukuk','Hukuk'],
        'Calisanlar': ['Ahmet','Mehmet','Enes','Burak','Zeynep','Fatma'],
        'Maas': [150,100,200,300,400,500]}
workers = pd.DataFrame(data)
workers

groupbyobje = workers.groupby('Departman')
groupbyobje.count()
groupbyobje.mean()
groupbyobje.min()
groupbyobje.max()
groupbyobje.describe()
Concatenation
data1 = {'Isim': ['Ahmet','Mehmet','Zeynep','Enes'],
         'Spor': ['Koşu','Yüzme','Koşu','Basketbol'],
         'Kalori': [100,200,300,400]}
data2 = {'Isim': ['Osman','Levent','Atlas','Fatma'],
         'Spor': ['Koşu','Yüzme','Koşu','Basketbol'],
         'Kalori': [200,200,30,400]}
data3 = {'Isim': ['Ayse','Mahmut','Duygu','Nur'],
         'Spor': ['Koşu','Yüzme','Badminton','Tenis'],
         'Kalori': [150,200,350,400]}

df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
df3 = pd.DataFrame(data3)

pd.concat([df1, df2, df3], ignore_index=True, axis=0)
Merging
mdata1 = {'Isim': ['Ahmet','Mehmet','Zeynep','Enes'],
          'Spor': ['Koşu','Yüzme','Koşu','Basketbol']}
mdata2 = {'Isim': ['Ahmet','Mehmet','Zeynep','Enes'],
          'Kalori': [100,200,300,400]}

mdf1 = pd.DataFrame(mdata1)
mdf1
mdf2 = pd.DataFrame(mdata2)
mdf2

pd.merge(mdf1, mdf2, on='Isim')
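By default `pd.merge` performs an inner join, keeping only keys present in both frames; `how='outer'` keeps every key and fills the gaps with NaN. A small sketch with hypothetical names:

```python
import pandas as pd

left = pd.DataFrame({'Isim': ['Ahmet', 'Mehmet'], 'Spor': ['Koşu', 'Yüzme']})
right = pd.DataFrame({'Isim': ['Mehmet', 'Zeynep'], 'Kalori': [200, 300]})

inner = pd.merge(left, right, on='Isim')               # only 'Mehmet' is in both frames
outer = pd.merge(left, right, on='Isim', how='outer')  # all three names survive

print(len(inner), len(outer))  # 1 3
```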
Important Methods and Their Features
data = {'Departman': ['Yazılım','Pazarlama','Yazılım','Pazarlama','Hukuk','Hukuk'],
        'Isim': ['Ahmet','Mehmet','Enes','Burak','Zeynep','Fatma'],
        'Maas': [150,100,200,300,400,500]}
workerdf = pd.DataFrame(data)
workerdf