markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
hour | a = taxi.pivot_table("log_duration", "hour", aggfunc='mean')
a.plot()
# origin data model
model = sm.OLS.from_formula("log_duration ~ scale(hour) +scale(hour**2) +scale(hour**3) + scale(hour**4) +scale(hour**5) +scale(hour**6)+scale(hour**7) + scale(hour**8) + scale(hour**9)", data = taxi)
result2 = model.fit_regulariz... | _____no_output_____ | MIT | Mk/2.fitting.ipynb | Romanism/dss-project-taxi |
Including other variables together with the interaction creates too many terms and raises an error --- store | results = pd.DataFrame(columns = ["R-square", "AIC", "BIC", "Cond.No.", "Pb(Fstatics)", "Pb(omnibus)", "Pb(jb)", "Dub-Wat","Remarks"])
model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + C(store_and_fwd_flag) +0", data = taxi)
result1 = model1.fit()
storage(result1, results, 'sqrt dist + C(store_and_fw... | _____no_output_____ | MIT | Mk/2.fitting.ipynb | Romanism/dss-project-taxi |
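The cells in this section repeatedly call a `storage(...)` helper whose definition never appears in the excerpt. The sketch below is one guess at what it might do, assuming `result` is a fitted statsmodels `RegressionResults` object (the attribute names `rsquared`, `aic`, `bic`, `condition_number`, and `f_pvalue` are the statsmodels ones); the Omnibus/JB p-values and the Durbin-Watson statistic are recomputed here from the residuals rather than read from the summary.

```python
import numpy as np
import pandas as pd
from scipy import stats

COLS = ["R-square", "AIC", "BIC", "Cond.No.", "Pb(Fstatics)",
        "Pb(omnibus)", "Pb(jb)", "Dub-Wat", "Remarks"]

def storage(result, results, remarks):
    """Append one fitted model's summary statistics as a row of `results` (sketch)."""
    resid = np.asarray(result.resid)
    omnibus_p = stats.normaltest(resid).pvalue   # D'Agostino-Pearson omnibus test
    jb_p = stats.jarque_bera(resid).pvalue       # Jarque-Bera normality test
    dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)  # Durbin-Watson statistic
    results.loc[len(results)] = [result.rsquared, result.aic, result.bic,
                                 result.condition_number, result.f_pvalue,
                                 omnibus_p, jb_p, dw, remarks]
```

With this shape, each `storage(result1, results, 'label')` call appends one comparison row, which is how the `results` DataFrame accumulates across the candidate models below.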
vendor_id | results = pd.DataFrame(columns = ["R-square", "AIC", "BIC", "Cond.No.", "Pb(Fstatics)", "Pb(omnibus)", "Pb(jb)", "Dub-Wat","Remarks"])
model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + C(vendor_id) +0", data = taxi)
result1 = model1.fit()
storage(result1, results, 'sqrt dist + C(vendor_id)')
model1 ... | _____no_output_____ | MIT | Mk/2.fitting.ipynb | Romanism/dss-project-taxi |
no_passenger | results = pd.DataFrame(columns = ["R-square", "AIC", "BIC", "Cond.No.", "Pb(Fstatics)", "Pb(omnibus)", "Pb(jb)", "Dub-Wat","Remarks"])
model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + C(no_passenger) +0", data = taxi)
result1 = model1.fit()
storage(result1, results, 'sqrt dist + C(no_passenger)')
m... | _____no_output_____ | MIT | Mk/2.fitting.ipynb | Romanism/dss-project-taxi |
passenger_count | results = pd.DataFrame(columns = ["R-square", "AIC", "BIC", "Cond.No.", "Pb(Fstatics)", "Pb(omnibus)", "Pb(jb)", "Dub-Wat","Remarks"])
model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + C(passenger_count) +0", data = taxi)
result1 = model1.fit()
storage(result1, results, 'sqrt dist + C(passenger_count... | _____no_output_____ | MIT | Mk/2.fitting.ipynb | Romanism/dss-project-taxi |
Exploring Elastic Search Database to Investigate Moving Object Lightcurves*Chien-Hsiu Lee, Thomas Matheson & ANTARES Team* Table of contents* [Goals & notebook summary](goals)* [Disclaimer & Attribution](attribution)* [Imports & setup](import)* [Authentication](auth)* [First chapter](chapter1)* [Resources and referen... | from antares_client.search import search
import matplotlib.pyplot as plt
import pandas as pd | _____no_output_____ | BSD-3-Clause | 05_Contrib/TimeDomain/AntaresExample/AntaresSolarSystemObjectLightCurveExploration.ipynb | noaodatalab/notebooks_default |
Querying ANTARES alert database This cell shows how to call Elasticsearch with the ANTARES API. It can search on ZTF object id, RA, Dec, or other properties. For our purpose, we search for ZTF alerts associated with 809 Lundia using the keyword ztf_ssnamenr. | query = {
"query": {
"bool": {
"must": [
{
"match": {
"properties.ztf_ssnamenr": 809
}
},
]
}
}
}
result_s... | _____no_output_____ | BSD-3-Clause | 05_Contrib/TimeDomain/AntaresExample/AntaresSolarSystemObjectLightCurveExploration.ipynb | noaodatalab/notebooks_default |
Extracting light curve related properties Now that the query is finished, let's extract the relevant properties (MJD, Mag, Mag_err) for this moving object. | gmjd = []
gmag = []
gerr = []
rmjd = []
rmag = []
rerr = []
for locus in search(query):
for alert in locus.alerts:
if 'ztf_ssnamenr' in alert.properties:
if alert.properties['ant_passband'] == 'R':
rmjd.append(alert.properties['ztf_jd'])
rmag.appen... | _____no_output_____ | BSD-3-Clause | 05_Contrib/TimeDomain/AntaresExample/AntaresSolarSystemObjectLightCurveExploration.ipynb | noaodatalab/notebooks_default |
Having the time-series photometry in hand, we can plot the light curve. | plt.scatter(rmjd, rmag, c='red', alpha=0.5)
plt.scatter(gmjd, gmag, c='green', alpha=0.5)
plt.title('809 Lundia light curve from ZTF')
plt.xlabel('Time [Julian date]')
plt.ylabel('Magnitude in g- and r-passband')
plt.show() | _____no_output_____ | BSD-3-Clause | 05_Contrib/TimeDomain/AntaresExample/AntaresSolarSystemObjectLightCurveExploration.ipynb | noaodatalab/notebooks_default |
Now we want to see if we can find the binary eclipses in the light curves. First we need to remove the long-term trend. This can be done by comparing with the apparent magnitude predicted by JPL/HORIZONS. It has been shown that Lundia has a period of 15.42 hours; we also fold the light curve with this period after de-t... | from scipy import interpolate
#we read in the predictions of the brightness (according to the distance to the sun) from JPL/HORIZONS
lc = pd.read_csv('JPL809.csv')
jpl_jd = lc['JD']
jpl_mag = lc['Vmag']
period=15.42/24.
x0=[]
y0=[]
for i in range(len(jpl_jd)):
x0.append(float(jpl_jd[i]))
y0.append(float(jpl_mag... | _____no_output_____ | BSD-3-Clause | 05_Contrib/TimeDomain/AntaresExample/AntaresSolarSystemObjectLightCurveExploration.ipynb | noaodatalab/notebooks_default |
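The cell above is truncated, but the described recipe is: interpolate the JPL/HORIZONS predicted magnitudes onto the observation times, subtract that trend, and fold on the 15.42 h period. A self-contained sketch of those two steps (the function and variable names here are illustrative, not the notebook's):

```python
import numpy as np
from scipy import interpolate

def detrend_and_fold(jd, mag, jpl_jd, jpl_mag, period_days):
    """Subtract the predicted apparent-magnitude trend, then phase-fold."""
    # linear interpolation of the JPL/HORIZONS prediction at the observed epochs
    trend = interpolate.interp1d(jpl_jd, jpl_mag, fill_value="extrapolate")
    resid = np.asarray(mag) - trend(np.asarray(jd))            # de-trended mags
    phase = np.mod(np.asarray(jd), period_days) / period_days  # phase in [0, 1)
    return phase, resid
```

Something like `mrdate, mrmag = detrend_and_fold(rmjd, rmag, x0, y0, 15.42/24.)` would then produce folded arrays of the kind plotted in the next cell.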
We can now plot the de-trended and folded light curve. | #plot folded light curve
plt.ylim(max(mgmag)+0.5*(max(mgmag)-min(mgmag)),min(mrmag)-0.5*(max(mrmag)-min(mrmag)))
plt.scatter(mrdate, mrmag, c='red', alpha=0.5)
plt.scatter(mgdate, mgmag, c='green', alpha=0.5)
plt.title('809 Lundia phase-folded light curve from ZTF')
plt.xlabel('Phase (Period=15.42 hr)')
plt.ylabel('Mag... | _____no_output_____ | BSD-3-Clause | 05_Contrib/TimeDomain/AntaresExample/AntaresSolarSystemObjectLightCurveExploration.ipynb | noaodatalab/notebooks_default |
Rewrite to style | from molsysmt.tools import file_pir
#file_pir.rewrite_to_style() | _____no_output_____ | MIT | docs/contents/tools/files/file_pir/rewrite_to_style.ipynb | dprada/molsysmt |
If you increase the size of your outputs (scores x 10), the model becomes very confident about predictions. If you reduce the size of your outputs (scores / 10), the model becomes very unsure. We want it to be unsure in the beginning and become more confident as it learns | def softmax(x):
"""Compute softmax values for each sets of scores in x."""
#pass # TODO: Compute and return softmax(x)
e_x = np.exp(x)
sum_e_x = np.sum(e_x, axis = 0)
#return np.exp(x) / np.sum(np.exp(x), axis = 0)
return e_x / sum_e_x
print(softmax(scores))
# Plot softmax curves
x = np.ar... | _____no_output_____ | MIT | Softmax.ipynb | Mdcrab02/Udacity_DeepLearning |
If you multiply these by 10, their probabilities get super close to 1 or 0: mult = np.multiply(np.array(scores),10); print(softmax(mult)) | # Plot softmax curves
x = np.arange(-2.0, 6.0, 0.1)
scores = np.vstack([x, np.ones_like(x), 0.2 * np.ones_like(x)])
plt.plot(x, softmax(scores * 10).T, linewidth=2)
plt.show() | _____no_output_____ | MIT | Softmax.ipynb | Mdcrab02/Udacity_DeepLearning |
If you divide them by 10: div = np.divide(np.array(scores),10); print(softmax(div)) | # Plot softmax curves
x = np.arange(-2.0, 6.0, 0.1)
scores = np.vstack([x, np.ones_like(x), 0.2 * np.ones_like(x)])
plt.plot(x, softmax(scores / 10).T, linewidth=2)
plt.show() | _____no_output_____ | MIT | Softmax.ipynb | Mdcrab02/Udacity_DeepLearning |
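Both claims can be checked numerically without the plots. This standalone snippet re-defines `softmax` for a single 1-D vector (adding the usual max-subtraction for numerical stability) and uses an illustrative three-class score vector:

```python
import numpy as np

def softmax(x):
    """Softmax of a 1-D score vector, shifted by max(x) for numerical stability."""
    e_x = np.exp(x - np.max(x))
    return e_x / np.sum(e_x)

scores = np.array([3.0, 1.0, 0.2])
for scale, label in [(0.1, "scores / 10"), (1.0, "scores"), (10.0, "scores x 10")]:
    p = softmax(scores * scale)
    print(f"{label:>12}: {np.round(p, 3)}")
# larger scale -> the winning class's probability approaches 1 (more "confident");
# smaller scale -> the probabilities approach uniform (more "unsure")
```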
PART 1 | df_census = pd.read_csv(data_census)
df_school = pd.read_csv(data_school)
pd.set_option("display.max_columns", 200)
pd.set_option("display.max_rows", 200) | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
Census Data | df_census.head(5)
pd.set_option("display.max_rows", 100)
df_census.info()
print("Dimension: " +str(df_census.shape))
df_census.isnull().sum() | Dimension: (78, 9)
| MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
The df_census dataframe contains census data that shows the socio-economic conditions in Chicago. The three primary data types in this dataframe are float, int, and pandas object. This dataframe has 9 variables (columns) and 78 observations (rows). When checking for the number of missing values in this dataframe, it was ... | df_school.head(5)
df_school.info()
print("Dimension: " +str(df_school.shape))
df_school.isnull().sum() | Dimension: (566, 78)
| MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
The df_school dataframe contains data about public school assessment in the Chicago area. This dataframe has float, int and pandas object data types. The df_school dataframe consists of 78 variables (columns) and 566 observations (rows). This dataframe has missing values in columns: "LINK", "SAFETY_SCORE", "ENVIRONMENT SCOR... | l = ['School ID','COMMUNITY_AREA_NUMBER','NAME_OF_SCHOOL','SAFETY_SCORE','Environment Score','Instruction Score','Parent Engagement Score',
'Average Teacher Attendance','COMMUNITY_AREA_NAME','College Enrollment Rate %']
df_school_selected = df_school[l]
df_school_selected.head(10)
df_school_selected.isnull().sum() | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
List-wise deletion was used for schools that don't have a 'Parent Engagement Score'. However, for 'College Enrollment Rate', we will not delete the missing values since only high schools have them and pandas operations will intrinsically skip those missing values. Instead they will be replaced with 'nan'. 'SAFETY_S... | #drops rows that contain a missing value for Parent Engagement Score (list-wise deletion)
df_school_selected = df_school_selected[~df_school_selected['Parent Engagement Score'].isin(['NDA'])]
df_school_selected['Parent Engagement Score'].isnull().sum()
#replacing NDA with nan
df_school_selected = df_school_selected.repl... | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
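The same two-step treatment — drop rows where one column holds the 'NDA' placeholder, then convert any remaining placeholders to NaN so pandas aggregations skip them — can be seen on a toy frame (the column names here are made up for illustration):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"engage": ["55", "NDA", "60"],
                    "enroll": ["NDA", "70", "80"]})
# list-wise deletion, but only on the engagement column
toy = toy[~toy["engage"].isin(["NDA"])]
# remaining placeholders become NaN, which .mean()/.sum() etc. skip by default
toy = toy.replace("NDA", np.nan)
```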
Part 2 | import sqlite3
assign2db = "assign2.db"
conn = sqlite3.connect(assign2db)
print(conn)
conn = sqlite3.connect(assign2db)
cursor = conn.cursor()
#Drop public school table if it exits
cursor.execute("DROP TABLE IF EXISTS `PUBLIC_SCHOOL`")
print("Table dropped")
conn.close() | Table dropped
| MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
Community area number or community area name can serve as the foreign key, but it's preferred that the foreign key be a non-string data type | #create public school table with appropriate fields
conn = sqlite3.connect(assign2db)
cursor = conn.cursor()
try:
cursor.execute("""
CREATE TABLE PUBLIC_SCHOOL (
SCHOOL_ID INTEGER PRIMARY KEY,
COMMUNITY_AREA_NUMBER INTEGER,
NAME_OF_SCHOOL TEXT NOT NULL,
SAFETY_SCORE FLOAT DEFAULT 0,
Environm... | Total Rows In Public_School Table: 432
Total Rows In Census Table: 78
| MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
The count of rows for public school is 432. The count of rows for census is 78. Part 3 | conn = sqlite3.connect(assign2db)
cursor = conn.cursor()
#Drop public school table if it exits
cursor.execute("DROP TABLE IF EXISTS `Totaldata`")
print("Table dropped")
conn.close()
#Joining the two tables based on community area number
conn = sqlite3.connect(assign2db)
cursor = conn.cursor()
cursor.execute("""
CREATE ... | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
Q1 What is the relationship between per capita income in EAST SIDE and the safety score of schools in EAST SIDE? | sql = '''SELECT PER_CAPITA_INCOME, SAFETY_SCORE FROM Totaldata WHERE COMMUNITY_AREA_NAME = "EAST SIDE"'''
q1 = pd.read_sql(sql, conn)
q1
x = q1['PER_CAPITA_INCOME'].values.reshape(-1,1)
y = q1['SAFETY_SCORE'].values.reshape(-1,1)
linear_regression = LinearRegression()
fit = linear_regression.fit(x,y)
print(fit.coef_)
p... | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
There is no relationship between PER_CAPITA_INCOME and SAFETY_SCORE, with a slope of 0. Q2 What is the relationship between environment score and hardship index for schools with per capita income greater than 15000? | sql = '''SELECT Environment_Score, HARDSHIP_INDEX FROM Totaldata WHERE PER_CAPITA_INCOME>15000'''
q2 = pd.read_sql(sql, conn)
q2
x = q2['Environment_Score'].values.reshape(-1,1)
y = q2['HARDSHIP_INDEX'].values.reshape(-1,1)
linear_regression = LinearRegression()
fit = linear_regression.fit(x,y)
print(fit.coef_)
print(f... | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
There's slight evidence that HARDSHIP_INDEX decreases as Environment_Score increases, with a slope of -0.283. Q3 How do safety scores of schools vary across different areas of the city? | sql = """Select COMMUNITY_AREA_NAME, sum(SAFETY_SCORE) from Totaldata group by COMMUNITY_AREA_NAME"""
q3 = pd.read_sql(sql,conn)
q3
q3.describe()
fig, ax = plt.subplots()
fig.set_size_inches(11.7, 8.27)
p = sns.scatterplot(data=q3, x="COMMUNITY_AREA_NAME", y="sum(SAFETY_SCORE)")
p = plt.setp(p.get_xticklabels(), rotati... | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
The mean safety score among all schools across the city is ~292 with an S.D. of ~183, but according to the scatter plot we can see that the scores are randomly scattered. Q4 Determine the relationship between Instruction_Score and Average_Teacher_Attendance | sql = "Select CAST(Average_Teacher_Attendance as INT), Instruction_Score from Totaldata"
q4 = pd.read_sql(sql, conn)
q4
x = q4['CAST(Average_Teacher_Attendance as INT)'].values.reshape(-1,1)
y = q4['Instruction_Score'].values.reshape(-1,1)
linear_regression = LinearRegression()
fit = linear_regression.fit(x,y)
print(fi... | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
From the linear regression and regression plot we can see that there is no relationship between Average_Teacher_Attendance and Instruction_Score. Q5 Is PERCENT_HOUSEHOLDS_BELOW_POVERTY normally distributed across different cities? | sql = " SELECT PERCENT_HOUSEHOLDS_BELOW_POVERTY, COMMUNITY_AREA_NAME FROM Totaldata "
q5 = pd.read_sql(sql,conn)
q5_2 = q5.pivot(columns='COMMUNITY_AREA_NAME',values='PERCENT_HOUSEHOLDS_BELOW_POVERTY')
q5_2.describe()
sns.distplot(q5['PERCENT_HOUSEHOLDS_BELOW_POVERTY'], color='b') | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
The percentage of households below poverty is not normally distributed across different cities. Q6 Find the relationship between per capita income and the number of schools in each community area | sql = "SELECT COMMUNITY_AREA_NAME, sum(PER_CAPITA_INCOME), count(SCHOOL_ID) from Totaldata group by COMMUNITY_AREA_NAME"
q6 = pd.read_sql(sql, conn)
q6
x = q6['count(SCHOOL_ID)'].values.reshape(-1,1)
y = q6['sum(PER_CAPITA_INCOME)'].values.reshape(-1,1)
linear_regression = LinearRegression()
fit = linear_regression.fit... | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
There's a strong positive linear relationship between the number of schools in an area and the per capita income. Q7 What is the relationship between safety scores of schools in "AUSTIN" and "ENGLEWOOD"? | sql = """SELECT SAFETY_SCORE, COMMUNITY_AREA_NAME FROM Totaldata WHERE COMMUNITY_AREA_NAME='AUSTIN'
or COMMUNITY_AREA_NAME='ENGLEWOOD'"""
q7 = pd.read_sql(sql,conn)
q7_2 = q7.pivot(columns='COMMUNITY_AREA_NAME',values='SAFETY_SCORE')
q7_2.describe()
q7.boxplot(column="SAFETY_SCORE", by="COMMUNITY_AREA_NAME", figsize=(... | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
The average safety score of schools in the AUSTIN area is higher than that of ENGLEWOOD. Q8 What is the relationship between PERCENT_AGED_16_UNEMPLOYED and Instruction_Score for schools with safety score over 60? | q8 = "SELECT PERCENT_AGED_16_UNEMPLOYED, INSTRUCTION_Score FROM Totaldata where SAFETY_SCORE > 60.0"
q8 = pd.read_sql(q8,conn)
q8
x = q8['Instruction_Score'].values.reshape(-1,1)
y = q8['PERCENT_AGED_16_UNEMPLOYED'].values.reshape(-1,1)
linear_regression = LinearRegression()
fit = linear_regression.fit(x,y)
print(fit.c... | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
There's not a strong linear relationship between instruction scores and percent_aged_16_unemployed for schools where safety score is greater than 60. Q9 How does PERCENT_OF_HOUSING_CROWDED compare for areas where school safety score is greater than 70? | sql = " SELECT COMMUNITY_AREA_NAME, sum(PERCENT_OF_HOUSING_CROWDED) FROM Totaldata WHERE SAFETY_SCORE > 70 GROUP BY COMMUNITY_AREA_NAME"
q9 = pd.read_sql(sql, conn)
q9
q9.describe()
sns.distplot(q9['sum(PERCENT_OF_HOUSING_CROWDED)'], color='b') | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
Percentage of crowded housing is most likely between 0-10% for schools with safety scores greater than 70. Q10 Is there a relationship between community hardship and school environment scores for schools whose safety score is less than 50? | sql = "SELECT HARDSHIP_INDEX, Environment_Score From Totaldata Where SAFETY_SCORE < 50"
q10 = pd.read_sql(sql, conn)
q10
x = q10['Environment_Score'].values.reshape(-1,1)
y = q10['HARDSHIP_INDEX'].values.reshape(-1,1)
linear_regression = LinearRegression()
fit = linear_regression.fit(x,y)
print(fit.coef_)
print(fit.int... | _____no_output_____ | MIT | Him_Assignment2.ipynb | vathanahim/IAF603_Assignment2 |
Feedback or issues?For any feedback or questions, please open an [issue](https://github.com/googleapis/python-aiplatform/issues). Vertex SDK for Python: Custom Tabular Training (asynchronous) ExampleTo use this Colaboratory notebook, you copy the notebook to your own Google Drive and open it with Colaboratory (or Col... | !pip3 uninstall -y google-cloud-aiplatform
!pip3 install google-cloud-aiplatform
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True) | _____no_output_____ | Apache-2.0 | notebooks/community/sdk/SDK_Tabular_Custom_Model_Training_asynchronous.ipynb | nayaknishant/vertex-ai-samples |
Enter your project and GCS bucketEnter your Project Id in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. | import sys
if "google.colab" in sys.modules:
from google.colab import auth
auth.authenticate_user()
MY_PROJECT = "YOUR PROJECT"
MY_STAGING_BUCKET = "gs://YOUR BUCKET" # bucket should be in same region as Vertex AI | _____no_output_____ | Apache-2.0 | notebooks/community/sdk/SDK_Tabular_Custom_Model_Training_asynchronous.ipynb | nayaknishant/vertex-ai-samples |
The dataset we are using is the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone | !wget https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv
!gsutil cp abalone_train.csv {MY_STAGING_BUCKET}/data/
gcs_csv_path = f"{MY_STAGING_BUCKET}/data/abalone_train.csv" | _____no_output_____ | Apache-2.0 | notebooks/community/sdk/SDK_Tabular_Custom_Model_Training_asynchronous.ipynb | nayaknishant/vertex-ai-samples |
Initialize Vertex SDK for PythonInitialize the *client* for Vertex AI | from google.cloud import aiplatform
aiplatform.init(project=MY_PROJECT, staging_bucket=MY_STAGING_BUCKET) | _____no_output_____ | Apache-2.0 | notebooks/community/sdk/SDK_Tabular_Custom_Model_Training_asynchronous.ipynb | nayaknishant/vertex-ai-samples |
Create a Managed Tabular Dataset from CSVA Managed dataset can be used to create an AutoML model or a custom model. | ds = aiplatform.TabularDataset.create(
display_name="abalone", gcs_source=[gcs_csv_path], sync=False
) | _____no_output_____ | Apache-2.0 | notebooks/community/sdk/SDK_Tabular_Custom_Model_Training_asynchronous.ipynb | nayaknishant/vertex-ai-samples |
Write Training Script- Write this cell as a file which will be used for custom training. | %%writefile training_script.py
import pandas as pd
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# uncomment and bump up replica_count for distributed training
# strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# tf.distribute.experimental_set_s... | _____no_output_____ | Apache-2.0 | notebooks/community/sdk/SDK_Tabular_Custom_Model_Training_asynchronous.ipynb | nayaknishant/vertex-ai-samples |
Launch a Training Job to Create a Model Once you have defined your training script, you can create a model. | job = aiplatform.CustomTrainingJob(
display_name="train-abalone-dist-1-replica",
script_path="training_script.py",
container_uri="gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest",
requirements=["gcsfs==0.7.1"],
model_serving_container_image_uri="gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:late... | _____no_output_____ | Apache-2.0 | notebooks/community/sdk/SDK_Tabular_Custom_Model_Training_asynchronous.ipynb | nayaknishant/vertex-ai-samples |
Deploy Your ModelDeploy your model, then wait until the model FINISHES deployment before proceeding to prediction. | endpoint = model.deploy(machine_type="n1-standard-4", sync=False) | _____no_output_____ | Apache-2.0 | notebooks/community/sdk/SDK_Tabular_Custom_Model_Training_asynchronous.ipynb | nayaknishant/vertex-ai-samples |
Wait for the deployment to complete | endpoint.wait() | _____no_output_____ | Apache-2.0 | notebooks/community/sdk/SDK_Tabular_Custom_Model_Training_asynchronous.ipynb | nayaknishant/vertex-ai-samples |
Predict on the Endpoint | prediction = endpoint.predict(
[
[0.435, 0.335, 0.11, 0.33399999999999996, 0.1355, 0.0775, 0.0965],
[0.585, 0.45, 0.125, 0.874, 0.3545, 0.2075, 0.225],
]
)
prediction | _____no_output_____ | Apache-2.0 | notebooks/community/sdk/SDK_Tabular_Custom_Model_Training_asynchronous.ipynb | nayaknishant/vertex-ai-samples |
**Task 2**
**From the given 'Iris' dataset, predict the optimum number of clusters
and represent it visually**
**Dataset : https://bit.ly/3kXTdox** | from sklearn import datasets
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.cluster import KMeans
import matplotlib.patches as mpatches
import sklearn.metrics as sm
from mpl_toolkits.mplot3d import Axes3D
from scipy.cluster.hierarchy import linkage,dendrogram
from sklearn.c... | Confusion Matrix:-
| MIT | Task_2.ipynb | sassysoul/The-Spark-Foundation |
RegEx and Cookies using Python Importing libraries | import requests
import time
from bs4 import BeautifulSoup
import re
import pandas as pd
import numpy as np
import collections as cl | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
URL to be scraped | url = "https://www.thyssenkrupp-elevator.com/kr/products/multi/" | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Getting the source code | source_code = requests.get(url, headers = {'user-agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36 Edg/79.0.309.71'}) | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Creating a BeautifulSoup object | soup_object = BeautifulSoup(source_code.content, 'html.parser') | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Saving the source_code to htm file | file_to_Save = open("elevator.htm", "w", encoding = 'utf-8')
file_to_Save.write(source_code.text)
file_to_Save.close() | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Opening and reading the "elevator.htm" | file_data = open("elevator.htm", encoding = "utf-8").read() | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Stripping all ``s | print(re.sub(r'<.*?>', r'', file_data)) |
html[lang="vi"],html[lang="vi"] body,a,h1,h2,h3,h4,div,li,ul {font-family: Arial, Verdana, sans-serif !important}
MULTI: ์น๊ฐ ์์ก์ ํ๊ธฐ์ ์ธ ๋ณํ โ ํฐ์ผํฌ๋ฃจํ์๋ฆฌ๋ฒ ์ดํฐ
... | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Selecting Korean characters that come right before "." | korean_char_before_dot = (re.findall(r'([^a-zA-Z0-9\s\[\]\{},\<\\>\!\@\#\$\%\^\&\*\(\)\/\"\'\.\|])\.[^\.]', source_code.text))
str(korean_char_before_dot) | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Saving the characters to the file "Korean_char.txt" | Korean_char_file = open("Korean_char.txt", "w", encoding = 'utf-8')
Korean_char_file.write(str(korean_char_before_dot))
Korean_char_file.close() | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Printing the most common Korean character that occurs before the dot | cl.Counter(korean_char_before_dot).most_common(1) | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Selecting all Korean characters that occur on the page \[^ -~\] deselects all ASCII characters \[^ \s\] deselects all white-spaces including \n, \t, etc. | all_korean_characters = (re.findall(r'[^ -~\sโ]', source_code.text))
Printing the most common Korean character on the webpage | cl.Counter(all_korean_characters).most_common(1)
Details of the email ID, password and username used to create an account on www.allrecipes.com __Email ID__ = wki91412@zzrgg.com __Password__ = 12345678aA __Username__ = 'Test' Sign-in URL for www.allrecipes.com | signin_url = "https://www.allrecipes.com/account/signin/" | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Getting the code of the webpage | signin_code = requests.get(signin_url, headers = {'user-agent':'Mozilla/5.0'}) | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Creating a BeautifulSoup object | signin_soup = BeautifulSoup(signin_code.content, 'html.parser') | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Printing the code of the webpage | print(signin_soup) |
<!DOCTYPE html>
<html lang="en-us">
<head>
<title>Allrecipes - Signin</title>
<script async="true" src="https://secureimages.allrecipes.com/assets/deployables/v-1.185.0.5222/karma.bundled.js"></script>
<!--Make our website baseUrl available to the client-side code-->
<script type="text/javascript">
var AR = ... | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Retrieving the form tag that contains the sign in inputs | signin_form = signin_soup.find('form', attrs = {'name':'signinForm'}) | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Getting the input tag containing the 'token' | input_token = signin_form.find('input', attrs = {'name':'SocialCsrfToken'}) | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Printing the value of the token | token_to_use = input_token.get('value') | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Pause between 2 requests | time.sleep(1) | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Get cookies | session_request = requests.session() | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Email used to sign-in | email_used = 'wki91412@zzrgg.com' | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Password used to sign-in | password_used = '12345678aA' | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Storing the data to be parsed to stay logged in | form_data = {'ReferringType':'',
'ReferringUrl': 'https://www.allrecipes.com/',
'ReferringAction': '',
'ReferringParams': '',
'AuthLayoutMode': 'Standard',
'SocialCsrfToken': token_to_use,
'txtUserNameOrEmail': email_used,
'passw... | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Loging into the website and timing out after 15 seconds | loged_in = session_request.post(signin_url,
data = form_data,
headers = dict(ReferringUrl= 'https://www.allrecipes.com/'),
timeout = 15
) | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Getting the code of the webpage | logged_in_page = session_request.get(signin_url,
headers = dict(ReferringUrl= signin_url)) | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Storing the code in a BeautifulSoup object | logged_in_soup = BeautifulSoup(logged_in_page.content, 'html.parser') | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Printing the BeautifulSoup object | print(logged_in_soup) |
<!DOCTYPE html>
<html lang="en-us">
<head>
<title>Allrecipes | Food, friends, and recipe inspiration</title>
<script async="true" src="https://secureimages.allrecipes.com/assets/deployables/v-1.185.0.5222/karma.bundled.js"></script>
<!--Make our website baseUrl available to the client-side code-->
<script type="text/... | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Finding the `` that contains the username | username_found = logged_in_soup.find('span', attrs = {'class': 'username'}) | _____no_output_____ | MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Printing the username | print(username_found.text) | Test
| MIT | ReGex-Cookies.ipynb | miteshrj/projects |
Constraints Diagram AVD Group 16, Aero Year 4 2019-2020, Last updated: 2019.10.17 | import numpy as np
import array as arr
import pandas as pd
import math as math
import matplotlib.pyplot as plt
import fluids | _____no_output_____ | MIT | Constraints_diagram.ipynb | dev10110/AVD_Initial_Sizing |
Function returns CLmax, Cd, e based on configuration and gear | def flap(config=None, gear=False):
"""function returns tuple of cl, cd, e with input configuration and gear option
Args:
config (str): flap configuration (basically comes from flaps angles)
If set to None, clean config. is returned
gear (bool): gear option
Returns:
(tuple... | clean cl, cd, e: (1.5865331696797949, 0.02311, 0.7)
takeoff cl, cd, e: (1.9344, 0.04417, 0.7)
takeoff with gear cl, cd, e: (1.9344, 0.07982, 0.7)
approach cl, cd, e: (1.9707, 0.05989, 0.7)
landing cl, cd, e: (2.115783, 0.15303, 0.7)
landing with gear cl, cd, e: (2.115783, 0.18868000000000001, 0.7)
| MIT | Constraints_diagram.ipynb | dev10110/AVD_Initial_Sizing |
Inputs related to aircraft parameters | # Reference data from CRJ700
#Sref = 70.6 # m2
#b = 23.2 # m wingspan
Kld = 11 # stand. value for retractable prop aircraft
SwetSref = 5.7 # Estimated value
#AR = b**2/Sref
AR = 8 # FIXME!!! - Weight sizing uses 8!!!
BPR = 5.7
print('Aspect ratio: {}'.format(AR))
# con... | Aspect ratio: 8
| MIT | Constraints_diagram.ipynb | dev10110/AVD_Initial_Sizing |
List of $\alpha$ and functions to calculate $\beta$ | # List of alphas from GPKit notebook
# this is [M_0, M_1, M_2, ..., M_8, M_9, M_dry
# M_9 is the mass at the end of landing and taxi, M_dry is different due to assumed 6% ullage
# ------------------------- #
# M_0 = alpha_list[0] - taxi and takeoff
# M_1 = alpha_list[1] - climb and accelerate
# M_2 = alpha_list... | _____no_output_____ | MIT | Constraints_diagram.ipynb | dev10110/AVD_Initial_Sizing |
$ \alpha $ | # Function calculates betas
def calc_beta(z, M=0.0, BPR=0.0):
"""
Function calculates beta's for different altitudes # FIXME - should also have dependencies on speed/Mach?
Args:
z (float): altitude in meters
Returns:
(float): value of beta
"""
#Z is assumed to be in meters
at... | _____no_output_____ | MIT | Constraints_diagram.ipynb | dev10110/AVD_Initial_Sizing |
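The body of `calc_beta` is cut off above. One simple stand-in — an assumption, not the notebook's model — treats the thrust lapse as a power of the ISA density ratio, which is a common first approximation for high-BPR turbofans:

```python
def thrust_lapse(z_m, exponent=0.7):
    """Sketch: beta ~= (rho/rho0)**exponent, ISA troposphere (valid below ~11 km).

    The 0.7 exponent is an assumed typical value, not taken from the notebook.
    """
    T = 288.15 - 0.0065 * z_m                  # ISA temperature [K]
    p = 101325.0 * (T / 288.15) ** 5.2561      # ISA pressure [Pa]
    rho = p / (287.05 * T)                     # ideal-gas density [kg/m^3]
    return (rho / 1.225) ** exponent
```

Whatever lapse model is used, it only needs to return beta = T/T0 at the flight altitude of each scenario below.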
Function to calculate T/W_0 = f( W_0/S ) Use the same equation (full version for twin jet) and re-assign values to parameters for each scenario$ \left( \dfrac{T}{W} \right)_0 = \dfrac{\alpha}{\beta} \left[ \dfrac{1}{V_{inf}}\dfrac{dh}{dt} + \dfrac{1}{g}\dfrac{dV_{inf}}{dt} + \dfrac{\tfrac{1}{2}\rho V_{inf}^2 C_{D_0}}{\alpha \left(W/S\right)_0} + \dfrac{\alpha n^2 \left(W/S\right)_0}{\tfrac{1}{2}\rho V_{inf}^2 \pi AR\, e} \right] $ | # Define function (T/W)_0 = fn(S/W0, etc.)
def TWvsWS(WSo,alpha,beta,dhdt,dvdt,rho,Vinf,Cdo,n,AR,e,split=False):
"""
Function calculates T/W for given S/W
Args:
WSo (array): list of WSo at which T/W is to be computed
alpha (float): W/W0
beta (float): T/T0
dhdt (float): climb... | _____no_output_____ | MIT | Constraints_diagram.ipynb | dev10110/AVD_Initial_Sizing |
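The body of `TWvsWS` is truncated above. A sketch of the bracketed master-equation terms in the document's notation (alpha = W/W0, beta = T/T0) is given below; the name `tw_required` is used here to avoid clashing with the notebook's own function, and the term grouping is an assumption consistent with the equation's standard form:

```python
import numpy as np

def tw_required(WSo, alpha, beta, dhdt, dvdt, rho, Vinf, Cdo, n, AR, e, g=9.81):
    """(T/W)_0 sketch: climb + acceleration + zero-lift drag + induced drag."""
    WSo = np.asarray(WSo, dtype=float)        # wing loading W0/S [N/m^2]
    q = 0.5 * rho * Vinf**2                   # dynamic pressure
    k = 1.0 / (np.pi * AR * e)                # induced-drag factor
    return (alpha / beta) * (dhdt / Vinf + dvdt / g
                             + q * Cdo / (alpha * WSo)
                             + alpha * n**2 * k * WSo / q)
```

Because the zero-lift drag term falls with W/S while the induced term grows with it, each constraint curve has the familiar U-shape in the (W/S, T/W) plane.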
Take off | CLmax, Cdo, e = flap(config='takeoff', gear=True)
sigma = 1
TODA = 1500
Ne = 2
TW_BFL = (1/TODA)*(0.297 - 0.019*Ne)*WSo/(sigma*CLmax)
TW_AEO = (1/TODA)*0.144*WSo/(sigma*CLmax)
| _____no_output_____ | MIT | Constraints_diagram.ipynb | dev10110/AVD_Initial_Sizing |
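A quick numeric spot-check of the balanced-field-length line at one wing loading. CLmax = 2.0 is an assumed takeoff value here, since the `flap()` helper is not shown in full:

```python
# Sanity check of the BFL takeoff constraint at a single wing loading.
# CLmax = 2.0 is an assumed takeoff value (flap() is not shown in full above).
TODA, Ne, sigma, CLmax = 1500, 2, 1.0, 2.0
WS = 3000.0
TW_BFL = (1 / TODA) * (0.297 - 0.019 * Ne) * WS / (sigma * CLmax)
print(TW_BFL)
```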
Landing distance$ ALD = 0.51 \frac{W/S}{\sigma C_{L,max}} K_R + S_a $$ W/S = \sigma C_{L,max} \frac{ALD - S_a}{0.51 K_R} $ | # ===================================================================== #
# Landing distance line
# >> Plotted as vertical line
CLmax, Cdo, e = flap(config='landing', gear=True)
sigma = 1
ALD = 1500/(5/3)
SA = 305 #FIXME, from Errikos slides
Kr = 0.66
WS_landing = sigma*CLmax*(ALD-SA)/(0.51*Kr)
WS_landing
# =====... | _____no_output_____ | MIT | Constraints_diagram.ipynb | dev10110/AVD_Initial_Sizing |
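One way to verify the rearrangement is a round trip: plug the computed wing loading back into the landing-distance formula and recover ALD. CLmax = 2.6 below is an assumed landing value, since `flap()` is not shown in full:

```python
# Round-trip check of the landing wing-loading rearrangement.
# CLmax = 2.6 is an assumed landing value, sigma/Kr/SA as in the cell above.
sigma, CLmax, Kr, SA = 1.0, 2.6, 0.66, 305.0
ALD = 1500 / (5 / 3)                                # = 900 m
WS_landing = sigma * CLmax * (ALD - SA) / (0.51 * Kr)
ALD_back = 0.51 * WS_landing / (sigma * CLmax) * Kr + SA
print(WS_landing, ALD_back)                         # ALD_back should equal ALD
```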
Climb segments (1 ~ 4) | # ===================================================================== #
# Climb segments are given in:
# https://aviation.stackexchange.com/questions/3310/how-are-take-off-segments-defined
rho = 1.225 # assumed air-density is approx. constant
# 1st segment - TAKEOFF
# >> right after rotate, take-off configuratio... | _____no_output_____ | MIT | Constraints_diagram.ipynb | dev10110/AVD_Initial_Sizing |
LoiterAssume load factor $n$, then$n = \dfrac{L}{mg} = \dfrac{1}{\cos(\theta)}$also from horizontal equilibrium$m\omega^2 R = L\sin(\theta)$$R\omega^2 = \dfrac{L}{mg}g \sin(\theta)$$R = \dfrac{n g \sin(\theta)}{\omega^2}$ | # ===================================================================== #
# loiter
# >> 3 degrees per second turn
atmosphere = fluids.atmosphere.ATMOSPHERE_1976(1500)
rho = atmosphere.rho
a = atmosphere.v_sonic
n = 1.2 # FIXME - guessed load factor for loiter
print(f'prescribed load factor: {n}')
theta = np.arccos(1/... | _____no_output_____ | MIT | Constraints_diagram.ipynb | dev10110/AVD_Initial_Sizing |
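Plugging the prescribed numbers into the derivation above gives a concrete turn radius. This is a standalone recomputation (the cell above is truncated), using the same assumed load factor n = 1.2 and a 3 deg/s turn:

```python
import math

g = 9.81
n = 1.2                                     # assumed loiter load factor, as above
theta = math.acos(1 / n)                    # bank angle from n = 1/cos(theta)
omega = math.radians(3.0)                   # 3 deg/s turn rate in rad/s
R = n * g * math.sin(theta) / omega ** 2    # R = n g sin(theta) / omega^2
V = omega * R                               # corresponding true airspeed
print(R, V)
```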
Fuel consumption contoursFor the cruise segment, the range equation says$$ R = \ln\left(\frac{W_{i}}{W_{i+1}}\right) V \frac{L/D}{SFC} $$We can use a balance of forces in cruise ($L = W$, $T = D$) to find$$ V^2 = \frac{W/S}{\tfrac{1}{2} \rho C_L} $$and$$ L/D = \frac{W}{T} = \frac{1}{T/W} $$which gives$$ R = \ln\left(\frac{W_i}{W_{i+1}}\right) \sqrt{\frac{W/S}{\tfrac{1}{2} \rho C_L}}\, \frac{1}{(T/W)\, SFC} $$ | WS_mesh, TW_mesh = np.meshgrid(np.linspace(0,8000, 100), np.logspace(-8,-0.1,100))
# the 1/10 factor is an estimate obtained by plugging in all the parameters
fuel_consumption = 1-np.exp(-(1/10)*TW_mesh/np.sqrt(WS_mesh/4000))
plt.contourf(WS_mesh, TW_mesh, fuel_consumption,levels=np.linspace(0,0.1, 10),cmap='Reds',alpha=1)
plt.c... | /Users/Devansh/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:3: RuntimeWarning: divide by zero encountered in true_divide
This is separate from the ipykernel package so we can avoid doing imports until
| MIT | Constraints_diagram.ipynb | dev10110/AVD_Initial_Sizing |
Constraint Diagram | # ========== CONSTRAINT DIAGRAM ==========
fig = plt.figure(figsize=(15,15))
# climb segment 1
#plt.plot(WSo, TW_climb1, 'b-', label="1st climb segment")
# climb segment 2
#plt.plot(WSo, TW_climb2, 'g-', label="2nd climb segment")
# climb segment 3
#plt.plot(WSo, TW_climb3, 'r-', label="3rd climb segment")
# climb se... | Sref: 44.1131175 [m^2]
Wingspan: 18.785764291079563 [m]
Mean aerodynamic chord: 2.3482205363849453 [m]
| MIT | Constraints_diagram.ipynb | dev10110/AVD_Initial_Sizing |
T5 Implementation (2/2)An explanation of the T5 model implementation. Please review the following posts before reading this one.- [Building a vocab with SentencePiece](https://paul-hyun.github.io/vocab-with-sentencepiece/)- [Preprocessing the Naver movie review sentiment data](https://paul-hyun.github.io/preprocess-nsmc/)- [Implementing Transformer (Attention Is All You Need) (1/3)](https://paul-hyun.github.io/transfor... | !pip install sentencepiece
!pip install wget | Collecting sentencepiece
  Downloading https://files.pythonhosted.org/packages/74/f4/2d5214cbf13d06e7cb2c20d84115ca25b53ea76fa1f0ade0e3c9749de214/sentencepiece-0.1.85-cp36-cp36m-manylinux1_x86_64.whl (1.0MB)
     |█                               | 10kB 12.9MB/s eta 0:00:01
     |█... | Apache-2.0 | tutorial/t5_02.ipynb | hyungjun010/transformer-evolution |
1. Google Drive MountColab cannot access files on your local machine, so upload the files to Google Drive and mount it to use it like a local disk.1. Run the block below and click the link that appears.2. Select your Google account, click Allow, copy the code that appears, paste it into the box below, and press Enter.For training, see the [data and result files](https://drive.google.com/open?id=15XGr-L-W6DSoR5TbniPMJASPsA0IDTiN). | from google.colab import drive
drive.mount('/content/drive')
# Folder where the data is stored. Modify it to fit your environment.
data_dir = "/content/drive/My Drive/Data/transformer-evolution" | Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.c... | Apache-2.0 | tutorial/t5_02.ipynb | hyungjun010/transformer-evolution |
2. Imports | import os
import numpy as np
import math
import matplotlib.pyplot as plt
import json
import pandas as pd
from IPython.display import display
from tqdm import tqdm, tqdm_notebook, trange
import sentencepiece as spm
import wget
import torch
import torch.nn as nn
import torch.nn.functional as F | _____no_output_____ | Apache-2.0 | tutorial/t5_02.ipynb | hyungjun010/transformer-evolution |
3. Check the folder listingTo verify that the Google Drive mount worked, list the contents of data_dir. | for f in os.listdir(data_dir):
print(f) | kowiki.csv.gz
kowiki.model
kowiki.vocab
ratings_train.txt
ratings_test.txt
ratings_train.json
ratings_test.json
kowiki.txt
kowiki_gpt.json
save_gpt_pretrain.pth
kowiki_bert_0.json
save_bert_pretrain.pth
kowiki_t5.model
kowiki_t5.vocab
kowiki_t5_0.json
save_t5_pretrain.pth
| Apache-2.0 | tutorial/t5_02.ipynb | hyungjun010/transformer-evolution |
4. Vocab and inputsLoad the vocab built in [Building a vocab with SentencePiece](https://paul-hyun.github.io/vocab-with-sentencepiece/). | # vocab loading
vocab_file = f"{data_dir}/kowiki_t5.model"
vocab = spm.SentencePieceProcessor()
vocab.load(vocab_file) | _____no_output_____ | Apache-2.0 | tutorial/t5_02.ipynb | hyungjun010/transformer-evolution |
5. ConfigCreate a config object to pass settings to the model. | """ class that reads a configuration json """
class Config(dict):
__getattr__ = dict.__getitem__
__setattr__ = dict.__setitem__
@classmethod
def load(cls, file):
with open(file, 'r') as f:
config = json.loads(f.read())
return Config(config)
config = Config({
"n_vocab": le... | {'n_vocab': 8033, 'n_seq': 256, 'n_layer': 6, 'd_hidn': 256, 'i_pad': 0, 'd_ff': 1024, 'n_head': 4, 'd_head': 64, 'dropout': 0.1, 'layer_norm_epsilon': 1e-12}
| Apache-2.0 | tutorial/t5_02.ipynb | hyungjun010/transformer-evolution |
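Because `__getattr__` and `__setattr__` are bound to the dict item methods, settings can be read and written either as attributes or as keys. A self-contained demo (re-declaring the same class so the snippet runs on its own):

```python
import json

class Config(dict):
    """Dict with attribute-style access, as defined in the cell above."""
    __getattr__ = dict.__getitem__
    __setattr__ = dict.__setitem__

cfg = Config({"n_layer": 6, "d_hidn": 256})
print(cfg.n_layer, cfg["d_hidn"])   # both access styles read the same entries
cfg.dropout = 0.1                   # attribute assignment stores a dict key
print(json.dumps(cfg))              # still serializes like a plain dict
```

Note that a missing key raises `KeyError` rather than `AttributeError`, since attribute lookup is routed through `dict.__getitem__`.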
6. T5The T5 class and helper functions. | """ attention pad mask """
def get_attn_pad_mask(seq_q, seq_k, i_pad):
batch_size, len_q = seq_q.size()
batch_size, len_k = seq_k.size()
pad_attn_mask = seq_k.data.eq(i_pad).unsqueeze(1).expand(batch_size, len_q, len_k) # <pad>
return pad_attn_mask
""" attention decoder mask """
def get_attn_decoder_... | _____no_output_____ | Apache-2.0 | tutorial/t5_02.ipynb | hyungjun010/transformer-evolution |
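The pad mask marks every key position equal to the pad id and repeats that row across all query positions. A pure-Python sketch of the same logic (the notebook itself uses torch tensors with `eq`, `unsqueeze`, and `expand`):

```python
def pad_mask(seq_q, seq_k, i_pad=0):
    """For each batch item, mask[q][k] is True where seq_k[k] is padding.

    Pure-Python equivalent of get_attn_pad_mask above: the key-side pad
    test is computed once per key row and expanded across the query length.
    """
    return [[[tok == i_pad for tok in ks] for _ in qs]
            for qs, ks in zip(seq_q, seq_k)]

q = [[5, 6, 7]]          # batch of one query sequence, length 3
k = [[5, 6, 0, 0]]       # key sequence, length 4, two pad tokens
mask = pad_mask(q, k)
print(mask[0][0])        # same pad row repeated for every query position
```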
7. Naver movie classification model | """ naver movie classification """
class MovieClassification(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.t5 = T5(self.config)
def forward(self, enc_inputs, dec_inputs):
# (bs, n_dec_seq, n_vocab), [(bs, n_head, n_enc_seq, n_enc_seq)],... | _____no_output_____ | Apache-2.0 | tutorial/t5_02.ipynb | hyungjun010/transformer-evolution |
8. Naver movie classification dataBecause a new vocab was built for T5, the training data also needs to be rebuilt. | """ prepare train data """
def prepare_train(vocab, infile, outfile):
df = pd.read_csv(infile, sep="\t", engine="python")
with open(outfile, "w") as f:
for index, row in df.iterrows():
document = row["document"]
if type(document) != str:
continue
instance =... | Loading Dataset: 100%|██████████| 149995/149995 [00:06<00:00, 21670.37 lines/s]
Loading Dataset: 100%|██████████| 49997/49997 [00:01<00:00, 25291.42 lines/s]
| Apache-2.0 | tutorial/t5_02.ipynb | hyungjun010/transformer-evolution |
9. Training on the Naver movie classification data | """ evaluate model per epoch """
def eval_epoch(config, model, data_loader):
matchs = []
model.eval()
n_word_total = 0
n_correct_total = 0
with tqdm(total=len(data_loader), desc=f"Valid") as pbar:
for i, value in enumerate(data_loader):
labels, enc_inputs, dec_inputs = map(lambda v: v.to(con... | _____no_output_____ | Apache-2.0 | tutorial/t5_02.ipynb | hyungjun010/transformer-evolution |
Training without pretraining | model = MovieClassification(config)
losses_00, scores_00 = train(model) | Train(0): 100%|██████████| 1172/1172 [05:07<00:00, 3.72it/s, Loss: 0.244 (0.827)]
Valid: 100%|██████████| 391/391 [01:00<00:00, 4.63it/s, Acc: 0.771]
Train(1): 100%|██████████| 1172/1172 [05:06<00:00, 3.89it/s, Loss: 0.232 (0.253)]
Valid: 100%|██████████| 391/391 [01:02<00:00, 4.59it/s, Acc: 0.794]
Train(2): 100%|█... | Apache-2.0 | tutorial/t5_02.ipynb | hyungjun010/transformer-evolution |
Training after pretraining | model = MovieClassification(config)
save_pretrain = f"{data_dir}/save_t5_pretrain.pth"
model.t5.load(save_pretrain)
losses_20, scores_20 = train(model) | Train(0): 100%|██████████| 1172/1172 [05:11<00:00, 3.69it/s, Loss: 0.227 (0.288)]
Valid: 100%|██████████| 391/391 [01:01<00:00, 4.86it/s, Acc: 0.769]
Train(1): 100%|██████████| 1172/1172 [05:07<00:00, 3.93it/s, Loss: 0.261 (0.220)]
Valid: 100%|██████████| 391/391 [01:00<00:00, 4.68it/s, Acc: 0.813]
Train(2): 100%|█... | Apache-2.0 | tutorial/t5_02.ipynb | hyungjun010/transformer-evolution |
10. Result | # table
data = {
"loss_00": losses_00,
"socre_00": scores_00,
"loss_20": losses_20,
"socre_20": scores_20,
}
df = pd.DataFrame(data)
display(df)
# graph
plt.figure(figsize=[12, 4])
plt.plot(scores_00, label="score_00")
plt.plot(scores_20, label="score_20")
plt.legend()
plt.xlabel('Epoch')
plt.ylabel('V... | _____no_output_____ | Apache-2.0 | tutorial/t5_02.ipynb | hyungjun010/transformer-evolution |
07 - Python Finance**Chapter 07**: How to compute this probability using Python. Assuming that returns follow a normal probability distribution leads to gross errors. Using fat-tailed distributions gives a better approximation of the real world.**What is the probability of the Bovespa index fall... | # Setting up historical data from Yahoo Finance
!pip install yfinance --upgrade --no-cache-dir | _____no_output_____ | MIT | 07_Python_Finance.ipynb | devscie/PythonFinance |
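The claim about normal vs fat-tailed returns can be made concrete with the standard library alone: under a normal fit, a 10% one-day drop is essentially impossible, which is exactly where fat-tailed models disagree. The daily mean and volatility below are illustrative assumptions, not values fitted to Bovespa data:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Illustrative daily-return parameters (assumed, not fitted to Bovespa data):
mu, sigma = 0.0, 0.015
p_drop = normal_cdf(-0.10, mu, sigma)   # P(one-day return <= -10%)
print(p_drop)  # astronomically small under the normal assumption
```

A fat-tailed fit (e.g. a Student-t with few degrees of freedom) assigns this event a probability many orders of magnitude larger, which is the chapter's point.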