Dataset columns: markdown (string, 0-1.02M chars), code (string, 0-832k chars), output (string, 0-1.02M chars), license (string, 3-36 chars), path (string, 6-265 chars), repo_name (string, 6-127 chars).
What's the most recent activity available? Not sure this is useful.
q(f''' select max(from_iso8601_timestamp(datetime)) as most_recent_activity from incoming_rotations where year = year(current_date) and month >= month(current_date)-1 ''')
_____no_output_____
MIT
notebooks/query_prototyping.ipynb
parente/honey.data
How has she progressed over the last 7 days? (cumulative plot, origin at sum before period) This should be the total sum prior to the window of interest. The query scans everything instead of restricting the scan to just the 7-day window's partitions--a drop in the bucket.
qid, resp = q(''' select sum(rotations) as prior_rotations from incoming_rotations where from_iso8601_timestamp(datetime) < (current_date - interval '7' day) ''') resp publish(qid, 'prior-7-day-window.csv');
_____no_output_____
MIT
notebooks/query_prototyping.ipynb
parente/honey.data
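The "origin at sum before period" idea can be sketched in pandas with made-up numbers; `prior_rotations` and the hourly sums below are hypothetical stand-ins for the two query results:

```python
import pandas as pd

# Hypothetical stand-ins for the two query results: the total rotation
# count before the 7-day window, and hourly sums inside the window.
prior_rotations = 1200
week_df = pd.DataFrame(
    {"sum_rotations": [10, 0, 25, 5, 40]},
    index=pd.date_range("2021-01-01", periods=5, freq="1h"),
)

# Cumulative total whose origin is the pre-window sum, so the plotted
# curve starts where the previous period left off.
week_df["cumsum_rotations"] = week_df["sum_rotations"].cumsum() + prior_rotations
```

Plotting `cumsum_rotations` then shows lifetime progress using only the last week of data points.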
And this should be the sum of rotations by hour and the cumulative sum by hour within the window of interest. I'm trying to constrain the search space for the necessary data using partitions. I need a bit more data to make sure this is working properly.
qid, resp = q(f''' select sum(rotations) as sum_rotations, to_iso8601(date_trunc('hour', from_iso8601_timestamp(datetime))) as datetime_hour, sum(sum(rotations)) over ( order by date_trunc('hour', from_iso8601_timestamp(datetime)) asc rows between unbounded preceding and current row ) a...
_____no_output_____
MIT
notebooks/query_prototyping.ipynb
parente/honey.data
Let's work with the CSV forms of these metrics to create a plot.
tz = pytz.timezone('America/New_York') resp = s3.get_object(Bucket='honey-data-public', Key='prior-7-day-window.csv') prior_df = pd.read_csv(resp['Body']) try: offset = prior_df.iloc[0].iloc[0] except (IndexError, KeyError): offset = 0 resp = s3.get_object(Bucket='honey-data-public', Key='7-day-window.csv') week_df = pd.read_csv(resp...
_____no_output_____
MIT
notebooks/query_prototyping.ipynb
parente/honey.data
Filling missing values is something I might want to do in Athena, instead of relying on the frontend web client to do it, if plot interpolation doesn't look pretty. Some techniques here: https://www.reddit.com/r/SQL/comments/80t1db/inserting_dates_between_a_start_date_and_enddate/
cumsum_df = week_df[['cumsum_rotations']] + offset #cumsum_df = cumsum_df.reindex(pd.date_range(week_df.index.min(), week_df.index.max(), freq='1h'), method='ffill') cumsum_df.index.max() - cumsum_df.index.min() _, ax = plt.subplots(figsize=(15, 5)) (cumsum_df * wheel_circumference).rename(columns={'cumsum_rotations': ...
_____no_output_____
MIT
notebooks/query_prototyping.ipynb
parente/honey.data
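If the gap-filling stays client-side, the commented-out reindex above can be sketched like this (toy data, not the real CSV):

```python
import pandas as pd

# Toy cumulative series with a missing hour at 02:00.
idx = pd.to_datetime(["2021-01-01 00:00", "2021-01-01 01:00", "2021-01-01 03:00"])
cumsum_df = pd.DataFrame({"cumsum_rotations": [100, 150, 300]}, index=idx)

# Reindex onto a complete hourly range and forward-fill, so the plot has
# one point per hour instead of interpolating across gaps.
full_range = pd.date_range(cumsum_df.index.min(), cumsum_df.index.max(), freq="1h")
filled = cumsum_df.reindex(full_range, method="ffill")
```

The 02:00 row is created by the reindex and carries the 01:00 value forward.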
How far has she run each night for the past year? We should subtract 12 hours to sum rotations for nocturnal sessions, plus a few hours more to account for the fact that the hamster is in EST/EDT. Then we add one day back to the date to align with the end-of-session reporting used elsewhere. Don't bother getting precisely ...
qid, resp = q(f''' select sum(rotations) as value, date(date_trunc('day', from_iso8601_timestamp(datetime) - interval '16' hour)) + interval '1' day as day from incoming_rotations where year >= year(current_date)-1 group by date_trunc('day', from_iso8601_timestamp(datetime) - interval '16' hour) order by day ''...
_____no_output_____
MIT
notebooks/query_prototyping.ipynb
parente/honey.data
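The shift-and-relabel trick can be checked in pandas on toy timestamps (the real work happens in the Athena SQL above; the 16-hour offset is the 12 hours plus the EST/EDT allowance):

```python
import pandas as pd

# One nocturnal session that spans midnight (made-up timestamps).
df = pd.DataFrame({
    "datetime": pd.to_datetime(["2021-01-01 23:30", "2021-01-02 01:15", "2021-01-02 02:40"]),
    "rotations": [100, 200, 50],
})

# Shift back 16 hours so an overnight session lands on one calendar day,
# then add a day back to label the session by the morning it ended.
session_day = (df["datetime"] - pd.Timedelta(hours=16)).dt.floor("D") + pd.Timedelta(days=1)
nightly = df.groupby(session_day)["rotations"].sum()
```

All three rows collapse into a single night labeled 2021-01-02.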
What city might she have reached by traveling this distance? https://rapidapi.com/wirefreethought/api/geodb-cities?endpoint=5aadab87e4b00687d35767b4 allows 1,000 requests per day. If the data upload / aggregation job runs every 10 minutes, I only need about a tenth of that.
rapid_key = getpass.getpass('Rapid API key:') durham_lat = '35.994034' durham_lon = '-78.898621' rapid_url = "https://wft-geo-db.p.rapidapi.com" def furthest_poi(lat, lon, radius, api_key, base_url=rapid_url): path = f'/v1/geo/locations/{lat}{lon}/nearbyCities' # Results sort nearest to farthest resp...
_____no_output_____
MIT
notebooks/query_prototyping.ipynb
parente/honey.data
PREPROCESSING
Antes['CS_GESTANT'].replace({1.0: 1, 2.0: 1, 3.0 :1, 4.0 : 1}, inplace= True) Antes['CS_GESTANT'].replace({5.0: 0, 6.0:0, 9.0:0}, inplace= True) Antes['CS_RACA'].fillna(9,inplace= True) Antes['CS_ESCOL_N'].fillna(9,inplace= True) Antes['SURTO_SG'].replace({2.0: 0, 9.0: 0}, inplace= True) Antes['SURTO_SG'].fillna(0,inp...
_____no_output_____
MIT
Notebooks cidades/Manaus_Antes.ipynb
amandacaravieri/ProjetoFinal-COVID_Brasil
- Resetting the index again.
Antes= Antes.reset_index(drop=True) Antes.head()
_____no_output_____
MIT
Notebooks cidades/Manaus_Antes.ipynb
amandacaravieri/ProjetoFinal-COVID_Brasil
- Applying dummy encoding to the categorical features
Antes=pd.get_dummies(Antes, columns=['CS_SEXO', 'CS_GESTANT', 'CS_RACA', 'CS_ESCOL_N', 'SURTO_SG', 'NOSOCOMIAL', 'FEBRE', 'TOSSE', 'GARGANTA', 'DISPNEIA', 'DESC_RESP', 'SATURACAO', 'DIARREIA', 'VOMITO', 'PUERPERA', 'FATOR_RISC', 'CARDIOPATI', 'HEMATOLOGI', 'SIND_DOWN', 'HEPATICA', 'ASMA', 'D...
_____no_output_____
MIT
Notebooks cidades/Manaus_Antes.ipynb
amandacaravieri/ProjetoFinal-COVID_Brasil
Checking the class balance
Antes["EVOLUCAO"].value_counts(normalize=True) X = Antes[['IDADE_ANOS','CS_SEXO_M','CS_RACA_4.0','FEBRE_1.0','DISPNEIA_1.0','SATURACAO_1.0','UTI_1.0', 'SUPORT_VEN_1.0', 'SUPORT_VEN_2.0', 'PCR_RESUL_2.0','TOSSE_1.0','DESC_RESP_1.0', 'FATOR_RISC_2']] y = Antes['EVOLUCAO'] Xtrain, Xtest, ytrain, ytest =...
_____no_output_____
MIT
Notebooks cidades/Manaus_Antes.ipynb
amandacaravieri/ProjetoFinal-COVID_Brasil
Applying the chosen model
RDF = RandomForestClassifier(random_state=42) RDF.fit(Xtrain_over, ytrain_over) previsoes = RDF.predict(Xtest_over) previsoes accuracy_score(ytest_over, previsoes) # Test the model idade = 43.0 sexo = 1 raca = 0 febre = 1 dispneia = 1 saturacao = 0 uti = 1 suport1 = 1 suport2 = 0 pcr = 1 tosse = 1 descresp = 0 frisc =...
[2.]
MIT
Notebooks cidades/Manaus_Antes.ipynb
amandacaravieri/ProjetoFinal-COVID_Brasil
Here's the code so far:
import cv2 import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(frame): def cal_undistort(img): # Reads mtx and dist ma...
_____no_output_____
MIT
Pipeline progression/13_variance-weighted-done.ipynb
animesh-singhal/Project-2
Let's try classes
# Define a class to receive the characteristics of each line detection class Line(): def __init__(self): #Let's count the number of consecutive frames self.count = 0 # was the line detected in the last iteration? self.detected = False #polynomial coefficients for the most r...
starting count value 0 starting count 0 variance_new 16.411728943874007 First case mein variance ye hai 16.411728943874007 [16.411728943874007, 16.411728943874007, 16.411728943874007, 16.411728943874007, 16.411728943874007] starting count 1 variance_new 20.454135208135213 [16.411728943874007, 16.411728943874007, 16.411...
MIT
Pipeline progression/13_variance-weighted-done.ipynb
animesh-singhal/Project-2
Video test
# Define a class to receive the characteristics of each line detection class Line(): def __init__(self): #Let's count the number of consecutive frames self.count = 0 # was the line detected in the last iteration? self.detected = False #polynomial coefficients for the most r...
_____no_output_____
MIT
Pipeline progression/13_variance-weighted-done.ipynb
animesh-singhal/Project-2
.
import numpy as np def modify_array(array, new_value): if len(array)!=5: for i in range(0,5): array.append(new_value) else: dump_var=array[0] array[0]=array[1] array[1]=array[2] array[2]=array[3] array[3]=array[4] array[4]=new_value ...
_____no_output_____
MIT
Pipeline progression/13_variance-weighted-done.ipynb
animesh-singhal/Project-2
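`modify_array` keeps a fixed five-slot history by shifting values manually; a `collections.deque` with `maxlen` behaves the same way and is the usual idiom for a rolling buffer. A sketch, not the notebook's own code:

```python
from collections import deque

# Five-slot rolling history: appending to a full deque with maxlen=5
# drops the oldest value automatically, replacing the manual shifting.
history = deque(maxlen=5)

def push(history, new_value):
    if len(history) != 5:
        history.extend([new_value] * 5)  # first call seeds every slot
    else:
        history.append(new_value)        # later calls shift left and append
    return history

push(history, 16.4)
push(history, 20.5)
```

After the two calls the buffer holds four copies of the seed value and the newest value at the end.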
Loading the data
pwd df = pd.read_hdf("data/car.h5") df.shape df.columns
_____no_output_____
MIT
day5_hyperopt_xgboost.ipynb
kowalcorp/dw_matrix_car
Dummy model
df.select_dtypes(np.number).columns feats = ['car_id'] X = df[ feats ].values y = df[ ['car_id'] ].values model = DummyRegressor() model.fit(X,y) y_pred = model.predict(X) mae(y, y_pred) [x for x in df.columns if 'price' in x] df['price_currency'].value_counts() df['price_currency'].value_counts(normalize=True)*100 ...
_____no_output_____
MIT
day5_hyperopt_xgboost.ipynb
kowalcorp/dw_matrix_car
Decision Tree
model = DecisionTreeRegressor(max_depth=5) run_model(model,cat_feats)
_____no_output_____
MIT
day5_hyperopt_xgboost.ipynb
kowalcorp/dw_matrix_car
Random Forest
model = RandomForestRegressor(max_depth=5 , n_estimators=50, random_state=0) run_model(model,cat_feats)
_____no_output_____
MIT
day5_hyperopt_xgboost.ipynb
kowalcorp/dw_matrix_car
XGBoost
xgb_params = { 'max_depth' : 5 , 'n_estimators' : 50, 'learning_rate' : 0.1, 'seed':0 } model = xgb.XGBRegressor(**xgb_params) run_model(model, cat_feats) xgb_params = { 'max_depth' : 5 , 'n_estimators' : 50, 'learning_rate' : 0.1, 'seed':0 } m = xgb.XGBRegressor(**xgb_params) m.f...
_____no_output_____
MIT
day5_hyperopt_xgboost.ipynb
kowalcorp/dw_matrix_car
So far this is more of a widget demo than an explanation of the normal distribution, but that may yet come. In this app we: * Plot the gaussian density for a specific $\mu$ and $\sigma$ * Use the FloatSlider widget in ipywidgets to represent $\mu$ and $\sigma$ values * Stack the density plot along with the slid...
import numpy as np from scipy.stats import norm from ipywidgets import FloatSlider, HBox, VBox import bqplot.pyplot as plt x = np.linspace(-10, 10, 200) y = norm.pdf(x) # plot the gaussian density title_tmpl = 'Gaussian Density (mu = {} and sigma = {})' pdf_fig = plt.figure(title=title_tmpl.format(0, 1)) pdf_line = p...
_____no_output_____
BSD-3-Clause
index.ipynb
janbucher/normalverteilung
Introduction: Last time we used the Lagrange basis to interpolate a polynomial. However, it is not efficient to update the interpolating polynomial when a new data point is added. We look at an iterative approach. Given points $\{(z_i, f_i) \}_{i=0}^{n-1}$, $z_i$ are distinct and $p_{n-1} \in \mathbb{C}[z]_{n-1}\, , p_{n-...
z0 = -1; f0 = -3; z1 = 0; f1 = -1; z2 = 2; f2 = 4; z3 = 5; f3 = 1; z4 = 1; f4 = 1 p3 = -13*x**3/90 + 14*x**2/45 + 221*x/90 - 1
_____no_output_____
MIT
M2AA3/M2AA3-Polynomials/Lesson 02 - Newton Method/.ipynb_checkpoints/Newton Tableau-checkpoint.ipynb
ImperialCollegeLondon/Random-Stuff
We add a point $(z_4,f_4) = (1,1)$ and obtain $p_4(x)$
z4 = 1; f4 = 1 C = (f4 - p3.subs(x,z4))/((z4-z0)*(z4-z1)*(z4-z2)*(z4-z3)) C p4 = p3 + C*(x-z0)*(x-z1)*(x-z2)*(x-z3) sp.expand(p4)
_____no_output_____
MIT
M2AA3/M2AA3-Polynomials/Lesson 02 - Newton Method/.ipynb_checkpoints/Newton Tableau-checkpoint.ipynb
ImperialCollegeLondon/Random-Stuff
**Remark:** the constant $C$ is usually written as $f[z_0,z_1,z_2,z_3,z_4]$. Moreover, by iteration we have$$p_n(z) = \sum_{i=0}^n f[z_0,...,z_i] \prod_{j=0}^{i-1} (z - z_j)$$ Newton Tableau We look at efficient ways to compute $f[z_0,...,z_n]$, iteratively from $f[z_0,...,z_{n-1}]$ and $f[z_1,...,z_n]$. We may first const...
def product(xs,key,i): #Key: Forward or Backward n = len(xs)-1 l = 1 for j in range(i): if key == 'forward': l *= (x - xs[j]) else: l *= (x - xs[n-j]) return l def newton(xs,ys,key): # Key: Forward or Backward n = len(xs)-1 ...
[0.8414709848078965, 0.9719379013633127, 0.9954079577517649, 0.9092974268256817] [0.3914007496662487, 0.07041016916535667, -0.25833159277824974] [-0.481485870751338, -0.4931126429154095] [-0.011626772164071542]
MIT
M2AA3/M2AA3-Polynomials/Lesson 02 - Newton Method/.ipynb_checkpoints/Newton Tableau-checkpoint.ipynb
ImperialCollegeLondon/Random-Stuff
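The forward tableau condenses into a short divided-differences routine. This is a self-contained sketch (using exact `Fraction` arithmetic and the notebook's four data points), not the notebook's own implementation:

```python
from fractions import Fraction

def divided_differences(zs, fs):
    # In-place tableau: after the final pass, coeffs[i] holds f[z_0, ..., z_i],
    # the top row of the Newton tableau.
    coeffs = [Fraction(f) for f in fs]
    for k in range(1, len(zs)):
        for i in range(len(zs) - 1, k - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (zs[i] - zs[i - k])
    return coeffs

def newton_eval(zs, coeffs, z):
    # Nested (Horner-like) evaluation of the Newton form
    # p(z) = c0 + (z - z0)(c1 + (z - z1)(c2 + ...)).
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (z - zs[i]) + coeffs[i]
    return result

zs = [-1, 0, 2, 5]
fs = [-3, -1, 4, 1]
coeffs = divided_differences(zs, fs)
```

The last coefficient, $f[z_0,z_1,z_2,z_3]$, comes out as $-13/90$, matching the leading coefficient of the notebook's $p_3$.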
INF6804 Computer Vision, Polytechnique Montréal. Example of segmenting an image
import numpy as np import cv2 import os import matplotlib.pyplot as plt
_____no_output_____
MIT
ColorSegmentation.ipynb
gabilodeau/INF6804
Reading an image.
image_name = 'bureau.jpg' if not os.path.exists(image_name): !gdown https://raw.githubusercontent.com/gabilodeau/INF6804/master/images/bureau.jpg image = cv2.imread(image_name) b,g,r = cv2.split(image) #OpenCV lit les images en BGR image = cv2.merge([r,g,b]) #ou image = image[:,:,::-1] plt.figure(figsize = (8,8)) plt...
_____no_output_____
MIT
ColorSegmentation.ipynb
gabilodeau/INF6804
Segmentation with K-means, including the pixel positions in the image. We cluster [R,G,B,X,Y] vectors, so most pixels in a cluster will be connected to each other. Case 1: under-segmentation, not enough clusters
# Number of clusters K = 6 # Stopping criteria; EPS is the displacement of the centers criteresArret = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0) RGBPix = image.reshape((-1,3)) # To add the position: cluster 1-by-5 vectors instead of 1-by-3. RGBetPosPix= np.zeros((len(RGBPix...
_____no_output_____
MIT
ColorSegmentation.ipynb
gabilodeau/INF6804
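Building the [R,G,B,X,Y] vectors can be sketched with `np.indices` on a tiny synthetic image (a hypothetical stand-in for bureau.jpg); `cv2.kmeans` then expects this array as float32:

```python
import numpy as np

# Tiny synthetic image: left half dark, right half bright.
image = np.zeros((8, 8, 3), dtype=np.uint8)
image[:, 4:] = 255

# Stack color and position into one 5-column feature matrix (row-major),
# so clusters can stay spatially compact as well as color-consistent.
h, w = image.shape[:2]
ys, xs = np.indices((h, w))
features = np.concatenate(
    [image.reshape(-1, 3), xs.reshape(-1, 1), ys.reshape(-1, 1)],
    axis=1,
).astype(np.float32)
```

Each row is one pixel: its three color channels followed by its X and Y coordinates.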
Case 2: over-segmentation, too many clusters
K = 100 ret,etiquettes,centres=cv2.kmeans(RGBetPosPix,K,None,criteresArret,1,cv2.KMEANS_RANDOM_CENTERS) centres = np.uint8(centres) centressansPos= np.zeros((K,3),dtype=np.uint8) # For displaying the classes for i in range(0,len(centres)): centressansPos[i]=centres[i][0:3] res = centressansPos[etiquettes] res2 = ...
_____no_output_____
MIT
ColorSegmentation.ipynb
gabilodeau/INF6804
Hard to get a perfect result.
_____no_output_____
MIT
ColorSegmentation.ipynb
gabilodeau/INF6804
Assume the output of running train_mnist.py has been written to a directory named result.
ls result
_____no_output_____
MIT
2/7_chainermn_result.ipynb
rymzt/aaic_gathering
cg.dot is a file describing the network structure in the DOT language, and log is a JSON file recording elapsed time, epoch count, iteration count, accuracy, and so on. cg.dot can be converted to an image file such as PNG with the dot command.
%%bash dot -Tpng result/cg.dot -o result/cg.png
_____no_output_____
MIT
2/7_chainermn_result.ipynb
rymzt/aaic_gathering
Let's display result/cg.png. Here we use a Python script to show it (the eog command or similar is also fine).
import numpy as np from PIL import Image from matplotlib import pylab as plt %matplotlib inline plt.figure(figsize=(8, 8), dpi=400) plt.imshow(np.array(Image.open('result/cg.png')))
_____no_output_____
MIT
2/7_chainermn_result.ipynb
rymzt/aaic_gathering
Let's display result/log, reading the file with a Python script.
import json with open('result/log', 'r') as f: data = json.load(f) print(data)
_____no_output_____
MIT
2/7_chainermn_result.ipynb
rymzt/aaic_gathering
We can see that elapsed_time, iteration, main/loss, main/accuracy, validation/main/loss, and validation/main/accuracy are recorded for each epoch. Here, let's plot validation/main/accuracy per epoch.
x, y = [],[] xlabel, ylabel = 'epoch', 'validation/main/accuracy' for d in data: x.append(d[xlabel]) y.append(d[ylabel]) %matplotlib inline plt.xlabel(xlabel) plt.ylabel(ylabel) plt.plot(x,y)
_____no_output_____
MIT
2/7_chainermn_result.ipynb
rymzt/aaic_gathering
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# Dependencies and Setup import pandas as pd # File to Load (Remember to Change These) file_to_load = "Resources/purchase_data.csv" # Read Purchasing File and store into Pandas data frame purchase_data = pd.read_csv(file_to_load) purchase_data.head()
_____no_output_____
Apache-2.0
HeroesOfPymoli_starter.ipynb
JarrodCasey/pandas-challenge
Player Count * Display the total number of players
# Calculate the number of unique players in the DataFrame PlayerCount = len(purchase_data["SN"].unique()) # Place data found into a DataFrame TotalPlayer_df = pd.DataFrame({"Total Players":[PlayerCount]}) TotalPlayer_df
_____no_output_____
Apache-2.0
HeroesOfPymoli_starter.ipynb
JarrodCasey/pandas-challenge
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
#Calculate number of unique items TotUniqueItems = len(purchase_data["Item Name"].unique()) #Calculate average price AvePrice = purchase_data["Price"].mean() #Calculate Number of Purchases TotalPurch = len(purchase_data["Item Name"]) #Calculate Total Revenue TotRev = purchase_data["Price"].sum() #Create Data Frame...
_____no_output_____
Apache-2.0
HeroesOfPymoli_starter.ipynb
JarrodCasey/pandas-challenge
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
#Calculate No. male players MalePlayer_df = purchase_data[purchase_data['Gender']=="Male"] TotMalePlayers = len(MalePlayer_df["SN"].unique()) #Calculate Percentage of Male Players PercMale = TotMalePlayers/PlayerCount #Calculate No. female players FemalePlayer_df = purchase_data[purchase_data['Gender']=="Female"] TotF...
_____no_output_____
Apache-2.0
HeroesOfPymoli_starter.ipynb
JarrodCasey/pandas-challenge
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
#Calculate No. purchases by females FemalePurch = len(FemalePlayer_df["SN"]) #Calculate Average Purchase Price by female FemaleAvePurch = FemalePlayer_df["Price"].mean() #Calculate Total Purchase Value by Female FemaleTotPurch = FemalePlayer_df["Price"].sum() #Calculate average total purchase per person FemalePPAvePurch...
_____no_output_____
Apache-2.0
HeroesOfPymoli_starter.ipynb
JarrodCasey/pandas-challenge
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
#Create age range bins bins = [0,9,14,19,24,29,34,39,10000] #Create labels for bins group_labels = ["<10","10-14","15-19","20-24","25-29","30-34","35-39","40+"] #Slice the data and place it into bins and Place data series into a new column inside of the DataFrame purchase_data["Age Range"] = pd.cut(purchase_data["Age...
_____no_output_____
Apache-2.0
HeroesOfPymoli_starter.ipynb
JarrodCasey/pandas-challenge
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary d...
#Create a second GroupBy object based on "Age Range" PurchByAgeRg_Group2 = purchase_data.groupby("Age Range")["Price"].agg(["count","mean","sum"]) PurchByAgeRg_Group3 = purchase_data.groupby("Age Range")["SN"].agg(["nunique"]) PurchAnalAge_df= pd.concat([PurchByAgeRg_Group2,PurchByAgeRg_Group3],axis=1,join="inner") Pur...
_____no_output_____
Apache-2.0
HeroesOfPymoli_starter.ipynb
JarrodCasey/pandas-challenge
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
#Create a GroupBy object based on "SN" PurchBySN_gp = purchase_data.groupby("SN")["Price"].agg(["count","mean","sum"]) #Sort the total purchase value column in descending order TotalSpender = PurchBySN_gp.sort_values("sum", ascending=False) #Rename Columns TotalSpender=TotalSpender.rename(columns={"count":"Purchase C...
_____no_output_____
Apache-2.0
HeroesOfPymoli_starter.ipynb
JarrodCasey/pandas-challenge
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give ...
#Create a GroupBy object based on "SN" ItemsAnal_gp = purchase_data.groupby(["Item ID","Item Name"])["Price"].agg(["count","mean","sum"]) #Sort the total purchase value column in descending order ItemsAnalDesc_gp = ItemsAnal_gp.sort_values("count", ascending=False) #Rename Columns ItemsAnalDesc_gp=ItemsAnalDesc_gp.re...
_____no_output_____
Apache-2.0
HeroesOfPymoli_starter.ipynb
JarrodCasey/pandas-challenge
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
#Sort the total purchase value column in descending order ItemsAnalTotDesc_gp = ItemsAnalDesc_gp.sort_values("Total Purchase Value", ascending=False) #Set correct numbering format ItemsAnalTotDesc_gp.style.format({"Item Price":"${:20,.2f}", "Total Purchase Value":"${:20,.2f}"})
_____no_output_____
Apache-2.0
HeroesOfPymoli_starter.ipynb
JarrodCasey/pandas-challenge
Evaluation: To be able to make a statement about the performance of a question-answering system, it is important to evaluate it. Furthermore, evaluation lets us determine which parts of the system can be improved. Start an Elasticsearch server: You can start Elasticsearch on your local machine using Docker. If ...
# Recommended: Start Elasticsearch using Docker ! docker run -d -p 9200:9200 -e "discovery.type=single-node" elasticsearch:7.6.2 # In Colab / No Docker environments: Start Elasticsearch from source #! wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-linux-x86_64.tar.gz -q #! tar -xzf elasti...
05/19/2020 09:03:37 - INFO - elasticsearch - POST http://localhost:9200/_bulk [status:200 request:0.796s] 05/19/2020 09:03:38 - INFO - elasticsearch - POST http://localhost:9200/_bulk [status:200 request:0.222s]
Apache-2.0
tutorials/Tutorial5_Evaluation.ipynb
vchulski/haystack
Initialize components of QA-System
# Initialize Retriever from haystack.retriever.elasticsearch import ElasticsearchRetriever retriever = ElasticsearchRetriever(document_store=document_store) # Initialize Reader from haystack.reader.farm import FARMReader reader = FARMReader("deepset/roberta-base-squad2") # Initialize Finder which sticks together Read...
_____no_output_____
Apache-2.0
tutorials/Tutorial5_Evaluation.ipynb
vchulski/haystack
Evaluation of Retriever
# Evaluate Retriever on its own retriever_eval_results = retriever.eval() ## Retriever Recall is the proportion of questions for which the correct document containing the answer is ## among the correct documents print("Retriever Recall:", retriever_eval_results["recall"]) ## Retriever Mean Avg Precision rewards retrie...
05/19/2020 09:04:11 - INFO - elasticsearch - GET http://localhost:9200/feedback/_search?scroll=5m&size=1000 [status:200 request:0.090s] 05/19/2020 09:04:11 - INFO - elasticsearch - GET http://localhost:9200/eval_document/_search [status:200 request:0.051s] 05/19/2020 09:04:11 - INFO - haystack.retriever.elasticsear...
Apache-2.0
tutorials/Tutorial5_Evaluation.ipynb
vchulski/haystack
Evaluation of Reader
# Evaluate Reader on its own reader_eval_results = reader.eval(document_store=document_store, device=device) # Evaluation of Reader can also be done directly on a SQuAD-formatted file # without passing the data to Elasticsearch #reader_eval_results = reader.eval_on_file("../data/natural_questions", "dev_subset.json",...
05/19/2020 09:04:22 - INFO - elasticsearch - GET http://localhost:9200/feedback/_search?scroll=5m&size=1000 [status:200 request:0.007s] 05/19/2020 09:04:22 - INFO - elasticsearch - GET http://localhost:9200/_search/scroll [status:200 request:0.003s] 05/19/2020 09:04:22 - INFO - elasticsearch - DELETE http://local...
Apache-2.0
tutorials/Tutorial5_Evaluation.ipynb
vchulski/haystack
Evaluation of Finder
# Evaluate combination of Reader and Retriever through Finder finder_eval_results = finder.eval() print("Retriever Recall in Finder:", finder_eval_results["retriever_recall"]) print("Retriever Mean Avg Precision in Finder:", finder_eval_results["retriever_map"]) # Reader is only evaluated with those questions, where ...
05/19/2020 09:04:57 - INFO - elasticsearch - GET http://localhost:9200/feedback/_search?scroll=5m&size=1000 [status:200 request:0.006s] 05/19/2020 09:04:57 - INFO - elasticsearch - GET http://localhost:9200/eval_document/_search [status:200 request:0.006s] 05/19/2020 09:04:57 - INFO - haystack.retriever.elasticsear...
Apache-2.0
tutorials/Tutorial5_Evaluation.ipynb
vchulski/haystack
"US oil & gas production"> "Accessing EIA with Jupyter" In this post, we will plot a chart of (1) US oil production and (2) US gas production by accessing this data from EIA.First, install the python wrapper for Energy Information Administration (EIA) API using your Command Prompt pip install EIA_python
#import customary packages import pandas as pd import matplotlib.pyplot as plt %matplotlib inline #import EIA package import eia
_____no_output_____
Apache-2.0
_notebooks/2021-01-22-US oil & gas production with EIA.ipynb
ujpradhan/blog
Define key using your personal EIA API key: key = eia.API("Personal API KEY")
#hide_input key = eia.API('fae9d0bd7f4172e57a1876b2e5802392') #Let's try querying EIA with our key. EIA has unique series ID for a variety of data. #Browse series ID here: https://www.eia.gov/opendata/qb.php?category=371 #We'll first query for "U.S. Field Production of Crude Oil, Annual" with "PET.MCRFPUS2.A" oil = ke...
_____no_output_____
Apache-2.0
_notebooks/2021-01-22-US oil & gas production with EIA.ipynb
ujpradhan/blog
oneTBB Concurrent Containers. Sections: - [oneTBB Concurrent Containers](oneTBB-Concurrent-Containers) - _Code_: [A Producer-Consumer Application with tbb::concurrent_queue](A-Producer-Consumer-Application-with-tbb::concurrent_queue). Learning Objectives: * Learn how thread-unsafe uses of a standard container might be addr...
%%writefile lab/q-serial.cpp //============================================================== // Copyright (c) 2020 Intel Corporation // // SPDX-License-Identifier: Apache-2.0 // ============================================================= #include <iostream> #include <queue> int main() { int sum (0); int item; ...
_____no_output_____
Apache-2.0
examples/cppcon_2020/04_oneTBB_concurrent_containers/oneTBB_concurrent_containers.ipynb
alvdd/oneTBB
Build and Run the baselineSelect the cell below and click Run ▶ to compile and execute the code above:
! chmod 755 q; chmod 755 ./scripts/run_q-serial.sh; if [ -x "$(command -v qsub)" ]; then ./q scripts/run_q-serial.sh; else ./scripts/run_q-serial.sh; fi
_____no_output_____
Apache-2.0
examples/cppcon_2020/04_oneTBB_concurrent_containers/oneTBB_concurrent_containers.ipynb
alvdd/oneTBB
Implement a parallel version with tbb::concurrent_priority_queue. In this section, we modify the example to create one producer thread and two consumer threads that run concurrently. To eliminate the potential race on the call to `empty` and `pop`, we replace the `std::priority_queue` with `tbb::concurrent_priority_queue`...
%%writefile lab/q-parallel.cpp //============================================================== // Copyright (c) 2020 Intel Corporation // // SPDX-License-Identifier: Apache-2.0 // ============================================================= #include <iostream> #include <queue> #include <thread> #include <tbb/tbb.h> ...
_____no_output_____
Apache-2.0
examples/cppcon_2020/04_oneTBB_concurrent_containers/oneTBB_concurrent_containers.ipynb
alvdd/oneTBB
Build and Run the modified codeSelect the cell below and click Run ▶ to compile and execute the code that you modified above:
! chmod 755 q; chmod 755 ./scripts/run_q-parallel.sh; if [ -x "$(command -v qsub)" ]; then ./q scripts/run_q-parallel.sh; else ./scripts/run_q-parallel.sh; fi
_____no_output_____
Apache-2.0
examples/cppcon_2020/04_oneTBB_concurrent_containers/oneTBB_concurrent_containers.ipynb
alvdd/oneTBB
Producer-Consumer Solution (Don't peek unless you have to)
%%writefile solutions/q-parallel.cpp //============================================================== // Copyright (c) 2020 Intel Corporation // // SPDX-License-Identifier: Apache-2.0 // ============================================================= #include <iostream> #include <queue> #include <thread> #include <tbb/t...
_____no_output_____
Apache-2.0
examples/cppcon_2020/04_oneTBB_concurrent_containers/oneTBB_concurrent_containers.ipynb
alvdd/oneTBB
Question 1: Given some sample data, write a program to answer the following: On Shopify, we have exactly 100 sneaker shops, and each of these shops sells only one model of shoe. We want to do some analysis of the average order value (AOV). When we look at orders data over a 30 day window, we naively calculate an AOV o...
import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt # Load dataset shopify_ds = pd.read_csv("2019 Winter Data Science Intern Challenge Data Set - Sheet1.csv", parse_dates = ['created_at']) shopify_ds shopify_ds.head()
_____no_output_____
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
Exploratory Data Analysis and Descriptive Statistics
shopify_ds.describe()
_____no_output_____
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
- Mean order value ($3,145.12) is extremely high for a sneaker store, even if the most expensive sneakers from the most sought-after brands were being sold.
shopify_ds.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 5000 entries, 0 to 4999 Data columns (total 7 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 order_id 5000 non-null int64 1 shop_id 5000 non-null int64 2 u...
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
- No null values in the dataset!
shopify_ds['shop_id'].nunique() fig = plt.figure(figsize = (10, 10)) sns.boxplot(shopify_ds['order_amount']);
_____no_output_____
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
- Looks like there are a lot of outliers affecting the average order value. Let's find them.
data = shopify_ds['order_amount'] # Function to Detect Outliers def find_anomalies(data): #define a list to accumulate anomalies anomalies = [] # Set upper and lower limit to 3 standard deviations data_std = np.std(data) data_mean = np.mean(data) anomaly_cut_off = data_std * 3 lower...
Lower Limit: -120690.10466525992 Upper Limit: 126980.36066525991 Outliers: [704000, 704000, 704000, 154350, 704000, 704000, 704000, 704000, 704000, 704000, 704000, 704000, 704000, 704000, 704000, 704000, 704000, 704000]
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
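The three-sigma cut used above can be sketched compactly on made-up numbers, with a single huge bulk order among modest ones:

```python
import numpy as np

# Made-up order amounts: twenty modest orders plus one huge bulk order,
# mimicking the $704,000 orders in the real data.
orders = np.array([100.0] * 20 + [704000.0])

# Flag anything beyond three standard deviations from the mean.
mean, std = orders.mean(), orders.std()
lower, upper = mean - 3 * std, mean + 3 * std
outliers = orders[(orders < lower) | (orders > upper)]
```

Note that extreme values inflate the standard deviation itself, so with enough of them the same orders can slip back inside the cutoff.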
- Any value beyond 3 standard deviations from the mean is an outlier. Above we have the list of order amounts that fall in the outlier region; because of them, the AOV is high for an affordable product like shoes.
shopify_ds.loc[shopify_ds['order_amount'].isin(outliers)]
_____no_output_____
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
Interesting findings here. - All of the outlier amounts are transactions by two specific users. - They are buying shoes in bulk: 2000 pairs per order by user_id 607 and 6 pairs by user_id 878, which shoots up the AOV. - We can still consider user id 878's order as legitimate as it is just one transaction...
shopify_ds_user_amount = pd.DataFrame({'mean_amount': shopify_ds.groupby('user_id')['order_amount'].mean()}).reset_index() shopify_ds_user_amount.sort_values(by = 'mean_amount', ascending = False).head(30)
_____no_output_____
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
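The sorted means above show why the average is misleading here: the mean is not robust to a single bulk order, while the median barely moves. A toy illustration (the numbers are invented):

```python
from statistics import mean, median

# toy order amounts: four ordinary orders plus one bulk order
orders = [150, 160, 140, 155, 704000]
print(mean(orders))    # 140921 -- dragged far above any typical order
print(median(orders))  # 155 -- barely affected by the bulk order
```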
- A massive mean amount of $704,000 from just one user, paid by credit card. That is definitely a case of credit card fraud.
- The second and third highest spenders, users 878 and 766, are also suspicious; we will take a look at them later.
shopify_ds[shopify_ds['user_id'] == 607]
_____no_output_____
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
- User ID 607 made 17 transactions, each time purchasing 2000 pairs of shoes from shop id 42 for $704,000 per order; that is a total expenditure of about $11.97m, all on a credit card. Fraud alert. Let's remove user id 607 and plot again.
subset_df = shopify_ds_user_amount[shopify_ds_user_amount['user_id'] != 607]
subset_df.head()
fig = plt.figure(figsize = (20, 10))
plt.bar(subset_df['user_id'], subset_df['mean_amount']);

# Plot for users with mean amount greater than $2000
subset_df = subset_df[subset_df['mean_amount'] > 2000]
fig = plt.figure(figsiz...
_____no_output_____
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
- User ids 878 and 766's spending amounts stand out in this graph. Let's check them.
subset_df[subset_df['user_id'] == 878]
shopify_ds[shopify_ds['user_id'] == 878]
_____no_output_____
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
- User id 878 has a very high one-time spend, made with a debit card at shop id 78.
shopify_ds[shopify_ds['user_id'] == 766]
_____no_output_____
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
- User 766 also has a high one-time spend at shop id 78.
- Shop id 78 clearly looks suspicious; let us check.
shopify_ds[shopify_ds['shop_id'] == 78]
shopify_ds_shop = pd.DataFrame({'mean_amount': shopify_ds.groupby('shop_id')['order_amount'].mean()}).reset_index()
shopify_ds_shop
fig = plt.figure(figsize = (30, 15))
sns.barplot(x = 'shop_id', y = 'mean_amount', data = shopify_ds_shop);
_____no_output_____
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
- Another case of fraud: shop id 42, along with 78.
shopify_ds[shopify_ds['shop_id'] == 42]
_____no_output_____
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
- Let's remove the 2 fraud cases: user id 607 and shop id 78 and observe the data again.
clean_df = shopify_ds[shopify_ds['user_id'] != 607]
clean_df = clean_df[clean_df['shop_id'] != 78]
clean_df
clean_df.describe()
_____no_output_____
MIT
Shopify-DataScience-Intern-Challenge-S2022.ipynb
aniruddhashinde29/Shopify-Data-Science-Intern-Summer-2022-Challenge
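The same two-filter pattern in miniature, on an invented five-row frame, shows the mean returning to the range of an ordinary shoe order once the fraud rows are gone:

```python
import pandas as pd

# miniature stand-in for shopify_ds (invented rows)
df = pd.DataFrame({
    'user_id':      [607, 878, 1, 2, 3],
    'shop_id':      [42, 78, 5, 6, 7],
    'order_amount': [704000, 25725, 150, 160, 140],
})

# same two filters as above: drop the bulk buyer and the suspicious shop
clean = df[(df['user_id'] != 607) & (df['shop_id'] != 78)]
print(clean['order_amount'].mean())  # 150.0
```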
CLASSES

1 Object-oriented programming

We now turn our attention to **object-oriented programming**. OOP is a programming paradigm based on the concept of **objects**, which contain
* **attributes (data)** and
* the **methods (operations)** that operate on those attributes.

>Formally, an object is a collection of ...
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def myfunc(self):
        print("Hello my name is " + self.name)

p1 = Person("Li", 56)
p1.myfunc()
p2 = p1
p2.age
p2.age = 44
p1.age
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
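The last lines of the cell above (`p2 = p1`, then `p2.age = 44`) demonstrate aliasing: assignment binds a second name to the same object, so the change is visible through `p1` too. As a self-contained sketch:

```python
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

p1 = Person("Li", 56)
p2 = p1          # p2 is an alias, not a copy
p2.age = 44      # mutates the one shared object
print(p1.age)    # 44
print(p1 is p2)  # True -- both names refer to the same object
```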
2.1 Create an `object` of `class` type

Use the `class` keyword to define a new type class: `Person`
* a subclass of `object`

```python
class Person:
```
print(type(Person))
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
2.2 Creates a set of `attributes` and `methods`

A class contains
* **attributes**
  * think of attributes as other objects that make up the class
* **methods**
  * think of methods as functions that only work with this class
  * how to interact with the object

**Access any attribute**

The dot **“`.`”** operator is used t...
p1.myfunc()
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
3 The Magic Method `__str__`

Add the magic method `__str__` to the class `Person`.

>**Magic Method:**
>
>One of the design goals for Python was to allow programmers to use classes to define new types that are as easy to **use as the `built-in` types** of Python.
>
>Using magic methods to provide **class-specific definition...
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def myfunc(self):
        print("Hello my name is " + self.name)

    def __str__(self):
        """Returns a string representation of Person"""
        return f"The age of {self.name} is {self.age}"
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
3.1 The `print` command

The **`__str__`** method associated with the object being `printed` is **automatically invoked**.
p1 = Person("zhang shan", 21)
# p1.__str__()
print(p1)
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
3.2 Calling `str`

The `__str__` method is automatically invoked to convert an instance of that class to a string.
str(p1)
p1.__str__()
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
3.3 Built-in `__str__`
* list, dict, tuple
l = [1, 2, 3]
print(l)
str(l)
l.__str__()
d = {'a': 1, 'b': 2}
print(d)
str(d)
d.__str__()
t = ('a', 1, 'c')
print(t)
str(t)
t.__str__()
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
4 Inheritance

**Inheritance** provides a convenient mechanism for building **groups of `related` abstractions**. It allows programmers to create a type hierarchy in which each type inherits attributes from the types above it in the hierarchy.

```python
class subclass(superclass):
```

4.1 The class `Student`

We shall define ...
class Student(Person):
    next_id_num = 0  # identification number

    def __init__(self, name, age):
        super().__init__(name, age)
        self.id_num = Student.next_id_num
        Student.next_id_num += 1

    def __str__(self):
        """Returns a string representation of Student"""
        return ...
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
The subclass Student adds **new** attributes:
* **the class variable (类变量)**:
  * `next_id_num` belongs to the class `Student`, rather than to instances of the class.
    >* belongs to the class
    >* shared by all instances of the class
* **the instance variable (实例变量)**:
  * `id_num`: the id of each Student instance

override met...
s1 = Student(name="Li Shi", age=22)
print(s1)
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
**Class variable**
* belongs to the class
* shared by all instances of the class
print(Student.next_id_num)  # belongs to the class
print(s1.next_id_num)       # shared by all instances of the class
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
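The same sharing can be seen with a tiny class of its own (the name `Counter` is invented for this sketch): one class-level slot, read through the class or through any instance.

```python
class Counter:
    count = 0  # class variable: one slot shared by all instances

    def __init__(self):
        Counter.count += 1  # updated on the class, visible to every instance

a = Counter()
b = Counter()
print(Counter.count)      # 2
print(a.count, b.count)   # both read the shared class variable: 2 2
```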
**s2 = Student(name="wang yi", age=20)**
s2 = Student("wang yi", 20)
print(s2)
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
5 Private Variables and Methods in Python

Python does not have keywords for access control. In other words,
* all attributes and methods are PUBLIC by default in Python.

By convention,
* names that begin with **double underscores (`__`)** and do **not end with double underscores** are further hidden from direct access.
class Student(Person):
    next_id_num = 0  # identification number

    def __init__(self, name, age):
        super().__init__(name, age)
        self.id_num = Student.next_id_num
        Student.next_id_num += 1

    def add_grades(self, grades):
        self.grades = grades
        self.avg_grade = self.__ge...
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
**Public**
s3.add_grades([100, 90, 80, 50])
s3.avg_grade
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
**Private**
s3.__get_average_grade()
s3.__min_grade
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
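The failures above come from name mangling: inside a class body, an attribute written `__name` is rewritten to `_ClassName__name`, so plain outside access fails. A minimal sketch with an invented class:

```python
class Secretive:
    def __init__(self):
        self.__hidden = 42  # stored on the instance as _Secretive__hidden

s = Secretive()
try:
    s.__hidden  # no mangling outside the class body, so this name does not exist
except AttributeError:
    print("direct access fails")
print(s._Secretive__hidden)  # the mangled name still works: 42
```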
6 The UML class diagram

The [Unified Modeling Language (UML, 统一建模语言)](https://en.wikipedia.org/wiki/Unified_Modeling_Language) is a general-purpose, developmental modeling language in the field of software engineering that is intended to provide a standard way to visualize the design of a system.

[A class diagram](https:/...
import iplantuml

%%plantuml
class Person {
  + name: str
  + age: int
  + {static} Person(name:str, age:int)
  + myfunc()
  + __str__(): str
}
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
6.2 The UML class Inheritance

6.2.1 The class-level (类) relationship: Inheritance

If two classes are in an **inheritance** relation,
* the subclass inherits its attributes and methods from the superclass.

The UML graphical representation of **an inheritance relation** is **a hollow triangle shape** on the **superclass en...
%%plantuml
class Person {
  + name: str
  + age: int
  + {static} Person(name:str, age:int)
  + myfunc()
  + __str__(): str
}
class Student {
  + {static} next_id_num: int
  + id_num: int
  + grades: float [1..*]
  + avg_grade: float
  - __min_grade: float
  + {static} Student(name:str, age:int)
  + add_grades(grades:float [1...
_____no_output_____
MIT
notebook/Unit3-1-Classes.ipynb
Py03013050/Home-Py03013050
**Instructions:**
1. Rename the notebook (click on the title at the top of the page next to 'jupyter') to your A number, name and HW2Py: example (replace spaces with underscores -- this seems to work better): "Reid-Otsuji-A1234567-HW2Py"
2. Add your code or markdown answers under the question.
3. `File>Download as` a '...
me = ["reid otsuji", 'blue', 'a23456789']
_____no_output_____
CC-BY-4.0
Homework/2018-hw2-python-solutions.ipynb
U2NG/win2020-gps-python
Print just your name and student ID. You can use a slicing operator or a loop.
me[0::2]
_____no_output_____
CC-BY-4.0
Homework/2018-hw2-python-solutions.ipynb
U2NG/win2020-gps-python
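The question also allows a loop; a loop-based equivalent of the `me[0::2]` slice (keeping the elements at even indices) might look like:

```python
me = ["reid otsuji", "blue", "a23456789"]
# loop-based equivalent of me[0::2]: keep elements at even indices
picked = [item for i, item in enumerate(me) if i % 2 == 0]
print(picked)  # ['reid otsuji', 'a23456789']
```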
Using the join method, join the list into a string separated by a comma
print('join the list:', ','.join(me))
join the list: reid otsuji,blue,a23456789
CC-BY-4.0
Homework/2018-hw2-python-solutions.ipynb
U2NG/win2020-gps-python
Writing a loop to generate a list of letters. Use a for-loop to convert the string "global" into a list of letters: `["g", "l", "o", "b", "a", "l"]`

Hint: You can create an empty list like this:
my_list = []
for char in "global":
    my_list.append(char)
print(my_list)
['g', 'l', 'o', 'b', 'a', 'l']
CC-BY-4.0
Homework/2018-hw2-python-solutions.ipynb
U2NG/win2020-gps-python
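As an aside, the same list falls out of the `list` constructor, which iterates the string character by character (not the for-loop the exercise asks for):

```python
letters = list("global")  # the constructor iterates the string for you
print(letters)  # ['g', 'l', 'o', 'b', 'a', 'l']
```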
Write a program that reads in the regional gapminder data sets and plots the average GDP per capita for each region over time in a single chart.
%matplotlib inline
import matplotlib.pyplot as plt
import glob
import pandas
_____no_output_____
CC-BY-4.0
Homework/2018-hw2-python-solutions.ipynb
U2NG/win2020-gps-python
Fill in the blanks for the following code. Remember you want to:
1. Loop through a list of the continent gdp files in the data/ directory
2. Read each of those files in as data.frames
3. Get the mean for each
4. Plot each
5. Add a legend for each. Notice we are doing this by using a list of the continents in alpha order

Note:...
for filename in glob.____('data/______.csv'):
    df = pandas._______(filename)
    cont = df.____()
    cont.____()
plt.____(rotation=90)
plt.____(['Africa', 'Asia', 'Americas', 'Europe', 'Oceania'])
plt.style.use('seaborn-muted')

for filename in glob.glob('data/gapminder_gdp*.csv'):
    df = pandas.read_c...
_____no_output_____
CC-BY-4.0
Homework/2018-hw2-python-solutions.ipynb
U2NG/win2020-gps-python
Encapsulating Data Analysis

Run the following code. This code reads in the gapminder file for Asia and then subsets the Japan data into a new dataframe called `japan`. Remember: use the correct file path for the .csv saved on your computer.
import pandas
df = pandas.read_csv('data/gapminder_gdp_asia.csv', index_col=0)  # note: use the file path for your saved data location
japan = df.loc['Japan']
japan.tail(6)
_____no_output_____
CC-BY-4.0
Homework/2018-hw2-python-solutions.ipynb
U2NG/win2020-gps-python
Using the japan data frame we created above, the below code is one way to get the average for Japan in the 80s. We create a year base (198) using floor division and then add the strings 2 and 7 to that to index for the 1982 and 1987 years (the two years we have in the 80s). We add those together and then divide by 2 t...
year = 1983
gdp_decade = 'gdpPercap_' + str(year // 10)  # `//` is floor division, giving us 198
avg = (japan.loc[gdp_decade + '2'] + japan.loc[gdp_decade + '7']) / 2  # we want to add 1982 and 1987 and divide by 2
print(avg)
20880.0238
CC-BY-4.0
Homework/2018-hw2-python-solutions.ipynb
U2NG/win2020-gps-python
Given the code above, abstract the code into a function named `avg_gdp_in_decade` that will take a `country`, `continent` and `year` as parameters and return the average. We should be able to call your function like this:

```{python}
avg_gdp_in_decade('Algeria', 'Africa', 1970)  # 1970 for the 70s, 1980 would be 80s
```

I sta...
def avg_gdp_in_decade(country, continent, year):
    # read the data in
    # create a new data.frame for the country
    # get the decade base: three digits, e.g. 198 for the 80s
    # subset and calculate avg
    # return average
    pass  # `pass` tells python to skip the function body; remove it when you want to run your function
_____no_output_____
CC-BY-4.0
Homework/2018-hw2-python-solutions.ipynb
U2NG/win2020-gps-python
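One way the skeleton might be completed, as a sketch only: here the file read is replaced by an in-memory frame so the example runs standalone, whereas the exercise's version would first call `pandas.read_csv` on the continent's CSV (e.g. `data/gapminder_gdp_africa.csv`). The `toy` frame and its numbers are invented:

```python
import pandas as pd

def avg_gdp_in_decade(country, df, year):
    """Average of the decade's two sampled years (...2 and ...7) for `country`.

    `df` is passed in directly so the sketch is self-contained; the real
    version would build it from the continent's gapminder CSV instead.
    """
    row = df.loc[country]
    gdp_decade = 'gdpPercap_' + str(year // 10)  # e.g. 1983 -> 'gdpPercap_198'
    return (row[gdp_decade + '2'] + row[gdp_decade + '7']) / 2

# invented stand-in for the real gapminder CSV
toy = pd.DataFrame({'gdpPercap_1982': [19000.0], 'gdpPercap_1987': [22000.0]},
                   index=['Japan'])
print(avg_gdp_in_decade('Japan', toy, 1983))  # (19000 + 22000) / 2 = 20500.0
```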
Syntax analysis with [Deplacy](https://koichiyasuoka.github.io/deplacy/) using [Camphr-Udify](https://camphr.readthedocs.io/en/latest/notes/udify.html)
!pip install deplacy camphr 'unofficial-udify>=0.3.0' en-udify@https://github.com/PKSHATechnology-Research/camphr_models/releases/download/0.7.0/en_udify-0.7.tar.gz
import pkg_resources,imp
imp.reload(pkg_resources)
import spacy
nlp=spacy.load("en_udify")
doc=nlp("Er sieht sehr jung aus.")
import deplacy
deplacy.render...
_____no_output_____
MIT
doc/de.ipynb
kyodocn/deplacy
with [Stanza](https://stanfordnlp.github.io/stanza)
!pip install deplacy stanza
import stanza
stanza.download("de")
nlp=stanza.Pipeline("de")
doc=nlp("Er sieht sehr jung aus.")
import deplacy
deplacy.render(doc)
deplacy.serve(doc,port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
_____no_output_____
MIT
doc/de.ipynb
kyodocn/deplacy
with [COMBO-pytorch](https://gitlab.clarin-pl.eu/syntactic-tools/combo)
!pip install --index-url https://pypi.clarin-pl.eu/simple deplacy combo
import combo.predict
nlp=combo.predict.COMBO.from_pretrained("german-ud27")
doc=nlp("Er sieht sehr jung aus.")
import deplacy
deplacy.render(doc)
deplacy.serve(doc,port=None)
# import graphviz
# graphviz.Source(deplacy.dot(doc))
_____no_output_____
MIT
doc/de.ipynb
kyodocn/deplacy
with [UDPipe 2](http://ufal.mff.cuni.cz/udpipe/2)
!pip install deplacy
def nlp(t):
  import urllib.request,urllib.parse,json
  with urllib.request.urlopen("https://lindat.mff.cuni.cz/services/udpipe/api/process?model=de&tokenizer&tagger&parser&data="+urllib.parse.quote(t)) as r:
    return json.loads(r.read())["result"]
doc=nlp("Er sieht sehr jung aus.")
import deplac...
_____no_output_____
MIT
doc/de.ipynb
kyodocn/deplacy