5.3) Predicting Comments for a Specific Factor
comments_ = []
for i in df['views']:
    comments_.append(int(i * .01388))
_____no_output_____
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
6. Combining Factor + Error + Ratios
comments = np.array(df['comments'])
error = []
for i in tqdm(range(st, end + 1, 1)):  # creating the start-to-end range of factors
    factor = i/100000
    comments_ = []
    for i in df['views']:  # predicting comments for the current factor
        comments_...
100%|██████████████████████████████████████| 5416/5416 [00:07<00:00, 770.16it/s]
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
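The grid-search cell above is truncated; here is a minimal, self-contained sketch of the same idea, assuming the notebook's `df` with `views` and `comments` columns. The `st`/`end` bounds and the mean-absolute-error metric are assumptions; the `Factor`/`Error` column names match the `error` DataFrame used in the next cell.

```python
import numpy as np
import pandas as pd
from tqdm import tqdm

st, end = 1000, 2000  # hypothetical bounds; factor = i / 100000

rows = []
for i in tqdm(range(st, end + 1)):
    factor = i / 100000
    pred = (df['views'] * factor).astype(int)   # predicted comments for this factor
    mae = np.abs(pred - df['comments']).mean()  # assumed error metric
    rows.append([factor, mae])

error = pd.DataFrame(rows, columns=['Factor', 'Error'])
error.sort_values(by='Error').head()
```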
Finding the Best Factor that Fits the Comments and Views
final_factor = error.sort_values(by='Error').head(10)['Factor'].mean()
final_factor

comments_ = []
for i in df['views']:
    comments_.append(int(i * final_factor))

df['pred_comments'] = comments_
df.head()
_____no_output_____
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
Actual vs. Predicted Comments with the Best-Fit Factor
data = []
for i in df.values:
    data.append([i[2], i[4], i[10]])
df_ = pd.DataFrame(data, columns=['views', 'comments', 'pred_comments'])

views = list(df_.sort_values(by='views')['views'])
likes = list(df_.sort_values(by='views')['comments'])
likes_ = list(df_.sort_values(by='views')['pred_comments...
_____no_output_____
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
Multicollinearity and Regression Analysis

In this tutorial, we will be using a spatial dataset of county-level election and demographic statistics for the United States. This time, we'll explore different methods to diagnose and account for multicollinearity in our data. Specifically, we'll calculate variance inflation...
import numpy as np
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedKFo...
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
First, we're going to load the 'Elections' dataset from the libpysal library, which is an easy-to-use API that accesses the GeoData Center at the University of Chicago.
* More on spatial data science resources from UC: https://spatial.uchicago.edu/
* A list of datasets available through libpysal: https://geodacenter.g...
from libpysal.examples import load_example

elections = load_example('Elections')
# note the folder where your data now lives:
# First, let's see what files are available in the 'Elections' data example
elections.get_file_list()
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
When you are out in the world doing research, you often will not find a ready-made function to download your data. That's okay! You know how to get this dataset without using pysal! Do a quick review of online data formats and automatic data downloads.

TASK 1: Use urllib functions to download this file directl...
# Task 1 code here:

# import required function:
import urllib.request

# define online filepath (aka url):
url = "https://geodacenter.github.io/data-and-lab//data/election.zip"
# define local filepath:
local = '../../elections.zip'
# download elections data:
urllib.request.urlretrieve(url, local)
# unzip file: see if goo...
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
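The unzip step in Task 1 is cut off above; a minimal sketch of the whole download-and-extract flow, assuming the same URL (the local paths here are illustrative):

```python
import urllib.request
import zipfile

url = "https://geodacenter.github.io/data-and-lab//data/election.zip"
local = "elections.zip"  # illustrative local path

# download the archive:
urllib.request.urlretrieve(url, local)

# extract the shapefile components next to it:
with zipfile.ZipFile(local) as zf:
    zf.extractall("elections")
```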
TASK 2: Use geopandas to read in this shapefile. Call your geopandas.DataFrame "votes"
# TASK 2: Use geopandas to read in this shapefile. Call your geopandas.DataFrame "votes"
votes = gpd.read_file(r"H:\EnvDataSci\election\election.shp")
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
EXTRA CREDIT TASK (+2pts): use os to delete the elections data downloaded by pysal in your C: drive that you are no longer using.
# Extra credit task:

# Let's view the shapefile to get a general idea of the geometry we're looking at:
%matplotlib inline
votes.plot()

# View the first few lines of the dataset
votes.head()

# Since there are too many columns for us to view on a single page using "head",
# we can just print out the column names so we have...
STATEFP COUNTYFP GEOID ALAND AWATER area_name state_abbr PST045214 PST040210 PST120214 POP010210 AGE135214 AGE295214 AGE775214 SEX255214 RHI125214 RHI225214 RHI325214 RHI425214 RHI525214 RHI625214 RHI725214 RHI825214 POP715213 POP645213 POP815213 EDU635213 EDU685213 VET605213 LFE305213 HSG010214 HSG445213 HSG096213 HSG...
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
You can use pandas summary statistics to get an idea of how county-level data varies across the United States.

TASK 3: For example, how did the county mean percent Democratic vote change between 2012 (pct_dem_12) and 2016 (pct_dem_16)?

Look here for more info on pandas summary statistics: https://www.earthdatascience.o...
# Task 3
demchange = votes["pct_dem_16"].mean() - votes["pct_dem_12"].mean()
print("The mean percent Democratic vote changed by", demchange, "between 2012 and 2016.")
The mean percent Democratic vote changed by -0.06783446699806961 between 2012 and 2016.
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
We can also plot histograms of the data. Below, smoothed histograms from the seaborn package (imported as sns) let us get an idea of the distribution of percent democratic votes in 2012 (left) and 2016 (right).
# Plot histograms:
f, ax = plt.subplots(1, 2, figsize=(2*3*1.6, 2))
for i, col in enumerate(['pct_dem_12', 'pct_dem_16']):
    sns.kdeplot(votes[col].values, shade=True, color='slategrey', ax=ax[i])
    ax[i].set_title(col.split('_')[1])

# Plot spatial distribution of dem vote in 2012 and 2016 with histogram.
f, ax = plt....
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
TASK 4: Make a new column on your geopandas dataframe called "pct_dem_change" and plot it using the syntax above. Explain the plot.
# Task 4: add new column pct_dem_change to votes:
votes["pct_dem_change"] = votes.pct_dem_16 - votes.pct_dem_12

# Task 4: plot your pct_dem_change variable on a map:
votes.plot(column="pct_dem_change", legend=True)
plt.show()
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
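The quick map above works, but since `pct_dem_change` is a signed quantity, a diverging colormap centered on zero makes gains and losses easier to compare; a sketch of that refinement (the colormap choice is an assumption):

```python
import matplotlib.pyplot as plt

# center a diverging colormap on zero so gains and losses are visually symmetric:
lim = votes["pct_dem_change"].abs().max()
ax = votes.plot(column="pct_dem_change", cmap="RdBu", vmin=-lim, vmax=lim,
                legend=True, figsize=(12, 6))
ax.set_title("Change in percent Democratic vote, 2012 to 2016")
plt.show()
```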
Click on this url to learn more about the variables in this dataset: https://geodacenter.github.io/data-and-lab//county_election_2012_2016-variables/

As you can see, there are a lot of data values available in this dataset. Let's say we want to learn more about what county-level factors influence percent change in democ...
# Task 4: create a subset of votes called "my_list" with all your subset variables.
#my_list = ["pct_pt_16", <list your variables here>]

# check to make sure all your columns are there:
votes[my_list].head()
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
Scatterplot matrix

We call the process of getting to know your data (ranges and distributions of the data, as well as any relationships between variables) "exploratory data analysis". Pairwise plots of your variables, called scatterplots, can provide a lot of insight into the type of relationships you have between vari...
# Use seaborn.pairplot to plot a scatterplot matrix of your 10-variable subset:
sns.pairplot(votes[my_list])
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
TASK 7: Do you observe any collinearity in this dataset? How would you describe the relationship between your two "incidentally collinear" variables that you selected based on looking at variable descriptions? *Type answer here*

TASK 8: What is plotted on the diagonal panels of the scatterplot matrix? *Type answer here...
# VIF = 1/(1-R2), where R2 comes from an OLS regression of one predictor on all the other predictors
# We can use the built-in function "variance_inflation_factor" from statsmodels to calculate VIF

# Learn more about the function
?variance_inflation_factor

# Calculate VIFs on our dataset
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflat...
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
Collinearity is always present in observational data. When is it a problem?

Generally speaking, VIFs > 10 are considered "too much" collinearity. But this threshold is somewhat arbitrary: the extent to which variance inflation will impact your analysis is highly context dependent. There are two primary contexts where varian...
# first, formulate the model. See weather_trend.py in "Git_101" for a refresher on how.

# extract the variables that you want to use to "predict":
X = np.array(votes[my_list[1:10]].values)
# standardize data to assist in interpretation of coefficients:
X = (X - np.mean(X, axis=0)) / np.std(X, axis=0)
# extract variable that we...
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
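The model cell above is truncated before the response variable and the fit; a minimal sketch of the complete pattern with statsmodels, assuming `my_list[0]` holds the response column:

```python
import numpy as np
import statsmodels.api as sm

# standardized predictors (as above):
X = np.array(votes[my_list[1:10]].values)
X = (X - np.mean(X, axis=0)) / np.std(X, axis=0)

# response, assumed to be the first entry of my_list:
Y = np.array(votes[my_list[0]].values)

# add an intercept and fit ordinary least squares:
results = sm.OLS(Y, sm.add_constant(X)).fit()
print(results.summary())
```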
TASK 9: Which coefficients indicate a statistically significant relationship between parameter and pct_dem_change? What is your most important predictor variable? How can you tell? *Type answer here*

TASK 10: Are any of these parameters subject to variance inflation? How can you tell? *Type answer here*

Now, let...
# Add model residuals to our "votes" geopandas dataframe:
votes['lm_resid'] = sm.OLS(Y, X).fit().resid
sns.kdeplot(votes['lm_resid'].values, shade=True, color='slategrey')
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
TASK 11: Are our residuals normally distributed with a mean of zero? What does that mean? *Type answer here*

Penalized regression: ridge penalty

In penalized regression, we intentionally bias the parameter estimates to stabilize them given collinearity in the dataset. From https://www.analyticsvidhya.com/blog/2016/01/ri...
from sklearn.linear_model import Ridge

# when L2=0, Ridge equals OLS
model = Ridge(alpha=1)
# define model evaluation method
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(model, X, Y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)
# force scores to be positive
scores = np.absolute(scores)
print('Mea...
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
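To see the stabilizing effect described above, it can help to trace how the ridge coefficients shrink as the penalty grows; a small sketch along those lines, reusing the standardized `X` and `Y` (the alpha grid is arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge

alphas = np.logspace(-2, 4, 50)  # arbitrary penalty grid
coefs = [Ridge(alpha=a).fit(X, Y).coef_ for a in alphas]

plt.plot(alphas, coefs)
plt.xscale('log')
plt.xlabel('alpha (L2 penalty)')
plt.ylabel('standardized coefficient')
plt.title('Ridge coefficient paths')
plt.show()
```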
Penalized regression: lasso penalty

From https://www.analyticsvidhya.com/blog/2016/01/ridge-lasso-regression-python-complete-tutorial/: "LASSO stands for Least Absolute Shrinkage and Selection Operator. I know it doesn't give much of an idea but there are 2 key words here - 'absolute' and 'selection'. Lets consider the fo...
from sklearn.linear_model import Lasso

# when L1=0, Lasso equals OLS
model = Lasso(alpha=0)
# define model evaluation method
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(model, X, Y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)
# force scores to be positive
scores = np.absolute(scores)
print('Mea...
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
Penalized regression: elastic net penalty

In other words, the lasso penalty shrinks unimportant coefficients down towards zero, automatically "selecting" important predictor variables. The ridge penalty shrinks coefficients of collinear predictor variables nearer to each other, effectively partitioning the magnitude of...
from sklearn.linear_model import ElasticNet

# as L1 grows, certain coefficients become exactly zero and the model
# collapses toward predicting the mean of the response:
model = ElasticNet(alpha=1, l1_ratio=0.2)
# define model evaluation method
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(model, X, ...
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
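The task below asks you to match the elastic net coefficients back to your variables; one way to do that, sketched under the assumption that the fitted predictors correspond to `my_list[1:10]`:

```python
import pandas as pd
from sklearn.linear_model import ElasticNet

enet = ElasticNet(alpha=1, l1_ratio=0.2).fit(X, Y)
coef_table = pd.DataFrame({'variable': my_list[1:10], 'coef': enet.coef_})

# variables the L1 part of the penalty did not zero out:
print(coef_table[coef_table['coef'] != 0].sort_values('coef'))
```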
TASK 12: Match these elastic net coefficients up with your original data. Do you see a logical grouping(s) between variables that have non-zero coefficients? Explain why or why not. *Type answer here*
# Task 12 scratch cell:
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
Test submission of a personal tax return with business specification (næringsspesifikasjon)

This demo is meant to show how the flow for an end-user system can fetch a draft, make changes, and validate/check it against Skatteetaten's APIs, before submitting it via Altinn3.
try:
    from altinn3 import *
    from skatteetaten_api import main_relay, base64_decode_response, decode_dokument
    import requests
    import base64
    import xmltodict
    import xml.dom.minidom
    from pathlib import Path
except ImportError as e:
    print("One or more dependencies are missing; install them via pip,...
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Generate an ID-porten token

The token is valid for 300 seconds; re-run this cell if you have not reached the Altinn3 part within 300 seconds.
idporten_header = main_relay()
https://oidc-ver2.difi.no/idporten-oidc-provider/authorize?scope=skatteetaten%3Aformueinntekt%2Fskattemelding%20openid&acr_values=Level3&client_id=8d7adad7-b497-40d0-8897-9a9d86c95306&redirect_uri=http%3A%2F%2Flocalhost%3A12345%2Ftoken&response_type=code&state=5lCEToPZskoHXWGs-ghf4g&nonce=1638258045740949&resource=http...
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Fetch draft (utkast) and current (gjeldende)

Here we enter the national identity number we logged in with. If you choose a different national identity number, the one you logged in with must have access to the tax return you want to fetch. The party below is used for internal testing; make sure to use your own test parties when you test. 01014700230 has been given a myndi...
s = requests.Session()
s.headers = dict(idporten_header)
fnr = "29114501318"  # update with the test national identity numbers you have been assigned
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Draft (utkast)
url_utkast = f'https://mp-test.sits.no/api/skattemelding/v2/utkast/2021/{fnr}'
r = s.get(url_utkast)
r
print(r.text)
<skattemeldingOgNaeringsspesifikasjonforespoerselResponse xmlns="no:skatteetaten:fastsetting:formueinntekt:skattemeldingognaeringsspesifikasjon:forespoersel:response:v2"><dokumenter><skattemeldingdokument><id>SKI:138:41694</id><encoding>utf-8</encoding><content>PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c2thdH...
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Current (gjeldende)
url_gjeldende = f'https://mp-test.sits.no/api/skattemelding/v2/2021/{fnr}'
r_gjeldende = s.get(url_gjeldende)
r_gjeldende
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Assessed (fastsatt)

Here you get an _http 404_ if the person has no assessment. Re-run this after you have submitted and received feedback in Altinn that the return has been processed; you should then have an assessed tax return, provided it was submitted as Komplett.
url_fastsatt = f'https://mp-test.sits.no/api/skattemelding/v2/fastsatt/2021/{fnr}'
r_fastsatt = s.get(url_fastsatt)
r_fastsatt
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Response from fetching the current return

Current document reference: the response from every API call, whether utkast, fastsatt, or gjeldende, includes a document reference. To call the validation service, you must use the correct reference to the current tax return. The cell below extracts the current document re...
sjekk_svar = r_gjeldende

sme_og_naering_respons = xmltodict.parse(sjekk_svar.text)
skattemelding_base64 = sme_og_naering_respons["skattemeldingOgNaeringsspesifikasjonforespoerselResponse"]["dokumenter"]["skattemeldingdokument"]
sme_base64 = skattemelding_base64["content"]
dokref = sme_og_naering_respons["skattemelding...
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
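The cell above ends with the base64-encoded `content`; a minimal sketch of decoding it back to readable XML, assuming `sme_base64` holds that field (the notebook also imports a `base64_decode_response` helper that may wrap this):

```python
import base64
import xml.dom.minidom

# decode the base64 payload to an XML string (the response declares utf-8):
sme_xml = base64.b64decode(sme_base64).decode("utf-8")

# pretty-print for inspection:
print(xml.dom.minidom.parseString(sme_xml).toprettyxml(indent="  "))
```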
Validate the draft tax return with business information
def valider_sme(payload):
    url_valider = f'https://mp-test.sits.no/api/skattemelding/v2/valider/2021/{fnr}'
    header = dict(idporten_header)
    header["Content-Type"] = "application/xml"
    return s.post(url_valider, headers=header, data=payload)

valider_respons = valider_sme(naering_enk)
resultatAvValidering ...
validertMedFeil <?xml version="1.0" ?> <skattemeldingOgNaeringsspesifikasjonResponse xmlns="no:skatteetaten:fastsetting:formueinntekt:skattemeldingognaeringsspesifikasjon:response:v2"> <avvikVedValidering> <avvik> <avvikstype>xmlValideringsfeilPaaNaeringsopplysningene</avvikstype> </avvik> </avvikVedValiderin...
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Altinn 3

1. Fetch an Altinn token
2. Create a new instance of the form
3. Upload attachments to the tax return
4. Update the tax return XML with the vedlegg_id reference from Altinn3
5. Upload the tax return and business information as an attachment
# 1
altinn3_applikasjon = "skd/formueinntekt-skattemelding-v2"
altinn_header = hent_altinn_token(idporten_header)

# 2
instans_data = opprett_ny_instans_med_inntektsaar(altinn_header, fnr, "2021", appnavn=altinn3_applikasjon)
{'Authorization': 'Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IjI3RTAyRTk4M0FCMUEwQzZEQzFBRjAyN0YyMUZFMUVFNENEQjRGRjEiLCJ4NXQiOiJKLUF1bURxeG9NYmNHdkFuOGhfaDdremJUX0UiLCJ0eXAiOiJKV1QifQ.eyJuYW1laWQiOiI4NTMzNyIsInVybjphbHRpbm46dXNlcmlkIjoiODUzMzciLCJ1cm46YWx0aW5uOnVzZXJuYW1lIjoibXVuaGplbSIsInVybjphbHRpbm46cGFydHlpZCI6NTAxMTA0OTU...
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Upload the tax return

First upload the attachments that belong to the tax return. The example below only covers general attachments to the tax return,
```xml
A unique id provided by Altinn when you upload the attachment file
vedlegg_eksempel_sirius_stjerne.jpg
jpg
dokumentertMarkedsverdi
```
but the same p...
vedleggfil = "vedlegg_eksempel_sirius_stjerne.jpg"
opplasting_respons = last_opp_vedlegg(instans_data,
                                       altinn_header,
                                       vedleggfil,
                                       content_type="image/jpeg",
                                       data_ty...
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Set the status to ready for retrieval by Skatteetaten.
# "next" is called twice to advance the instance through the process steps
req_bekreftelse = endre_prosess_status(instans_data, altinn_header, "next", appnavn=altinn3_applikasjon)
req_bekreftelse = endre_prosess_status(instans_data, altinn_header, "next", appnavn=altinn3_applikasjon)
req_bekreftelse
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Check the status of the Altinn3 instance to see whether Skatteetaten has fetched it

This status will initially be "none"; it is updated once Skatteetaten has processed the submission.
- With a **komplett** submission, the status is updated to Godkjent/Avvist once the submission has been processed.
- With an **ikkeKomplett** submission...
instans_etter_bekreftelse = hent_instans(instans_data, altinn_header, appnavn=altinn3_applikasjon)
response_data = instans_etter_bekreftelse.json()
print(f"Instans-status: {response_data['status']['substatus']}")
Instans-status: None
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
View the submission in Altinn

Take a sip of coffee and pat yourself on the back: you have now submitted. Let the bureaucracy do its thing... it takes a little while. At present, Skatteetaten checks Altinn3 every 30 seconds for new submissions, so if more than a couple of minutes pass, the submission has most likely failed. Before...
print("Resultat av hent fastsatt før fastsetting") print(r_fastsatt.text) print("Resultat av hent fastsatt etter fastsetting") r_fastsatt2 = s.get(url_fastsatt) r_fastsatt2.text #r_fastsatt.elapsed.total_seconds()
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Full Run

In order to run the scripts, we need to be in the base directory. This will move us out of the notebooks directory and into the base directory.
import os
os.chdir('..')
_____no_output_____
MIT
notebooks/Full Run.ipynb
joelmpiper/ga_project
Define where each of the datasets is stored
Xtrain_dir = 'solar/data/kaggle_solar/train/'
Xtest_dir = 'solar/data/kaggle_solar/test'
ytrain_file = 'solar/data/kaggle_solar/train.csv'
station_file = 'solar/data/kaggle_solar/station_info.csv'

import numpy as np
_____no_output_____
MIT
notebooks/Full Run.ipynb
joelmpiper/ga_project
Define the parameters needed to run the analysis script.
# Choose up to 98 stations; not specifying a station means to use all that fall within the
# given lats and longs. If the parameter 'all' is given, then it will use all stations no
# matter the provided lats and longs.
station = ['all']

# Determine which dates will be used to train the model. No specified date means use ...
_____no_output_____
MIT
notebooks/Full Run.ipynb
joelmpiper/ga_project
Define the directories that contain the code needed to run the analysis
import solar.report.submission
import solar.wrangle.wrangle
import solar.wrangle.subset
import solar.wrangle.engineer
import solar.analyze.model
_____no_output_____
MIT
notebooks/Full Run.ipynb
joelmpiper/ga_project
Reload the modules to load in any code changes since the last run. Load in all of the data needed for the run and store it in a pickle file. The 'external' flag determines whether to save the pickle file on a connected external hard drive or to store it locally. The information in pink shows what has been written to the log ...
# test combination of station names and grid
from importlib import reload  # on Python 3, reload lives in importlib

reload(solar.wrangle.wrangle)
reload(solar.wrangle.subset)
reload(solar.wrangle.engineer)
from solar.wrangle.wrangle import SolarData

#external = True
input_data = SolarData.load(Xtrain_dir, ytrain_file, Xtest_dir, station_file,
                           train_da...
_____no_output_____
MIT
notebooks/Full Run.ipynb
joelmpiper/ga_project
"Jupyter notebook"> "Setup and snippets for a smooth jupyter notebook experience"- toc: False- branch: master- categories: [code snippets, jupyter, python] Start jupyter notebook on boot Edit the crontab for your user.
crontab -e
_____no_output_____
Apache-2.0
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
Add the following line.
@reboot source ~/.venv/venv/bin/activate; ~/.venv/venv/bin/jupyter-notebook
_____no_output_____
Apache-2.0
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
---

Magic Commands

Autoreload imports when file changes are made.
%load_ext autoreload %autoreload 2
_____no_output_____
Apache-2.0
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
Show matplotlib plots inside the notebook.
import matplotlib.pyplot as plt
%matplotlib inline
_____no_output_____
Apache-2.0
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
Measure execution time of a cell.
%%time
_____no_output_____
Apache-2.0
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
`pip install` from jupyter notebook.
import sys
!{sys.executable} -m pip install numpy
_____no_output_____
Apache-2.0
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
Data Science Academy - Python Fundamentals - Chapter 10. Download: http://github.com/dsacademybr
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
Python version used in this Jupyter Notebook: 3.7.6
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
Lab 4 - Building a Linear Regression Model with TensorFlow. Use the Deep Learning Book as a reference: http://www.deeplearningbook.com.br/ Note: although TensorFlow 2.x is already available, this Jupyter Notebook uses version 1.15, which is also maintained by the Google team. If you want to learn TensorFlo...
# TensorFlow version to be used
!pip install -q tensorflow==1.15.2

# Imports
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
_____no_output_____
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
Defining the model hyperparameters
# Model hyperparameters
learning_rate = 0.01
training_epochs = 2000
display_step = 200
_____no_output_____
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
Defining the training and test datasets. Consider X as the size of a house and y as its price.
# Training dataset
train_X = np.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,7.042,10.791,5.313,7.997,5.654,9.27,3.1])
train_y = np.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,2.827,3.465,1.65,2.904,2.42,2.94,1.3])
n_samples = train_X.shape[0]

# Test dataset
test_X = np.asarray(...
_____no_output_____
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
Placeholders and variables
# Placeholders for the predictor variable (X) and the target variable (y)
X = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)

# Model weight and bias
W = tf.Variable(np.random.randn(), name="weight")
b = tf.Variable(np.random.randn(), name="bias")
_____no_output_____
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
Building the model
# Building the linear model
# Linear model formula: y = W*X + b
linear_model = W*X + b

# Mean squared error
cost = tf.reduce_sum(tf.square(linear_model - y)) / (2*n_samples)

# Optimization with gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
_____no_output_____
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
Running the computational graph, training and testing the model
# Define variable initialization
init = tf.global_variables_initializer()

# Start the session
with tf.Session() as sess:

    # Initialize the variables
    sess.run(init)

    # Model training
    for epoch in range(training_epochs):

        # Optimization with gradient descent
        sess.run(optimiz...
Epoch:   200  Cost (Error): 0.2628  W: 0.4961  b: -0.934
Epoch:   400  Cost (Error): 0.1913  W: 0.4433  b: -0.5603
Epoch:   600  Cost (Error): 0.1473  W: 0.402   b: -0.2672
Epoch:   800  Cost (Error): 0.1202  W: 0.3696  b: -0.03732
Epoch:  1000  Cost (Error): 0.1036  W: 0.3441  b: 0.143
Epoch: 120...
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
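As a sanity check on the W and b that gradient descent converges to, the same line can be fitted in closed form with NumPy; a small sketch using the training data defined above (`np.polyfit` is just one convenient least-squares route, not part of the original lab):

```python
import numpy as np

train_X = np.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,7.042,10.791,5.313,7.997,5.654,9.27,3.1])
train_y = np.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,2.827,3.465,1.65,2.904,2.42,2.94,1.3])

# least-squares slope (W) and intercept (b) for y = W*X + b:
W, b = np.polyfit(train_X, train_y, deg=1)
print('W: {:.4f}  b: {:.4f}'.format(W, b))
```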
Bay Area Bike Share Analysis

Introduction

> **Tip**: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.

[Bay Area Bike Share](http://www.bayareabikeshare.com/) is a company that provides on-demand bike rentals for customers in San Francisco, Redwood City, Palo Alt...
# import all necessary packages and functions.
import csv
from datetime import datetime
import numpy as np
import pandas as pd
from babs_datacheck import question_3
from babs_visualizations import usage_stats, usage_plot
from IPython.display import display
%matplotlib inline

# file locations
file_in = '201402_trip_dat...
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
Condensing the Trip Data

The first step is to look at the structure of the dataset to see if there's any data wrangling we should perform. The below cell will read in the sampled data file that you created in the previous cell, and print out the first few rows of the table.
sample_data = pd.read_csv('201309_trip_data.csv')
display(sample_data.head())
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
In this exploration, we're going to concentrate on factors in the trip data that affect the number of trips that are taken. Let's focus down on a few selected columns: the trip duration, start time, start terminal, end terminal, and subscription type. Start time will be divided into year, month, and hour components. We...
# Display the first few rows of the station data file.
station_info = pd.read_csv('201402_station_data.csv')
display(station_info.head())

# This function will be called by another function later on to create the mapping.
def create_station_mapping(station_data):
    """
    Create a mapping from station IDs to cities,...
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
You can now use the mapping to condense the trip data to the selected columns noted above. This will be performed in the `summarise_data()` function below. As part of this function, the `datetime` module is used to parse the timestamp strings from the original data file as datetime objects (`strptime`), which can t...
def summarise_data(trip_in, station_data, trip_out):
    """
    This function takes trip and station information and outputs a new
    data file with a condensed summary of major trip information. The
    trip_in and station_data arguments will be lists of data files for
    the trip and station information, respectiv...
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
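Since the paragraph above leans on `strptime`/`strftime`, here is a tiny sketch of the parsing pattern with a made-up timestamp; the `'%m/%d/%Y %H:%M'` format string is an assumption about the raw trip file:

```python
from datetime import datetime

raw = '8/29/2013 14:13'  # hypothetical raw timestamp

dt = datetime.strptime(raw, '%m/%d/%Y %H:%M')   # string -> datetime object
print(dt.strftime('%Y'), dt.strftime('%m'), dt.strftime('%H'))  # year, month, hour
print(dt.strftime('%A'))  # weekday name, e.g. 'Thursday'
```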
**Question 3**: Run the below code block to call the `summarise_data()` function you finished in the above cell. It will take the data contained in the files listed in the `trip_in` and `station_data` variables, and write a new file at the location specified in the `trip_out` variable. If you've performed the data wran...
# Process the data by running the function we wrote above.
station_data = ['201402_station_data.csv']
trip_in = ['201309_trip_data.csv']
trip_out = '201309_trip_summary.csv'
summarise_data(trip_in, station_data, trip_out)

# Load in the data file and print out the first few rows
sample_data = pd.read_csv(trip_out)
disp...
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
> **Tip**: If you save a jupyter Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the necessary code blocks from your previous session to reestablish variables and functions before picking up...
trip_data = pd.read_csv('201309_trip_summary.csv')
usage_stats(trip_data)
There are 27345 data points in the dataset. The average duration of trips is 27.60 minutes. The median trip duration is 10.72 minutes. 25% of trips are shorter than 6.82 minutes. 25% of trips are longer than 17.28 minutes.
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
You should see that there are over 27,000 trips in the first month, and that the average trip duration is larger than the median trip duration (the point where 50% of trips are shorter, and 50% are longer). In fact, the mean is larger than 75% of trip durations. This will be interesting to look at later on. Let's s...
usage_plot(trip_data, 'subscription_type')
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
Seems like there are about 50% more trips made by subscribers than by customers in the first month. Let's try a different variable now. What does the distribution of trip durations look like?
usage_plot(trip_data, 'duration')
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
Looks pretty strange, doesn't it? Take a look at the duration values on the x-axis. Most rides are expected to be 30 minutes or less, since there are overage charges for taking extra time in a single trip. The first bar spans durations up to about 1000 minutes, or over 16 hours. Based on the statistics we got out of `u...
usage_plot(trip_data, 'duration', ['duration < 60'])
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
This is looking better! You can see that most trips are indeed less than 30 minutes in length, but there's more that you can do to improve the presentation. Since the minimum duration is not 0, the left hand bar is slightly above 0. We want to be able to tell where there is a clear boundary at 30 minutes, so it will loo...
usage_plot(trip_data, 'duration', ['duration < 60'], boundary = 0, bin_width = 5)
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
**Question 4**: Which five-minute trip duration shows the most number of trips? Approximately how many trips were made in this range? **Answer**: The 5-to-10-minute range, with approximately 9,000 trips. Visual adjustments like this might be small, but they can go a long way in helping you understand the data and convey you...
station_data = ['201402_station_data.csv',
                '201408_station_data.csv',
                '201508_station_data.csv']
trip_in = ['201402_trip_data.csv',
           '201408_trip_data.csv',
           '201508_trip_data.csv']
trip_out = 'babs_y1_y2_summary.csv'

# This function will take in the station data a...
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
Since the `summarise_data()` function has created a standalone file, the above cell will not need to be run a second time, even if you close the notebook and start a new session. You can just load in the dataset and then explore things from there.
trip_data = pd.read_csv('babs_y1_y2_summary.csv')
display(trip_data.head())
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
Now it's your turn to explore the new dataset with `usage_stats()` and `usage_plot()` and report your findings! Here's a refresher on how to use the `usage_plot()` function:- first argument (required): loaded dataframe from which data will be analyzed.- second argument (required): variable on which trip counts will be...
usage_stats(trip_data)
usage_plot(trip_data, 'start_hour', ['subscription_type == Subscriber'])
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
Explore some different variables using the functions above and take note of some trends you find. Feel free to create additional cells if you want to explore the dataset in other ways or multiple ways.> **Tip**: In order to add additional cells to a notebook, you can use the "Insert Cell Above" and "Insert Cell Below" ...
# Final Plot 1
usage_plot(trip_data, 'start_hour', ['subscription_type == Subscriber'], bin_width=1)
usage_plot(trip_data, 'weekday', ['subscription_type == Subscriber'])
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
**Question 5a**: What is interesting about the above visualization? Why did you select it? **Answer**: Both graphs show that most Subscribers use the service to commute to work, since the vast majority of trips happened on weekdays, between 7-9 AM and 4-5 PM.
# Final Plot 2
usage_plot(trip_data, 'start_month', ['subscription_type == Customer'], boundary=1)
usage_plot(trip_data, 'start_month', ['subscription_type == Customer'], boundary=1, n_bins=12)
usage_plot(trip_data, 'weekday', ['subscription_type == Customer', 'start_month > 6'], bin_width=30)
usage_plot(trip_data, 'start_c...
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
Apsidal Motion Age for HD 144548

Here, I am attempting to derive an age for the triply eclipsing hierarchical triple HD 144548 (an Upper Scorpius member) based on the observed orbital precession (apsidal motion) of the inner binary system's orbit about the tertiary companion (star A). A value for the orbital precession is...
def c2(masses, radii, e, a, rotation=None):
    f = (1.0 - e**2)**-2
    g = (8.0 + 12.0*e**2 + e**4)*f**(5.0/2.0) / 8.0
    if rotation is None:
        omega_ratio_sq = 0.0
    elif rotation == 'synchronized':
        omega_ratio_sq = (1.0 + e)/(1.0 - e)**3
    else:
        omega_ratio_sq = 0.0
    c2_0 = (ome...
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
What complicates the issue is that the interior structure constants for the B components also vary as a function of age, so we need to compute a mean mass track using the $c_2$ coefficients and the individual $k_2$ values.
import numpy as np

trk_Ba = np.genfromtxt('/Users/grefe950/evolve/dmestar/trk/gs98/p000/a0/amlt1884/m0980_GS98_p000_p0_y28_mlt1.884.trk')
trk_Bb = np.genfromtxt('/Users/grefe950/evolve/dmestar/trk/gs98/p000/a0/amlt1884/m0940_GS98_p000_p0_y28_mlt1.884.trk')
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
Create tracks with equally spaced time steps.
from scipy.interpolate import interp1d

log10_age = np.arange(6.0, 8.0, 0.01)  # log10(age/yr)
ages = 10**log10_age

icurve = interp1d(trk_Ba[:,0], trk_Ba, kind='linear', axis=0)
new_trk_Ba = icurve(ages)
icurve = interp1d(trk_Bb[:,0], trk_Bb, kind='linear', axis=0)
new_trk_Bb = icurve(ages)
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
Now, compute the $c_2$ coefficients for each age.
mean_trk_B = np.empty((len(ages), 3))
for i, age in enumerate(ages):
    c2s = c2(masses, [10**new_trk_Ba[i, 4], 10**new_trk_Bb[i, 4]], e, a, rotation='synchronized')
    avg_k2 = (c2s[0]*new_trk_Ba[i, 10] + c2s[1]*new_trk_Bb[i, 10])/(sum(c2s))
    mean_trk_B[i] = np.array([age, 10**new_trk_Ba[i, 4] ...
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
With that, we have an estimate for the mean B component properties as a function of age. One complicating factor is the "radius" of the average B component. If we are modeling the potential created by the Ba/Bb components as that of a single star, we need to assume that the A component never enters into any region of t...
e2 = 0.2652
a2 = 66.2319
masses_2 = [1.44, 1.928]

trk_A = np.genfromtxt('/Users/grefe950/evolve/dmestar/trk/gs98/p000/a0/amlt1884/m1450_GS98_p000_p0_y28_mlt1.884.trk',
                      usecols=(0,1,2,3,4,5,6,7,8,9,10))
icurve = interp1d(trk_A[:,0], trk_A, kind='linear', axis=0)
new_trk_A = icurve(ages)
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
We are now in a position to compute the classical apsidal motion rate from the combined A/B tracks.
cl_apsidal_motion_rate = np.empty((len(ages), 2))
for i, age in enumerate(ages):
    c2_AB = c2(masses_2, [10**new_trk_A[i, 4], a + 0.5*mean_trk_B[i, 1]], e2, a2)
    cl_apsidal_motion_rate[i] = np.array([age, 360.0*(c2_AB[0]*new_trk_A[i, 10] + c2_AB[1]*mean_trk_B[i, 2])])

GR_apsidal_motion_rate = 5.45e-4*(sum(masses)/...
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
One can see from this that the general relativistic component is a very small contribution to the total apsidal motion of the system. Let's look at the evolution of the apsidal motion for the A/B binary system.
%matplotlib inline
import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 1, figsize=(8., 8.), sharex=True)

ax.grid(True)
ax.tick_params(which='major', axis='both', length=15., labelsize=18.)
ax.set_xlabel('Age (yr)', fontsize=20., family='serif')
ax.set_ylabel('Apsidal Motion Rate (deg / cycle)', fontsize=20., fam...
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
How sensitive is this to the properties of the A component, which are fairly uncertain?
icurve = interp1d(cl_apsidal_motion_rate[:,1], cl_apsidal_motion_rate[:,0], kind='linear')
print(icurve(0.0235)/1.0e6, icurve(0.0255)/1.0e6, icurve(0.0215)/1.0e6)
11.2030404132 9.66153795127 12.8365039818
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
CORD19 Analysis
%matplotlib inline
# import nltk
# nltk.download('stopwords')
# nltk.download('punkt')
# nltk.download('averaged_perceptron_tagger')
import json
import yaml
import os
import nltk
import matplotlib.pyplot as plt
import re
import pandas as pd
from nltk.corpus import stopwords
#import plotly.graph_objects as go
import ne...
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Configurations
# import configurations
with open('config.yaml', 'r') as ymlfile:
    cfg = yaml.load(ymlfile, Loader=yaml.SafeLoader)  # SafeLoader avoids the YAMLLoadWarning below
C:\Users\david\Anaconda3\lib\site-packages\ipykernel_launcher.py:3: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details. This is separate from the ipykernel package so we can avoid doing imports until
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
General Functions
def get_papers(path):
    # get list of papers .json from path
    papers = []
    for file_name in os.listdir(path):
        papers.append(file_name)
    return papers

def extract_authors(authors_list):
    '''
    Function to extract authors metadata from list of authors
    '''
    authors = []
    for curr_au...
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Objects
class Author:
    def __init__(self, firstname, middlename, lastname):
        self.firstName = firstname
        self.middleName = middlename
        self.lastName = lastname

    def __str__(self):
        return '{} {} {}'.format(self.firstName, self.middleName, self.lastName)

class Paper:
    def __init__(self,...
C:\Users\david\OneDrive\Bureau\CORD-19-research-challenge\\2020-03-13\biorxiv_medrxiv\biorxiv_medrxiv
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Metadata
meta = '2020-03-13/all_sources_metadata_2020-03-13.csv'
df_meta = pd.read_csv(cfg['data-path'] + meta)
df_meta.head()
df_meta[df_meta['has_full_text']==True]
df_meta.info()
df_meta['source_x'].unique()
paper_ids = set(df_meta.iloc[:,0])
paper_ids.pop()
paper_ids
df_meta[df_meta['source_x']=='biorxiv'][['sha','doi']]
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
biorxiv_medrxiv
biorxiv = '\\2020-03-13\\biorxiv_medrxiv\\biorxiv_medrxiv'
path = cfg['data-path'] + biorxiv
papers = get_papers(path)

cnt = 0
# check if papers are in the metadata dataframe
for paper in papers:
    if paper[:-5] not in paper_ids:  # strip the '.json' extension
        print(paper)
    else:
        cnt += 1
print('There are {}/{} papers present in the ...
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
pmc_custom_license
pmc = '2020-03-13\\pmc_custom_license\\pmc_custom_license'
path = cfg['data-path'] + pmc
pmc_papers = get_papers(path)
pmc_papers[:5]

cnt = 0
# check if papers are in the metadata dataframe
for paper in pmc_papers:
    if paper[:-5] not in paper_ids:
        print(paper)
    else:
        cnt += 1
print('There are {}/{} paper...
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
comm_use_subset noncomm_use_subset
# extract data from all papers
all_papers_data = []
for paper_name in papers:
    file_path = os.path.join(path, paper_name)
    with open(file_path, 'r') as f:
        paper_info = extract_paper_metadata(json.load(f))
        all_papers_data.append(paper_info)

for i in range(10):
    print('- {}'.format(all_papers_data[i][...
0015023cc06b5362d332b3baf348d11567ca2fbb
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Authors
def are_equal(author1, author2):
    if (author1['first'][0] == author2['first'][0]) and (author1['mid'] == author2['mid']) and (author1['last'] == author2['last']):
        return True
    return False

class Author:
    def __init__(self, firstname, middlename, lastname):
        self.firstName = firstname
        self.middleName = mi...
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Co-Authors
from itertools import combinations

co_authors_net = nx.Graph()

# for each paper
for i in range(len(all_papers_data)):
    # get list of authors
    co_authors = []
    for author in all_papers_data[i]['authors']:
        author_full_name = ''
        # only keep authors with first and last names
        if aut...
C:\Users\david\Anaconda3\lib\site-packages\networkx\drawing\nx_pylab.py:611: MatplotlibDeprecationWarning: isinstance(..., numbers.Number) if cb.is_numlike(alpha):
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Reference Authors
for i in range(3):
    for author in all_papers_data[i]['authors']:
        print(author)
    # referenced authors
    for ref in all_papers_data[i]['refs']:
        for author in ref['authors']:
            print(author)
    print('-'*60)
{'first': 'Joseph', 'middle': ['C'], 'last': 'Ward'} {'first': 'Lidia', 'middle': [], 'last': 'Lasecka-Dykes'} {'first': 'Chris', 'middle': [], 'last': 'Neil'} {'first': 'Oluwapelumi', 'middle': [], 'last': 'Adeyemi'} {'first': 'Sarah', 'middle': [], 'last': ''} {'first': '', 'middle': [], 'last': 'Gold'} {'first': 'Ni...
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Extracting Key Words
paper_json['body_text']

stop_sentences = ['All rights reserved.',
                  'No reuse allowed without permission.',
                  'Abstract',
                  'author/funder',
                  'The copyright holder for this preprint (which was not peer-reviewed) is the']

abstract_text = extract_abstract(paper_json['abstract'])
body_text = ''
for t in paper_json['body_text']:
    ...
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
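The cell above is cut off before the actual keyword step; a minimal sketch of one way to finish it with NLTK, assuming `body_text` holds the cleaned full text (the simple frequency-count approach is an assumption, not necessarily what the notebook does):

```python
from collections import Counter

import nltk
from nltk.corpus import stopwords

# tokenize, then drop stopwords and non-alphabetic tokens:
tokens = nltk.word_tokenize(body_text.lower())
stops = set(stopwords.words('english'))
words = [t for t in tokens if t.isalpha() and t not in stops]

# the most frequent remaining words serve as crude keywords:
print(Counter(words).most_common(20))
```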
TEST
paper_info = get_paper_data(cfg['data-path'] + biorxiv, papers[10])
paper_info

def get_sections_from_body(body):
    sections = {}
    for section in body:
        if section['section'].isupper():
            if section['section'] not in sections:
                sections[section['section']] = ''
            else:
                ...
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Implementing the Gradient Descent Algorithm

In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Some helper functions for plotting and drawing lines

def plot_points(X, y):
    admitted = X[np.argwhere(y==1)]
    rejected = X[np.argwhere(y==0)]
    plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s=25, color='blue', ...
_____no_output_____
MIT
Lesson 3: Introduction to Neural Networks/4 GradientDescent.ipynb
makeithappenlois/Udacity-AI
Reading and plotting the data
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
plot_points(X, y)
plt.show()
_____no_output_____
MIT
Lesson 3: Introduction to Neural Networks/4 GradientDescent.ipynb
makeithappenlois/Udacity-AI
TODO: Implementing the basic functions

Here is your turn to shine. Implement the following formulas, as explained in the text.
- Sigmoid activation function $$\sigma(x) = \frac{1}{1+e^{-x}}$$
- Output (prediction) formula $$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$
- Error function $$Error(y, \hat{y}) = - y \log(\hat{y}) - (...
# Implement the following functions

# Activation (sigmoid) function
def sigmoid(x):
    sigmoid_result = 1/(1 + np.exp(-x))
    return sigmoid_result

# Output (prediction) formula
def output_formula(features, weights, bias):
    x = features.dot(weights) + bias
    print(x)  # debug print; this is what fills the training output below
    output = sigmoid(x)
    return output

# ...
_____no_output_____
MIT
Lesson 3: Introduction to Neural Networks/4 GradientDescent.ipynb
makeithappenlois/Udacity-AI
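The implementation cell above is truncated after `output_formula`; a sketch of the two remaining helpers, following the formulas given in the text (the `update_weights` signature is assumed to match the lab's template):

```python
import numpy as np

# Error (log-loss) formula: -y*log(y_hat) - (1-y)*log(1-y_hat)
def error_formula(y, output):
    return -y * np.log(output) - (1 - y) * np.log(1 - output)

# Gradient descent step: w_i <- w_i + alpha*(y - y_hat)*x_i, b <- b + alpha*(y - y_hat)
def update_weights(x, y, weights, bias, learnrate):
    output = output_formula(x, weights, bias)
    d_error = y - output
    weights += learnrate * d_error * x
    bias += learnrate * d_error
    return weights, bias
```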
Training function

This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
np.random.seed(44)

epochs = 100
learnrate = 0.01

def train(features, targets, epochs, learnrate, graph_lines=False):
    errors = []
    n_records, n_features = features.shape
    last_loss = None
    weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
    bias = 0
    for e in range(epochs):
        ...
_____no_output_____
MIT
Lesson 3: Introduction to Neural Networks/4 GradientDescent.ipynb
makeithappenlois/Udacity-AI
Time to train the algorithm!

When we run the function, we'll obtain the following:
- 10 updates with the current training loss and accuracy
- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit as we go through more epochs.
- A ...
train(X, y, epochs, learnrate, True)
-0.473530635888 0.125936869166 -0.0361573775337 0.256515978501 0.0843418004565 -0.0179345755974 0.198924764205 0.10068384058 0.270812660579 0.158779604536 0.172757003252 0.306288885566 0.328800256305 0.209874987271 0.267438947578 0.047722264903 0.0292851983816 0.156739365126 0.309723821939 0.299274606528 0.276005815744...
MIT
Lesson 3: Introduction to Neural Networks/4 GradientDescent.ipynb
makeithappenlois/Udacity-AI
Monitoring & Reporting What `pipeline.py` is doing:- Load: - Monitoring & Reporting Data- Link MPRNs to GPRNs CaveatThe M&R data is publicly available, however, the user still needs to [create their own s3 credentials](https://aws.amazon.com/s3/) to fully reproduce the pipeline this pipeline (*i.e. they need an AWS...
conda env create --file environment.yml
conda activate hdd
_____no_output_____
MIT
combine-monitoring-and-reporting-mprns-and-gprns/README.ipynb
Rebeccacachia/projects
Run
python pipeline.py
_____no_output_____
MIT
combine-monitoring-and-reporting-mprns-and-gprns/README.ipynb
Rebeccacachia/projects
Time and Dates The `astropy.time` package provides functionality for manipulating times and dates. Specific emphasis is placed on supporting time scales (e.g. UTC, TAI, UT1, TDB) and time representations (e.g. JD, MJD, ISO 8601) that are used in astronomy and required to calculate, e.g., sidereal times and barycentric...
import numpy as np
from astropy.time import Time

times = ['1999-01-01T00:00:00.123456789', '2010-01-01T00:00:00']
t = Time(times, format='isot', scale='utc')
t
t[1]
_____no_output_____
Unlicense
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022
The `format` argument specifies how to interpret the input values, e.g. ISO or JD or Unix time. The `scale` argument specifies the time scale for the values, e.g. UTC or TT or UT1. The `scale` argument is optional and defaults to UTC except for Time from epoch formats. We could have written the above as:
t = Time(times, format='isot')
_____no_output_____
Unlicense
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022
When the format of the input can be unambiguously determined then the format argument is not required, so we can simplify even further:
t = Time(times)
t
_____no_output_____
Unlicense
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022
Now let’s get the representation of these times in the JD and MJD formats by requesting the corresponding Time attributes:
t.jd
t.mjd
_____no_output_____
Unlicense
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022
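Formats aside, the intro also mentioned time scales (UTC, TAI, UT1, TDB); conversions between scales work the same way, via attributes. A quick sketch using the `t` defined above:

```python
# Each scale attribute returns a new Time object on that scale.
print(t.utc.iso)  # Coordinated Universal Time
print(t.tai.iso)  # International Atomic Time (UTC plus leap seconds)
print(t.tt.iso)   # Terrestrial Time (TAI + 32.184 s)
```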
The default representation can be changed by setting the `format` attribute:
t.format = 'fits'
t
t.format = 'isot'
t
_____no_output_____
Unlicense
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022