Dataset columns: markdown (string, 0–1.02M chars) · code (string, 0–832k chars) · output (string, 0–1.02M chars) · license (string, 3–36 chars) · path (string, 6–265 chars) · repo_name (string, 6–127 chars)
Variable YearsCode:
data_test['YearsCode'] = data_test['YearsCode'].replace(['More than 50 years'], 50)
data_test['YearsCode'] = data_test['YearsCode'].replace(['Less than 1 year'], 1)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
Variable YearsCodePro:
data_test['YearsCodePro'] = data_test['YearsCodePro'].replace(['More than 50 years'], 50)
data_test['YearsCodePro'] = data_test['YearsCodePro'].replace(['Less than 1 year'], 1)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
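The two cells above swap the survey's text answers for numbers, but `replace` leaves the columns as object dtype because the remaining values are still strings. A minimal sketch of how the cleanup could be finished, assuming a follow-up conversion with `pd.to_numeric` (the `errors="coerce"` choice is an assumption, not shown in the notebook):

```python
import pandas as pd

# Toy frame standing in for data_test; the real column mixes numeric strings
# with the two special survey answers.
data_test = pd.DataFrame({"YearsCode": ["5", "12", "More than 50 years", "Less than 1 year"]})

# Same replacements as in the notebook cells above.
data_test["YearsCode"] = data_test["YearsCode"].replace(["More than 50 years"], 50)
data_test["YearsCode"] = data_test["YearsCode"].replace(["Less than 1 year"], 1)

# Assumed follow-up step: coerce the remaining numeric strings to numbers
# so the column can be used in aggregations.
data_test["YearsCode"] = pd.to_numeric(data_test["YearsCode"], errors="coerce")
print(data_test["YearsCode"])
```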
Variable OpSys:
data_test['OpSys'].value_counts() data_test['OpSys'] = data_test['OpSys'].replace(['Windows Subsystem for Linux (WSL)'], 'Windows') data_test['OpSys'] = data_test['OpSys'].replace(['Linux-based'], 'Linux') data_test['OpSys'] = data_test['OpSys'].replace(['Other (please specify)'], 'Otro') data_test['OpSys'].value_count...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
Variable Age:
data_test['Age'].value_counts() data_test['Age'] = data_test['Age'].replace(['25-34 years old'], '25-34') data_test['Age'] = data_test['Age'].replace(['35-44 years old'], '35-44') data_test['Age'] = data_test['Age'].replace(['18-24 years old'], '18-24') data_test['Age'] = data_test['Age'].replace(['45-54 years old'], '...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
Variable Gender:
data_test['Gender'].value_counts() data_test['Gender'] = data_test['Gender'].replace(['Man'], 'Hombre') data_test['Gender'] = data_test['Gender'].replace(['Woman'], 'Mujer') data_test['Gender'] = data_test['Gender'].replace(['Non-binary, genderqueer, or gender non-conforming'], 'No binario u otro') data_test['Gender'] ...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
Variable Trans:
data_test['Trans'].value_counts()
data_test['Trans'] = data_test['Trans'].replace(['Yes'], 'Si')
data_test['Trans'] = data_test['Trans'].replace(['Prefer not to say'], 'No definido')
data_test['Trans'] = data_test['Trans'].replace(['Or, in your own words:'], 'No definido')
data_test['Trans'].value_counts()
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
Variable MentalHealth:
data_test['MentalHealth'].value_counts() from re import search def choose_mental_health(cell_mental_health): val_mental_health_exceptions = ["Or, in your own words:"] if cell_mental_health == "Or, in your own words:": return val_mental_health_exceptions[0] if search(";", cell_mental_healt...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2. Field selection for sub-datasets. The appropriate fields will be selected to answer each of the questions posed in the first part of the assignment. 2.1. Based on self-reported ethnicity, which ethnicity has the highest annual salary? The appropriate fields will be selected to answer this...
data_etnia = data_test[['Country', 'Ethnicity', 'ConvertedCompYearly']] data_etnia.head() df_data_etnia = data_etnia.copy() def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return mask mask = remove_outliers(df_data_etnia['ConvertedComp...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
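The truncated cell above selects `Country`, `Ethnicity`, and `ConvertedCompYearly` and trims extreme salaries with a quantile mask. A self-contained sketch of that `remove_outliers` idea on synthetic data, keeping the notebook's default `q=0.05` cut-off:

```python
import numpy as np
import pandas as pd

def remove_outliers(s: pd.Series, q: float = 0.05) -> pd.Series:
    """Boolean mask keeping values strictly between the q and 1-q quantiles."""
    upper = s.quantile(1 - q)
    lower = s.quantile(q)
    return (s < upper) & (s > lower)

rng = np.random.default_rng(0)
df = pd.DataFrame({"ConvertedCompYearly": rng.lognormal(mean=11, sigma=1, size=1000)})

mask = remove_outliers(df["ConvertedCompYearly"])
trimmed = df[mask]
print(len(df), "->", len(trimmed), "rows after trimming both tails")
```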
2.2. What are the percentages of developers working full-time, part-time, or freelance? The appropriate fields will be selected to answer this question
data_time_work_dev = data_test[['Country', 'Employment', 'ConvertedCompYearly', 'EdLevel', 'Age']] data_time_work_dev.head() df_flourish_002 = data_time_work_dev['Employment'].value_counts().to_frame('counts').reset_index() df_flourish_002 df_flourish_002['counts'] = (df_flourish_002['counts'] * 100 ) / data_time_work_...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
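The cell above turns `value_counts()` into percentages by multiplying by 100 and dividing by the total. A short sketch, on toy data, of the equivalent one-step form with `value_counts(normalize=True)`:

```python
import pandas as pd

employment = pd.Series([
    "Employed full-time", "Employed full-time", "Employed part-time",
    "Independent contractor, freelancer, or self-employed", "Employed full-time",
], name="Employment")

# Equivalent to counting and then dividing by the total, as done in the notebook.
percentages = employment.value_counts(normalize=True) * 100
print(percentages.round(1))
```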
2.3. Which countries have the largest number of professional developers who are active in the Stack Overflow community? The appropriate fields will be selected to answer this question
data_pro_dev_active_so = data_test[['Country', 'Employment', 'MainBranch', 'EdLevel', 'DevType', 'Age']] data_pro_dev_active_so.head() df_flourish_003 = data_pro_dev_active_so['Country'].value_counts().sort_values(ascending=False).head(10) df_flourish_003 = df_flourish_003.to_frame() df_flourish_003 = df_flourish_003.r...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.4. Which education level records the highest income among respondents? The appropriate fields will be selected to answer this question
data_edlevel_income = data_test[['ConvertedCompYearly', 'EdLevel']] data_edlevel_income.head() df_data_edlevel_income = data_edlevel_income.copy() def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return mask mask = remove_outliers(df_da...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.5. Is there a wage gap between men, women, and other genders, and how large is the difference? Which countries are the worst in terms of the wage gap? Which countries have reduced this wage gap among developers? The appropriate fields will be selected to answer this question
data_wage_gap = data_test[['Country', 'ConvertedCompYearly', 'Gender']] data_wage_gap.head() df_data_wage_gap = data_wage_gap.copy() def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return mask mask = remove_outliers(df_data_wage_gap['C...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.6. What are the average incomes by age range? Which age range has the highest and which the lowest income? The appropriate fields will be selected to answer this question
data_age_income = data_test[['ConvertedCompYearly', 'Age']] data_age_income.head() df_data_age_income = data_age_income.copy() def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return mask mask = remove_outliers(df_data_age_income['Conve...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.7. Which technologies are associated with a higher annual salary? The appropriate fields will be selected to answer this question
data_techs_best_income1 = data_test[['ConvertedCompYearly', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWorkedWith']] data_techs_best_income1.head() data_techs_best_income1['AllTechs'] = ...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.8. How many technologies, on average, does a professional developer master? The appropriate fields will be selected to answer this question
data_techs_dev_pro1 = data_test[['DevType', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWorkedWith']] data_techs_dev_pro1.head() data_techs_dev_pro1['AllTechs'] = data_techs_dev_pro1['Lan...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.9. In which age range did most developers start programming? The appropriate fields will be selected to answer this question
data_age1stcode_dev_pro1 = data_test[['Age1stCode']]
data_age1stcode_dev_pro1.head()
data_age1stcode_dev_pro1 = data_age1stcode_dev_pro1['Age1stCode'].value_counts().to_frame('counts').reset_index()
data_age1stcode_dev_pro1.to_csv('009_flourish_data.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.10. How many years as a developer are needed to earn a high salary? The appropriate fields will be selected to answer this question
data_yearscode_high_income1 = data_test[['ConvertedCompYearly', 'YearsCode']] data_yearscode_high_income1.head() df_data_yearscode_high_income = data_yearscode_high_income1.copy() def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return m...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.11. Which profiles record the highest incomes? The appropriate fields will be selected to answer this question
data_profiles_dev_high_income1 = data_test[['ConvertedCompYearly', 'DevType']].copy() data_profiles_dev_high_income1.head() df_data_profiles_dev_high_income = data_profiles_dev_high_income1.copy() def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lo...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.12. What are the 10 most used technologies among developers, by country? The appropriate fields will be selected to answer this question
data_10_techs_popular_dev_countries = data_test[['Country', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWorkedWith']] data_10_techs_popular_dev_countries.head() data_10_techs_popular_dev_...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.13. Which operating system is most used among respondents? The appropriate fields will be selected to answer this question
df_data_so_devs = data_test[['OpSys']].copy() df_data_so_devs.tail() df_data_so_devs['OpSys'].drop_duplicates().sort_values() df_data_so_devs['OpSys'] = df_data_so_devs['OpSys'].replace(['Other (please specify):'], 'Otro') df_data_so_devs['OpSys'].value_counts() df_counts = df_data_so_devs['OpSys'].str.split(expand=Tru...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.14. What proportion of developers report a mental health disorder, by country? The appropriate fields will be selected to answer this question
data_devs_mental_health_countries = data_test[['Country', 'MentalHealth']] data_devs_mental_health_countries.head() data_devs_mental_health_countries['MentalHealth'].value_counts() df_data_devs_mental_health_countries = data_devs_mental_health_countries.copy() df_data_devs_mental_health_countries = df_data_devs_mental_...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.15. Which countries have the best salaries among developers? The appropriate fields will be selected to answer this question
df_best_incomes_countries = data_test[['Country', 'ConvertedCompYearly']].copy() df_best_incomes_countries def remove_outliers(df, q=0.05): upper = df.quantile(1-q) lower = df.quantile(q) mask = (df < upper) & (df > lower) return mask mask = remove_outliers(df_best_incomes_countries['ConvertedCompYearl...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.16. What are the 10 most used programming languages among developers? The appropriate fields will be selected to answer this question
df_10_prog_languages_devs = data_test[['LanguageHaveWorkedWith']].copy() df_10_prog_languages_devs.head() df_10_prog_languages_devs['LanguageHaveWorkedWith'] = df_10_prog_languages_devs['LanguageHaveWorkedWith'].str.replace(';', ' ') df_counts_016 = df_10_prog_languages_devs['LanguageHaveWorkedWith'].str.split(expand=T...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.17. What are the most used databases among developers? The appropriate fields will be selected to answer this question
df_10_databases = data_test[['DatabaseHaveWorkedWith']].copy() df_10_databases.head() df_10_databases['DatabaseHaveWorkedWith'] = df_10_databases['DatabaseHaveWorkedWith'].str.replace(' ', '') df_10_databases['DatabaseHaveWorkedWith'] = df_10_databases['DatabaseHaveWorkedWith'].str.replace(';', ' ') df_counts_017 = df_...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.18. What are the most used platforms among developers? The appropriate fields will be selected to answer this question
df_10_platforms = data_test[['PlatformHaveWorkedWith']].copy() df_10_platforms.head() df_10_platforms['PlatformHaveWorkedWith'] = df_10_platforms['PlatformHaveWorkedWith'].str.replace(' ', '') df_10_platforms['PlatformHaveWorkedWith'] = df_10_platforms['PlatformHaveWorkedWith'].str.replace(';', ' ') df_counts_018 = df_...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.19. What are the most used web frameworks among developers? The appropriate fields will be selected to answer this question
df_10_web_frameworks = data_test[['WebframeHaveWorkedWith']].copy() df_10_web_frameworks.head() df_10_web_frameworks['WebframeHaveWorkedWith'] = df_10_web_frameworks['WebframeHaveWorkedWith'].str.replace(' ', '') df_10_web_frameworks['WebframeHaveWorkedWith'] = df_10_web_frameworks['WebframeHaveWorkedWith'].str.replace...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.20. What are the most used technology tools among developers? The appropriate fields will be selected to answer this question
df_10_data_misc_techs = data_test[['MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith']].copy() df_10_data_misc_techs.head() df_10_data_misc_techs['AllMiscTechs'] = df_10_data_misc_techs['MiscTechHaveWorkedWith'].map(str) + ';' + df_10_data_misc_techs['ToolsTechHaveWorkedWith'].map(str) df_10_data_misc_techs.head() df_...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.21. What are the most used collaboration tools among developers? The appropriate fields will be selected to answer this question
df_10_colab = data_test[['NEWCollabToolsHaveWorkedWith']].copy() df_10_colab.head() df_10_colab['NEWCollabToolsHaveWorkedWith'] = df_10_colab['NEWCollabToolsHaveWorkedWith'].str.replace(' ', '') df_10_colab['NEWCollabToolsHaveWorkedWith'] = df_10_colab['NEWCollabToolsHaveWorkedWith'].str.replace(';', ' ') df_counts_021...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.22. Which countries have the largest number of developers working full-time? The appropriate fields will be selected to answer this question
df_fulltime_employment = data_test[['Country', 'Employment']].copy() df_fulltime_employment.head() df_fulltime_employment.info() df_fulltime_only = df_fulltime_employment[df_fulltime_employment['Employment'] == 'Tiempo completo'] df_fulltime_only.head() df_flourish_022 = df_fulltime_only['Country'].value_counts().to_fr...
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
Pyber Analysis 4.3 Loading and Reading CSV files
# Add Matplotlib inline magic command %matplotlib inline # Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import matplotlib.dates as mdates # File to Load (Remember to change these) city_data_to_load = "Resources/city_data.csv" ride_data_to_load = "Resources/ride_data.csv" # Read the City ...
_____no_output_____
Apache-2.0
PyBer_analysis_code.ipynb
rfwilliams92/Pyber_Ridesharing_Analysis
Merge the DataFrames
# Combine the data into a single dataset
pyber_data_df = pd.merge(ride_data_df, city_data_df, how="left", on=["city", "city"])

# Display the data table for preview
pyber_data_df
_____no_output_____
Apache-2.0
PyBer_analysis_code.ipynb
rfwilliams92/Pyber_Ridesharing_Analysis
Deliverable 1: Get a Summary DataFrame
# 1. Get the total rides for each city type tot_rides_by_type = pyber_data_df.groupby(["type"]).count()["ride_id"] tot_rides_by_type # 2. Get the total drivers for each city type tot_drivers_by_type = city_data_df.groupby(["type"]).sum()["driver_count"] tot_drivers_by_type # 3. Get the total amount of fares for each ...
_____no_output_____
Apache-2.0
PyBer_analysis_code.ipynb
rfwilliams92/Pyber_Ridesharing_Analysis
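The deliverable above assembles several per-city-type aggregates into one summary frame. A sketch of combining such groupby results with a `pd.DataFrame` constructor on toy data (the column names are illustrative, not the assignment's required labels):

```python
import pandas as pd

ride_data = pd.DataFrame({
    "type": ["Urban", "Urban", "Rural", "Suburban"],
    "ride_id": [1, 2, 3, 4],
    "fare": [12.0, 8.5, 40.0, 20.0],
})
city_data = pd.DataFrame({
    "type": ["Urban", "Rural", "Suburban"],
    "driver_count": [50, 4, 15],
})

# Each groupby result shares the "type" index, so they align in one DataFrame.
summary_df = pd.DataFrame({
    "Total Rides": ride_data.groupby("type")["ride_id"].count(),
    "Total Drivers": city_data.groupby("type")["driver_count"].sum(),
    "Total Fares": ride_data.groupby("type")["fare"].sum(),
})
summary_df["Average Fare per Ride"] = summary_df["Total Fares"] / summary_df["Total Rides"]
print(summary_df)
```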
Deliverable 2. Create a multiple-line plot that shows the total weekly fares for each type of city.
# 1. Read the merged DataFrame pyber_data_df # 2. Using groupby() to create a new DataFrame showing the sum of the fares # for each date where the indices are the city type and date. tot_fares_by_date_df = pd.DataFrame(pyber_data_df.groupby(["type", "date"]).sum()["fare"]) tot_fares_by_date_df # 3. Reset the index on...
_____no_output_____
Apache-2.0
PyBer_analysis_code.ipynb
rfwilliams92/Pyber_Ridesharing_Analysis
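The truncated cell above sums fares by city type and date, pivots, and resamples to weekly totals before plotting. A self-contained sketch of that pipeline on synthetic data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.date_range("2019-01-01", "2019-04-28", freq="D")
records = pd.DataFrame({
    "date": np.tile(dates, 3),
    "type": np.repeat(["Urban", "Suburban", "Rural"], len(dates)),
    "fare": rng.uniform(5, 60, size=len(dates) * 3),
})

# Sum fares per type and date, pivot so each city type is a column,
# then resample the daily index to weekly totals.
fares = records.groupby(["type", "date"])["fare"].sum().reset_index()
weekly = (fares.pivot(index="date", columns="type", values="fare")
               .resample("W").sum())
print(weekly.head())
# weekly.plot() would produce the multiple-line plot described above.
```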
Import Packages
from ndfinance.brokers.backtest import * from ndfinance.core import BacktestEngine from ndfinance.analysis.backtest import BacktestAnalyzer from ndfinance.strategies import PeriodicRebalancingStrategy from ndfinance.visualizers.backtest_visualizer import BasicVisualizer %matplotlib inline import matplotlib.pyplot as pl...
2020-10-10 13:22:13,815 INFO resource_spec.py:212 -- Starting Ray with 15.38 GiB memory available for workers and up to 7.7 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>). 2020-10-10 13:22:14,051 WARNING services.py:923 -- Redis failed to start, retrying now. 2...
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
build strategy
class AllWeatherPortfolio(PeriodicRebalancingStrategy): def __init__(self, weight_dict, rebalance_period): super(AllWeatherPortfolio, self).__init__(rebalance_period) self.weight_dict = weight_dict def _logic(self): self.broker.order(Rebalance(self.weight_dict.keys(), s...
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Set the portfolio elements, weights, and rebalance period. You can adjust these and experiment on your own!
PORTFOLIO = { "GLD" : 0.05, "SPY" : 0.5, "SPTL" : 0.15, "BWZ" : 0.15, "SPHY": 0.15, } REBALANCE_PERIOD = TimeFrames.day * 365
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Make data provider
dp = BacktestDataProvider() dp.add_yf_tickers(*PORTFOLIO.keys())
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Make time indexer
indexer = TimeIndexer(dp.get_shortest_timestamp_seq()) dp.set_indexer(indexer) dp.cut_data()
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Make broker and add assets
brk = BacktestBroker(dp, initial_margin=10000) _ = [brk.add_asset(Asset(ticker=ticker)) for ticker in PORTFOLIO.keys()]
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Initialize strategy
strategy = AllWeatherPortfolio(PORTFOLIO, rebalance_period=REBALANCE_PERIOD)
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Initialize backtest engine
engine = BacktestEngine() engine.register_broker(brk) engine.register_strategy(strategy)
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
run
log = engine.run()
[ENGINE]: 100%|██████████| 2090/2090 [00:00<00:00, 11637.76it/s]
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
run analysis
analyzer = BacktestAnalyzer(log) analyzer.print()
-------------------------------------------------- [BACKTEST RESULT] -------------------------------------------------- CAGR:10.644 MDD:19.49 CAGR_MDD_ratio:0.546 win_trade_count:24 lose_trade_count:11 total_trade_count:75 win_rate_percentage:32.0 lose_rate_percentage:14.667 sharpe_ratio:0.279 sortino_ratio:0.361 pnl...
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
visualize
visualizer = BasicVisualizer() visualizer.plot_log(log)
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Export
visualizer.export(EXPORT_PATH) analyzer.export(EXPORT_PATH)
-------------------------------------------------- [EXPORTING FIGURES] -------------------------------------------------- exporting figure to: ./bt_results/all_weather_portfolio/plot/mdd.png exporting figure to: ./bt_results/all_weather_portfolio/plot/cagr.png exporting figure to: ./bt_results/all_weather_portfoli...
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Descriptive Statistics and Data Visualization. This notebook presents the descriptive statistics of the dataset, with visualizations. We will analyze the behavior of some characteristics that are crucial when buying/selling used vehicles.
from Utils import * from tqdm import tqdm from matplotlib import pyplot as plt import seaborn as sns pd.set_option('display.max_colwidth', 100) DATASET = "../datasets/clean_vehicles_2.csv" df = pd.read_csv(DATASET) df.describe()
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Univariate Statistics. Here we will analyze the behavior of some variables with respect to their distribution. Year of manufacture
##Análise de média, desvio padrão, mediana e moda do Ano de fabricação print( "Ano do veículo:\n" "Média: "+floatStr(df['year'].mean())+"\n"+ "Desvio padrão: "+floatStr(df['year'].std())+"\n"+ "Mediana: "+floatStr(df['year'].median())+"\n"+ "IQR: "+floatStr(df['year'].describe()[6] - df['year'].describe()[4])+"\n"+ ...
Ano do veículo: Média: 2010.26 Desvio padrão: 8.67 Mediana: 2012.0 IQR: 9.0 Moda: 2017.0
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
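The cell above builds the report string by hand and pulls the IQR out of `describe()` by position. A compact sketch computing the same statistics directly with pandas, using quantiles for the IQR (toy data standing in for `df['year']`):

```python
import pandas as pd

year = pd.Series([1998, 2005, 2010, 2012, 2013, 2015, 2017, 2017], name="year")

stats = {
    "mean": year.mean(),
    "std": year.std(),
    "median": year.median(),
    "IQR": year.quantile(0.75) - year.quantile(0.25),
    "mode": year.mode().iloc[0],
}
for name, value in stats.items():
    print(f"{name}: {value:.2f}")
```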
Here we notice a median larger than the mean, which leads us to suspect that this quantity does not follow a normal distribution. This indicates that there must be some very old cars being sold, introducing skewness into the curve. To verify this, let us generate the histogram.
## Plot the histogram of the distribution with respect to the vehicle's year of manufacture
bars = df[df['year'] > 0].year.max() - df[df['year'] > 0].year.min()
df[df['year'] > 0].year.hist(bins=int(bars))
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
However, this plot does not give us a good visualization. The listings include some collector cars, which is not the profile we want to study. So, taking the year 1985 as a threshold, we analyze the histogram of the distribution of cars traded "for normal use". Now we can see that most ...
## Plot the histogram of the vehicles' years of manufacture, limited to 1985 onwards
bars = df['year'].max() - 1985
df[df['year'] > 1985].year.hist(bins=int(bars))
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Vehicle resale price
##Análise de estatísticas univariadas dos valores de preço do veículo print( "Preço do veículo:\n" "Média: "+floatStr(df[df['price'] > 0].price.mean())+"\n"+ "Desvio padrão: "+floatStr(df[df['price'] > 0].price.std())+"\n"+ "Mediana: "+floatStr(df[df['price'] > 0].price.median())+"\n"+ "IQR: "+floatStr(df['price'].d...
Preço do veículo: Média: 36809.65 Desvio padrão: 6571953.45 Mediana: 11495.0 IQR: 13000.0 Moda: 7995
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Here we find a very large spread in these data, which suggests a highly varied and skewed price distribution. Because of this, we will not be able to see a useful histogram with all the data. We can work around this in 2 ways: * We could use log10 to get a sense of the o...
sns.distplot(df[(df['price'] > 0) & (df['price'] < 100000)].price, bins = 100,norm_hist = False, hist=True, kde=False)
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
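The text above mentions two workarounds for the skewed prices: clipping the range (as the `distplot` cell does with 0 < price < 100000) or looking at the prices on a log10 scale. A sketch of the log10 option, with toy prices standing in for `df['price']`:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Toy prices with a heavy right tail, standing in for df['price'].
rng = np.random.default_rng(2)
price = pd.Series(np.exp(rng.normal(9, 1.2, size=5000)))

# Log10 compresses the tail so the whole distribution fits in one histogram.
np.log10(price[price > 0]).hist(bins=100)
plt.xlabel("log10(price)")
plt.show()
```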
Current odometer reading (miles driven by the vehicle)
##Análise de estatísticas univariadas dos valores de leitura do Odômetro. ##Note que estamos descartando valores nulos para fazer esta análise print( "Odômetro do veículo:\n" "Média: "+floatStr(df[df['odometer'] > 0].odometer.mean())+"\n"+ "Desvio padrão: "+floatStr(df[df['odometer'] > 0].odometer.std())+"\n"+ "Medi...
Odômetro do veículo: Média: 99705.09 Desvio padrão: 111570.94 Mediana: 92200.0 IQR: 92054.0 Moda: 150000.0
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Here we also have a wide variety of values. Only 492 of them are above 800,000 recorded miles. For the purposes of this analysis, we will use this range.
sns.distplot(df[(df['odometer'] > 0) & (df['odometer'] < 400000)].odometer, bins = 100,norm_hist = False, hist=True, kde=False)
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Visualization of the number of listings per vehicle manufacturer. We will do a visual analysis to try to identify the most popular brands in the used-car market.
## Plotar a divisão de mercas que são mais anunciadas manufacturers = df['manufacturer'].value_counts().drop(df['manufacturer'].value_counts().index[8]).drop(df['manufacturer'].value_counts().index[13:]) sns.set() plt.figure(figsize=(10,5)) sns.barplot(x=manufacturers.index, y=manufacturers) print("As 3 marcas mais anu...
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Visualization of the relationship between price and drivetrain. Here we can compare how prices vary according to the vehicle's drivetrain: * 4wd: four-wheel drive * rwd: rear-wheel drive * fwd: front-wheel drive. We compare the mean, median, and count. However, we already know from previous analyses that the median ...
df[df['drive'] != 'undefined'].groupby(['drive']).agg(['mean','median','count'])['price'].sort_values(by='median', ascending=False)
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Bivariate Statistics. Here we will try to find out whether the numerical variables have any kind of correlation. First we will analyze the Spearman method, then Pearson. Next we will try to use
##Aplicando-se alguns limitadores para analisar correlações entre as variáveis car = df[(df['odometer']> 0) & (df['odometer']<400000)] car = car[(car['price']>0) & (car['price']<100000)] car = car[car['year']>=1985] car = car.drop(['lat','long'], axis=1) car.cov() car.corr(method='spearman') car.corr(method='pearson') ...
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
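The cell above computes `cov()`, `corr(method='spearman')`, and `corr(method='pearson')` on the filtered frame. A sketch of visualizing the two correlation matrices side by side with seaborn heatmaps, using toy numeric columns in place of the notebook's filtered `car` frame:

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
car = pd.DataFrame({
    "year": rng.integers(1985, 2021, 500),
    "odometer": rng.uniform(0, 400000, 500),
})
car["price"] = 50000 - 0.08 * car["odometer"] + rng.normal(0, 5000, 500)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, method in zip(axes, ["spearman", "pearson"]):
    sns.heatmap(car.corr(method=method), annot=True, vmin=-1, vmax=1, ax=ax)
    ax.set_title(method)
plt.tight_layout()
plt.show()
```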
Welcome to the introductory template of the Python graph gallery. Here is how to proceed to add a new `.ipynb` file that will be converted to a blog post in the gallery! Notebook Metadata It is very important to add the following fields to your notebook. It helps build the page later on: - **slug**: the URL of the bl...
import seaborn as sns, numpy as np np.random.seed(0) x = np.random.randn(100) ax = sns.distplot(x)
_____no_output_____
0BSD
src/notebooks/255-percentage-stacked-area-chart.ipynb
nrslt/The-Python-Graph-Gallery
Airbnb - Rio de Janeiro
* Download [data](http://insideairbnb.com/get-the-data.html)
* We downloaded `listings.csv` from all monthly dates available
Questions
1. What was the price and supply behavior before and during the pandemic?
2. Does a title in English or Portuguese impact the price?
3. What features correlate with ...
import numpy as np import pandas as pd import seaborn as sns import glob import re import pendulum import tqdm import matplotlib.pyplot as plt import langid langid.set_languages(['en','pt'])
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Read filesRead all 30 files and get their date
files = sorted(glob.glob('data/listings*.csv'))
df = []
for f in files:
    date = pendulum.from_format(re.findall(r"\d{4}_\d{2}_\d{2}", f)[0], fmt="YYYY_MM_DD").naive()
    csv = pd.read_csv(f)
    csv["date"] = date
    df.append(csv)
df = pd.concat(df)
df
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Deal with NaNs* Drop `neighbourhood_group`, as it is all NaNs;* Fill `reviews_per_month` with zeros (if there is no review, then reviews per month are zero)* Keep `name` for now* Drop the `host_name` column, as there are no null `host_id` values* Keep `last_review` too, as there are rooms with no review
df.isna().any() df = df.drop(["host_name", "neighbourhood_group"], axis=1) df["reviews_per_month"] = df["reviews_per_month"].fillna(0.) df.head()
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Detect `name` language* Clean strings for evaluation* Remove common neighbourhoods name in Portuguese from `name` column to diminish misprediction* Remove several non-alphanumeric characters* Detect language using [langid](https://github.com/saffsd/langid.py)* I restricted between pt, en. There are very few rooms list...
import unicodedata stopwords = pd.unique(df["neighbourhood"]) stopwords = [re.sub(r"[\(\)]", "", x.lower().strip()).split() for x in stopwords] stopwords = [x for item in stopwords for x in item] stopwords += [unicodedata.normalize("NFKD", x).encode('ASCII', 'ignore').decode() for x in stopwords] stopwords += ["rio", "...
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
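The truncated cell above strips neighbourhood names and non-alphanumeric characters from each listing title before detecting its language. A minimal sketch of the classification step itself with langid restricted to pt/en, matching the restriction set in the imports; the cleaning regex here is a simplified stand-in for the notebook's stopword handling:

```python
import re
import langid

langid.set_languages(["en", "pt"])  # same restriction as in the notebook imports

def detect_language(name: str) -> str:
    """Classify a cleaned listing title as 'en' or 'pt'."""
    cleaned = re.sub(r"[^\w\s]", " ", name.lower())
    lang, _score = langid.classify(cleaned)
    return lang

print(detect_language("Cozy apartment near the beach"))       # expected 'en'
print(detect_language("Apartamento aconchegante na praia"))   # expected 'pt'
```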
* Test accuracy, manually label 383 out of 88191 (95% conf. interval, 5% margin of error)
df.loc[~df["name"].isna()].drop_duplicates("name").shape df.loc[~df["name"].isna()].drop_duplicates("name")[["name", "language"]].sample(n=383, random_state=42).to_csv("lang_pred_1.csv") lang_pred = pd.read_csv("lang_pred.csv", index_col=0) lang_pred.head() overall_accuracy = (lang_pred["pred"] == lang_pred["true"]).su...
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Calculate how many times a room appeared* There are 30 months of data, and rooms appear multiple times* Calculate for a specific date, how many times the same room appeared up to that date
df = df.set_index(["id", "date"]) df["appearances"] = df.groupby(["id", "date"])["host_id"].count().unstack().cumsum(axis=1).stack() df = df.reset_index() df.head()
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Days since last review* Calculate days since last review* Then categorize them by the length of the days
df.loc[:, "last_review"] = pd.to_datetime(df["last_review"], format="%Y/%m/%d") # For each scraping date, consider the last date to serve as comparision as the maximum date last_date = df.groupby("date")["last_review"].max() df["last_date"] = df.apply(lambda row: last_date.loc[row["date"]], axis=1) df["days_last_review...
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
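The cell above computes the days between the scrape's reference date and each room's last review, then "categorizes them by the length of the days". A hedged sketch of that binning step with `pd.cut`; the bin edges and labels below are illustrative assumptions, not the notebook's actual cut points:

```python
import pandas as pd

days_last_review = pd.Series([3, 25, 90, 400, 1200], name="days_last_review")

# Illustrative bins; the original notebook's cut points are not shown here.
categories = pd.cut(
    days_last_review,
    bins=[0, 30, 180, 365, float("inf")],
    labels=["last month", "last 6 months", "last year", "over a year"],
)
print(pd.concat([days_last_review, categories.rename("category")], axis=1))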
Distributions* Check the distribution of features
df = pd.read_pickle("data.pkl") df.head() df["latitude"].hist(bins=250) df["longitude"].hist(bins=250) df["price"].hist(bins=250) df["minimum_nights"].hist(bins=250) df["number_of_reviews"].hist() df["reviews_per_month"].hist(bins=250) df["calculated_host_listings_count"].hist(bins=250) df["availability_365"].hist() df...
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Limits* We are analyzing mostly for tourism purposes, so keep the short-term rentals only* Prices between 10 and 10000 (the luxury Copacabana Palace Penthouse at 8000, for example)* Short-term rentals (minimum_nights < 31)* It is impossible to have more than 31 reviews per month
df = pd.read_pickle("data.pkl") total_records = len(df) outbound_values = (df["price"] < 10) | (df["price"] > 10000) df = df[~outbound_values] print(f"Removed values {outbound_values.sum()}, {outbound_values.sum()*100/total_records}%") long_term = df["minimum_nights"] >= 31 df = df[~long_term] print(f"Removed values ...
Removed values 2, 0.00019089597982611286%
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Log skewed variables* Most numerical values are skewed, so log them
df.describe()
# number_of_reviews, reviews_per_month, availability_365 have zeros, thus sum one to all
df["number_of_reviews"] = np.log(df["number_of_reviews"] + 1)
df["reviews_per_month"] = np.log(df["reviews_per_month"] + 1)
df["availability_365"] = np.log(df["availability_365"] + 1)
df["price"] = np.log(df["price"])...
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Extreme outliers* Most outliers are clearly mistyped values (one can check these rooms ids in their website)* Remove extreme outliers first from large deviations within the same `id` (eliminate rate jumps of same room)* Then remove those from same scraping `date`, `neighbourhood` and `room_type`
df = df.reset_index() q25 = df.groupby(["id"])["price"].quantile(0.25) q75 = df.groupby(["id"])["price"].quantile(0.75) ext = q75 + 3 * (q75 - q25) ext = ext[(q75 - q25) > 0.] affected_rows = [] multiple_id = df[df["id"].isin(ext.index)] for row in tqdm.tqdm(multiple_id.itertuples(), total=len(multiple_id)): if ro...
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
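The truncated cell above flags log-prices above q75 + 3·IQR within the listings that share an `id`, to catch mistyped rate jumps. A self-contained sketch of that rule on one toy group, keeping the notebook's threshold of 3 IQRs:

```python
import pandas as pd

# Log-prices of the same room id across scrape dates, with one mistyped jump.
prices = pd.Series([4.6, 4.7, 4.65, 4.7, 9.2], name="price")

q25, q75 = prices.quantile(0.25), prices.quantile(0.75)
upper_fence = q75 + 3 * (q75 - q25)

extreme = prices[prices > upper_fence]
print(f"upper fence: {upper_fence:.2f}")
print("flagged as extreme outliers:")
print(extreme)
```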
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/real-itu/modern-ai-course/blob/master/lecture-04/lab.ipynb) Lab 4 - Math Stats. You're given the following dataset:
import random

random.seed(0)
x = [random.gauss(0, 1)**2 for _ in range(20)]
print(x)
[0.8868279034128675, 1.9504304025306558, 0.46201173092655257, 0.13727289350107572, 1.0329650747173147, 0.00520129768651921, 0.032111381051647944, 0.6907259056240523, 1.713578821550704, 0.037592456206679545, 0.9865449735727571, 0.418585230265908, 0.11133432341026718, 2.7082355435792898, 0.3123577703699347, 0.26435707416...
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
> Compute the min, max, mean, median, standard deviation and variance of x
# Your code here import math # min mi = min(x) print("min: " + str(mi)) #max ma = max(x) print("max: " + str(ma)) #mean mean = sum(x)/len(x) print("mean: " + str(mean)) #median median = sorted(x)[int(len(x)/2)] print("median: " + str(median)) #stddv #variance lars = 0 for v in x: lars += ...
min: 0.00520129768651921 max: 5.779789763931721 mean: 1.2261550833868817 median: 0.6907259056240523 standard deviation: 1.4717408201314568 variance: 2.166021041641213
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
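The student cell above computes these statistics by hand and is cut off before the standard deviation. A compact reference sketch using the standard-library `statistics` module on the same `x`; note that `statistics.median` averages the two middle values of an even-length list, and `pstdev`/`pvariance` are the population (divide-by-n) versions, so the numbers may differ slightly from the hand-rolled ones printed above:

```python
import random
import statistics

random.seed(0)
x = [random.gauss(0, 1) ** 2 for _ in range(20)]

print("min:", min(x))
print("max:", max(x))
print("mean:", statistics.mean(x))
print("median:", statistics.median(x))
# pstdev/pvariance divide by n (population); stdev/variance divide by n-1 (sample).
print("stdev:", statistics.pstdev(x))
print("variance:", statistics.pvariance(x))
```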
Vectors. You're given the two 3-dimensional vectors a and b below.
a = [1, 3, 5] b = [2, 9, 13]
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
> Compute 1. $a + b$ 2. $2a-3b$ 3. $ab$ - the inner product
# Your code here
first = [ai + bi for ai, bi in zip(a, b)]
print(first)
second = [2 * ai - 3 * bi for ai, bi in zip(a, b)]
print(second)
third = sum(ai * bi for ai, bi in zip(a, b))
print(third)
[3, 12, 18] [-4, -21, -29] 94
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
Gradients Given the function $f(x,y) = 3x^2 + 6y$ > Compute the partial gradients $\frac{df}{dx}$ and $\frac{df}{dy}$ Your answer here $\frac{df}{dx} = 6x$ $\frac{df}{dy} = 6$ The function above corresponds to the following computational graph: [figure: sol (1).png — computational graph of $f(x,y)$; inline base64 image data omitted]
def parents_grads(node):
    """
    returns parents of node and the gradients of node w.r.t each parent
    e.g. in the example graph above parents_grads(f) would return: [(b, df/db), (c, df/dc)]
    """
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
> Complete the `backprop` method below to create a recursive algorithm such that calling `backward(node)` computes the gradient of `node` w.r.t. every (upstream - to the left) node in the computational graph. Every node has a `node.grad` attribute, its numerical gradient, which is initialized to `0.0`. The algorithm sho...
def backprop(node, df_dnode):
    node.grad += df_dnode
    # Your code here
    for parent, grad in parents_grads(node):
        # Chain rule: multiply the local gradient by the incoming gradient.
        backprop(parent, grad * df_dnode)

def backward(node):
    """
    Computes the gradient of every (upstream) node in the computational graph w.r.t. node.
    """
    backprop(node, 1.0)
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
Ok, now let's try to actually make it work! We'll define a class `Node` which contains the node value, gradient and parents and their gradients
from typing import Sequence, Tuple

class Node:
    def __init__(self, value: float, parents_grads: Sequence[Tuple['Node', float]]):
        self.value = value
        self.grad = 0.0
        self.parents_grads = parents_grads

    def __repr__(self):
        return "Node(value=%.4f, grad=%.4f)" % (self.value, self.grad)
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
So far no magic. We still haven't defined how we get the `parents_grads`, but we'll get there. Now move the `backprop` and `backward` functions into the class, and modify them so they work with the class.
# Your code here from typing import Sequence, Tuple class Node: def __init__(self, value: float, parents_grads: Sequence[Tuple['Node', float]]): self.value = value self.grad = 0.0 self.parents_grads = parents_grads def __repr__(self): return "Node(value=%.4f, gr...
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
Now let's create a simple graph: $y = x^2$, and compute it for $x=2$. We'll set the parent_grads directly based on our knowledge that $\frac{dx^2}{dx}=2x$
x = Node(2.0, []) y = Node(x.value**2, parents_grads=[(x, 2*x.value)])
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
And print the two nodes
print("x", x, "y", y)
x Node(value=2.0000, grad=0.0000) y Node(value=4.0000, grad=0.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
> Verify that the `y.backward()` call below computes the correct gradients
y.backward() print("x", x, "y", y)
x Node(value=2.0000, grad=4.0000) y Node(value=4.0000, grad=1.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
$\frac{dy}{dx}$ should be 4 and $\frac{dy}{dy}$ should be 1 Ok, so it seems to work, but it's not very easy to use, since you have to define all the `parents_grads` whenever you're creating new nodes. **Here's the trick.** We can make a function `square(node:Node)->Node` which can square any Node. See below
def square(node: Node) -> Node: return Node(node.value**2, [(node, 2*node.value)])
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
Let's verify that it works
x = Node(3.0, []) y = square(x) print("x", x, "y", y) y.backward() print("x", x, "y", y)
x Node(value=3.0000, grad=0.0000) y Node(value=9.0000, grad=0.0000) x Node(value=3.0000, grad=6.0000) y Node(value=9.0000, grad=1.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
Now we're getting somewhere. These calls to square can of course be chained
x = Node(3.0, []) y = square(x) z = square(y) print("x", x, "y", y, "z", z) z.backward() print("x", x, "y", y,"z", z)
x Node(value=3.0000, grad=0.0000) y Node(value=9.0000, grad=0.0000) z Node(value=81.0000, grad=0.0000) x Node(value=3.0000, grad=108.0000) y Node(value=9.0000, grad=18.0000) z Node(value=81.0000, grad=1.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
> Compute the $\frac{dz}{dx}$ gradient by hand and verify that it's correct Your answer here $\frac{dz}{dx} = \frac{dz}{dy} * \frac{dy}{dx} = 2y * 2x = 2 (x^2) * 2x = 2 * 3^2 * 2 * 3 = 108$ Similarly we can create functions like this for all the common operators, plus, minus, multiplication, etc. With enough base oper...
def plus(a: Node, b: Node) -> Node:
    """
    Computes a+b
    """
    # Your code here
    return Node(a.value + b.value, [(a, 1), (b, 1)])

x = Node(4.0, [])
y = Node(5.0, [])
z = plus(x, y)
print("x", x, "y", y, "z", z)
z.backward()
print("x", x, "y", y, "z", z)
x Node(value=4.0000, grad=0.0000) y Node(value=5.0000, grad=0.0000) z Node(value=9.0000, grad=0.0000) x Node(value=4.0000, grad=1.0000) y Node(value=5.0000, grad=1.0000) z Node(value=9.0000, grad=1.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
> Finish the multiply function below and verify that it works:
def multiply(a: Node, b: Node) -> Node:
    """
    Computes a*b
    """
    # Your code here
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

x = Node(4.0, [])
y = Node(5.0, [])
z = multiply(x, y)
print("x", x, "y", y, "z", z)
z.backward()
print("x", x, "y", y, "z", z)
x Node(value=4.0000, grad=0.0000) y Node(value=5.0000, grad=0.0000) z Node(value=20.0000, grad=0.0000) x Node(value=4.0000, grad=5.0000) y Node(value=5.0000, grad=4.0000) z Node(value=20.0000, grad=1.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
We'll stop here, but with just a few more functions we could compute a lot of common computations, and get their gradients automatically!This is super nice, but it's kind of annoying having to write `plus(a,b)`. Wouldn't it be nice if we could just write `a+b`? With python operator overloading we can! If we define the ...
# Your code here class Node: def __init__(self, value: float, parents_grads: Sequence[Tuple['Node', float]]): self.value = value self.grad = 0.0 self.parents_grads = parents_grads def __repr__(self): return "Node(value=%.4f, grad=%.4f)"%(self.value, self.grad) ...
a Node(value=2.0000, grad=0.0000) b Node(value=3.0000, grad=0.0000) c Node(value=4.0000, grad=0.0000) d Node(value=10.0000, grad=0.0000) a Node(value=2.0000, grad=3.0000) b Node(value=3.0000, grad=2.0000) c Node(value=4.0000, grad=1.0000) d Node(value=10.0000, grad=1.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
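The truncated cell above adds operator overloading so that `a + b` and `a * b` build graph nodes directly. The exact code is not shown, so the sketch below is an assumption-based reconstruction that reuses the `plus`/`multiply` gradient rules from the earlier cells; building `d = a * b + c` with a=2, b=3, c=4 reproduces the values and gradients printed in the output above:

```python
from typing import Sequence, Tuple

class Node:
    def __init__(self, value: float, parents_grads: Sequence[Tuple['Node', float]] = ()):
        self.value = value
        self.grad = 0.0
        self.parents_grads = parents_grads

    def __repr__(self):
        return "Node(value=%.4f, grad=%.4f)" % (self.value, self.grad)

    def backprop(self, df_dnode):
        self.grad += df_dnode
        for parent, grad in self.parents_grads:
            parent.backprop(grad * df_dnode)

    def backward(self):
        self.backprop(1.0)

    # Operator overloads: a + b and a * b now build graph nodes directly.
    def __add__(self, other: 'Node') -> 'Node':
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other: 'Node') -> 'Node':
        return Node(self.value * other.value, [(self, other.value), (other, self.value)])

a, b, c = Node(2.0), Node(3.0), Node(4.0)
d = a * b + c          # same expression as in the printed output above
d.backward()
print("a", a, "b", b, "c", c, "d", d)  # expect grads 3, 2, 1, 1
```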
To Do: Download a dataset from Domain. Convert all string columns to unique integers ---> could use hashes
domain_node = sy.login(email="info@openmined.org", password="changethis", port=8081) domain_node.store.pandas import pandas as pd canada = pd.read_csv("../../trade_demo/datasets/ca - feb 2021.csv") canada.head() import hashlib hashlib.algorithms_available test_string = "February 2021" hashlib.md5(test_string.encode("ut...
_____no_output_____
Apache-2.0
notebooks/Experimental/Ishan/ADP Demo/Old Versions/DataFrame to NumPy.ipynb
Noob-can-Compile/PySyft
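The to-do above suggests converting string columns to unique integers via hashes, and the truncated cell experiments with `hashlib.md5` on a single string. A sketch of applying that idea to a whole pandas column; the `Period` column name and 12-digit truncation are illustrative choices, not taken from the trade dataset:

```python
import hashlib
import pandas as pd

def string_to_int(value: str, digits: int = 12) -> int:
    """Deterministically map a string to an integer via its MD5 digest."""
    digest = hashlib.md5(value.encode("utf-8")).hexdigest()
    return int(digest, 16) % (10 ** digits)

canada = pd.DataFrame({"Period": ["February 2021", "February 2021", "January 2021"]})
canada["Period_id"] = canada["Period"].apply(string_to_int)
print(canada)
```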
```{note}This feature requires MPI, and may not be able to be run on Colab.``` Distributed Variables. At times when you need to perform a computation using large input arrays, you may want to perform that computation in multiple processes, where each process operates on some subset of the input values. This may be done ...
%%px import numpy as np import openmdao.api as om class SimpleDistrib(om.ExplicitComponent): def setup(self): # Distributed Input self.add_input('in_dist', shape_by_conn=True, distributed=True) # Distributed Output self.add_output('out_dist', copy_shape='in_dist', distributed...
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
In the next part of the example, we take the `SimpleDistrib` component, place it into a model, and run it. Suppose the vector of data we want to process has 7 elements. We have 4 processors available for computation, so if we distribute them as evenly as we can, 3 procs can handle 2 elements each, and the 4th processor...
%%px from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI size = 7 if MPI: comm = MPI.COMM_WORLD rank = comm.rank sizes, offsets = evenly_distrib_idxs(comm.size, size) else: # When running in serial, the entire variable is on rank 0. rank = 0 sizes = {...
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
Note that we created a connection source 'x_dist' that passes its value to 'D1.in_dist'. OpenMDAO requires a source for non-constant inputs, and usually creates one automatically as an output of a component referred to as an 'Auto-IVC'. However, the automatic creation is not supported for distributed variables. We mu...
%%px import numpy as np import openmdao.api as om from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI class SimpleDistrib(om.ExplicitComponent): def initialize(self): self.options.declare('vec_size', types=int, default=1, desc="Total...
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
Example: Distributed I/O and a Serial Input. OpenMDAO supports both serial and distributed I/O on the same component, so in this example, we expand the problem to include a serial input. In this case, the serial input also has a vector width of 7, but those values will be the same on each processor. This serial input is...
%%px import numpy as np import openmdao.api as om from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI class MixedDistrib1(om.ExplicitComponent): def setup(self): # Distributed Input self.add_input('in_dist', shape_by_conn=True, distributed=True) ...
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
Example: Distributed I/O and a Serial Output. You can also create a component with a serial output and distributed outputs and inputs. This situation tends to be more tricky and usually requires you to perform some MPI operations in your component's `run` method. If the serial output is only a function of the serial inp...
%%px import numpy as np import openmdao.api as om from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI class MixedDistrib2(om.ExplicitComponent): def setup(self): # Distributed Input self.add_input('in_dist', shape_by_conn=True, distributed=True) ...
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
```{note}In this example, we introduce a new component called an [IndepVarComp](indepvarcomp.ipynb). If you used OpenMDAO prior to version 3.2, then you are familiar with this component. It is used to define an independent variable.You usually do not have to define these because OpenMDAO defines and uses them automatic...
%%px import numpy as np import openmdao.api as om from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI class MixedDistrib1(om.ExplicitComponent): def setup(self): # Distributed Input self.add_input('in_dist', shape_by_conn=True, distributed=True) ...
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
Derivatives: Distributed I/O and a Serial Output. If you have a component with distributed inputs and a serial output, then the standard `compute_partials` API will not work for specifying the derivatives. You will need to use the matrix-free API with `compute_jacvec_product`, which is described in the feature document ...
%%px import numpy as np import openmdao.api as om from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI class MixedDistrib2(om.ExplicitComponent): def setup(self): # Distributed Input self.add_input('in_dist', shape_by_conn=True, distributed=True) ...
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
Lambda School Data Science*Unit 2, Sprint 1, Module 4*--- Logistic Regression- do train/validate/test split- begin with baselines for classification- express and explain the intuition and interpretation of Logistic Regression- use sklearn.linear_model.LogisticRegression to fit and interpret Logistic Regression modelsLo...
%%capture
import sys

# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
    !pip install category_encoders==2.*

# If you're working locally:
else:
    DATA_PATH = '../data/'
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
Do train/validate/test split Overview Predict Titanic survival 🚢Kaggle is a platform for machine learning competitions. [Kaggle has used the Titanic dataset](https://www.kaggle.com/c/titanic/data) for their most popular "getting started" competition. Kaggle splits the data into train and test sets for participants....
import pandas as pd train = pd.read_csv(DATA_PATH+'titanic/train.csv') test = pd.read_csv(DATA_PATH+'titanic/test.csv')
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
Notice that the train set has one more column than the test set:
train.shape, test.shape
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
Which column is in train but not test? The target!
set(train.columns) - set(test.columns)
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
Why doesn't Kaggle give you the target for the test set? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> One great thing about Kaggle competitions is that they force you to think about validation sets more rigorously (in order to do well). For those who ...
from sklearn.model_selection import train_test_split train.shape, test.shape train, val = train_test_split(train, random_state=28) train.shape, val.shape, test.shape
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
Challenge For your assignment, you'll do a 3-way train/validate/test split.Then next sprint, you'll begin to participate in a private Kaggle challenge, just for your cohort! You will be provided with data split into 2 sets: training and test. You will create your own training and validation sets, by splitting the Kagg...
target = 'Survived' y_train = train[target] y_train.value_counts()
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
What if we guessed the majority class for every prediction?
y_pred = y_train.apply(lambda x : 0)
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
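Guessing the majority class for every passenger gives the baseline accuracy any real model must beat. A minimal sketch of scoring that guess with scikit-learn's `accuracy_score`, using a toy stand-in for the `y_train` built above:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Toy stand-in for the y_train built above (0 = did not survive, 1 = survived).
y_train = pd.Series([0, 0, 0, 1, 0, 1, 0, 0, 1, 0], name="Survived")

# Majority-class guess: predict 0 for every passenger.
y_pred = pd.Series(0, index=y_train.index)

print("baseline accuracy:", accuracy_score(y_train, y_pred))
# Equivalent shortcut: the relative frequency of the majority class.
print("majority share:", y_train.value_counts(normalize=True).max())
```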