Similar Documents
sim = np.matmul(W, np.transpose(W)) print(sim.shape) def similar_docs(filename, sim, topn): doc_id = int(filename.split(".")[0]) row = sim[doc_id, :] target_docs = np.argsort(-row)[0:topn].tolist() scores = row[target_docs].tolist() target_filenames = ["{:d}.txt".format(x) for x in target_docs] ...
Source: Forward-backward retraining of recurrent neural networks --- top 10 similar docs --- (0.05010) Context-Dependent Multiple Distribution Phonetic Modeling with MLPs (0.04715) Is Learning The n-th Thing Any Easier Than Learning The First? (0.04123) Learning Statistically Neutral Tasks without Expert Guidance (0.04...
Apache-2.0
notebooks/19-content-recommender.ipynb
sujitpal/content-engineering-tutorial
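The truncated cell above builds a document-document similarity matrix from a document-feature matrix `W` and ranks neighbors by `argsort`. A minimal self-contained sketch of the same idea, using a small hypothetical `W` (the real notebook derives `W` from topic-model output):

```python
import numpy as np

def similar_docs(doc_id, sim, topn):
    """Return (doc_id, score) pairs for the topn most similar documents."""
    row = sim[doc_id, :]
    target_docs = np.argsort(-row)[:topn].tolist()  # descending by similarity
    return [(d, float(row[d])) for d in target_docs]

# Hypothetical document-feature matrix: 4 docs, 3 latent features
W = np.array([[1.0, 0.0, 0.0],
              [0.9, 0.1, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.1, 0.9]])
sim = np.matmul(W, np.transpose(W))  # dot-product similarity, shape (4, 4)

print(similar_docs(0, sim, topn=2))  # [(0, 1.0), (1, 0.9)]
```

As in the notebook, a document is always most similar to itself, so real top-n lists usually skip the first hit.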
Suggesting Documents based on Read Collection

We consider an arbitrary set of documents that we know a user has read, liked, or marked somehow, and we want to recommend other documents that they may like. To do this, we compute the average feature among these documents (starting from the sparse features) and convert it to a...
collection_size = np.random.randint(3, high=10, size=1)[0] collection_ids = np.random.randint(0, high=num_docs, size=collection_size) feat_vec = np.zeros((1, 11992)) for collection_id in collection_ids: feat_vec += X[collection_id, :] feat_vec /= collection_size y = model.transform(feat_vec) doc_sims = np.matmul...
violation: 1.0 violation: 0.23129634545431624 violation: 0.03209572604136983 violation: 0.007400997221153011 violation: 0.0012999049199094925 violation: 0.0001959522250959198 violation: 4.179248920879007e-05 Converged at iteration 7 --- Source collection --- A Generic Approach for Identification of Event Related Brain ...
Apache-2.0
notebooks/19-content-recommender.ipynb
sujitpal/content-engineering-tutorial
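The recipe above, averaging the feature vectors of a read collection and scoring every document against that profile, can be sketched end to end with a small hypothetical matrix (sizes and seed are illustrative, not the notebook's 11992-dimensional features):

```python
import numpy as np

rng = np.random.default_rng(42)
num_docs, num_feats = 20, 8
X = rng.random((num_docs, num_feats))        # hypothetical doc-feature matrix

# Documents the user has read; note high=num_docs keeps indices in range
collection_ids = rng.integers(0, num_docs, size=4)
feat_vec = X[collection_ids].mean(axis=0, keepdims=True)  # average profile

doc_sims = np.matmul(feat_vec, X.T).ravel()  # profile vs. every document
recommended = np.argsort(-doc_sims)[:5]      # top-5 by similarity
print(recommended)
```

In practice the documents already in the collection would be filtered out of `recommended` before display.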
Welcome to VapourSynth in Colab!

Basic usage instructions: run the setup script, and run all the tabs in the "processing" script for example output.

For links to instructions, tutorials, and help, see https://github.com/AlphaAtlas/VapourSynthColab

Init
#@title Check GPU #@markdown Run this to connect to a Colab Instance, and see what GPU Google gave you. gpu = !nvidia-smi --query-gpu=gpu_name --format=csv print(gpu[1]) print("The Tesla T4 and P100 are fast and support hardware encoding. The K80 and P4 are slower.") print("Sometimes resetting the instance in the 'run...
_____no_output_____
MIT
VapourSynthColab.ipynb
03stevensmi/VapourSynthColab
Processing
%%writefile /content/autogenerated.vpy #This is the Vapoursynth Script! #Running this cell will write the code in this cell to disk, for VSPipe to read. #Later cells will check to see if it executes. #Edit it just like a regular python VS script. #Search for functions and function reference in http://vsdb.top/, or bro...
_____no_output_____
MIT
VapourSynthColab.ipynb
03stevensmi/VapourSynthColab
Scratch Space

---
#Do stuff here #Example ffmpeg script: !vspipe -y /content/autogenerated.vpy - | ffmpeg -i pipe: -c:v hevc_nvenc -profile:v main10 -preset lossless -spatial_aq:v 1 -aq-strength 15 "/gdrive/MyDrive/upscale.mkv" #TODO: Figure out why vspipe's progress isn't showing up in colab.
_____no_output_____
MIT
VapourSynthColab.ipynb
03stevensmi/VapourSynthColab
Extra Functions
#@title Build ImageMagick and VapourSynth for Colab #@markdown VapourSynth needs to be built for Python 3.6, and ImageMagick needs to be built for the VapourSynth imwri plugin. The setup script pulls from bintray, but this cell will rebuild and reinstall them if those debs don't work. #@markdown The built debs can be f...
_____no_output_____
MIT
VapourSynthColab.ipynb
03stevensmi/VapourSynthColab
Fourth-Order Runge-Kutta Method

Class: F1014B Modelación Computacional de Sistemas Electromagnéticos
Author: Edoardo Bucheli, Lecturer, Tec de Monterrey Campus Santa Fe

Introduction

In this session we will learn a numerical method for solving initial value problems of the following form, $$\frac{dy}...
import numpy as np import matplotlib.pyplot as plt
_____no_output_____
MIT
Metodo de Runge-Kutta_Actividad.ipynb
ebucheli/F1014B
To solve the problem, let's attempt an implementation similar to what a more experienced programmer would write. We will therefore split the solution into two functions. One will compute a single step of Euler's method, and the second will use that function to repeat the process as many times as ne...
def euler_step(x_n,y_n,f,h): """ Compute one step of Euler's method Inputs: x_n: int, float The initial value of x for this step y_n: int, float The initial value of y for this step f: function A function representing f(x,y) in the initial value problem. h: int, fl...
_____no_output_____
MIT
Metodo de Runge-Kutta_Actividad.ipynb
ebucheli/F1014B
Let's test your function on the following problem,$$\frac{dy}{dx} = x + \frac{1}{5}y \qquad y(0) = -3 $$Using $h = 1$
x_0 = 0 y_0 = -3 def f(x,y): return x + (1/5)*y h = 1 print(euler_step(x_0,y_0,f,h))
_____no_output_____
MIT
Metodo de Runge-Kutta_Actividad.ipynb
ebucheli/F1014B
Your result should be `(1, -3.6)`. Now that we have a function that computes one step of the method, let's implement the function `euler_method()` that uses `euler_step()` to generate a list of values up to a new variable `x_goal`
def euler_method(x_0,y_0,x_goal,f,h): """ Returns a list of Euler-method approximations of y up to a given value x_goal Inputs: x_n: int, float The initial value of x y_n: int, float The initial value of y x_goal: int, float The value up to which we want to ca...
_____no_output_____
MIT
Metodo de Runge-Kutta_Actividad.ipynb
ebucheli/F1014B
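The two truncated cells above can be completed into a short runnable sketch. With the notebook's test problem $f(x,y) = x + \tfrac{1}{5}y$, $y(0) = -3$, $h = 1$, it reproduces the results quoted in this section:

```python
def euler_step(x_n, y_n, f, h):
    """One Euler step: advance (x_n, y_n) by step size h along slope f(x_n, y_n)."""
    return x_n + h, y_n + h * f(x_n, y_n)

def euler_method(x_0, y_0, x_goal, f, h):
    """Repeat euler_step until x reaches x_goal; return the lists of x and y values."""
    x_list, y_list = [x_0], [y_0]
    while x_list[-1] < x_goal:
        x_n, y_n = euler_step(x_list[-1], y_list[-1], f, h)
        x_list.append(x_n)
        y_list.append(y_n)
    return x_list, y_list

f = lambda x, y: x + (1/5)*y
print(euler_step(0, -3, f, 1))        # (1, -3.6)
print(euler_method(0, -3, 5, f, 1))   # ([0, 1, 2, 3, 4, 5], [-3, -3.6, ...])
```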
The output of the previous cell should be: `([0, 1, 2, 3, 4, 5], [-3, -3.6, -3.3200000000000003, -1.9840000000000004, 0.6191999999999993, 4.743039999999999])`

An Improved Euler Method

A simple way to improve Euler's method is to obtain more than one slope and take the average of the slopes to m...
def runge_kutta_step(x_n,y_n,f,h): """ Compute one iteration of the fourth-order Runge-Kutta method Inputs: x_n: int, float The initial value of x for this step y_n: int, float The initial value of y for this step f: function A function representing f(x,y) in the problem...
_____no_output_____
MIT
Metodo de Runge-Kutta_Actividad.ipynb
ebucheli/F1014B
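The truncated cell above implements one step of the classical fourth-order Runge-Kutta scheme. A complete sketch of that step follows; the test problem ($f(x,y) = x + y$, $y(0) = 1$, $h = 0.5$) is an assumption chosen here because it reproduces the $y_1 = 1.796875$ quoted in this section:

```python
def runge_kutta_step(x_n, y_n, f, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x_n, y_n)
    k2 = f(x_n + h/2, y_n + (h/2)*k1)
    k3 = f(x_n + h/2, y_n + (h/2)*k2)
    k4 = f(x_n + h, y_n + h*k3)
    # Weighted average of the four slopes
    return x_n + h, y_n + h*(k1 + 2*k2 + 2*k3 + k4)/6

# Hypothetical test problem: dy/dx = x + y, y(0) = 1, step h = 0.5
f = lambda x, y: x + y
x_1, y_1 = runge_kutta_step(0, 1, f, 0.5)
print(x_1, y_1)  # 0.5 1.796875
```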
The result above should be:
```
x_1 = 0.5
y_1 = 1.796875
```
To finish, let's implement the function `runge_kutta()`
def runge_kutta(x_0,y_0,x_goal,f,h): """ Returns a list of Runge-Kutta approximations of y up to a given value x_goal Inputs: x_n: int, float The initial value of x y_n: int, float The initial value of y x_goal: int, float The value up to which we want...
_____no_output_____
MIT
Metodo de Runge-Kutta_Actividad.ipynb
ebucheli/F1014B
Let's use a library called `prettytable` to print our result. If you don't have it installed, the following line will raise an error.
from prettytable import PrettyTable
_____no_output_____
MIT
Metodo de Runge-Kutta_Actividad.ipynb
ebucheli/F1014B
If you don't have the library, you can install it by running the following command in a cell, `!pip install PrettyTable`, or simply run `pip install PrettyTable` in a terminal window (mac and linux) or the Anaconda prompt (windows).
mytable = PrettyTable() for x,y in zip(x_list,y_list): mytable.add_row(["{:0.2f}".format(x),"{:0.6f}".format(y)]) print(mytable)
_____no_output_____
MIT
Metodo de Runge-Kutta_Actividad.ipynb
ebucheli/F1014B
This notebook is for prototyping data preparation for insertion into the database.

Data for the installer table. Need:
- installer name
- installer primary module manufacturer (e.g. mode of manufacturer name for all installers)
import pandas as pd import numpy as np def load_lbnl_data(replace_nans=True): df1 = pd.read_csv('../data/TTS_LBNL_public_file_10-Dec-2019_p1.csv', encoding='latin-1', low_memory=False) df2 = pd.read_csv('../data/TTS_LBNL_public_file_10-Dec-2019_p2.csv', encoding='latin-1', low_memory=False) lbnl_df = pd.con...
_____no_output_____
Apache-2.0
notebooks/Prototype_data_prep.ipynb
nateGeorge/udacity_dend_capstone
Replace missing values with 0 so they don't screw up the average calculation.
lbnl_zip_data.replace(-9999, 0, inplace=True) lbnl_zip_groups = lbnl_zip_data.groupby('Zip Code').mean() lbnl_zip_groups.head() lbnl_zip_groups.info()
<class 'pandas.core.frame.DataFrame'> Index: 36744 entries, 85351 to 99403 Data columns (total 2 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Battery System 36744 non-null float64 1 Feed-in Tariff (Annua...
Apache-2.0
notebooks/Prototype_data_prep.ipynb
nateGeorge/udacity_dend_capstone
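The sentinel replacement above interacts directly with the groupby mean; a toy sketch (hypothetical zip codes and values) shows what replacing `-9999` with 0 does, and the alternative of mapping the sentinel to `NaN`, which `mean()` would skip instead of averaging in:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'Zip Code': ['85351', '85351', '99403'],
    'Battery System': [1.0, -9999, 3.0],   # -9999 marks a missing value
})

# Sentinel as 0, as in the cell above: it still counts toward the mean
as_zero = toy.replace(-9999, 0).groupby('Zip Code').mean()
print(as_zero.loc['85351', 'Battery System'])  # 0.5

# Alternative: sentinel as NaN, so groupby().mean() ignores it entirely
as_nan = toy.replace(-9999, np.nan).groupby('Zip Code').mean()
print(as_nan.loc['85351', 'Battery System'])   # 1.0
```

Which behavior is right depends on whether `-9999` means "none installed" (0 is correct) or "unknown" (NaN is correct).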
Drop missing zip codes.
lbnl_zip_groups = lbnl_zip_groups[~(lbnl_zip_groups.index == '-9999')] lbnl_zip_groups.reset_index(inplace=True) lbnl_zip_groups.head()
_____no_output_____
Apache-2.0
notebooks/Prototype_data_prep.ipynb
nateGeorge/udacity_dend_capstone
Data for the Utility table. Need:
- zipcode
- utility name
- ownership
- service type

Join EIA-861 report data with EIA IOU rates by zipcode
eia861_df = pd.read_excel('../data/Sales_Ult_Cust_2018.xlsx', header=[0, 1, 2]) def load_eia_iou_data(): iou_df = pd.read_csv('../data/iouzipcodes2017.csv') noniou_df = pd.read_csv('../data/noniouzipcodes2017.csv') eia_zipcode_df = pd.concat([iou_df, noniou_df], axis=0) # zip codes are ints without...
_____no_output_____
Apache-2.0
notebooks/Prototype_data_prep.ipynb
nateGeorge/udacity_dend_capstone
Missing data seems to be encoded as a period.
res_data.replace('.', np.nan, inplace=True) for c in res_data.columns: print(c) res_data[c] = res_data[c].astype('float') res_data['average_yearly_bill'] = res_data['Revenues', 'Thousand Dollars'] * 1000 / res_data['Customers', 'Count'] res_data.head() res_data['average_yearly_kwh'] = (res_data['Sales', 'Megawa...
_____no_output_____
Apache-2.0
notebooks/Prototype_data_prep.ipynb
nateGeorge/udacity_dend_capstone
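The loop above replaces the `'.'` placeholder and casts each column one at a time; `pd.to_numeric` with `errors='coerce'` does both in one call. A toy sketch with hypothetical values:

```python
import pandas as pd

revenues = pd.Series(['1200', '.', '850'])           # '.' marks missing data
revenues = pd.to_numeric(revenues, errors='coerce')  # '.' becomes NaN, rest float
print(revenues.tolist())  # [1200.0, nan, 850.0]
```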
Get average bill and kWh used by zip code.
res_columns = ['average_yearly_bill', 'average_yearly_kwh'] res_data.columns = res_data.columns.droplevel(1) res_data[res_columns].head() eia_861_data = pd.concat([res_data[res_columns], eia_utility_data], axis=1) eia_861_data.head() eia_861_data_zipcode = eia_861_data.merge(eia_zip_df, left_on='Utility Number', right_...
_____no_output_____
Apache-2.0
notebooks/Prototype_data_prep.ipynb
nateGeorge/udacity_dend_capstone
Double-check res_rate
eia_861_data_zipcode['res_rate_recalc'] = eia_861_data_zipcode['average_yearly_bill'] / eia_861_data_zipcode['average_yearly_kwh'] eia_861_data_zipcode.head() eia_861_data_zipcode.drop_duplicates(inplace=True) eia_861_data_zipcode.tail() eia_861_data_zipcode.info()
<class 'pandas.core.frame.DataFrame'> Int64Index: 152322 entries, 0 to 159485 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 average_yearly_bill 143449 non-null float64 1 average_yearly_kwh 143449 non-null float64 2 ...
Apache-2.0
notebooks/Prototype_data_prep.ipynb
nateGeorge/udacity_dend_capstone
Join Project Sunroof, ACS, EIA, and LBNL data to get the main table. Try to save all of the required data from BigQuery.
# Set up GCP API from google.cloud import bigquery # Construct a BigQuery client object. client = bigquery.Client() # ACS US census data ACS_DB = '`bigquery-public-data`.census_bureau_acs' ACS_TABLE = 'zip_codes_2017_5yr' # project sunroof PSR_DB = '`bigquery-public-data`.sunroof_solar' PSR_TABLE = 'solar_potential_by...
_____no_output_____
Apache-2.0
notebooks/Prototype_data_prep.ipynb
nateGeorge/udacity_dend_capstone
Project sunroof data
psr_cols = ['region_name', 'percent_covered', 'percent_qualified', 'number_of_panels_total', 'kw_median', 'count_qualified', 'existing_installs_count'] psr_query = f"""SELECT region_name, percent_covered, percent...
_____no_output_____
Apache-2.0
notebooks/Prototype_data_prep.ipynb
nateGeorge/udacity_dend_capstone
Join data for main data table
psr_acs = psr_df.merge(acs_data, left_on='region_name', right_on='geo_id', how='outer') psr_acs.head() psr_acs_lbnl = psr_acs.merge(lbnl_zip_groups, left_on='region_name', right_on='Zip Code', how='outer') psr_acs_lbnl_eia = psr_acs_lbnl.merge(eia_861_data_zipcode, left_on='region_name', right_on='zip', how='outer') ps...
<class 'pandas.core.frame.DataFrame'> Int64Index: 206079 entries, 0 to 206078 Data columns (total 34 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 region_name 42105 non-null object 1 percent_covered...
Apache-2.0
notebooks/Prototype_data_prep.ipynb
nateGeorge/udacity_dend_capstone
Looks like we have a lot of missing data. Combine the zip code columns to have one zip column with no missing data.
def fill_zips(x): if not pd.isna(x['zip']): return x['zip'] elif not pd.isna(x['Zip Code']): return x['Zip Code'] elif not pd.isna(x['geo_id']): return x['geo_id'] elif not pd.isna(x['region_name']): return x['region_name'] else: return np.nan psr_acs_lbnl_eia...
<class 'pandas.core.frame.DataFrame'> Int64Index: 206079 entries, 0 to 206078 Data columns (total 16 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 full_zip 206079 non-null object 1 percent_qualifi...
Apache-2.0
notebooks/Prototype_data_prep.ipynb
nateGeorge/udacity_dend_capstone
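The chained `fill_zips` helper above can also be written with `combine_first`, which takes the first non-null value across columns in priority order. A toy sketch with hypothetical zip values (column names follow the cell above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'zip':         ['12345', np.nan, np.nan],
    'Zip Code':    [np.nan, '23456', np.nan],
    'geo_id':      [np.nan, np.nan, '34567'],
    'region_name': [np.nan, np.nan, np.nan],
})

# First non-null value per row, in the same priority order as fill_zips
df['full_zip'] = (df['zip']
                  .combine_first(df['Zip Code'])
                  .combine_first(df['geo_id'])
                  .combine_first(df['region_name']))
print(df['full_zip'].tolist())  # ['12345', '23456', '34567']
```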
That's a lot of missing data. Something is wrong though, since there should only be ~41k zip codes, and this is showing 206k.
df_to_write.to_csv('../data/solar_metrics_data.csv', index=False) import pandas as pd df = pd.read_csv('../data/solar_metrics_data.csv') df.drop_duplicates().shape df['full_zip'].drop_duplicates().shape df['full_zip'].head()
_____no_output_____
Apache-2.0
notebooks/Prototype_data_prep.ipynb
nateGeorge/udacity_dend_capstone
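The jump from ~41k zip codes to 206k rows is the classic symptom of merging on keys that are duplicated on both sides: every left-side match pairs with every right-side match. A toy demonstration:

```python
import pandas as pd

left = pd.DataFrame({'zip': ['11111', '11111'], 'a': [1, 2]})
right = pd.DataFrame({'zip': ['11111', '11111'], 'b': [3, 4]})

# 2 left rows x 2 matching right rows -> 4 merged rows
merged = left.merge(right, on='zip', how='outer')
print(len(merged))  # 4
```

Deduplicating each side on the join key before merging (or aggregating to one row per zip) keeps the row count bounded by the number of distinct keys.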
Loading the data
!ls ../input_data/ train = pd.read_csv('../input_data/train.csv') test = pd.read_csv('../input_data/test.csv') train.head(10)
_____no_output_____
MIT
titanic/nb/0 - Getting to know the data.ipynb
mapa17/Kaggle
Features

From the competition documentation

|Feature | Explanation | Values |
|--------|-------------|--------|
|survival| Survival |0 = No, 1 = Yes|
|pclass| Ticket class |1 = 1st, 2 = 2nd, 3 = 3rd|
|sex| Sex| male/female|
|Age| Age| in years |
|sibsp| # of siblings / spouses aboard the Titanic| numeric|
|parch| # of parents /...
print('DataSet Size') train.shape print("Number of missing values") pd.DataFrame(train.isna().sum(axis=0)).T train['Pclass'].hist(grid=False) train.describe() print('Number of missing Cabin strings per class') train[['Pclass', 'Cabin']].groupby('Pclass').agg(lambda x: x.isna().sum()) print('Number of missing Age per ...
_____no_output_____
MIT
titanic/nb/0 - Getting to know the data.ipynb
mapa17/Kaggle
Correlation Interpretation
* Pclass: a higher pclass (worse class) decreases the chance of survival significantly (the rich first)
* Age: higher age decreases survival slightly (the children first)
* SibSp: more siblings has a slight negative effect on survival (bigger families have it more difficult?)
* Parch: hav...
fig, ax = plt.subplots() classes = [] for pclass, df in train[['Pclass', 'Fare']].groupby('Pclass', as_index=False): df['Fare'].plot(kind='kde', ax=ax) classes.append(pclass) ax.legend(classes) ax.set_xlim(-10, 200) fig, ax = plt.subplots() classes = [] for pclass, df in train[['Pclass', 'Age']].groupby('Pclass...
_____no_output_____
MIT
titanic/nb/0 - Getting to know the data.ipynb
mapa17/Kaggle
Skewness

Skewness is the degree of distortion from the normal distribution. It measures the lack of symmetry in a data distribution.

- Positive skewness means the tail on the right side of the distribution is longer or fatter. The mean and median will be greater than the mode.
- Negative skewness is when the tail of t...
height_weight_data['Height'].skew() height_weight_data['Weight'].skew() listOfSeries = [pd.Series(['Male', 400, 300], index=height_weight_data.columns ), pd.Series(['Female', 660, 370], index=height_weight_data.columns ), pd.Series(['Female', 199, 410], index=height_weight_data.columns...
_____no_output_____
MIT
book/_build/html/_sources/descriptive/m3-demo-05-SkewnessAndKurtosisUsingPandas.ipynb
hossainlab/statswithpy
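The sign convention above is easy to check with pandas' `skew()` on toy data (hypothetical values, not the height/weight dataset used in this notebook):

```python
import pandas as pd

symmetric = pd.Series([1, 2, 3, 4, 5])
right_tailed = pd.Series([1, 2, 3, 4, 50])  # long tail on the right

print(symmetric.skew())     # 0.0 for perfectly symmetric data
print(right_tailed.skew())  # positive: the tail pulls the mean above the median
```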
Kurtosis

Kurtosis is a measure of the outliers present in the distribution.

- High kurtosis in a data set is an indicator that the data has heavy tails or outliers.
- Low kurtosis in a data set is an indicator that the data has light tails or a lack of outliers.
height_weight_data['Height'].kurtosis() height_weight_data['Weight'].kurtosis() height_weight_updated['Height'].kurtosis() height_weight_updated['Weight'].kurtosis()
_____no_output_____
MIT
book/_build/html/_sources/descriptive/m3-demo-05-SkewnessAndKurtosisUsingPandas.ipynb
hossainlab/statswithpy
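The same heavy-tails-versus-light-tails contrast can be demonstrated on toy data; pandas' `kurtosis()` reports excess kurtosis, so a normal distribution scores 0 (values here are hypothetical):

```python
import pandas as pd

light_tails = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])   # no outliers
heavy_tails = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 100])  # one extreme outlier

print(light_tails.kurtosis())  # negative: flatter than a normal distribution
print(heavy_tails.kurtosis())  # large and positive: the outlier dominates
```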
Imports
# ! pip install pandas # ! pip install calender # ! pip install numpy # ! pip install datetime # ! pip install matplotlib # ! pip install collections # ! pip install random # ! pip install tqdm # ! pip install sklearn # ! pip install lightgbm # ! pip install xgboost import pandas as pd import calendar from datetime imp...
_____no_output_____
MIT
Training and Output.ipynb
iamshamikb/Walmart_M5_Accuracy
Load data
def get_csv(X): return pd.read_csv(path+X) path = '' calender, sales_train_ev, sales_train_val, sell_prices = get_csv('calendar.csv'), get_csv('sales_train_evaluation.csv'), \ get_csv('sales_train_validation.csv'), get_csv('sell_prices.csv') non_numeric_col_l...
_____no_output_____
MIT
Training and Output.ipynb
iamshamikb/Walmart_M5_Accuracy
Train Function
def startegy7dot1(new_df, dept): print('Using strategy ', strategy) evaluation, validation = new_df.id.iloc[0].find('evaluation'), new_df.id.iloc[0].find('validation') new_df = new_df[new_df.dept_id == dept] print('Total rows: ', len(new_df)) rows_per_day = len(new_df[new_df.d == 'd_1']) ...
_____no_output_____
MIT
Training and Output.ipynb
iamshamikb/Walmart_M5_Accuracy
Run
strategy = 7.1 ############## Eval data %%time df = sales_train_ev.copy() empty_list = [0]*30490 for i in range(1942, 1970): df['d_'+str(i)] = empty_list df = feature_engineer(df) %%time main_out_df_ev = get_output_of_eval_or_val(df) main_out_df_ev.to_csv('main_out_ev.csv', index=False) ############# Val Data %%tim...
_____no_output_____
MIT
Training and Output.ipynb
iamshamikb/Walmart_M5_Accuracy
Get text
with open('data/one_txt/blogger.txt') as f: blogger = f.read() with open('data/one_txt/wordpress.txt') as f: wordpress = f.read() txt = wordpress + blogger
_____no_output_____
BSD-3-Clause
analyze_vocab.ipynb
quentin-auge/blogger
Explore vocabulary
vocab_count = dict(Counter(txt)) vocab_freq = {char: count / len(txt) for char, count in vocab_count.items()} sorted(zip(vocab_count.keys(), vocab_count.values(), vocab_freq.values()), key=operator.itemgetter(1)) full_vocab = sorted(vocab_count.keys(), key=vocab_count.get, reverse=True) full_vocab = ''.join(full_vocab)...
_____no_output_____
BSD-3-Clause
analyze_vocab.ipynb
quentin-auge/blogger
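The vocabulary exploration above (character counts, frequencies, and a count-sorted listing) can be exercised on a tiny stand-in string instead of the full blog corpus:

```python
from collections import Counter
import operator

txt = "hello world"  # stand-in for the real corpus
vocab_count = dict(Counter(txt))
vocab_freq = {char: count / len(txt) for char, count in vocab_count.items()}

# Characters ordered by ascending count, as in the cell above
by_count = sorted(zip(vocab_count.keys(), vocab_count.values(), vocab_freq.values()),
                  key=operator.itemgetter(1))
print(by_count[-1])  # ('l', 3, 0.2727...): the most frequent character comes last
```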
Normalize some of the text characters
def normalize_txt(txt): # Non-breaking spaces -> regular spaces txt = txt.replace('\xa0', ' ') # Double quotes double_quotes_chars = '“”»«' for double_quotes_char in double_quotes_chars: txt = txt.replace(double_quotes_char, '"') # Single quotes single_quote_chars = '‘`´’' for...
_____no_output_____
BSD-3-Clause
analyze_vocab.ipynb
quentin-auge/blogger
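The chained `str.replace` calls in the cell above can be collapsed into a single translation table, which applies every mapping in one pass over the text. A sketch covering the same character classes (non-breaking spaces, double-quote variants, single-quote variants):

```python
def normalize_txt(txt):
    """Map typographic quote variants and non-breaking spaces onto plain ASCII."""
    table = str.maketrans({'\xa0': ' ',                            # non-breaking space
                           '“': '"', '”': '"', '»': '"', '«': '"', # double quotes
                           '‘': "'", '`': "'", '´': "'", '’': "'"})  # single quotes
    return txt.translate(table)

print(normalize_txt('“quoted” text’s\xa0end'))  # "quoted" text's end
```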
Restrict text to a sensible vocabulary
vocab = ' !"$%\'()+,-./0123456789:;=>?ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz~°àâçèéêëîïôùûœо€' # Restrict text to vocabulary def restrict_to_vocab(txt, vocab): txt = ''.join(char for char in txt if char in vocab) return txt txt = restrict_to_vocab(txt, vocab) # Double check new vocabulary assert ...
_____no_output_____
BSD-3-Clause
analyze_vocab.ipynb
quentin-auge/blogger
One-dimensional Lagrange Interpolation The problem of interpolation, or finding the value of a function at an arbitrary point $X$ inside a given domain, provided we have discrete known values of the function inside the same domain, is at the heart of the finite element method. In this notebook we use Lagrange interpola...
from __future__ import division import numpy as np from scipy import interpolate import sympy as sym import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D %matplotlib notebook
_____no_output_____
MIT
notebooks/lagrange_interpolation.ipynb
jomorlier/FEM-Notes
First we use a function to generate the Lagrange polynomial of order $order$ at point $i$
def basis_lagrange(x_data, var, cont): """Find the basis for the Lagrange interpolant""" prod = sym.prod((var - x_data[i])/(x_data[cont] - x_data[i]) for i in range(len(x_data)) if i != cont) return sym.simplify(prod)
_____no_output_____
MIT
notebooks/lagrange_interpolation.ipynb
jomorlier/FEM-Notes
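The notebook builds the basis symbolically with sympy; the same basis can be evaluated numerically with plain Python, which makes its two defining properties easy to verify: each basis polynomial is 1 at its own node and 0 at the others (Kronecker delta), and the basis functions sum to 1 everywhere (partition of unity):

```python
def basis_lagrange_num(x_data, x, cont):
    """Numerically evaluate the cont-th Lagrange basis polynomial at x."""
    prod = 1.0
    for i, x_i in enumerate(x_data):
        if i != cont:
            prod *= (x - x_i) / (x_data[cont] - x_i)
    return prod

x_data = [-1, 1, 0]  # same nodes as the notebook

# Kronecker-delta property: at node x=0 only basis 2 is active
print([basis_lagrange_num(x_data, 0, j) for j in range(3)])

# Partition of unity: the basis functions sum to 1 at any point
print(sum(basis_lagrange_num(x_data, 0.3, j) for j in range(3)))  # ≈ 1.0
```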
We now define the function $ f(x)=x^3+4x^2-10 $:
fun = lambda x: x**3 + 4*x**2 - 10 x = sym.symbols('x') x_data = np.array([-1, 1, 0]) f_data = fun(x_data)
_____no_output_____
MIT
notebooks/lagrange_interpolation.ipynb
jomorlier/FEM-Notes
And obtain the Lagrange polynomials using:
basis = [] for cont in range(len(x_data)): basis.append(basis_lagrange(x_data, x, cont)) sym.pprint(basis[cont])
x⋅(x - 1) ───────── 2 x⋅(x + 1) ───────── 2 2 - x + 1
MIT
notebooks/lagrange_interpolation.ipynb
jomorlier/FEM-Notes
which are shown in the following plots:
npts = 101 x_eval = np.linspace(-1, 1, npts) basis_num = sym.lambdify((x), basis, "numpy") # Create a lambda function for the polynomials plt.figure(figsize=(6, 4)) for k in range(3): y_eval = basis_num(x_eval)[k] plt.plot(x_eval, y_eval) y_interp = sym.simplify(sum(f_data[k]*basis[k] for k in range(3))) y...
_____no_output_____
MIT
notebooks/lagrange_interpolation.ipynb
jomorlier/FEM-Notes
Now we plot the complete approximating polynomial, the actual function and the points where the function was known.
y_interp = sum(f_data[k]*basis_num(x_eval)[k] for k in range(3)) y_original = fun(x_eval) plt.figure(figsize=(6, 4)) plt.plot(x_eval, y_original) plt.plot(x_eval, y_interp) plt.plot([-1, 1, 0], f_data, 'ko') plt.show()
_____no_output_____
MIT
notebooks/lagrange_interpolation.ipynb
jomorlier/FEM-Notes
Interpolation in 2 dimensionsWe can extend the concept of Lagrange interpolation to 2 or more dimensions.In the case of bilinear interpolation (2×2, 4 vertices) in $[-1, 1]^2$,the base functions are given by (**prove it**):\begin{align}N_0 = \frac{1}{4}(1 - x)(1 - y)\\N_1 = \frac{1}{4}(1 + x)(1 - y)\\N_2 = \frac{1}{4}...
def rect_grid(Lx, Ly, nx, ny): u"""Create a rectilinear grid for a rectangle The rectangle has dimensions Lx by Ly. nx is the number of nodes in x, and ny is the number of nodes in y """ y, x = np.mgrid[-Ly/2:Ly/2:ny*1j, -Lx/2:Lx/2:nx*1j] els = np.zeros(((nx - 1)*(ny - 1), 4), dtype...
_____no_output_____
MIT
notebooks/lagrange_interpolation.ipynb
jomorlier/FEM-Notes
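The bilinear shape functions $N_0$–$N_3$ defined above share the two properties just verified in 1D. A numeric sketch, assuming the usual counter-clockwise vertex ordering $(-1,-1)$, $(1,-1)$, $(1,1)$, $(-1,1)$ (the last two functions are completed from that assumption, since the listing is truncated):

```python
def shape_funcs(x, y):
    """Bilinear shape functions N0..N3 on the reference square [-1, 1]^2."""
    return [0.25*(1 - x)*(1 - y),
            0.25*(1 + x)*(1 - y),
            0.25*(1 + x)*(1 + y),
            0.25*(1 - x)*(1 + y)]

corners = [(-1, -1), (1, -1), (1, 1), (-1, 1)]

# Each N_i is 1 at its own vertex and 0 at the others
print([shape_funcs(*corners[i])[i] for i in range(4)])  # [1.0, 1.0, 1.0, 1.0]

# Partition of unity at an interior point
print(sum(shape_funcs(0.3, -0.2)))  # ≈ 1.0
```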
The next cell changes the format of the Notebook.
from IPython.core.display import HTML def css_styling(): styles = open('../styles/custom_barba.css', 'r').read() return HTML(styles) css_styling()
_____no_output_____
MIT
notebooks/lagrange_interpolation.ipynb
jomorlier/FEM-Notes
**This notebook is an exercise in the [Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/missing-values).**--- Now it's your turn to test your new knowledge of **missing values** handling. ...
# Set up code checking import os if not os.path.exists("../input/train.csv"): os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv") os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv") from learntools.core import binder binder.bind(globals()) from learntools.m...
Setup Complete
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
In this exercise, you will work with data from the [Housing Prices Competition for Kaggle Learn Users](https://www.kaggle.com/c/home-data-for-ml-course). ![Ames Housing dataset image](https://i.imgur.com/lTJVG4e.png)Run the next code cell without changes to load the training and validation sets in `X_train`, `X_valid`,...
import pandas as pd from sklearn.model_selection import train_test_split # Read the data X_full = pd.read_csv('../input/train.csv', index_col='Id') X_test_full = pd.read_csv('../input/test.csv', index_col='Id') # Remove rows with missing target, separate target from predictors X_full.dropna(axis=0, subset=['SalePrice...
_____no_output_____
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
Use the next code cell to print the first five rows of the data.
X_train.head()
_____no_output_____
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
You can already see a few missing values in the first several rows. In the next step, you'll obtain a more comprehensive understanding of the missing values in the dataset.

Step 1: Preliminary investigation

Run the code cell below without changes.
# Shape of training data (num_rows, num_columns) print(X_train.shape) # Number of missing values in each column of training data missing_val_count_by_column = (X_train.isnull().sum()) print(missing_val_count_by_column[missing_val_count_by_column > 0]) #print(X_train.isnull().sum(axis=0))
(1168, 36) LotFrontage 212 MasVnrArea 6 GarageYrBlt 58 dtype: int64
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
Part A

Use the above output to answer the questions below.
# Fill in the line below: How many rows are in the training data? num_rows = 1168 # Fill in the line below: How many columns in the training data # have missing values? num_cols_with_missing = 3 # Fill in the line below: How many missing entries are contained in # all of the training data? tot_missing = 276 # Check...
_____no_output_____
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
Part B

Considering your answers above, what do you think is likely the best approach to dealing with the missing values?
# Check your answer (Run this code cell to receive credit!) step_1.b.check() step_1.b.hint()
_____no_output_____
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
To compare different approaches to dealing with missing values, you'll use the same `score_dataset()` function from the tutorial. This function reports the [mean absolute error](https://en.wikipedia.org/wiki/Mean_absolute_error) (MAE) from a random forest model.
from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_absolute_error # Function for comparing different approaches def score_dataset(X_train, X_valid, y_train, y_valid): model = RandomForestRegressor(n_estimators=100, random_state=0) model.fit(X_train, y_train) preds = model.p...
_____no_output_____
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
Step 2: Drop columns with missing values

In this step, you'll preprocess the data in `X_train` and `X_valid` to remove columns with missing values. Set the preprocessed DataFrames to `reduced_X_train` and `reduced_X_valid`, respectively.
# Fill in the line below: get names of columns with missing values missing_col_names = ['LotFrontage','MasVnrArea','GarageYrBlt'] # Your code here include_column_names = [cols for cols in X_train.columns if cols not in missing_col_names] # Fill in the lines below: drop columns in training and v...
_____no_output_____
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
Run the next code cell without changes to obtain the MAE for this approach.
print("MAE (Drop columns with missing values):") print(score_dataset(reduced_X_train, reduced_X_valid, y_train, y_valid))
MAE (Drop columns with missing values): 17837.82570776256
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
Step 3: Imputation

Part A

Use the next code cell to impute missing values with the mean value along each column. Set the preprocessed DataFrames to `imputed_X_train` and `imputed_X_valid`. Make sure that the column names match those in `X_train` and `X_valid`.
from sklearn.impute import SimpleImputer # Fill in the lines below: imputation # Your code here myimputer = SimpleImputer() imputed_X_train = pd.DataFrame(myimputer.fit_transform(X_train)) imputed_X_valid = pd.DataFrame(myimputer.transform(X_valid)) # Fill in the lines below: imputation removed column names; put th...
_____no_output_____
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
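Mean imputation, `SimpleImputer()`'s default, can be mirrored in plain pandas with `fillna(df.mean())`, which also sidesteps the column-name loss the cell above has to repair by hand. A toy sketch (column names borrowed from the missing-value listing earlier; values are hypothetical):

```python
import numpy as np
import pandas as pd

X = pd.DataFrame({'LotFrontage': [60.0, np.nan, 80.0],
                  'GarageYrBlt': [2000.0, 1990.0, np.nan]})

# Fill each column's NaNs with that column's mean
X_imputed = X.fillna(X.mean())
print(X_imputed['LotFrontage'].tolist())  # [60.0, 70.0, 80.0]
print(X_imputed['GarageYrBlt'].tolist())  # [2000.0, 1990.0, 1995.0]
```

As with `SimpleImputer`, the means should be computed on the training split and reused for validation/test data to avoid leakage.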
Run the next code cell without changes to obtain the MAE for this approach.
print("MAE (Imputation):") print(score_dataset(imputed_X_train, imputed_X_valid, y_train, y_valid))
MAE (Imputation): 18062.894611872147
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
Part B

Compare the MAE from each approach. Does anything surprise you about the results? Why do you think one approach performed better than the other?
# Check your answer (Run this code cell to receive credit!) step_3.b.check() #step_3.b.hint()
_____no_output_____
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
Step 4: Generate test predictions

In this final step, you'll use any approach of your choosing to deal with missing values. Once you've preprocessed the training and validation features, you'll train and evaluate a random forest model. Then, you'll preprocess the test data before generating predictions that can be su...
# Preprocessed training and validation features final_X_train = reduced_X_train final_X_valid = reduced_X_valid # Check your answers step_4.a.check() # Lines below will give you a hint or solution code #step_4.a.hint() #step_4.a.solution()
_____no_output_____
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
Run the next code cell to train and evaluate a random forest model. (*Note that we don't use the `score_dataset()` function above, because we will soon use the trained model to generate test predictions!*)
# Define and fit model model = RandomForestRegressor(n_estimators=100, random_state=0) model.fit(final_X_train, y_train) # Get validation predictions and MAE preds_valid = model.predict(final_X_valid) print("MAE (Your approach):") print(mean_absolute_error(y_valid, preds_valid))
MAE (Your approach): 17837.82570776256
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
Part B

Use the next code cell to preprocess your test data. Make sure that you use a method that agrees with how you preprocessed the training and validation data, and set the preprocessed test features to `final_X_test`. Then, use the preprocessed test features and the trained model to generate test predictions in `pr...
# Fill in the line below: preprocess test data final_X_test = X_test[include_column_names] # Fill in the line below: get test predictions imputed_final_X_test = pd.DataFrame(myimputer.fit_transform(final_X_test)) imputed_final_X_test.columns = final_X_test.columns final_X_test = imputed_final_X_test preds_test = model...
_____no_output_____
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
Run the next code cell without changes to save your results to a CSV file that can be submitted directly to the competition.
# Save test predictions to file output = pd.DataFrame({'Id': X_test.index, 'SalePrice': preds_test}) output.to_csv('submission.csv', index=False)
_____no_output_____
MIT
exercise-missing-values.ipynb
Kartik-Bhardwaj192/Machine-Learning-Intro-And-Intermediate
**Installing the packages**
#!pip install simpy import random import simpy as sy
Apache-2.0
isye_6501_sim_hw/aarti_solution/Question 13.2.ipynb
oskrgab/isye-6644_project
**Initializing the variables and creating the simulation**
# Declaring the variables
num_checkers = 10   # Number of Checkers
num_scanners = 5    # Number of scanners
wait_time = 0       # Initial Waiting Time set to 0
total_pax = 1       # Total number of passengers initialized to 1
num_pax = 100       # Overall Passengers set to 100
runtime = 500       # End simulation when runtime c...
Avg waiting time is 8.104722
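The logic behind the truncated cell above can be illustrated with a stripped-down, pure-Python version: a single FIFO server with exponential inter-arrival and service times, accumulating each passenger's wait (the rates below are made-up placeholders, not the assignment's parameters):

```python
import random

random.seed(42)
arrival_rate = 1.0    # hypothetical arrivals per minute
service_rate = 1.2    # hypothetical service completions per minute

t = 0.0               # current arrival time
server_free_at = 0.0  # when the single server next becomes idle
total_wait = 0.0
n_pax = 100

for _ in range(n_pax):
    t += random.expovariate(arrival_rate)  # next passenger arrives
    start = max(t, server_free_at)         # queue if the server is busy
    total_wait += start - t                # time spent waiting in line
    server_free_at = start + random.expovariate(service_rate)

avg_wait = total_wait / n_pax
print("Avg waiting time is", avg_wait)
```

SimPy expresses the same bookkeeping with `Resource` objects and generator processes, but the waiting-time accounting is the same idea.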
Conversion reaction
===================
import importlib
import os
import sys

import numpy as np

import amici
import amici.plotting
import pypesto

# sbml file we want to import
sbml_file = 'conversion_reaction/model_conversion_reaction.xml'
# name of the model that will also be the name of the python module
model_name = 'model_conversion_reaction'
# directo...
BSD-3-Clause
doc/example/conversion_reaction.ipynb
LarsFroehling/pyPESTO
Compile AMICI model
# import sbml model, compile and generate amici module
sbml_importer = amici.SbmlImporter(sbml_file)
sbml_importer.sbml2amici(model_name, model_output_dir, verbose=False)
Load AMICI model
# load amici module (the usual starting point later for the analysis)
sys.path.insert(0, os.path.abspath(model_output_dir))
model_module = importlib.import_module(model_name)
model = model_module.getModel()
model.requireSensitivitiesForAllParameters()
model.setTimepoints(amici.DoubleVector(np.linspace(0, 10, 11)))
mode...
Optimize
# create objective function from amici model
# pesto.AmiciObjective is derived from pesto.Objective,
# the general pesto objective function class
objective = pypesto.AmiciObjective(model, solver, [edata], 1)

# create optimizer object which contains all information for doing the optimization
optimizer = pypesto.ScipyO...
Visualize
# waterfall, parameter space, scatter plots, fits to data
# different functions for different plotting types
import pypesto.visualize

pypesto.visualize.waterfall(result)
pypesto.visualize.parameters(result)
Data storage
# result = pypesto.storage.load('db_file.db')
Profiles
# there are three main parts: optimize, profile, sample. the overall structure of profiles and sampling
# will be similar to optimizer like above.
# we intend to only have just one result object which can be reused everywhere, but the problem of how to
# not have one huge class but
# maybe simplified views on it for o...
Sampling
# sampler = pypesto.Sampler()
# result = pypesto.sample(problem, sampler, result=None)

# open: how to parallelize. the idea is to use methods similar to those in pyabc for working on clusters.
# one way would be to specify an additional 'engine' object passed to optimize(), profile(), sample(),
# which in the default ...
Compute the error rate
df_sorted.head()

# df_sorted['diff'] = df_sorted['real'] - df_sorted['model']
df_sorted['error'] = abs(df_sorted['real'] - df_sorted['model']) / df_sorted['real']
df_sorted.head()

df_sorted['error'].mean()
df_sorted['error'].std()
df_sorted.shape
MIT
cmp_cmp/ModelPredict_plot.ipynb
3upperm2n/block_trace_analyzer
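The relative-error metric computed above reduces to a few array operations; here is a tiny NumPy sketch with made-up numbers (not the notebook's data):

```python
import numpy as np

real = np.array([10.0, 20.0, 40.0])   # hypothetical ground-truth values
model = np.array([9.0, 22.0, 38.0])   # hypothetical model predictions

# element-wise relative error |real - model| / real
error = np.abs(real - model) / real

mean_err = error.mean()  # average relative error across samples
std_err = error.std()    # spread of the relative error
```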
Plot types by function

1. Comparison
2. Proportion
3. Relationship
4. Part to a whole
5. Distribution
6. Change over time
import numpy as np
import pandas as pd
import seaborn as sns
from plotly import tools
import plotly.plotly as py
from plotly.offline import init_notebook_mode, iplot
init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.figure_factory as ff
import matplotlib.pyplot as plt

data = pd.read_csv("...
MIT
Lectures/Lecture_5/Pokemon.ipynb
lev1khachatryan/DataVisualization
DNN for image classification
from IPython.display import IFrame IFrame(src= "https://cdnapisec.kaltura.com/p/2356971/sp/235697100/embedIframeJs/uiconf_id/41416911/partner_id/2356971?iframeembed=true&playerId=kaltura_player&entry_id=1_zltbjpto&flashvars[streamerType]=auto&amp;flashvars[localizationCode]=en&amp;flashvars[leadWithHTML5]=true&amp;fla...
MIT
_build/jupyter_execute/Module3/m3_07.ipynb
liuzhengqi1996/math452_Spring2022
**Bayes Methods - Census Data**

Importing the *data*
import pickle

with open("/content/censo.pkl", "rb") as f:
    x_censo_treino, y_censo_treino, x_censo_teste, y_censo_teste = pickle.load(f)
MIT
Metodo_de_Bayes_Censo.ipynb
VictorCalebeIFG/MachineLearning_Python
Train the predictive model:
from sklearn.naive_bayes import GaussianNB

naive = GaussianNB()
naive.fit(x_censo_treino, y_censo_treino)
Predictions
previsoes = naive.predict(x_censo_teste)
previsoes
Checking the model's accuracy
from sklearn.metrics import accuracy_score

accuracy_score(y_censo_teste, previsoes)
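`accuracy_score` is just the fraction of predictions that match the true labels; a hand-rolled equivalent (with hypothetical labels, not the census data) makes that explicit:

```python
# fraction of predictions that match the true labels
y_true = [1, 0, 1, 1]
y_pred = [1, 0, 0, 1]

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
# accuracy -> 0.75 (3 of 4 labels match)
```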
As can be seen, the model's accuracy is quite low. It is sometimes necessary to change the preprocessing ("according to the instructor, removing standardization from the preprocessing increased accuracy to 75% for this algorithm and this particular dataset").
from yellowbrick.classifier import ConfusionMatrix

cm = ConfusionMatrix(naive)
cm.fit(x_censo_treino, y_censo_treino)
cm.score(x_censo_teste, y_censo_teste)
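A confusion matrix simply tallies (true class, predicted class) pairs; this hand-rolled 2x2 sketch (hypothetical labels, not the census data) shows what yellowbrick is visualizing:

```python
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]

# cm[i][j] counts samples of true class i predicted as class j
cm = [[0, 0], [0, 0]]
for t, p in zip(y_true, y_pred):
    cm[t][p] += 1
# cm -> [[1, 1], [1, 2]]: the diagonal holds correct predictions
```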
Start here to begin with Stingray.
import numpy as np
%matplotlib inline

import warnings
warnings.filterwarnings('ignore')
MIT
Lightcurve/Lightcurve tutorial.ipynb
jdswinbank/notebooks
Creating a light curve
from stingray import Lightcurve
A `Lightcurve` object can be created in two ways:

1. From an array of time stamps and an array of counts.
2. From photon arrival times.

1. Array of time stamps and counts

Create 1000 time stamps
times = np.arange(1000)
times[:10]
Create 1000 random Poisson-distributed counts:
counts = np.random.poisson(100, size=len(times))
counts[:10]
Create a `Lightcurve` object from the times and counts arrays.
lc = Lightcurve(times, counts)
WARNING:root:Checking if light curve is well behaved. This can take time, so if you are sure it is already sorted, specify skip_checks=True at light curve creation. WARNING:root:Checking if light curve is sorted. WARNING:root:Computing the bin time ``dt``. This can take time. If you know the bin time, please specify it...
The number of data points can be counted with the `len` function.
len(lc)
Note the warnings thrown by the syntax above. By default, `stingray` does a number of checks on the data that is put into the `Lightcurve` class. For example, it checks whether it's evenly sampled. It also computes the time resolution `dt`. All of these checks take time. If you know the time resolution, it's a good ide...
dt = 1
lc = Lightcurve(times, counts, dt=dt, skip_checks=True)
2. Photon Arrival Times

Often, you might have unbinned photon arrival times, rather than a light curve with time stamps and associated measurements. If this is the case, you can use the `make_lightcurve` method to turn these photon arrival times into a regularly binned light curve.
arrivals = np.loadtxt("photon_arrivals.txt")
arrivals[:10]

lc_new = Lightcurve.make_lightcurve(arrivals, dt=1)
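Conceptually, turning arrival times into a binned light curve is a histogram over regular time bins; a NumPy sketch with made-up arrival times (not the file loaded above):

```python
import numpy as np

arrivals = np.array([0.1, 0.3, 1.2, 1.7, 2.5])  # hypothetical photon arrival times
dt = 1.0

# regular bin edges spanning the observation
edges = np.arange(0.0, 3.0 + dt, dt)
counts, _ = np.histogram(arrivals, bins=edges)
# counts -> [2, 2, 1]: two photons in [0, 1), two in [1, 2), one in [2, 3)
```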
The time bins and their respective counts can be inspected with `lc.time` and `lc.counts`.
lc_new.counts
lc_new.time
One useful feature is that you can explicitly pass in the start time and the duration of the observation. This can be helpful because the chance that a photon will arrive exactly at the start of the observation and the end of the observation is very small. In practice, when making multiple light curves from the same ob...
lc_new = Lightcurve.make_lightcurve(arrivals, dt=1.0, tstart=1.0, tseg=9.0)
Properties

A `Lightcurve` object has the following properties:

1. `time` : numpy array of time values
2. `counts` : numpy array of counts per bin values
3. `counts_err`: numpy array with the uncertainties on the values in `counts`
4. `countrate` : numpy array of counts per second
5. `countrate_err`: numpy array of the uncer...
lc.n == len(lc)
Note that by default, `stingray` assumes that the user is passing a light curve in **counts per bin**. That is, the counts in bin $i$ will be the number of photons that arrived in the interval $t_i - 0.5\Delta t$ and $t_i + 0.5\Delta t$. Sometimes, data is given in **count rate**, i.e. the number of events that arrive ...
# times with a resolution of 0.1
dt = 0.1
times = np.arange(0, 100, dt)
times[:10]

mean_countrate = 100.0
countrate = np.random.poisson(mean_countrate, size=len(times))

lc = Lightcurve(times, counts=countrate, dt=dt, skip_checks=True, input_counts=False)
Internally, both the `counts` and `countrate` attributes will be defined no matter what the user passes in, since they are trivially converted into each other through a multiplication/division by `dt`:
print(mean_countrate)
print(lc.countrate[:10])

mean_counts = mean_countrate * dt
print(mean_counts)
print(lc.counts[:10])
10.0 [11.3 9.2 11. 9.7 10.1 10.2 10.3 10.1 12.4 8.9]
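The counts/count-rate relationship shown above is just a scaling by the bin width, which is easy to check directly (the values below are made up for illustration):

```python
import numpy as np

dt = 0.1
countrate = np.array([100.0, 90.0, 110.0])  # counts per second

counts = countrate * dt   # counts per bin of width dt
recovered = counts / dt  # back to counts per second

# round-tripping through dt recovers the original rate
assert np.allclose(recovered, countrate)
```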
Error Distributions in `stingray.Lightcurve`The instruments that record our data impose measurement noise on our measurements. Depending on the type of instrument, the statistical distribution of that noise can be different. `stingray` was originally developed with X-ray data in mind, where most data comes in the form...
times = np.arange(1000)
mean_flux = 100.0  # mean flux
std_flux = 2.0     # standard deviation on the flux

# generate fluxes with a Gaussian distribution and
# an array of associated uncertainties
flux = np.random.normal(loc=mean_flux, scale=std_flux, size=len(times))
flux_err = np.ones_like(flux) * std_flux

lc = Lightc...
Good Time Intervals

`Lightcurve` (and most other core `stingray` classes) supports the use of *Good Time Intervals* (or GTIs), which denote the parts of an observation that are reliable for scientific purposes. Often, GTIs introduce gaps (e.g. where the instrument was off, or affected by solar flares). By default, GTIs ...
times = np.arange(1000)
counts = np.random.poisson(100, size=len(times))

lc = Lightcurve(times, counts, dt=1, skip_checks=True)
lc.gti

print(times[0])   # first time stamp in the light curve
print(times[-1])  # last time stamp in the light curve
print(lc.gti)     # the GTIs generated within Lightcurve
0 999 [[-5.000e-01 9.995e+02]]
GTIs are defined as a list of tuples:
gti = [(0, 500), (600, 1000)]
lc = Lightcurve(times, counts, dt=1, skip_checks=True, gti=gti)
print(lc.gti)
[[ 0 500] [ 600 1000]]
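Conceptually, applying GTIs keeps only the time stamps that fall inside one of the intervals; a NumPy sketch of that masking (a simplified illustration, not stingray's internal implementation):

```python
import numpy as np

gti = [(0, 500), (600, 1000)]
times = np.arange(1000)

# mark every time stamp that lies inside any GTI
mask = np.zeros(len(times), dtype=bool)
for start, stop in gti:
    mask |= (times >= start) & (times < stop)

good_times = times[mask]  # 900 samples; stamps 500-599 fall in the gap
```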
We'll get back to these when we talk more about some of the methods that apply GTIs to the data.

Operations

Addition/Subtraction

Two light curves can be summed or subtracted from each other if they have the same time arrays.
lc = Lightcurve(times, counts, dt=1, skip_checks=True)
lc_rand = Lightcurve(np.arange(1000), [500]*1000, dt=1, skip_checks=True)

lc_sum = lc + lc_rand

print("Counts in light curve 1: " + str(lc.counts[:5]))
print("Counts in light curve 2: " + str(lc_rand.counts[:5]))
print("Counts in summed light curve: " + str(lc_sum....
Counts in light curve 1: [103 99 102 109 104] Counts in light curve 2: [500 500 500 500 500] Counts in summed light curve: [603 599 602 609 604]
Negation

Negating a `Lightcurve` object inverts its count array from positive to negative values.
lc_neg = -lc
lc_sum = lc + lc_neg

np.all(lc_sum.counts == 0)  # all the points in lc and lc_neg cancel each other
Indexing

The count value at a particular time can be obtained using indexing.
lc[120]