Train the model with the callback
# Train the model, with some of the data reserved for validation
history = model.fit(train_data, train_targets, epochs=3, verbose=False,
                    callbacks=[TrainingCallback()], validation_split=0.2)
# Evaluate the model
model.evaluate(test_data, test_targets, verbose=False, callbacks=[TestingCallback()])
# Make predictions wit...
MIT
Course_1_Getting started with TensorFlow 2/Tensorflow_2_week_3.ipynb
nagar-mayank/TensorFlow-2-for-Deep-Learning-by-Imperial-College-London
Early stopping / patience

Re-train the models with early stopping
# Re-train the unregularised model
import numpy as np

unregularised_model = get_model()
unregularised_model.compile(optimizer='adam', loss='mae')
unreg_history = unregularised_model.fit(train_data[..., np.newaxis], train_targets,
                                        epochs=100, validation_split=0.15, batch_size=64, v...
2/2 - 0s - loss: 0.5251
Plot the learning curves
# Plot the training and validation loss
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(12, 5))
fig.add_subplot(121)
plt.plot(unreg_history.history['loss'])
plt.plot(unreg_history.history['val_loss'])
plt.title('Unregularised model: loss vs. epochs')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Tra...
Regression
df_cars = pd.read_csv("../data/cars.csv")
X = df_cars.drop('MPG', axis=1)
y = df_cars['MPG']
features_reg_univar = ["WGT"]
target_reg = "MPG"
dtr_univar = DecisionTreeRegressor(max_depth=3, criterion="mae")
dtr_univar.fit(X[features_reg_univar], y)
skdtree_univar = ShadowSKDTree(dtr_univar, X[features_reg_univar], y, ...
MIT
notebooks/partitioning.ipynb
windisch/dtreeviz
Classification
iris = load_iris()
X = iris.data
X = X[:, 2].reshape(-1, 1)  # petal length (cm)
y = iris.target
len(X), len(y)
feature_c_univar = "petal length (cm)"
target_c_univar = "iris"
class_names_univar = list(iris.target_names)
dtc_univar = DecisionTreeClassifier(max_depth=1, min_samples_leaf=1)
dtc_univar.fit(X, y)
skdtree_c_...
Load and transform dataset. Install the Bioconductor biocLite package in order to access the golubEsets library. [golubEsets](https://bioconductor.org/packages/release/data/experiment/manuals/golubEsets/man/golubEsets.pdf) contains the raw data used by Todd Golub in the original paper. We use the scale method in the origin...
## Most code in this cell is commented out, since it is unnecessary and time-consuming to run it every time.
# options(repos='http://cran.rstudio.com/')
# source("http://bioconductor.org/biocLite.R")
# biocLite("golubEsets")
suppressMessages(library(golubEsets))
# Training data predictor and response
data(Golub_Train)
golub...
MIT
ReproducingMLpipelines/Paper6/DataPreprocessing.ipynb
CompareML/AIM-Manuscript
Model definition
# Hyperparameter settings
## Fixed parameters
epochs = 1000
display_freq = 200
save_epoch_freq = 1
## Model parameters
alpha = 1
beta = 0.2
model_name = f'CSA-crop-{alpha}-{beta}'
base_opt = load_option('../options/base.toml')
opt = load_option('../options/train-new.toml')
opt.update(base_opt)
opt.update({'name': model_name})  # set the model name
model = CSA(beta, **op...
Epoch/total_steps/alpha-beta: 0/100/1-0.2 {'G_GAN': 5.518022537231445, 'G_L1': 55.588680267333984, 'D': 1.1141009330749512, 'F': 0.07483334094285965} Epoch/total_steps/alpha-beta: 0/200/1-0.2 {'G_GAN': 5.759289741516113, 'G_L1': 55.08604049682617, 'D': 0.6345841288566589, 'F': 0.04530204087495804} Epoch/total_steps/alp...
Apache-2.0
docs/start/02_train2.ipynb
SanstyleLab/pytorch-book
Getting Started with PCSE/WOFOST

This Jupyter notebook will introduce PCSE and explain the basics of running models with PCSE, taking WOFOST as an example.

Allard de Wit, March 2018

**Prerequisites for running this notebook**

Several packages need to be installed for running PCSE/WOFOST: 1. `PCSE` and its dependencies. ...
%matplotlib inline
import sys, os
import pcse
import pandas
import matplotlib
matplotlib.style.use("ggplot")
import matplotlib.pyplot as plt

print("This notebook was built with:")
print("python version: %s " % sys.version)
print("PCSE version: %s" % pcse.__version__)
This notebook was built with: python version: 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)] PCSE version: 5.4.2
MIT
01 Getting Started with PCSE.ipynb
kkj154393476/pcse_notebooks
Starting from the internal demo database

For demonstration purposes, we can start WOFOST with a single function call. This function reads all relevant data from the internal demo databases. In the next notebook we will demonstrate how to read data from external sources. The command below starts WOFOST in potential produ...
wofostPP = pcse.start_wofost(mode="pp")
You have just successfully initialized a PCSE/WOFOST object in the Python interpreter; it is in its initial state, waiting to run a simulation. We can now advance the model state, for example by one day:
wofostPP.run()
Advancing the crop simulation by only one day is often not very useful, so the number of days to simulate can be specified as well:
wofostPP.run(days=10)
Getting information about state and rate variables

Retrieving information about the calculated model states or rates can be done with the `get_variable()` method on a PCSE object. For example, to retrieve the leaf area index value in the current model state you can do:
wofostPP.get_variable("LAI")
wofostPP.run(days=25)
wofostPP.get_variable("LAI")
This shows that after 11 days the LAI value is 0.287. When we advance time by another 25 days, the LAI increases to 1.528. The `get_variable()` method can retrieve any state or rate variable that is defined somewhere in the model. Finally, we can finish the crop season by letting it run until the model terminates becau...
wofostPP.run_till_terminate()
Note that before or after the crop cycle, the object representing the crop does not exist, and therefore retrieving a crop-related variable results in a `None` value. Of course the simulation results are stored and can be obtained; see the next section.
print(wofostPP.get_variable("LAI"))
None
Retrieving and displaying WOFOST output

We can retrieve the results of the simulation at each time step using `get_output()`. In Python terms this returns a list of dictionaries, one dictionary for each time step of the simulation. Each dictionary contains the key:value pairs of the state or rate variables...
output = wofostPP.get_output()
The most convenient way to handle the output from WOFOST is to use the `pandas` module to convert it into a DataFrame. Pandas DataFrames can be converted to a variety of formats including Excel, CSV, or database tables.
dfPP = pandas.DataFrame(output).set_index("day")
dfPP.tail()
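As a concrete illustration, a list of dictionaries converts directly into a DataFrame; the records below are hypothetical, not taken from the WOFOST run above:

```python
import pandas as pd

# hypothetical per-day records, mimicking the list-of-dicts shape of get_output()
output = [
    {"day": "2000-01-01", "LAI": 0.28, "TAGP": 105.0},
    {"day": "2000-01-02", "LAI": 0.30, "TAGP": 110.0},
]
df = pd.DataFrame(output).set_index("day")
# df.to_csv("wofost_output.csv")  # or df.to_excel(...), df.to_sql(...)
print(df.shape)  # (2, 2)
```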
Besides the output at each time step, WOFOST also provides summary output which summarizes the crop cycle and provides the total crop biomass, total yield, maximum LAI, and other variables. In the case of crop rotations, the summary output will consist of several sets of variables, one for each crop cycle.
summary_output = wofostPP.get_summary_output()
msg = "Reached maturity at {DOM} with total biomass {TAGP:.1f} kg/ha, " \
      "a yield of {TWSO:.1f} kg/ha with a maximum LAI of {LAIMAX:.2f}."
for crop_cycle in summary_output:
    print(msg.format(**crop_cycle))
Reached maturity at 2000-05-31 with total biomass 18091.0 kg/ha, a yield of 8729.4 kg/ha with a maximum LAI of 6.23.
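The `msg.format(**crop_cycle)` idiom relies on dict unpacking into `str.format`; a minimal sketch with a made-up record (note that extra keys in the dict are simply ignored):

```python
# hypothetical summary record with an extra, unused key
record = {"DOM": "2000-05-31", "TAGP": 18091.0, "extra": 42}
msg = "Reached maturity at {DOM} with total biomass {TAGP:.1f} kg/ha"
line = msg.format(**record)
print(line)  # Reached maturity at 2000-05-31 with total biomass 18091.0 kg/ha
```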
Visualizing output

The pandas module is also very useful for generating charts from simulation results. In this case we generate graphs of leaf area index and crop biomass, including total biomass and grain yield.
fig, (axis1, axis2) = plt.subplots(nrows=1, ncols=2, figsize=(16, 8))
dfPP.LAI.plot(ax=axis1, label="LAI", color='k')
dfPP.TAGP.plot(ax=axis2, label="Total biomass")
dfPP.TWSO.plot(ax=axis2, label="Yield")
axis1.set_title("Leaf Area Index")
axis2.set_title("Crop biomass")
fig.autofmt_xdate()
r = fig.legend()
Import pandas
# import pandas
import pandas as pd
pd.__version__
Apache-2.0
1.DataFrames y Series-ejercicio.ipynb
Javierhidalgo95/Hidalgo-Lopez---PC-PYTHON
Create a Series. Explore Series in Python at the following [link](https://pandas.pydata.org/pandas-docs/stable/user_guide/10min.html), in the first lines of the document.
# Create a Series of the numbers 10, 20 and 10.
s = pd.Series([10, 20, 10])
s

# Create a Series with three objects: 'rojo', 'verde', 'azul'
s = pd.Series(["rojo", "verde", "azul"])
s
Create a DataFrame
# Create an empty dataframe called 'df'
dicx = {}
df_dataframe = pd.DataFrame(dicx)
df_dataframe
df = pd.DataFrame()

# Create a new column in the dataframe, and assign it the first Series you created
serie1 = [[10, 20, 10]]
# column names
columnas = ["C1", "C2", "C3"]
# Help for a function -> shift + tab
df_serie...
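A minimal working sketch of the empty-DataFrame-plus-column pattern the exercise asks for (the column name below is a made-up placeholder):

```python
import pandas as pd

# start from an empty DataFrame and assign a Series as a new column
df_toy = pd.DataFrame()
df_toy["numeros"] = pd.Series([10, 20, 10])
print(df_toy.shape)  # (3, 1)
```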
Read a DataFrame
# Read the file called 'avengers.csv' located in the "data" folder and create a DataFrame called 'avengers'.
# The file is located at "data/avengers.csv"
df = pd.read_csv('./src/pandas/avengers.csv', sep=',')
df.head()
Inspect a DataFrame
# Show the first 5 rows of the DataFrame.
df.head(5)
# Show the first 10 rows of the DataFrame.
df.head(10)
# Show the last 5 rows of the DataFrame.
df.tail(5)
Size of the DataFrame
# Show the size of the DataFrame
df.shape
Data types in a DataFrame
# Show the data types of the DataFrame
df.dtypes
Edit the index
# Change the index to the "fecha_inicio" column.
df2 = df.set_index("fecha_inicio").copy()
df2.head()
Sort the index
# Sort the index in descending order
df.sort_values(by=["URL", "nombre", "n_apariciones", "actual", "genero", "fecha_inicio", "Notes"],
               ascending=[False, False, False, False, False, False, False])
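Note that the cell above sorts by column values; to sort by the index itself in descending order, `sort_index` is the more direct tool. A sketch on a toy frame (not the avengers data):

```python
import pandas as pd

toy = pd.DataFrame({"x": [1, 2, 3]}, index=[2010, 2005, 2020])
toy_sorted = toy.sort_index(ascending=False)  # largest index label first
print(toy_sorted.index.tolist())  # [2020, 2010, 2005]
```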
Reset the index
# Reset the index
df2.reset_index(drop=True, inplace=True)
df2.head()
Lecture 13 Examples
# imports
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.stats import shapiro, bartlett
from matplotlib import pyplot as plt
import pandas
from statsmodels.tsa.seasonal import seasonal_decompose
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
import statsmod...
BSD-3-Clause
OCEA-267/Lectures/W7_L13.ipynb
profxj/ocea200
Phosphorus Load
data_file = '../Data/samsonvillebrook_phosphorus_quarterly.txt'
df = pandas.read_table(data_file, delim_whitespace=True, names=['time', 'P'])
df.set_index('time', inplace=True)
df.head()
Plot
df.P.plot()
Mann-Kendall
result = mk.original_test(df.P)
result
Significant p-value! Fit a linear trend
time = np.arange(len(df)) + 1
df['time'] = time
formula = "P ~ time"
mod_ols = smf.glm(formula=formula, data=df).fit()  # family=sm.families.Binomial() for a logistic fit
mod_ols.summary()
mod_ols.pvalues
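The linear-trend fit can also be sketched with a plain least-squares line from numpy, independent of statsmodels (synthetic data, not the phosphorus series):

```python
import numpy as np

t = np.arange(1, 11)
y = 2.0 + 0.5 * t  # a clean linear trend: slope 0.5, intercept 2.0
slope, intercept = np.polyfit(t, y, 1)  # degree-1 least-squares fit
print(round(slope, 3), round(intercept, 3))  # 0.5 2.0
```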
---- Faux dataset
## Load
data_file2 = '../Data/pollution_data_stationY.txt'
df2 = pandas.read_table(data_file2, delim_whitespace=True)
df2.head()
Date
dates = []
for index, row in df2.iterrows():
    dates.append(f'{int(row.year)}-{int(row.month)}')
dates = pandas.to_datetime(dates)
df2['date'] = dates
df2.set_index('date', inplace=True)
df2.head()
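The same date index can be built without the explicit loop by string-concatenating the columns; a sketch on hypothetical year/month values:

```python
import pandas as pd

# toy year/month columns mirroring the loop above
toy = pd.DataFrame({"year": [2000, 2000, 2001], "month": [1, 2, 1]})
dates = pd.to_datetime(toy.year.astype(str) + "-" + toy.month.astype(str))
print(dates[0])  # first of the month, 2000-01-01
```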
Plot
df2.y.plot()
Fit with a seasonal trend. Dummies for the seasonal component
dummy = np.zeros((len(df2), 11), dtype=int)
for i in np.arange(11):
    for j in np.arange(len(df2)):
        if df2.month.values[j] == i+1:
            dummy[j, i] = 1
dummies = []
for idum in np.arange(11):
    key = f'dum{idum}'
    dummies.append(key)
    df2[key] = dummy[:, idum]
df2.head()
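The double loop above builds month indicators by hand; `pd.get_dummies` produces the same kind of indicator columns in one call. A sketch on toy months (note it only emits columns for months actually present in the data):

```python
import pandas as pd

months = pd.Series([1, 2, 12, 1])
# one row per observation, one column per distinct month value
dummies = pd.get_dummies(months, prefix="dum").astype(int)
print(dummies.shape)  # (4, 3)
```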
Time
time = np.arange(len(df2)) + 1
df2['time'] = time
Fit
formula = "y ~ dum0 + dum1 + dum2 + dum3 + dum4 + dum5 + dum6 + dum7 + dum8 + dum9 + dum10 + time"
ols2 = smf.glm(formula=formula, data=df2).fit()  # family=sm.families.Binomial() for a logistic fit
ols2.summary()
Plot
df2['ols'] = ols2.fittedvalues
fig = plt.figure()
fig.set_size_inches((12, 9))
ax = df2.y.plot(ylabel='y', label='data', marker='o', ls='')
# df2.ols.plot(ax=ax, color='k', label='model')
# ax.legend(fontsize=15)
# set_fontsize(ax, 17)
Explore residuals Durbin-Watson
resids = df2.y - ols2.fittedvalues
durbin_watson(resids)
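The Durbin-Watson statistic itself is simple: DW = Σ(e_t − e_{t−1})² / Σ e_t², near 2 for uncorrelated residuals, below 2 for positive autocorrelation, above 2 for negative. A minimal numpy sketch (a hypothetical helper, not the statsmodels implementation):

```python
import numpy as np

def dw(resid):
    # Durbin-Watson: squared successive differences over squared residuals
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

print(dw([1, 1, 1]))              # identical residuals: DW = 0 (extreme positive autocorrelation)
print(dw([1, -1, 1, -1, 1, -1]))  # alternating residuals: DW well above 2
```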
Shapiro
shapiro(resids)
Not normal!! Try a Seasonal MK test!
mk2_results = mk.seasonal_test(df2.y, period=12)
mk2_results
---
title: "Music recommender system with full pipeline"
date: 2020-04-12T14:41:32+02:00
author: "Othmane Rifki"
type: technical_note
draft: false
---
import pandas as pd
from scipy.sparse import csr_matrix

df = pd.read_csv('artists/scrobbler-small-sample.csv', index_col=0)
artists = csr_matrix(df.transpose())
artist_names = [x.strip('\n').split(' ')[0] for x in open('artists/artists.csv').readlines()]
MIT
courses/datacamp/notes/python/sklearn/recommender.ipynb
othrif/DataInsights
Compute the normalized NMF features:
# Perform the necessary imports
from sklearn.decomposition import NMF
from sklearn.preprocessing import Normalizer, MaxAbsScaler
from sklearn.pipeline import make_pipeline

# Create a MaxAbsScaler: scaler
scaler = MaxAbsScaler()
# Create an NMF model: nmf
nmf = NMF(n_components=20)
# Create a Normalizer: normalizer
n...
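With unit-normalized feature rows, recommendation reduces to a dot product, since the dot product of unit vectors is their cosine similarity. A toy sketch (made-up 2-D features, not the 20-component NMF output):

```python
import numpy as np

# hypothetical unit-norm feature rows, one per artist
features = np.array([[0.6, 0.8],
                     [0.8, 0.6],
                     [0.0, 1.0]])
query = features[0]
similarities = features.dot(query)  # cosine similarity of each artist to the query
print(similarities)  # highest similarity is the artist to itself
```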
06 - K-Means on SVD Data

Imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

sns.set(style="white")
MIT
06 - K-Means on SVD Data.ipynb
avdeev-andrew/mlbootcamp6
Constants
n_clusters = 100
models_folder = "models/"
train_data_fn = models_folder + 'train_data.pkl'
target_fn = models_folder + 'target.pkl'
test_data_fn = models_folder + 'test_data.pkl'
weight_multiplier_fn = models_folder + "weight_multiplier.pkl"
Functions
import os.path
from sklearn.externals import joblib

def Load(filename):
    if os.path.isfile(filename):
        return joblib.load(filename)

def Save(obj, filename):
    joblib.dump(obj, filename)
Loading data
train = Load(train_data_fn)
test = Load(test_data_fn)
target = Load(target_fn)
weight_multiplier = Load(weight_multiplier_fn)
print(train.shape)
print(test.shape)
data = np.concatenate((train, test), axis=0)
print(data.shape)

from sklearn.cluster import KMeans
kmeans = KMeans(
    n_clusters=n_clusters,
    # init='k-...
T81-558: Applications of Deep Neural Networks**Module 3: Introduction to TensorFlow*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class w...
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
import pandas as pd
import io
import os
import requests
import numpy as np
from sklearn import metrics

save_path = "."
df = pd.read_csv(
    "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv",
    na_values=...
Apache-2.0
t81_558_class_03_3_save_load.ipynb
sanjayssane/t81_558_deep_learning
The code below sets up a neural network and reads the data (for predictions), but it does not clear the model directory or fit the neural network. The weights from the previous fit are used. Now we reload the network and perform another prediction. The RMSE should match the previous one exactly if the neural network ...
from tensorflow.keras.models import load_model

model2 = load_model(os.path.join(save_path, "network.h5"))
pred = model2.predict(x)
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred, y))
print(f"After load score (RMSE): {score}")
Part 3.4: Early Stopping in Keras to Prevent Overfitting **Overfitting** occurs when a neural network is trained to the point that it begins to memorize rather than generalize. ![Training vs Validation Error for Overfitting](https://raw.githubusercontent.com/jeffheaton/t81_558_deep_learning/master/images/class_3_trai...
import pandas as pd
import io
import requests
import numpy as np
from sklearn import metrics
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.callbacks import EarlyStopping

df = pd.read_csv( ...
Train on 112 samples, validate on 38 samples Epoch 1/1000 112/112 - 0s - loss: 1.2631 - val_loss: 1.1849 Epoch 2/1000 112/112 - 0s - loss: 1.1055 - val_loss: 1.0706 Epoch 3/1000 112/112 - 0s - loss: 1.0157 - val_loss: 1.0093 Epoch 4/1000 112/112 - 0s - loss: 0.9774 - val_loss: 0.9663 Epoch 5/1000 112/112 - 0s - loss: 0...
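The patience mechanism can be sketched independently of Keras; the hypothetical helper below returns the epoch at which training would stop (it is not the library's implementation):

```python
def early_stop_epoch(val_losses, patience=3, min_delta=0.0):
    # stop once `patience` epochs pass without the validation loss improving
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_losses) - 1

print(early_stop_epoch([1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74]))  # stops at epoch 5
```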
As you can see from the above, not all of the requested epochs were used. The neural network training stopped once the validation set no longer improved.
from sklearn.metrics import accuracy_score

pred = model.predict(x_test)
predict_classes = np.argmax(pred, axis=1)
expected_classes = np.argmax(y_test, axis=1)
correct = accuracy_score(expected_classes, predict_classes)
print(f"Accuracy: {correct}")
Accuracy: 1.0
Early Stopping with Regression
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
import pandas as pd
import io
import os
import requests
import numpy as np
from sklearn import metrics

df = pd.read_csv(
    "https://data.heatonresearch.com/data/t81-558/auto-mpg.csv",
    na_values=['NA', '?'])
car...
Final score (RMSE): 4.939672608054
Part 3.5: Extracting Keras Weights and Manual Neural Network Calculation

In this section we will build a neural network and analyze it down to the individual weights. We will train a simple neural network that learns the XOR function. It is not hard to simply hand-code the neurons to provide an [XOR function](https://en...
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
import numpy as np

# Create a dataset for the XOR function
x = np.array([
    [0, 0],
    [1, 0],
    [0, 1],
    [1, 1]
])
y = np.array([0, 1, 1, 0])

# Build the network
# sgd = optimizers.SGD(lr=0.01,...
The output above should have two numbers near 0.0 for the first and fourth spots (input [[0,0]] and [[1,1]]). The middle two numbers should be near 1.0 (input [[1,0]] and [[0,1]]). These numbers are in scientific notation. Due to random starting weights, it is sometimes necessary to run the above through several cycl...
# Dump weights
for layerNum, layer in enumerate(model.layers):
    weights = layer.get_weights()[0]
    biases = layer.get_weights()[1]
    for toNeuronNum, bias in enumerate(biases):
        print(f'{layerNum}B -> L{layerNum+1}N{toNeuronNum}: {bias}')
    for fromNeuronNum, wgt in enumerate(weights):
        ...
If you rerun this, you probably get different weights. There are many ways to solve the XOR function.In the next section, we copy/paste the weights from above and recreate the calculations done by the neural network. Because weights can change with each training, the weights used for the below code came from this:```...
input0 = 0
input1 = 1
hidden0Sum = (input0*1.3) + (input1*1.3) + (-1.3)
hidden1Sum = (input0*1.2) + (input1*1.2) + (0)
print(hidden0Sum)  # 0
print(hidden1Sum)  # 1.2
hidden0 = max(0, hidden0Sum)
hidden1 = max(0, hidden1Sum)
print(hidden0)  # 0
print(hidden1)  # 1.2
outputSum = (hidden0*-1.6) + (hidden1*0.8) + (0)
print(outputSum) ...
URL: https://www.naval-group.com/en/documents
import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
import pandas as pd, requests, bs4, re, time, io, pytesseract, easyocr, random, textstat, urllib.request
from pdfminer.high_level import extract_text
from PIL import Image
from pathlib import Path
from pdf2image import convert_from_path
from selenium.webdriver.common.by impor...
Trash removed successfully
MIT
Naval Group/naval-group_selenium_bs4.ipynb
heyakshayhere/bs4
Analysing car mpg data set using Decision Tree Regressor
# To enable plotting graphs in Jupyter notebook
%matplotlib inline

# Numerical libraries
import numpy as np
from sklearn.model_selection import train_test_split

# Import the Decision Tree machine learning library
from sklearn.tree import DecisionTreeRegressor

# to handle data in form of rows and columns
import...
<class 'pandas.core.frame.DataFrame'> RangeIndex: 398 entries, 0 to 397 Data columns (total 10 columns): mpg 398 non-null float64 cyl 398 non-null int64 disp 398 non-null float64 hp 398 non-null float64 wt 398 non-null int64 acc 398 non-null float64 yr 398 non-n...
MIT
Ensemble_Techniques/.ipynb_checkpoints/DecisionTreeRegressor_MPGData-checkpoint.ipynb
Aujasvi-Moudgil/Ensemble-Learning
Let us do a pair plot analysis to visually study the data.
# This is done using the scatter matrix function which creates a dashboard reflecting useful information about the dimensions
# The result can be stored as a .png file and opened in say, paint to get a larger view
mpg_df_attr = mpg_df.iloc[:, 0:9]
mpg_df_attr['dispercyl'] = mpg_df_attr['disp'] / mpg_df_attr['cyl']
sns.pa...
C:\Users\Mukesh\Anaconda3\lib\site-packages\statsmodels\nonparametric\kde.py:494: RuntimeWarning: invalid value encountered in true_divide binned = fast_linbin(X,a,b,gridsize)/(delta*nobs) C:\Users\Mukesh\Anaconda3\lib\site-packages\statsmodels\nonparametric\kdetools.py:34: RuntimeWarning: invalid value encountered i...
Step 5: Decision Tree Regression
from scipy.stats import zscore

mpg_df_attr = mpg_df.loc[:, 'mpg':'origin']
mpg_df_attr_z = mpg_df_attr.apply(zscore)
# Remove "origin" and "yr" columns
mpg_df_attr_z.pop('origin')
mpg_df_attr_z.pop('yr')
array = mpg_df_attr_z.values
X = array[:, 1:5]  # select all rows and columns 1-4, which are the attributes...
Data Zoo

> Download datasets produced within the current project
# export
import logging
import pandas as pd
from pathlib import Path
from typing import List
from os.path import normpath

# export
BASE_URL = 'https://raw.githubusercontent.com/artemlops/customer-segmentation-toolkit/master'
SUPPORTED_SUFFIXES = {'.csv'}
ENCODING = "ISO-8859-1"

def download_data_csv(path_relative: str...
Apache-2.0
nbs/00_data_zoo.ipynb
artemlops/customer-segmentation-toolkit
Alright, so I think I have a pretty neat idea for this package, so I'm just going to document a few of my thoughts. This is an augment package, where the idea is to take a protein sequence and generate a bunch of point mutational variants. I think this would be quite handy in certain applications, for instance antimicr...
import pandas as pd

mendelDF = pd.DataFrame.from_dict({
    # A,C,D,E,F,G,H,I,K,L,M,N,P,Q,R,S,T,V,W,Y
    "M": [0,0,0,0,0,0,0,0,0,8,9,7,0,0,0,0,0,0,0,0],
    "E": [0,0,7,8,9,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
    "N": [0,0,0,0,0,0,0,0,0,9,8,7,0,0,0,0,0,0,0,0],
    "D": [0,7,8,9,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
    # "E":[0,0,9,8,7,0,0,0,0,...
Apache-2.0
nbs/03_augment.ipynb
tijeco/berteome
Alright, so this is very much a barebones dummy df, but I think it will work. So for this example I would only want to sample k=2; going through each residue it should be something like:
1. M: L,N
2. E: D,F
3. N: L,M
4. D: C,E
6. L: K,M
So that should give us 12 variant sequences! (Edit:10..) So let me go ahead and think about what...
def augmentPep(df, k):
    seqList = list(df["wt"])
    variantDict = {}
    for index, row in df.iterrows():
        scores = row[list("ACDEFGHIKLMNPQRSTVWY")]
        top_k_scores = scores.where(scores.index != row["wt"]).sort_values(ascending=False).head(k)
        top_k_subs = list(top_k_scores.index)
        for res in top_k_subs:
            seqCopy...
Alright!! In principle, that's working!! The output structure needs to be modified so that the mutational variant is annotated as "pos_res". I think I like this.. Let's see how it works on the esm/prot_bert dataframes.
from berteome import prot_bert

mendel_prot_bert_DF = prot_bert.bertPredictionDF("MENDEL")
mendel_prot_bert_DF
augmentPep(mendel_prot_bert_DF, 2)

from berteome import esm

mendel_esm_DF = esm.esmPredictionDF("MENDEL")
augmentPep(mendel_esm_DF, 2)
# render the chart inline
#%matplotlib inline
import pandas as pd
from wordcloud import WordCloud

# word base
textos = [
    'eu nao gostei disso',
    'eu nao quero isso',
    'nao vou mais aceitar isso',
    'eu nao gosto disso',
    'gosto disso eu agora',
    'voce goatari...
MIT
Visualizando_dados_WordCloud.ipynb
EduardoMoraesRitter/Introdu-o-NLP-an-lise-de-sentimento
HDF Reference Recipe for CMIP6

This example illustrates how to create a {class}`pangeo_forge_recipes.recipes.HDFReferenceRecipe`. This recipe does not actually copy the original source data. Instead, it generates metadata files which reference and index the original data, allowing it to be accessed more efficiently. For m...
import s3fs

fs = s3fs.S3FileSystem(anon=True)
base_path = 's3://esgf-world/CMIP6/OMIP/NOAA-GFDL/GFDL-CM4/omip1/r1i1p1f1/Omon/thetao/gr/v20180701/'
all_paths = fs.ls(base_path)
all_paths
Apache-2.0
docs/pangeo_forge_recipes/tutorials/hdf_reference/reference_cmip6.ipynb
chc-ucsb/pangeo-forge-recipes
We see there are 15 individual NetCDF files. Let's time how long it takes to open and display one of them using Xarray.```{note}The argument `decode_coords='all'` helps Xarray promote all of the `_bnds` variables to coordinates (rather than data variables).```
import xarray as xr

%%time
ds_orig = xr.open_dataset(fs.open(all_paths[0]), engine='h5netcdf', chunks={}, decode_coords='all')
ds_orig
<timed exec>:1: UserWarning: Variable(s) referenced in cell_measures not in variables: ['areacello', 'volcello']
It took ~30 seconds to open this one dataset, so it would take 7-8 minutes to open every file. This would be annoyingly slow. As a first step in our recipe, we create a `FilePattern` to represent the input files. In this case, since we already have a list of inputs, we just use the `pattern_from_file_sequence` c...
from pangeo_forge_recipes.patterns import pattern_from_file_sequence

pattern = pattern_from_file_sequence(['s3://' + path for path in all_paths], 'time')
pattern
Define the Recipe

Once we have our `FilePattern` defined, defining our Recipe is straightforward. The only custom options we need are to specify that we'll be accessing the source files anonymously and to use `decode_coords='all'` when opening them.
from pangeo_forge_recipes.recipes.reference_hdf_zarr import HDFReferenceRecipe

rec = HDFReferenceRecipe(
    pattern,
    xarray_open_kwargs={"decode_coords": "all"},
    netcdf_storage_options={"anon": True}
)
rec
Storage

If the recipe execution occurs in a Bakery, cloud storage will be assigned automatically. For this example, we use the recipe's default storage, which is a temporary local directory.

Execute with Dask

This runs relatively slowly in serial on a small laptop, but it would scale out very well on the cloud.
from dask.diagnostics import ProgressBar

delayed = rec.to_dask()
with ProgressBar():
    delayed.compute()
[##################################### ] | 94% Completed | 6min 21.1s
Examine the Result

Load with Intake

The easiest way to load the dataset created by `fsspec_reference_maker` is via Intake. An intake catalog is automatically created in the target.
cat_url = f"{rec.target}/reference.yaml"
cat_url

import intake
cat = intake.open_catalog(cat_url)
cat
To load the data lazily:
%time ds = cat.data.to_dask() ds
CPU times: user 153 ms, sys: 12.4 ms, total: 166 ms Wall time: 777 ms
Apache-2.0
docs/pangeo_forge_recipes/tutorials/hdf_reference/reference_cmip6.ipynb
chc-ucsb/pangeo-forge-recipes
Note that it opened immediately! 🎉

```{note}
The Zarr chunks of the reference dataset correspond 1:1 to the HDF5 chunks in the original dataset. These chunks are often smaller than optimal for cloud-based analysis.
```

If we want to pass custom options to xarray when loading the dataset, we do so as follows. In this example...
ds = cat.data( chunks={'time': 10, 'lev': -1, 'lat': -1, 'lon': -1}, decode_coords='all' ).to_dask() ds
/Users//miniconda3/envs/pangeo-forge-recipes/lib/python3.9/site-packages/xarray/core/dataset.py:409: UserWarning: Specified Dask chunks (10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,...
Apache-2.0
docs/pangeo_forge_recipes/tutorials/hdf_reference/reference_cmip6.ipynb
chc-ucsb/pangeo-forge-recipes
Manual Loading

It is also possible to load the reference dataset directly with xarray, bypassing intake.
ref_url = f"{rec.target}/reference.json" ref_url import fsspec m = fsspec.get_mapper( "reference://", fo=ref_url, target_protocol="file", remote_protocol="s3", remote_options=dict(anon=True), skip_instance_cache=True, ) ds = xr.open_dataset( m, engine='zarr', backend_kwargs={'consoli...
_____no_output_____
Apache-2.0
docs/pangeo_forge_recipes/tutorials/hdf_reference/reference_cmip6.ipynb
chc-ucsb/pangeo-forge-recipes
Problem with Time Encoding

```{warning}
There is currently a bug with time encoding in [fsspec reference maker](https://github.com/intake/fsspec-reference-maker/issues/69) which causes the time coordinate in this dataset to be messed up. Until this bug is fixed, we can manually override the time variable.
```

We know the da...
import pandas as pd import datetime as dt ds = ds.assign_coords( time=pd.date_range(start='1708-01', freq='MS', periods=ds.dims['time']) + dt.timedelta(days=14) ) ds.time
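The date arithmetic can be sanity-checked on its own: `freq='MS'` anchors each timestamp to the start of a month, and adding 14 days lands on the 15th, matching the mid-month convention used above.

```python
import datetime as dt
import pandas as pd

# Month starts for the first three periods, shifted to mid-month
times = pd.date_range(start='1708-01', freq='MS', periods=3) + dt.timedelta(days=14)
print(times.strftime('%Y-%m-%d').tolist())  # ['1708-01-15', '1708-02-15', '1708-03-15']
```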
_____no_output_____
Apache-2.0
docs/pangeo_forge_recipes/tutorials/hdf_reference/reference_cmip6.ipynb
chc-ucsb/pangeo-forge-recipes
Make a Map

Let's just verify that we can read and visualize the data. We'll compare the first year to the last year.
ds_ann = ds.resample(time='A').mean() sst_diff = ds_ann.thetao.isel(time=-1, lev=0) - ds_ann.thetao.isel(time=0, lev=0) sst_diff.plot()
_____no_output_____
Apache-2.0
docs/pangeo_forge_recipes/tutorials/hdf_reference/reference_cmip6.ipynb
chc-ucsb/pangeo-forge-recipes
Advent of Code 2018 - Day 4

Jean-David HALIMI, 2018

https://adventofcode.com/2018/day/4

Strategy:

- read input
- sort alphabetically
- parse lines and build a list of (date, guard, minutes)
- use pandas DataFrames to sum and search asleep time
import re import numpy as np import pandas as pd def dumps(line): """writes the date as in sample""" d, g, l = line[0], line[1], line[2:] return '{} {:04} {}'.format(d, g, ''.join(('#' if x else '.') for x in l)) def read(input_file): """reads input and returns input sorted""" inputs = [] wi...
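The parsing step can be illustrated on a single record. This is a hypothetical one-line sketch of the same idea; the variable names are illustrative and not taken from the (truncated) code above:

```python
import re

line = "[1518-11-01 00:05] falls asleep"
# Capture the date, hour, minute, and the free-text event
m = re.match(r"\[(\d{4}-\d{2}-\d{2}) (\d{2}):(\d{2})\] (.+)", line)
date, minute, event = m.group(1), int(m.group(3)), m.group(4)
print(date, minute, event)  # 1518-11-01 5 falls asleep
```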
_____no_output_____
Unlicense
04/day-4.ipynb
jdhalimi/aoc-2018
Sample

- read sample
- apply the strategy
def sample(): return [x.strip() for x in """ [1518-11-01 00:00] Guard #10 begins shift [1518-11-01 00:05] falls asleep [1518-11-01 00:25] wakes up [1518-11-01 00:30] falls asleep [1518-11-01 00:55] wakes up ...
1518-11-01 0010 .....####################.....#########################..... 1518-11-02 0099 ........................................##########.......... 1518-11-03 0010 ........................#####............................... 1518-11-04 0099 ....................................##########.............. 1518-11-05 0...
Unlicense
04/day-4.ipynb
jdhalimi/aoc-2018
part 1
def strategy_1(lines): # create the dataframe with minutes as columns df = pd.DataFrame(lines, columns=['date', 'guard']+[str(x) for x in range(60)]) df.set_index(['date', 'guard'], inplace=True) # sums the total asleep time per line df['total'] = df['0':'59'].sum(axis=1) # sum columns...
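The pandas reduction in `strategy_1` can be sketched on a toy frame, with three minute columns standing in for the 60 above (illustrative counts, not the puzzle data):

```python
import pandas as pd

toy = pd.DataFrame({'guard': [10, 10, 99],
                    '0': [1, 1, 0], '1': [1, 0, 1], '2': [0, 0, 1]})
by_guard = toy.groupby('guard').sum()   # asleep counts per guard per minute
guard = by_guard.sum(axis=1).idxmax()   # guard asleep the longest in total
minute = by_guard.loc[guard].idxmax()   # that guard's sleepiest minute
print(guard, minute)  # 10 0
```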
_____no_output_____
Unlicense
04/day-4.ipynb
jdhalimi/aoc-2018
sample
lines = parse_input(sample()) print(strategy_1(lines))
240
Unlicense
04/day-4.ipynb
jdhalimi/aoc-2018
input file
lines = parse_input(read('input.txt')) print(strategy_1(lines))
19874
Unlicense
04/day-4.ipynb
jdhalimi/aoc-2018
Part 2
def strategy_2(lines): df = pd.DataFrame(lines, columns=['date', 'guard']+[str(x) for x in range(60)]) df.set_index(['date', 'guard'], inplace=True) # sum group by guard df2 = df.reset_index().set_index(['guard']).drop(['date'], axis=1).groupby('guard').sum() guard = df2.max(axis=1).idxmax() ...
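`strategy_2` differs only in the reduction: it looks for the single (guard, minute) cell with the highest count. A toy sketch of the `max(axis=1).idxmax()` step (illustrative counts):

```python
import pandas as pd

# Times asleep per guard (rows) per minute (columns)
toy2 = pd.DataFrame({'0': [2, 0], '1': [1, 3], '2': [0, 1]}, index=[10, 99])
guard = toy2.max(axis=1).idxmax()   # guard 99 owns the largest single cell (3)
minute = toy2.loc[guard].idxmax()   # at minute '1'
print(guard, minute)  # 99 1
```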
_____no_output_____
Unlicense
04/day-4.ipynb
jdhalimi/aoc-2018
Sample
lines = parse_input(sample()) print(strategy_2(lines))
4455
Unlicense
04/day-4.ipynb
jdhalimi/aoc-2018
Input file
lines = parse_input(read('input.txt')) print(strategy_2(lines))
22687
Unlicense
04/day-4.ipynb
jdhalimi/aoc-2018
Explore Data
df.head() df.info() df.describe() df.describe(include = 'all') df.isnull().sum() df = df.dropna() df.isnull().sum()
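The `isnull().sum()` / `dropna()` combination above can be seen on a tiny frame (the column names here are hypothetical):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'Strain': ['a', 'b', None], 'Rating': [4.0, np.nan, 5.0]})
print(toy.isnull().sum())   # one missing value per column
clean = toy.dropna()        # drops every row containing any NaN
print(len(clean))           # 1 -- only the fully populated row survives
```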
_____no_output_____
MIT
Data/my_model/Untitled.ipynb
James-Hagerman/DS-BW-3
Split data
train, val = train_test_split(df, test_size=0.2, random_state=42)

target = 'Strain'
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]

print('Baseline Accuracy:', y_train.value_counts(normalize=True).max())
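The baseline here is the majority-class accuracy: predict the most frequent label for every row. On a toy target it is easy to see why `value_counts(normalize=True).max()` computes it:

```python
import pandas as pd

y_toy = pd.Series(['a', 'a', 'a', 'b'])
baseline = y_toy.value_counts(normalize=True).max()
print(baseline)  # 0.75 -- always guessing 'a' is right 3 times out of 4
```

A baseline as low as the ~0.1% printed above suggests the target has a very large number of classes, since even the most frequent one covers only about 1 row in 900.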
Baseline Accuracy: 0.001098297638660077
MIT
Data/my_model/Untitled.ipynb
James-Hagerman/DS-BW-3
Model Building
model = make_pipeline( OneHotEncoder(use_cat_names = True), RandomForestRegressor(random_state = 42, n_jobs = -1) ) model.fit(X_train, y_train) encoder = ce.OrdinalEncoder(cols = ['Effects']) # le = preprocessing.LabelEncoder() # df['Effects'] = le.fit_transform(df.Effects) df.head() df1 = df['Effects'] list =...
_____no_output_____
MIT
Data/my_model/Untitled.ipynb
James-Hagerman/DS-BW-3
matrices element-wise product
a = np.array([[1, 2], [3, 4]])
b = np.array([[4, 5], [6, 7]])
a * b              # element-wise
np.multiply(a, b)  # element-wise
_____no_output_____
MIT
other/python_recap.ipynb
alirezadir/deep-learning
matrix product
np.dot(a, b)
np.matmul(a, b)
a.dot(b)

# transpose
a_t = a.T
a_t

a = np.array([2])
b = np.array([[2], [3]])
print(a)
print(b)
a.dot(b)

a_t[0][1] = 5  # note: the transpose is a view (not a copy), so a change in a_t changes a too!
a

x = np.array([0.5, -0.2, 0.1])  # (3,)
y = np.array([2, 4])            # (2,)
print(x[:, None] * y)  # ...
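The view behaviour noted in the comment above is worth demonstrating in isolation: `.T` returns a view onto the same buffer, so writes through the transpose are visible in the original array.

```python
import numpy as np

m = np.array([[1, 2], [3, 4]])
m_t = m.T          # a view, not a copy
m_t[0, 1] = 99     # actually writes m[1, 0]
print(m)           # [[ 1  2]
                   #  [99  4]]
```

Use `m.T.copy()` when an independent array is needed.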
100%|██████████| 10/10 [00:00<00:00, 7583.27it/s]
MIT
other/python_recap.ipynb
alirezadir/deep-learning
backprop
activation_function = lambda x : 1 / (1 + np.exp(-x)) list(map(activation_function, [2,3])) import numpy as np def sigmoid(x): """ Calculate sigmoid """ return 1 / (1 + np.exp(-x)) x = np.array([0.5, 0.1, -0.2]) target = 0.6 learnrate = 0.5 weights_input_hidden = np.array([[0.5, -0.6], ...
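The backprop update above relies on the sigmoid derivative sigmoid(x) * (1 - sigmoid(x)). A quick standalone check of that identity at its peak, against a finite-difference estimate:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = 0.0
grad = sigmoid(x) * (1 - sigmoid(x))  # analytic derivative of the sigmoid
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
print(grad)  # 0.25 -- the derivative's maximum, attained at x = 0
```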
_____no_output_____
MIT
other/python_recap.ipynb
alirezadir/deep-learning
In this tutorial we will show how to access and navigate the Iteration/Expression Tree (IET) rooted in an `Operator`.

Part I - Top Down

Let's start with a fairly trivial example. First of all, we disable all performance-related optimizations, to maximize the simplicity of the created IET as well as the readability of th...
from devito import configuration configuration['opt'] = 'noop' configuration['language'] = 'C'
_____no_output_____
MIT
examples/compiler/03_iet-A.ipynb
millennial-geoscience/devito
Then, we create a `TimeFunction` with 3 points in each of the space `Dimension`s _x_ and _y_.
from devito import Grid, TimeFunction grid = Grid(shape=(3, 3)) u = TimeFunction(name='u', grid=grid)
_____no_output_____
MIT
examples/compiler/03_iet-A.ipynb
millennial-geoscience/devito
We now create an `Operator` that increments by 1 all points in the computational domain.
from devito import Eq, Operator eq = Eq(u.forward, u+1) op = Operator(eq)
_____no_output_____
MIT
examples/compiler/03_iet-A.ipynb
millennial-geoscience/devito
An `Operator` is an IET node that can generate, JIT-compile, and run low-level code (e.g., C). Just like all other types of IET nodes, it has various metadata attached. For example, we can query an `Operator` to retrieve the input/output `Function`s.
op.input op.output
_____no_output_____
MIT
examples/compiler/03_iet-A.ipynb
millennial-geoscience/devito
If we print `op`, we can see what the generated code looks like.
print(op)
#define _POSIX_C_SOURCE 200809L #include "stdlib.h" #include "math.h" #include "sys/time.h" struct dataobj { void *restrict data; int * size; int * npsize; int * dsize; int * hsize; int * hofs; int * oofs; } ; struct profiler { double section0; } ; int Kernel(struct dataobj *restrict u_vec, const in...
MIT
examples/compiler/03_iet-A.ipynb
millennial-geoscience/devito
An `Operator` is the root of an IET that typically consists of several nested `Iteration`s and `Expression`s – two other fundamental IET node types. The user-provided SymPy equations are wrapped within `Expression`s. Loop nests embedding such expressions are constructed by suitably nesting `Iteration`s. The Devito compil...
from devito.tools import pprint pprint(op)
<Callable Kernel> <ArrayCast> <List (0, 1, 0)> <[affine,sequential,wrappable] Iteration time::time::(time_m, time_M, 1)> <TimedList (2, 1, 2)> <Section (1)> <[affine,parallel,parallel=,tilable] Iteration x::x::(x_m, x_M, 1)> <[affine,parallel,parallel=,tilable] Iteration y:...
MIT
examples/compiler/03_iet-A.ipynb
millennial-geoscience/devito
In this example, `op` is represented as a `<Callable Kernel>`. Attached to it are metadata, such as `_headers` and `_includes`, as well as the `body`, which includes the child IET nodes. Here, the body is the concatenation of an `ArrayCast` and a `List` object.
op._headers op._includes op.body
_____no_output_____
MIT
examples/compiler/03_iet-A.ipynb
millennial-geoscience/devito
We can explicitly traverse the `body` until we locate the user-provided `SymPy` equations.
print(op.body[0]) # Printing the ArrayCast print(op.body[1]) # Printing the List
for (int time = time_m, t0 = (time)%(2), t1 = (time + 1)%(2); time <= time_M; time += 1, t0 = (time)%(2), t1 = (time + 1)%(2)) { struct timeval start_section0, end_section0; gettimeofday(&start_section0, NULL); /* Begin section0 */ for (int x = x_m; x <= x_M; x += 1) { for (int y = y_m; y <= y_M; y += 1) ...
MIT
examples/compiler/03_iet-A.ipynb
millennial-geoscience/devito
Below we access the `Iteration` representing the time loop.
t_iter = op.body[1].body[0] t_iter
_____no_output_____
MIT
examples/compiler/03_iet-A.ipynb
millennial-geoscience/devito