Columns: markdown (stringlengths 0–1.02M), code (stringlengths 0–832k), output (stringlengths 0–1.02M), license (stringlengths 3–36), path (stringlengths 6–265), repo_name (stringlengths 6–127)
09 Strain Gage
This is one of the most commonly used sensors. It is used in many transducers. Its fundamental operating principle is fairly easy to understand, and it is the subject of this lecture. A strain gage is essentially a thin wire that is wrapped on a film of plastic. The strain gage is then mounted (glued...
Vs = 5.00
Vo = (120**2-120*110)/(230*240) * Vs
print('Vo = ',Vo, ' V')
# typical range in strain a strain gauge can measure
# 1 - 1000 micro-Strain
AxialStrain = 1000*10**(-6)  # axial strain
StrainGageFactor = 2
R_ini = 120  # Ohm
R_1 = R_ini+R_ini*StrainGageFactor*AxialStrain
print(R_1)
Vo = (120**2-120*(R_1))/((120+R_...
120.24 Vo = -0.002497502497502434 V
BSD-3-Clause
Lectures/09_StrainGage.ipynb
eiriniflorou/GWU-MAE3120_2022
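The quarter-bridge arithmetic in the cell above can be checked in plain Python. This sketch reuses the same values assumed there (120 Ω nominal gauge resistance, gauge factor 2, 1000 µε axial strain, 5 V supply):

```python
# Quarter-bridge Wheatstone output for a single strain gauge.
# Same numbers as the lecture cell above.
R = 120.0       # nominal gauge resistance, Ohm
S = 2.0         # gauge factor
eps = 1000e-6   # axial strain
Vs = 5.0        # supply voltage, V

R1 = R * (1 + S * eps)  # strained gauge resistance
# Bridge output with the gauge in one arm and three fixed 120-Ohm resistors:
Vo = (R**2 - R * R1) / ((R + R1) * (R + R)) * Vs
print(R1)  # 120.24
print(Vo)  # about -2.5 mV
```

The tiny (millivolt-scale) output for 1000 µε is why strain-gage bridges are almost always followed by an amplifier.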
> How important is it to know & match the resistances of the resistors you employ to create your bridge?
> How would you do that practically?
> Assume $R_1 = R_2 = R_3 = 120\,\Omega$, $R_4 = 120.01\,\Omega$, $V_s = 5.00\,\text{V}$. What is $V_\circ$?
Vs = 5.00
Vo = (120**2-120*120.01)/(240.01*240) * Vs
print(Vo)
-0.00010416232656978944
BSD-3-Clause
Lectures/09_StrainGage.ipynb
eiriniflorou/GWU-MAE3120_2022
2 - Strain gage 1:
One measures the strain on a bridge steel beam. The modulus of elasticity is $E=190$ GPa. Only one strain gage is mounted on the bottom of the beam; the strain gage factor is $S=2.02$.
> a) What kind of electronic circuit will you use? Draw a sketch of it.
> b) Assume all your resistors including the ...
S = 2.02
Vo = -0.00125
Vs = 5
eps_a = -1*(4/S)*(Vo/Vs)
print(eps_a)
0.0004950495049504951
BSD-3-Clause
Lectures/09_StrainGage.ipynb
eiriniflorou/GWU-MAE3120_2022
Tabular learner
> The function to immediately get a `Learner` ready to train for tabular data

The main function you probably want to use in this module is `tabular_learner`. It will automatically create a `TabularModel` suitable for your data and infer the right loss function. See the [tabular tutorial](http://docs.fast...
#export
@log_args(but_as=Learner.__init__)
class TabularLearner(Learner):
    "`Learner` for tabular data"
    def predict(self, row):
        tst_to = self.dls.valid_ds.new(pd.DataFrame(row).T)
        tst_to.process()
        tst_to.conts = tst_to.conts.astype(np.float32)
        dl = self.dls.valid.new(tst_to) ...
_____no_output_____
Apache-2.0
nbs/43_tabular.learner.ipynb
NickVlasov/fastai
It works exactly like a normal `Learner`; the only difference is that it implements a `predict` method that works on a single row of data.
#export
@log_args(to_return=True, but_as=Learner.__init__)
@delegates(Learner.__init__)
def tabular_learner(dls, layers=None, emb_szs=None, config=None, n_out=None, y_range=None, **kwargs):
    "Get a `Learner` using `dls`, with `metrics`, including a `TabularModel` created using the remaining params."
    if config is...
_____no_output_____
Apache-2.0
nbs/43_tabular.learner.ipynb
NickVlasov/fastai
If your data was built with fastai, you probably won't need to pass anything to `emb_szs` unless you want to change the default of the library (produced by `get_emb_sz`), same for `n_out` which should be automatically inferred. `layers` will default to `[200,100]` and is passed to `TabularModel` along with the `config`...
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
dls = TabularDataLoaders.from_df(df, path, procs=procs, cat_...
_____no_output_____
Apache-2.0
nbs/43_tabular.learner.ipynb
NickVlasov/fastai
Export -
#hide
from nbdev.export import notebook2script
notebook2script()
Converted 00_torch_core.ipynb. Converted 01_layers.ipynb. Converted 02_data.load.ipynb. Converted 03_data.core.ipynb. Converted 04_data.external.ipynb. Converted 05_data.transforms.ipynb. Converted 06_data.block.ipynb. Converted 07_vision.core.ipynb. Converted 08_vision.data.ipynb. Converted 09_vision.augment.ipynb. Co...
Apache-2.0
nbs/43_tabular.learner.ipynb
NickVlasov/fastai
Aerospike Connect for Spark - SparkML Prediction Model Tutorial
Tested with Java 8, Spark 3.0.0, Python 3.7, and Aerospike Spark Connector 3.0.0
Summary
Build a linear regression model to predict birth weight using Aerospike Database and Spark. Here are the features used:
- gestation weeks
- mother’s age
- father’s age
- m...
# IP Address or DNS name for one host in your Aerospike cluster.
# A seed address for the Aerospike database cluster is required
AS_HOST = "127.0.0.1"
# Name of one of your namespaces. Type 'show namespaces' at the aql prompt if you are not sure
AS_NAMESPACE = "test"
AS_FEATURE_KEY_PATH = "/etc/aerospike/features.conf" ...
Spark Version: 3.0.0
MIT
notebooks/spark/other_notebooks/AerospikeSparkMLLinearRegression.ipynb
artanderson/interactive-notebooks
Step 1: Load Data into a DataFrame
as_data = spark \
    .read \
    .format("aerospike") \
    .option("aerospike.set", "natality").load()

as_data.show(5)
print("Inferred Schema along with Metadata.")
as_data.printSchema()
+-----+--------------------+---------+------------+-------+-------------+---------------+-------------+----------+----------+----------+ |__key| __digest| __expiry|__generation| __ttl| weight_pnd|weight_gain_pnd|gstation_week|apgar_5min|mother_age|father_age| +-----+--------------------+---------+--------...
MIT
notebooks/spark/other_notebooks/AerospikeSparkMLLinearRegression.ipynb
artanderson/interactive-notebooks
To speed up the load process at scale, use the [knobs](https://www.aerospike.com/docs/connect/processing/spark/performance.html) available in the Aerospike Spark Connector. For example, **spark.conf.set("aerospike.partition.factor", 15 )** will map 4096 Aerospike partitions to 32K Spark partitions. (Note: Please conf...
# This Spark 3.0 setting, if true, will turn on Adaptive Query Execution (AQE), which will make use of the
# runtime statistics to choose the most efficient query execution plan. It will speed up any joins that you
# plan to use for the data prep step.
spark.conf.set("spark.sql.adaptive.enabled", 'true')

# Run a query in ...
+------------------+---------------+-------------+----------+----------+----------+ | weight_pnd|weight_gain_pnd|gstation_week|apgar_5min|mother_age|father_age| +------------------+---------------+-------------+----------+----------+----------+ | 7.5398093604| 38| 39| 9| ...
MIT
notebooks/spark/other_notebooks/AerospikeSparkMLLinearRegression.ipynb
artanderson/interactive-notebooks
Step 3 Visualize Data
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import math

pdf = clean_data.toPandas()

# Histogram - Father Age
pdf[['father_age']].plot(kind='hist', bins=10, rwidth=0.8)
plt.xlabel('Fathers Age (years)', fontsize=12)
plt.legend(loc=None)
plt.style.use('seaborn-whitegrid')
plt.show()

'''
pdf[['m...
_____no_output_____
MIT
notebooks/spark/other_notebooks/AerospikeSparkMLLinearRegression.ipynb
artanderson/interactive-notebooks
Step 4 - Create Model
**Steps used for model creation:**
1. Split cleaned data into Training and Test sets
2. Vectorize features on which the model will be trained
3. Create a linear regression model (choose any ML algorithm that provides the best fit for the given dataset)
4. Train model (Although not shown here, you coul...
# Define a function that collects the features of interest
# (mother_age, father_age, and gestation_weeks) into a vector.
# Package the vector in a tuple containing the label (`weight_pounds`) for that
# row.
def vector_from_inputs(r):
    return (r["weight_pnd"], Vectors.dense(float(r["mother_age"]), ...
Coefficients:[0.00858931617782676,0.0008477851947958541,0.27948866120791893,0.009329081045860402,0.18817058385589935] Intercept:-5.893364345930709 R^2:0.3970187134779115 +--------------------+ | residuals| +--------------------+ | -1.845934264937739| | -2.2396120149639067| | -0.7717836944756593| | -0.6160804...
MIT
notebooks/spark/other_notebooks/AerospikeSparkMLLinearRegression.ipynb
artanderson/interactive-notebooks
Evaluate Model
eval_data = test.rdd.map(vector_from_inputs).toDF(["label", "features"])
eval_data.show()
evaluation_summary = model.evaluate(eval_data)
print("MAE:", evaluation_summary.meanAbsoluteError)
print("RMSE:", evaluation_summary.rootMeanSquaredError)
print("R-squared va...
+------------------+--------------------+ | label| features| +------------------+--------------------+ | 3.62439958728|[42.0,37.0,35.0,5...| | 5.3351867404|[43.0,48.0,38.0,6...| | 6.8122838958|[42.0,36.0,39.0,2...| | 6.9776305923|[46.0,42.0,39.0,2...| | 7.06361087448|[14.0,...
MIT
notebooks/spark/other_notebooks/AerospikeSparkMLLinearRegression.ipynb
artanderson/interactive-notebooks
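The two error metrics reported by `evaluate` are easy to compute by hand. A minimal pure-Python sketch on a few made-up (label, prediction) pairs — the numbers here are illustrative, not from the natality dataset:

```python
import math

# Hypothetical (label, prediction) pairs in pounds, for illustration only.
pairs = [(3.62, 6.44), (5.34, 6.89), (6.81, 7.20), (7.06, 7.11)]

residuals = [label - pred for label, pred in pairs]
mae = sum(abs(r) for r in residuals) / len(residuals)            # mean absolute error
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))  # root mean squared error
print("MAE:", mae)
print("RMSE:", rmse)
```

RMSE penalizes large residuals more heavily than MAE, which is why it is always at least as large as MAE on the same data.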
Step 5 - Batch Prediction
# eval_data contains the records (ideally production) that you'd like to use for the prediction
predictions = model.transform(eval_data)
predictions.show()
+------------------+--------------------+-----------------+ | label| features| prediction| +------------------+--------------------+-----------------+ | 3.62439958728|[42.0,37.0,35.0,5...|6.440847435018738| | 5.3351867404|[43.0,48.0,38.0,6...| 6.88674880594522| | 6.8122838958|...
MIT
notebooks/spark/other_notebooks/AerospikeSparkMLLinearRegression.ipynb
artanderson/interactive-notebooks
Compare the labels and the predictions; they should ideally match up for an accurate model. The label is the actual weight of the baby and the prediction is the predicted weight.
Saving the Predictions to Aerospike for the ML Application's consumption
# Aerospike is a key/value database, so a key is needed to store the predictions.
# We therefore add an _id column to the predictions using Spark SQL.
predictions.createOrReplaceTempView("predict_view")

sql_query = """
SELECT *, monotonically_increasing_id() as _id from...
_____no_output_____
MIT
notebooks/spark/other_notebooks/AerospikeSparkMLLinearRegression.ipynb
artanderson/interactive-notebooks
Concurrency with asyncio
Thread vs. coroutine
# spinner_thread.py
import threading
import itertools
import time
import sys

class Signal:
    go = True

def spin(msg, signal):
    write, flush = sys.stdout.write, sys.stdout.flush
    for char in itertools.cycle('|/-\\'):
        status = char + ' ' + msg
        write(status)
        flush()
        write('\x08' ...
_____no_output_____
Apache-2.0
notebook/fluent_ch18.ipynb
Lin0818/py-study-notebook
Writing asyncio servers
# tcp_charfinder.py
import sys
import asyncio
from charfinder import UnicodeNameIndex

CRLF = b'\r\n'
PROMPT = b'?>'
index = UnicodeNameIndex()

@asyncio.coroutine
def handle_queries(reader, writer):
    while True:
        writer.write(PROMPT)
        yield from writer.drain()
        data = yield from reader.readli...
_____no_output_____
Apache-2.0
notebook/fluent_ch18.ipynb
Lin0818/py-study-notebook
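The `@asyncio.coroutine`/`yield from` style shown above is the legacy pre-3.5 API (removed in Python 3.11). A minimal modern `async def` sketch of the same prompt/read/write loop, using a hypothetical echo handler in place of the real `UnicodeNameIndex` lookup:

```python
import asyncio

CRLF = b'\r\n'
PROMPT = b'?>'

async def handle_queries(reader, writer):
    # Modern equivalent of the yield-from loop: prompt, read a line, reply.
    # (The real server would query UnicodeNameIndex here instead of echoing.)
    while True:
        writer.write(PROMPT)
        await writer.drain()
        data = await reader.readline()
        if not data or data.strip() == b'quit':
            break
        writer.write(b'echo: ' + data.strip() + CRLF)
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # Start the server on an ephemeral port and talk to it once.
    server = await asyncio.start_server(handle_queries, '127.0.0.1', 0)
    host, port = server.sockets[0].getsockname()[:2]
    reader, writer = await asyncio.open_connection(host, port)
    await reader.readexactly(len(PROMPT))   # consume the prompt
    writer.write(b'chess' + CRLF)
    await writer.drain()
    reply = await reader.readline()
    writer.write(b'quit' + CRLF)
    await writer.drain()
    writer.close()
    server.close()
    await server.wait_closed()
    return reply

print(asyncio.run(main()))  # b'echo: chess\r\n'
```

Note that `asyncio.start_server` and the stream reader/writer pair play the same roles as in the legacy version; only the coroutine syntax changes.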
Raw data preprocessing
This program writes the raw txt-format data into a csv file in utf-8 encoding for later processing. Before use, confirm that the raw-data folder contains no unrelated files, and rename each category sub-folder to one of 1-9. One workable correspondence is: 财经 1 economy, 房产 2 realestate, 健康 3 health, 教育 4 education, 军事 5 military, 科技 6 technology, 体育 7 sports, 娱乐 8 entertainment, 证券 9 stock. First, import a few libraries
import os             # for file operations
import pandas as pd   # for reading and writing data
_____no_output_____
MIT
filePreprocessing.ipynb
zinccat/WeiboTextClassification
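The folder-name-to-label correspondence suggested above can be captured in a plain dict. A sketch, with the ids following the mapping in the notes (the helper name `label_for` is introduced here for illustration):

```python
# Suggested folder-name -> numeric label mapping from the notes above.
CATEGORY_IDS = {
    'economy': 1, 'realestate': 2, 'health': 3,
    'education': 4, 'military': 5, 'technology': 6,
    'sports': 7, 'entertainment': 8, 'stock': 9,
}

def label_for(folder_name):
    """Return the numeric category id for a corpus sub-folder name."""
    return CATEGORY_IDS[folder_name]

print(label_for('sports'))  # 7
```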
The processing function below reads folder names as data categories and writes the data to a csv file as (text, category) pairs. Parameters: corpus_path: root directory of the raw corpus; out_path: output directory for the processed files.
def processing(corpus_path, out_path):
    if not os.path.exists(out_path):  # create the output directory if it does not exist
        os.makedirs(out_path)
    clist = os.listdir(corpus_path)   # list the folders under the raw-data root
    for classid in clist:             # process each folder in turn
        dict = {'text': [], 'category': []}
        class_path = corpus_path+classid+"/"
        filel...
_____no_output_____
MIT
filePreprocessing.ipynb
zinccat/WeiboTextClassification
Process the files
processing("./data/", "./dataset/")
_____no_output_____
MIT
filePreprocessing.ipynb
zinccat/WeiboTextClassification
Logistic Regression
Table of Contents
In this lab, we will cover logistic regression using PyTorch.
- Logistic Function
- Build a Logistic Regression Using nn.Sequential
- Build Custom Modules
Estimated Time Needed: 15 min
Preparation
We'll need the following libraries:
# Import the libraries we need for this lab
import torch.nn as nn
import torch
import matplotlib.pyplot as plt
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Set the random seed:
# Set the random seed
torch.manual_seed(2)
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Logistic Function
Create a tensor ranging from -100 to 100:
z = torch.arange(-100, 100, 0.1).view(-1, 1)
print("The tensor: ", z)
The tensor: tensor([[-100.0000], [ -99.9000], [ -99.8000], ..., [ 99.7000], [ 99.8000], [ 99.9000]])
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
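Why the sigmoid plot below flattens at both ends can be checked numerically. A pure-Python sketch of the logistic function at a few of the same z values:

```python
import math

def sigmoid(z):
    # Numerically stable logistic function 1 / (1 + e^(-z)).
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

for z in (-100, -1, 0, 1, 100):
    print(z, sigmoid(z))
# sigmoid(-100) is ~0, sigmoid(0) is exactly 0.5, sigmoid(100) is ~1
```

Saturation at large |z| is why the tensor above, spanning -100 to 100, produces a curve that is essentially flat except near zero.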
Create a sigmoid object:
# Create sigmoid object
sig = nn.Sigmoid()
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Apply the element-wise function Sigmoid with the object:
# Use the sigmoid object to calculate the prediction
yhat = sig(z)
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Plot the results:
plt.plot(z.numpy(), yhat.numpy())
plt.xlabel('z')
plt.ylabel('yhat')
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Apply the element-wise Sigmoid from the function module and plot the results:
yhat = torch.sigmoid(z)
plt.plot(z.numpy(), yhat.numpy())
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Build a Logistic Regression with nn.Sequential
Create a 1x1 tensor x representing one data sample with one dimension, and a 2x1 tensor X representing two data samples of one dimension:
# Create x and X tensor
x = torch.tensor([[1.0]])
X = torch.tensor([[1.0], [100]])
print('x = ', x)
print('X = ', X)
x = tensor([[1.]]) X = tensor([[ 1.], [100.]])
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Create a logistic regression object with the nn.Sequential model with a one-dimensional input:
# Use sequential function to create model
model = nn.Sequential(nn.Linear(1, 1), nn.Sigmoid())
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
The object is represented in the following diagram: In this case, the parameters are randomly initialized. You can view them in the following ways:
# Print the parameters
print("list(model.parameters()):\n ", list(model.parameters()))
print("\nmodel.state_dict():\n ", model.state_dict())
list(model.parameters()): [Parameter containing: tensor([[0.2294]], requires_grad=True), Parameter containing: tensor([-0.2380], requires_grad=True)] model.state_dict(): OrderedDict([('0.weight', tensor([[0.2294]])), ('0.bias', tensor([-0.2380]))])
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Make a prediction with one sample:
# The prediction for x
yhat = model(x)
print("The prediction: ", yhat)
The prediction: tensor([[0.4979]], grad_fn=<SigmoidBackward>)
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
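The printed prediction can be reproduced by hand from the printed parameters. A sketch using the weight 0.2294 and bias -0.2380 shown in the `state_dict` output above:

```python
import math

w, b = 0.2294, -0.2380  # parameters printed by model.state_dict() above
x = 1.0

z = w * x + b                       # linear part
yhat = 1.0 / (1.0 + math.exp(-z))   # sigmoid
print(yhat)  # ~0.4979, matching model(x) above
```

This is all `nn.Sequential(nn.Linear(1, 1), nn.Sigmoid())` does for a single scalar input: one affine map followed by the logistic function.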
Calling the object with tensor X performed the following operation (the values in the code may differ from those in the diagram depending on the version of PyTorch): Make a prediction with multiple samples:
# The prediction for X
yhat = model(X)
yhat
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Calling the object performed the following operation: Create a 1x2 tensor x representing one data sample with two dimensions, and a 3x2 tensor X representing three data samples of two dimensions:
# Create and print samples
x = torch.tensor([[1.0, 1.0]])
X = torch.tensor([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
print('x = ', x)
print('X = ', X)
x = tensor([[1., 1.]]) X = tensor([[1., 1.], [1., 2.], [1., 3.]])
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Create a logistic regression object with the nn.Sequential model with a two-dimensional input:
# Create new model using nn.Sequential()
model = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
The object will apply the Sigmoid function to the output of the linear function as shown in the following diagram: In this case, the parameters are randomly initialized. You can view them in the following ways:
# Print the parameters
print("list(model.parameters()):\n ", list(model.parameters()))
print("\nmodel.state_dict():\n ", model.state_dict())
list(model.parameters()): [Parameter containing: tensor([[ 0.1939, -0.0361]], requires_grad=True), Parameter containing: tensor([0.3021], requires_grad=True)] model.state_dict(): OrderedDict([('0.weight', tensor([[ 0.1939, -0.0361]])), ('0.bias', tensor([0.3021]))])
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Make a prediction with one sample:
# Make the prediction of x
yhat = model(x)
print("The prediction: ", yhat)
The prediction: tensor([[0.6130]], grad_fn=<SigmoidBackward>)
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
The operation is represented in the following diagram: Make a prediction with multiple samples:
# The prediction of X
yhat = model(X)
print("The prediction: ", yhat)
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
The operation is represented in the following diagram:
Build Custom Modules
In this section, you will build a custom module (class). The model, or object function, is identical to using nn.Sequential. Create a logistic regression custom module:
# Create logistic_regression custom class
class logistic_regression(nn.Module):
    # Constructor
    def __init__(self, n_inputs):
        super(logistic_regression, self).__init__()
        self.linear = nn.Linear(n_inputs, 1)
    # Prediction
    def forward(self, x):
        yhat = torch.sigmoid(self.lin...
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Create a 1x1 tensor x representing one data sample with one dimension, and a 3x1 tensor X representing three data samples of one dimension:
# Create x and X tensor
x = torch.tensor([[1.0]])
X = torch.tensor([[-100], [0], [100.0]])
print('x = ', x)
print('X = ', X)
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Create a model to predict one dimension:
# Create logistic regression model
model = logistic_regression(1)
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
In this case, the parameters are randomly initialized. You can view them in the following ways:
# Print parameters
print("list(model.parameters()):\n ", list(model.parameters()))
print("\nmodel.state_dict():\n ", model.state_dict())
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Make a prediction with one sample:
# Make the prediction of x
yhat = model(x)
print("The prediction result: \n", yhat)
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Make a prediction with multiple samples:
# Make the prediction of X
yhat = model(X)
print("The prediction result: \n", yhat)
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Create a logistic regression object from the custom class with two inputs:
# Create logistic regression model
model = logistic_regression(2)
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Create a 1x2 tensor x representing one data sample with two dimensions, and a 3x2 tensor X representing three data samples of two dimensions:
# Create x and X tensor
x = torch.tensor([[1.0, 2.0]])
X = torch.tensor([[100, -100], [0.0, 0.0], [-100, 100]])
print('x = ', x)
print('X = ', X)
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Make a prediction with one sample:
# Make the prediction of x
yhat = model(x)
print("The prediction result: \n", yhat)
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Make a prediction with multiple samples:
# Make the prediction of X
yhat = model(X)
print("The prediction result: \n", yhat)
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
Practice
Make your own model my_model that applies a linear function and then the sigmoid using nn.Sequential(). Print out your prediction.
# Practice: Make your model and make the prediction
X = torch.tensor([-10.0])
_____no_output_____
MIT
IBM_AI/4_Pytorch/5.1logistic_regression_prediction_v2.ipynb
merula89/cousera_notebooks
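A pure-Python sketch of what the practice model `nn.Sequential(nn.Linear(1, 1), nn.Sigmoid())` computes, with an assumed weight and bias (in the notebook, `nn.Linear` would initialize these randomly):

```python
import math

# Assumed parameters; nn.Linear(1, 1) would initialize these randomly.
w, b = 0.5, 0.1

def my_model(x):
    # Linear layer followed by the logistic (sigmoid) activation.
    z = w * x + b
    return 1.0 / (1.0 + math.exp(-z))

print(my_model(-10.0))  # far into the left tail, close to 0
```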
Classification on Iris dataset with sklearn and DJL
In this notebook, you will use a pre-trained sklearn model with DJL for a general classification task. The model was trained with the [Iris flower dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set).
Background
Iris Dataset
The dataset contains a set ...
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.8.0
%maven ai.djl.onnxruntime:onnxruntime-engine:0.8.0
%maven ai.djl.pytorch:pytorch-engine:0.8.0
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
%maven com.microsoft.onnxruntime:onnxruntime:1.4...
_____no_output_____
Apache-2.0
jupyter/onnxruntime/machine_learning_with_ONNXRuntime.ipynb
raghav-deepsource/djl
Step 1: Create a Translator
Inference in machine learning is the process of predicting the output for a given input based on a pre-defined model. DJL abstracts away the whole process for ease of use. It can load the model, perform inference on the input, and provide output. DJL also allows you to provide user-defined inpu...
public static class IrisFlower {
    public float sepalLength;
    public float sepalWidth;
    public float petalLength;
    public float petalWidth;

    public IrisFlower(float sepalLength, float sepalWidth, float petalLength, float petalWidth) {
        this.sepalLength = sepalLength;
        this.sepalWidth = sep...
_____no_output_____
Apache-2.0
jupyter/onnxruntime/machine_learning_with_ONNXRuntime.ipynb
raghav-deepsource/djl
Let's create a translator
public static class MyTranslator implements Translator<IrisFlower, Classifications> {

    private final List<String> synset;

    public MyTranslator() {
        // species name
        synset = Arrays.asList("setosa", "versicolor", "virginica");
    }

    @Override
    public NDList processInput(TranslatorContext ct...
_____no_output_____
Apache-2.0
jupyter/onnxruntime/machine_learning_with_ONNXRuntime.ipynb
raghav-deepsource/djl
Step 2: Prepare your model
We will load a pretrained sklearn model into DJL. We defined a [`ModelZoo`](https://javadoc.io/doc/ai.djl/api/latest/ai/djl/repository/zoo/ModelZoo.html) concept to allow users to load models from a variety of locations, such as a remote URL, local files, or the DJL pretrained model zoo. We need to define `C...
String modelUrl = "https://mlrepo.djl.ai/model/tabular/random_forest/ai/djl/onnxruntime/iris_flowers/0.0.1/iris_flowers.zip";
Criteria<IrisFlower, Classifications> criteria = Criteria.builder()
        .setTypes(IrisFlower.class, Classifications.class)
        .optModelUrls(modelUrl)
        .optTranslator(new MyTransl...
_____no_output_____
Apache-2.0
jupyter/onnxruntime/machine_learning_with_ONNXRuntime.ipynb
raghav-deepsource/djl
Step 3: Run inference
You just need to create a `Predictor` from the model to run the inference.
Predictor<IrisFlower, Classifications> predictor = model.newPredictor();
IrisFlower info = new IrisFlower(1.0f, 2.0f, 3.0f, 4.0f);
predictor.predict(info);
_____no_output_____
Apache-2.0
jupyter/onnxruntime/machine_learning_with_ONNXRuntime.ipynb
raghav-deepsource/djl
Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jup...
# Installs geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('Installing geemap ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

import ee
import geemap
_____no_output_____
MIT
Algorithms/landsat_radiance.ipynb
OIEIEIO/earthengine-py-notebooks
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
Map = geemap.Map(center=[40, -100], zoom=4)
Map
_____no_output_____
MIT
Algorithms/landsat_radiance.ipynb
OIEIEIO/earthengine-py-notebooks
Add Earth Engine Python script
# Add Earth Engine dataset
# Load a raw Landsat scene and display it.
raw = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
Map.centerObject(raw, 10)
Map.addLayer(raw, {'bands': ['B4', 'B3', 'B2'], 'min': 6000, 'max': 12000}, 'raw')

# Convert the raw data to radiance.
radiance = ee.Algorithms.Landsat.calibratedRa...
_____no_output_____
MIT
Algorithms/landsat_radiance.ipynb
OIEIEIO/earthengine-py-notebooks
Display Earth Engine data layers
Map.addLayerControl()  # This line is not needed for ipyleaflet-based Map.
Map
_____no_output_____
MIT
Algorithms/landsat_radiance.ipynb
OIEIEIO/earthengine-py-notebooks
Import Libraries
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms
%matplotlib inline
import matplotlib.pyplot as plt
_____no_output_____
MIT
MNIST/Session2/3_Global_Average_Pooling.ipynb
gmshashank/pytorch_vision
Data Transformations
We start by defining our data transformations. We need to think about what our data is and how we can augment it to correctly represent images the model might not otherwise see.
# Train Phase transformations
train_transforms = transforms.Compose([
    # transforms.Resize((28, 28)),
    # transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
    transforms.ToTensor...
_____no_output_____
MIT
MNIST/Session2/3_Global_Average_Pooling.ipynb
gmshashank/pytorch_vision
Dataset and Creating Train/Test Split
train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./data/MNIST/raw/train-images-idx3-ubyte.gz
MIT
MNIST/Session2/3_Global_Average_Pooling.ipynb
gmshashank/pytorch_vision
Dataloader Arguments & Test/Train Dataloaders
SEED = 1

# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)

# For reproducibility
torch.manual_seed(SEED)
if cuda:
    torch.cuda.manual_seed(SEED)

# dataloader arguments - typically you'd fetch these from the command line
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=T...
CUDA Available? True
MIT
MNIST/Session2/3_Global_Average_Pooling.ipynb
gmshashank/pytorch_vision
Data Statistics
It is important to know your data very well. Let's check some of the statistics of our data and see what it actually looks like.
# We'd need to convert it into Numpy! Remember above we have converted it into tensors already
train_data = train.train_data
train_data = train.transform(train_data.numpy())

print('[Train]')
print(' - Numpy Shape:', train.train_data.cpu().numpy().shape)
print(' - Tensor Shape:', train.train_data.size())
print(' - min:...
MIT
MNIST/Session2/3_Global_Average_Pooling.ipynb
gmshashank/pytorch_vision
MORE
It is important that we view as many images as possible. This is required to get some idea about the image augmentation we'll do later on.
figure = plt.figure()
num_of_images = 60
for index in range(1, num_of_images + 1):
    plt.subplot(6, 10, index)
    plt.axis('off')
    plt.imshow(images[index].numpy().squeeze(), cmap='gray_r')
_____no_output_____
MIT
MNIST/Session2/3_Global_Average_Pooling.ipynb
gmshashank/pytorch_vision
The model
Let's start with the model we first saw.
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Input Block
        self.convblock1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
            nn.ReLU(),
        )  # output_size = 26
        # CONVOLUTION BL...
_____no_output_____
MIT
MNIST/Session2/3_Global_Average_Pooling.ipynb
gmshashank/pytorch_vision
Model Params
Can't emphasize enough how important viewing the model summary is. Unfortunately, there is no built-in model visualizer, so we have to take external help.
!pip install torchsummary
from torchsummary import summary

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)

model = Net().to(device)
summary(model, input_size=(1, 28, 28))
Requirement already satisfied: torchsummary in /usr/local/lib/python3.6/dist-packages (1.5.1) cuda ---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ Conv2d-1 ...
MIT
MNIST/Session2/3_Global_Average_Pooling.ipynb
gmshashank/pytorch_vision
Training and Testing
Looking at logs can be boring, so we'll introduce a **tqdm** progress bar to get cooler logs. Let's write the train and test functions.
from tqdm import tqdm

train_losses = []
test_losses = []
train_acc = []
test_acc = []

def train(model, device, train_loader, optimizer, epoch):
    global train_max
    model.train()
    pbar = tqdm(train_loader)
    correct = 0
    processed = 0
    for batch_idx, (data, target) in enumerate(pbar):
        # get samp...
_____no_output_____
MIT
MNIST/Session2/3_Global_Average_Pooling.ipynb
gmshashank/pytorch_vision
Let's train and test our model
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
EPOCHS = 15
train_max = 0
test_max = 0

for epoch in range(EPOCHS):
    print("EPOCH:", epoch)
    train(model, device, train_loader, optimizer, epoch)
    test(model, device, test_loader)

print(f"\nMaximum training accuracy: {train_m...
_____no_output_____
MIT
MNIST/Session2/3_Global_Average_Pooling.ipynb
gmshashank/pytorch_vision
basic operations on an image
import cv2
import numpy as np

impath = r"D:/Study/example_ml/computer_vision_example/cv_exercise/opencv-master/samples/data/messi5.jpg"
img = cv2.imread(impath)
print(img.shape)
print(img.size)
print(img.dtype)

b,g,r = cv2.split(img)
img = cv2.merge((b,g,r))

cv2.imshow("image",img)
cv2.waitKey(0)
cv2.destroyAllWindows(...
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
copy and paste
import cv2
import numpy as np

impath = r"D:/Study/example_ml/computer_vision_example/cv_exercise/opencv-master/samples/data/messi5.jpg"
img = cv2.imread(impath)
'''b,g,r = cv2.split(img)
img = cv2.merge((b,g,r))'''

ball = img[280:340,330:390]
img[273:333,100:160] = ball

cv2.imshow("image",img)
cv2.waitKey(0)
cv2.destro...
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
merge two images
import cv2
import numpy as np

impath = r"D:/Study/example_ml/computer_vision_example/cv_exercise/opencv-master/samples/data/messi5.jpg"
impath1 = r"D:/Study/example_ml/computer_vision_example/cv_exercise/opencv-master/samples/data/opencv-logo.png"
img = cv2.imread(impath)
img1 = cv2.imread(impath1)
img = cv2.resize(img...
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
bitwise operation
import cv2
import numpy as np

img1 = np.zeros([250,500,3],np.uint8)
img1 = cv2.rectangle(img1,(200,0),(300,100),(255,255,255),-1)
img2 = np.full((250, 500, 3), 255, dtype=np.uint8)
img2 = cv2.rectangle(img2, (0, 0), (250, 250), (0, 0, 0), -1)

#bit_and = cv2.bitwise_and(img2,img1)
#bit_or = cv2.bitwise_or(img2,img1)
#bi...
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
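The commented-out `cv2.bitwise_*` calls above combine masks pixel by pixel, bit by bit; NumPy's bitwise operators behave the same way on uint8 arrays. A sketch with two made-up one-row masks:

```python
import numpy as np

# Two binary masks: 255 where a shape is drawn, 0 elsewhere (made-up data)
m1 = np.array([[255, 255, 0, 0]], dtype=np.uint8)
m2 = np.array([[255, 0, 255, 0]], dtype=np.uint8)

bit_and = m1 & m2        # 255 only where both masks are set
bit_or  = m1 | m2        # 255 where either mask is set
bit_xor = m1 ^ m2        # 255 where exactly one mask is set
bit_not = ~m1            # flips every bit: 255 <-> 0

print(bit_and.tolist())  # [[255, 0, 0, 0]]
```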
simple thresholding THRESH_BINARY
import cv2 import numpy as np img = cv2.imread('gradient.jpg',0) _,th1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY) #check every pixel with 127 cv2.imshow("img",img) cv2.imshow("th1",th1) cv2.waitKey(0) cv2.destroyAllWindows()
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
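The `THRESH_BINARY` rule used above is simple enough to write in plain NumPy: pixels above the threshold become `maxval`, everything else becomes 0. A sketch on a made-up one-row image:

```python
import numpy as np

# The rule behind cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
img = np.array([[0, 100, 127, 128, 255]], dtype=np.uint8)

thresh, maxval = 127, 255
th1 = np.where(img > thresh, maxval, 0).astype(np.uint8)

print(th1.tolist())  # [[0, 0, 0, 255, 255]]
```

Note that the comparison is strictly greater-than, so a pixel equal to the threshold (127 here) maps to 0.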
THRESH_BINARY_INV
import cv2 import numpy as np img = cv2.imread('gradient.jpg',0) _,th1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY) _,th2 = cv2.threshold(img,127,255,cv2.THRESH_BINARY_INV) #check every pixel with 127 cv2.imshow("img",img) cv2.imshow("th1",th1) cv2.imshow("th2",th2) cv2.waitKey(0) cv2.destroyAllWindows()
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
THRESH_TRUNC
import cv2 import numpy as np img = cv2.imread('gradient.jpg',0) _,th1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY) _,th2 = cv2.threshold(img,255,255,cv2.THRESH_TRUNC) #check every pixel with 127 cv2.imshow("img",img) cv2.imshow("th1",th1) cv2.imshow("th2",th2) cv2.waitKey(0) cv2.destroyAllWindows()
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
THRESH_TOZERO
import cv2 import numpy as np img = cv2.imread('gradient.jpg',0) _,th1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY) _,th2 = cv2.threshold(img,127,255,cv2.THRESH_TOZERO) #check every pixel with 127 _,th3 = cv2.threshold(img,127,255,cv2.THRESH_TOZERO_INV) #check every pixel with 127 cv2.imshow("img",img) cv2.imshow("...
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
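The non-binary threshold types used in the two cells above also reduce to one-line NumPy rules. A sketch on a made-up one-row image:

```python
import numpy as np

img = np.array([[0, 100, 127, 200, 255]], dtype=np.uint8)
thresh = 127

# THRESH_TRUNC: values above the threshold are clipped to it, the rest pass through
th_trunc = np.minimum(img, thresh).astype(np.uint8)

# THRESH_TOZERO: values above the threshold pass through, the rest become 0
th_tozero = np.where(img > thresh, img, 0).astype(np.uint8)

# THRESH_TOZERO_INV is the complement: keep only values at or below the threshold
th_tozero_inv = np.where(img > thresh, 0, img).astype(np.uint8)

print(th_trunc.tolist())   # [[0, 100, 127, 127, 127]]
print(th_tozero.tolist())  # [[0, 0, 0, 200, 255]]
```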
Adaptive Thresholding calculates the threshold for smaller regions of the image, so we get different threshold values for different regions of the same image.
import cv2 import numpy as np img = cv2.imread('sudoku1.jpg') img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) _,th1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY) th2 = cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY,11,2) th3 = cv2.adaptiveThreshold(img,255,cv2.ADAPTI...
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
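The idea behind `ADAPTIVE_THRESH_MEAN_C` above can be sketched without OpenCV: for each pixel, take the mean of its `blockSize x blockSize` neighbourhood minus a constant `C`, and threshold against that local value. A naive (slow, loop-based) sketch on a made-up image with a dark half and a bright half; the helper name is my own:

```python
import numpy as np

def adaptive_mean_threshold(img, block=3, C=2, maxval=255):
    """Threshold each pixel against its local neighbourhood mean minus C."""
    h, w = img.shape
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + block, x:x + block].mean()
            if img[y, x] > local_mean - C:
                out[y, x] = maxval
    return out

# Made-up image: dark left half, bright right half, one brighter spot in each
img = np.array([[10, 10, 10, 200, 200, 200],
                [10, 60, 10, 200, 250, 200]], dtype=np.uint8)
th = adaptive_mean_threshold(img)
print(th.tolist())
```

A single global threshold of 127 would lose the 60-valued spot entirely; the local rule keeps it because 60 stands out against its dark neighbourhood.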
Morphological Transformations. Morphological transformations are simple operations based on the image shape, normally performed on binary images. A kernel tells you how to change the value of any given pixel by combining it with different amounts of the neighbouring pixels.
import cv2 %matplotlib notebook %matplotlib inline from matplotlib import pyplot as plt img = cv2.imread("hsv_ball.jpg",cv2.IMREAD_GRAYSCALE) _,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV) titles = ['images',"mask"] images = [img,mask] for i in range(2): plt.subplot(1,2,i+1) plt.imshow(images[i],"gr...
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
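Erosion and dilation, used in the next cells, reduce to a min/max filter over the kernel neighbourhood: erosion takes the minimum (shrinking white regions), dilation the maximum (growing them). A naive loop-based 3x3 sketch with made-up helper names, no OpenCV:

```python
import numpy as np

def erode(mask, k=3):
    """Minimum over each k x k neighbourhood (zero-padded border)."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=0)
    out = np.zeros_like(mask)
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            out[y, x] = padded[y:y + k, x:x + k].min()
    return out

def dilate(mask, k=3):
    """Maximum over each k x k neighbourhood (zero-padded border)."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=0)
    out = np.zeros_like(mask)
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            out[y, x] = padded[y:y + k, x:x + k].max()
    return out

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 255                    # a 3x3 white square
print(int(erode(mask).sum() // 255))    # 1  (only the centre pixel survives)
print(int(dilate(mask).sum() // 255))   # 25 (the square grows to fill 5x5)
```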
Morphological Transformations using erosion
import cv2 import numpy as np %matplotlib notebook %matplotlib inline from matplotlib import pyplot as plt img = cv2.imread("hsv_ball.jpg",cv2.IMREAD_GRAYSCALE) _,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV) kernal = np.ones((2,2),np.uint8) dilation = cv2.dilate(mask,kernal,iterations = 3) erosion = cv2.ero...
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
Morphological Transformations using the opening operation (morphologyEx): erosion is applied first, then dilation, on the image.
import cv2 import numpy as np %matplotlib notebook %matplotlib inline from matplotlib import pyplot as plt img = cv2.imread("hsv_ball.jpg",cv2.IMREAD_GRAYSCALE) _,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV) kernal = np.ones((5,5),np.uint8) dilation = cv2.dilate(mask,kernal,iterations = 3) erosion = cv2.ero...
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
Morphological Transformations using the closing operation (morphologyEx): dilation is applied first, then erosion, on the image.
import cv2 import numpy as np %matplotlib notebook %matplotlib inline from matplotlib import pyplot as plt img = cv2.imread("hsv_ball.jpg",cv2.IMREAD_GRAYSCALE) _,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV) kernal = np.ones((5,5),np.uint8) dilation = cv2.dilate(mask,kernal,iterations = 3) erosion = cv2.ero...
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
Morphological Transformations other than opening and closing: MORPH_GRADIENT gives the difference between dilation and erosion; top-hat gives the difference between the input image and its opening.
import cv2 import numpy as np %matplotlib notebook %matplotlib inline from matplotlib import pyplot as plt img = cv2.imread("hsv_ball.jpg",cv2.IMREAD_GRAYSCALE) _,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV) kernal = np.ones((5,5),np.uint8) dilation = cv2.dilate(mask,kernal,iterations = 3) erosion = cv2.ero...
_____no_output_____
Apache-2.0
exercise_2.ipynb
deepak223098/Computer_Vision_Example
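The composite operations above are easiest to see on a 1-D binary signal: the gradient (dilation minus erosion) lights up region borders, while top-hat (signal minus its opening) keeps only small bright details that the opening removed. A self-contained sketch with made-up helper names:

```python
import numpy as np

def erode1d(sig, k=3):
    """Sliding minimum over a window of k samples (zero-padded border)."""
    pad = k // 2
    p = np.pad(sig, pad, constant_values=0)
    return np.array([p[i:i + k].min() for i in range(len(sig))], dtype=sig.dtype)

def dilate1d(sig, k=3):
    """Sliding maximum over a window of k samples (zero-padded border)."""
    pad = k // 2
    p = np.pad(sig, pad, constant_values=0)
    return np.array([p[i:i + k].max() for i in range(len(sig))], dtype=sig.dtype)

# A wide white region plus one isolated one-sample spike
sig = np.array([0, 0, 255, 255, 255, 0, 255, 0, 0], dtype=np.uint8)

gradient = dilate1d(sig) - erode1d(sig)   # nonzero around region borders
opening  = dilate1d(erode1d(sig))         # erosion then dilation: spike removed
top_hat  = sig - opening                  # isolates the lone spike

print(top_hat.tolist())  # [0, 0, 0, 0, 0, 0, 255, 0, 0]
```

The opening keeps the wide region but erases the spike, so the top-hat leaves exactly that spike behind.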
Create a list of valid Hindi literals
a = list(set(list("ऀँंःऄअआइईउऊऋऌऍऎएऐऑऒओऔकखगघङचछजझञटठडढणतथदधनऩपफबभमयरऱलळऴवशषसहऺऻ़ऽािीुूृॄॅॆेैॉॊोौ्ॎॏॐ॒॑॓॔ॕॖॗक़ख़ग़ज़ड़ढ़फ़य़ॠॡॢॣ।॥॰ॱॲॳॴॵॶॷॸॹॺॻॼॽॾॿ-"))) len(genderListCleared),len(set(genderListCleared)) genderListCleared = list(set(genderListCleared)) mCount = 0 fCount = 0 nCount = 0 for item in genderListCleared: if item[...
Training new model, loss:categorical_crossentropy, optimizer=sgd, lstm_len=128, dropoff=0.4 Train on 32318 samples, validate on 8080 samples Epoch 1/10 32318/32318 [==============================] - 30s 943us/step - loss: 1.0692 - acc: 0.4402 - val_loss: 1.0691 - val_acc: 0.4406 Epoch 2/10 32318/32318 [================...
Apache-2.0
Untitled1.ipynb
archit120/lingatagger
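The character list built above can serve as an alphabet for validating words. A minimal pure-Python sketch; the helper name is my own, and only a handful of Devanagari letters stand in for the full list:

```python
# A small subset of the Devanagari letters from the cell above
valid = set("अआइईकखगघनमरलवसह-")

def is_valid_word(word, alphabet=valid):
    """True if every character of the word is in the allowed alphabet."""
    return all(ch in alphabet for ch in word)

print(is_valid_word("कमल"))   # True  -- all three letters are in the subset
print(is_valid_word("कmल"))   # False -- contains a Latin letter
```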
Default server
default_split = split_params(default)[['model','metric','value','params_name','params_val']] models = default_split.model.unique().tolist() CollectiveMF_Item_set = default_split[default_split['model'] == models[0]] CollectiveMF_User_set = default_split[default_split['model'] == models[1]] CollectiveMF_No_set = default_...
_____no_output_____
MIT
parse_results_with_visualization/Hyper_params_visualization.ipynb
HenryNebula/Personalization_Final_Project
surprise_SVD
surprise_SVD_ndcg = surprise_SVD_set[(surprise_SVD_set['metric'] == 'ndcg@10')] surprise_SVD_ndcg = surprise_SVD_ndcg.pivot(index= 'value', columns='params_name', values='params_val').reset_index(inplace = False) surprise_SVD_ndcg...
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque. The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
MIT
parse_results_with_visualization/Hyper_params_visualization.ipynb
HenryNebula/Personalization_Final_Project
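The cell above reshapes a long `(value, params_name, params_val)` table into wide form with `DataFrame.pivot`, one column per hyperparameter. A toy sketch with made-up results; `pivot` requires each `(index, columns)` pair to be unique:

```python
import pandas as pd

# Long-format results: one row per (metric value, hyperparameter) pair
long = pd.DataFrame({
    "value":       [0.31, 0.31, 0.35, 0.35],
    "params_name": ["lr", "reg", "lr", "reg"],
    "params_val":  [0.01, 0.001, 0.05, 0.001],
})

# One row per metric value, one column per hyperparameter name
wide = long.pivot(index="value", columns="params_name",
                  values="params_val").reset_index(inplace=False)
print(wide)
```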
CollectiveMF_Both
reg_param = [0.0001, 0.001, 0.01] w_main = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0] k = [4.,8.,16.] CollectiveMF_Both_ndcg = CollectiveMF_Both_set[CollectiveMF_Both_set['metric'] == 'ndcg@10'] CollectiveMF_Both_ndcg = CollectiveMF_Both_ndcg.pivot(index= 'value', columns='pa...
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque. The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
MIT
parse_results_with_visualization/Hyper_params_visualization.ipynb
HenryNebula/Personalization_Final_Project
New server
new_split = split_params(new)[['model','metric','value','params_name','params_val']] Test_implicit_set = new_split[new_split['model'] == 'BPR'] FMItem_set = new_split[new_split['model'] == 'FMItem'] FMNone_set = new_split[new_split['model'] == 'FMNone']
_____no_output_____
MIT
parse_results_with_visualization/Hyper_params_visualization.ipynb
HenryNebula/Personalization_Final_Project
Test_implicit
Test_implicit_set_ndcg = Test_implicit_set[Test_implicit_set['metric'] == 'ndcg@10'] Test_implicit_set_ndcg = Test_implicit_set_ndcg.pivot(index="value", columns='params_name', values='params_val').reset_index(...
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque. The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
MIT
parse_results_with_visualization/Hyper_params_visualization.ipynb
HenryNebula/Personalization_Final_Project
FMItem
FMItem_set_ndcg = FMItem_set[FMItem_set['metric'] == 'ndcg@10'] FMItem_set_ndcg = FMItem_set_ndcg.pivot(index="value", columns='params_name', values='params_val').reset_index(inplace = False) FMItem_set_ndcg = FMItem_set_ndcg[(FMItem_set_...
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque. The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
MIT
parse_results_with_visualization/Hyper_params_visualization.ipynb
HenryNebula/Personalization_Final_Project
Feature Engineering for XGBoost
important_values = values\ .merge(labels, on="building_id") important_values.drop(columns=["building_id"], inplace = True) important_values["geo_level_1_id"] = important_values["geo_level_1_id"].astype("category") important_values X_train, X_test, y_train, y_test = train_test_split(important_values.dro...
_____no_output_____
MIT
src/VotingClassifier/.ipynb_checkpoints/knn-checkpoint.ipynb
joaquinfontela/Machine-Learning
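The cell above casts `geo_level_1_id` to a categorical dtype before training. A categorical column stores each distinct value once plus small integer codes, which is what tree libraries ultimately consume. A minimal pandas sketch with made-up ids:

```python
import pandas as pd

df = pd.DataFrame({"geo_level_1_id": [3, 7, 3, 12, 7]})
df["geo_level_1_id"] = df["geo_level_1_id"].astype("category")

print(df["geo_level_1_id"].dtype)            # category
print(list(df["geo_level_1_id"].cat.codes))  # [0, 1, 0, 2, 1]
```

The codes follow the sorted order of the distinct values (3, 7, 12 here).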
I train three of the best models with Voting.
xgb_model_1 = XGBClassifier(n_estimators = 350, subsample = 0.885, booster = 'gbtree', gamma = 1, learning_rate = 0.45, label_encoder = False, verbosity = 2) xgb_m...
building_id,damage_grade 300051,3 99355,2 890251,2 745817,1 421793,3 871976,2 691228,1 896100,3 343471,2
MIT
src/VotingClassifier/.ipynb_checkpoints/knn-checkpoint.ipynb
joaquinfontela/Machine-Learning
Stock Forecasting using Prophet (Uncertainty in the trend) https://facebook.github.io/prophet/
# Libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from prophet import Prophet import warnings warnings.filterwarnings("ignore") import yfinance as yf yf.pdr_override() stock = 'AMD' # input start = '2017-01-01' # input end = '2021-11-08' # input df = yf.downlo...
_____no_output_____
MIT
Python_Stock/Time_Series_Forecasting/Stock_Forecasting_Prophet_Uncertainty_Trend.ipynb
LastAncientOne/Stock_Analysis_For_Quant
Delfin Installation. Run the following cell to install osiris-sdk.
!pip install osiris-sdk --upgrade
_____no_output_____
MIT
delfin/Example - Delfin.ipynb
Open-Dataplatform/examples
Access to dataset. There are two ways to get access to a dataset: 1. Service Principal 2. Access Token. Config file with Service Principal: if done with a **Service Principal**, it is advised to add the following file with **tenant_id**, **client_id**, and **client_secret**. The structure of **conf.ini**: ```[Authorization]tenant_...
from osiris.apis.egress import Egress from osiris.core.azure_client_authorization import ClientAuthorization from osiris.core.enums import Horizon from configparser import ConfigParser
_____no_output_____
MIT
delfin/Example - Delfin.ipynb
Open-Dataplatform/examples
Initialize the Egress class with a Service Principal
config = ConfigParser() config.read('conf.ini') client_auth = ClientAuthorization(tenant_id=config['Authorization']['tenant_id'], client_id=config['Authorization']['client_id'], client_secret=config['Authorization']['client_secret']) egress = Egress(...
_____no_output_____
MIT
delfin/Example - Delfin.ipynb
Open-Dataplatform/examples
Initialize the Egress class with an Access Token
config = ConfigParser() config.read('conf.ini') access_token = 'REPLACE WITH ACCESS TOKEN HERE' client_auth = ClientAuthorization(access_token=access_token) egress = Egress(client_auth=client_auth, egress_url=config['Egress']['url'])
_____no_output_____
MIT
delfin/Example - Delfin.ipynb
Open-Dataplatform/examples
Delfin Daily. The data retrieved will be **from_date <= data < to_date**. The **from_date** and **to_date** syntax is [described here](https://github.com/Open-Dataplatform/examples/blob/main/README.md).
json_content = egress.download_delfin_file(horizon=Horizon.MINUTELY, from_date="2021-07-15T20:00", to_date="2021-07-16T00:00") json_content = egress.download_delfin_file(horizon=Horizon.DAILY, ...
_____no_output_____
MIT
delfin/Example - Delfin.ipynb
Open-Dataplatform/examples
Delfin Hourly. The **from_date** and **to_date** syntax is [described here](https://github.com/Open-Dataplatform/examples/blob/main/README.md).
json_content = egress.download_delfin_file(horizon=Horizon.HOURLY, from_date="2020-01-01T00", to_date="2020-01-01T06") # We only show the first entry here json_content[0]
_____no_output_____
MIT
delfin/Example - Delfin.ipynb
Open-Dataplatform/examples
Delfin Minutely. The **from_date** and **to_date** syntax is [described here](https://github.com/Open-Dataplatform/examples/blob/main/README.md).
json_content = egress.download_delfin_file(horizon=Horizon.MINUTELY, from_date="2021-07-15T00:00", to_date="2021-07-15T00:59") # We only show the first entry here json_content[0]
_____no_output_____
MIT
delfin/Example - Delfin.ipynb
Open-Dataplatform/examples
Delfin Daily with Indices. The **from_date** and **to_date** syntax is [described here](https://github.com/Open-Dataplatform/examples/blob/main/README.md).
json_content = egress.download_delfin_file(horizon=Horizon.DAILY, from_date="2020-01-15T03:00", to_date="2020-01-16T03:01", table_indices=[1, 2]) # We only show the first entry here json_c...
_____no_output_____
MIT
delfin/Example - Delfin.ipynb
Open-Dataplatform/examples
Apple Stock. Introduction: We are going to use Apple's stock price. Step 1: Import the necessary libraries.
import pandas as pd import numpy as np # visualization import matplotlib.pyplot as plt %matplotlib inline
_____no_output_____
BSD-3-Clause
09_Time_Series/Apple_Stock/Exercises-with-solutions-code.ipynb
nat-bautista/tts-pandas-exercise