| text (string, lengths 2.5k–6.39M) | kind (string, 3 classes) |
|---|---|
```
# default_exp l2data
```
# L2 Data Interface
> Helpers to retrieve and process Diviner PDS L2 data.
```
# export
import warnings
from pathlib import Path
import numpy as np
import pvl
from yarl import URL
from planetarypy import geotools as gt
from planetarypy.utils import url_retrieve
DIVINER_URL = URL(
... | github_jupyter |
**Unsupervised learning**
___
- Unsupervised learning finds patterns in data
- Dimension = number of features
- k-means clustering
- finds clusters of samples
- number of clusters must be specified
- implemented in sklearn ("scikit-learn")
- new samples can be assigned to existing clusters
- k-m... | github_jupyter |
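The k-means workflow in the notes above, as a minimal runnable sketch (the toy 2-D data is invented for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated groups of 2-D samples (dimension = number of features = 2)
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])

# The number of clusters must be specified up front
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# New samples can be assigned to the existing clusters
new_labels = model.predict(np.array([[0.1, 0.1], [5.1, 5.0]]))
```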
# Norwegian Bank Account Numbers
## Introduction
The function `clean_no_kontonr()` cleans a column containing Norwegian bank account number (kontonr) strings and standardizes them in a given format. The function `validate_no_kontonr()` validates either a single kontonr string, a column of kontonr strings, or a DataF... | github_jupyter |
# Tables introduction
The astropy [Table](http://docs.astropy.org/en/stable/table/index.html) class provides an extension of NumPy structured arrays for storing and manipulating heterogeneous tables of data. A few notable features of this package are:
- Initialize a table from a wide variety of input data structures ... | github_jupyter |
## Data Analysis
```
import pandas as pd
crime = pd.read_csv('data/crimeandweather.csv')
crime['OCCURRED_ON_DATE'] = pd.to_datetime(crime['OCCURRED_ON_DATE'])
crime['DATE'] = pd.to_datetime(crime['DATE'])
crime['Lat'] = pd.to_numeric(crime['Lat'])
crime['Long'] = pd.to_numeric(crime['Long'])
print("start date:", crim... | github_jupyter |
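The dtype conversions above, sketched on a tiny invented frame (the column names here are hypothetical, not the crime dataset's):

```python
import pandas as pd

df = pd.DataFrame({
    "when": ["2019-01-01 10:00:00", "2019-06-15 22:30:00"],
    "lat": ["42.35", "oops"],
})
df["when"] = pd.to_datetime(df["when"])
# errors="coerce" turns unparseable values into NaN instead of raising
df["lat"] = pd.to_numeric(df["lat"], errors="coerce")
```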
# Mining Function Specifications
When testing a program, one not only needs to cover its several behaviors; one also needs to _check_ whether the result is as expected. In this chapter, we introduce a technique that allows us to _mine_ function specifications from a set of given executions, resulting in abstract and ... | github_jupyter |
# Training Deep Neural Networks on a GPU with PyTorch
### Part 4 of "PyTorch: Zero to GANs"
*This notebook is the fourth in a series of tutorials on building deep learning models with PyTorch, an open source neural networks library. Check out the full series:*
1. [PyTorch Basics: Tensors & Gradients](https://jovian.... | github_jupyter |
**[This article comes from a superb review that [Cristián Maureira-Fredes](https://maureira.xyz/) made of the previous chapters. Cristián works as a software engineer on the [Qt for Python](https://wiki.qt.io/Qt_for_Python) project at [The Qt Company](https://qt.io/). This article is written by me ... | github_jupyter |
```
from google.colab import drive, files as g_files
drive.mount('/proj')
```
We will train a Darknet model to detect dolphin fins using YOLOv4
```
# Imports
import os
import random
import shutil
from glob import glob
# Constants
SEED = 100
TEST_PROP = 0.15
# %cd /proj/MyDrive/158780_Project_Dolphin_Computer_Vi... | github_jupyter |
## Introduction
This example shows how to train a [Soft Actor Critic](https://arxiv.org/abs/1801.01290) agent on the [Minitaur](https://github.com/bulletphysics/bullet3/blob/master/examples/pybullet/gym/pybullet_envs/bullet/minitaur.py) environment using the TF-Agents library.
If you've worked through the [DQN Colab... | github_jupyter |
# Overview of State Task Networks (not finished)
## References
Floudas, C. A., & Lin, X. (2005). Mixed integer linear programming in process scheduling: Modeling, algorithms, and applications. Annals of Operations Research, 139(1), 131-162.
Harjunkoski, I., Maravelias, C. T., Bongers, P., Castro, P. M., Engell, S., ... | github_jupyter |
# Faces recognition using ICA and SVMs
The dataset used in this example is a preprocessed excerpt of the
"Labeled Faces in the Wild", aka LFW_:
http://vis-www.cs.umass.edu/lfw/lfw-funneled.tgz (233MB)
LFW: http://vis-www.cs.umass.edu/lfw/
```
%matplotlib inline
from time import time
import logging
import matp... | github_jupyter |
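As a hypothetical sketch of the ICA step, scikit-learn's `FastICA` can project face vectors onto independent components (random data stands in for the LFW faces; the shapes are illustrative only):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(0)
X = rng.rand(50, 100)           # 50 toy "face" vectors of 100 pixels each

ica = FastICA(n_components=10, random_state=0, max_iter=1000)
X_ica = ica.fit_transform(X)    # each face becomes 10 component weights
```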
# Amazon Augmented AI (Amazon A2I) integration with Amazon Translate [Example]
## Introduction
Amazon Translate is constantly learning and evolving to provide the “perfect” output. In domain sensitive applications such as legal, medical, construction, engineering, etc., customers can always improve the translation qu... | github_jupyter |
# Unet with Deep watershed transform(DWT) [Infer]
[[Train notebook]](https://www.kaggle.com/ebinan92/unet-with-deep-watershed-transform-dwt-train)
The inference pipeline is almost the same as [Awsaf's notebook](https://www.kaggle.com/awsaf49/pytorch-sartorius-unet-strikes-back-infer), except that the watershed algorithm is added.
### im... | github_jupyter |
```
import ROOT
import ostap.fixes.fixes
from ostap.core.core import cpp, Ostap
from ostap.core.core import pwd, cwd, ROOTCWD
from ostap.core.core import rootID, funcID, funID, fID, histoID, hID, dsID
from ostap.core.core import VE
from ostap.histos.histos import h1_axis, h2_axes, h3_axes
from ostap.histos.graphs impor... | github_jupyter |
```
# Import the libraries
import os
import sys
import math
import copy
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import AdaBoostRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from s... | github_jupyter |
# COVID-19 Comparative Analysis
> A Comparison of COVID-19 with SARS, MERS, Ebola and H1N1
- author: Devakumar kp
- comments: true
- permalink: /comparitive-analysis/
- toc: true
- image: images/copied_from_nb/covid-compare-1-1.png
These visualizations were made by [Devakumar kp](https://twitter.com/imdevskp), from [t... | github_jupyter |
## Recursion formulae for $\mathbb{j}_v$
In this notebook we validate our recursion formulae for the integral $\mathbb{j}$.
```
%matplotlib inline
%run notebook_setup.py
import numpy as np
from scipy.integrate import quad
import matplotlib.pyplot as plt
from mpmath import ellipf, ellipe
from tqdm.notebook import tqdm... | github_jupyter |
# Explaining Regression Models
Most of the techniques used to explain classification models apply to regression as well. We will look at how to use the SHAP library to interpret regression models.
We will interpret the XGBoost model for the Boston housing dataset:
```
import pandas as pd
import numpy as np
import m... | github_jupyter |
# FLEX
This notebook plots the time series of the surface forcing and simulated sea surface temperature and mixed layer depth in the [FLEX](https://gotm.net/cases/flex/) test case.
```
import sys
import numpy as np
import string
import matplotlib.pyplot as plt
# add the path of gotmtool
sys.path.append("../gotmtool")... | github_jupyter |
# TME 4: First Filters
> Instructions: submit the file TME4_Sujet.ipynb on the Moodle site of the course https://moodle-sciences.upmc.fr/moodle-2019/course/view.php?id=4248. If you are working in pairs, rename it TME4_nom1_nom2.ipynb.
```
# Load the required modules and data.
from PIL import Image
impo... | github_jupyter |
# Data and Models
In the subsequent lessons, we will continue to learn deep learning. But we've ignored a fundamental concept about data and modeling: quality and quantity.
<div align="left">
<a href="https://github.com/madewithml/basics/blob/master/notebooks/10_Data_and_Models/10_TF_Data_and_Models.ipynb" role="butto... | github_jupyter |
Intro To Python
=====
In this notebook, we will explore basic Python:
- data types, including dictionaries
- functions
- loops
Please note that we are using Python 3.
(__NOT__ Python 2! Python 2 has some different functions and syntax)
```
# Let's make sure we are using Python 3
import sys
print(sys.version[0]... | github_jupyter |
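The listed topics (data types, dictionaries, functions, loops) condensed into one small Python 3 sketch:

```python
# A dictionary maps keys to values
ages = {"alice": 30, "bob": 25}

# A function that uses f-strings (Python 3 only)
def describe(name):
    return f"{name} is {ages[name]}"

# A loop over the dictionary's keys, in sorted order
lines = [describe(name) for name in sorted(ages)]
```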
```
#convert
```
# babilim.model.layers.convolution
> Convolution for 1d and 2d.
```
#export
from typing import Optional, Any, Tuple
from babilim.core.annotations import RunOnlyOnce
from babilim.core.module_native import ModuleNative
from babilim.model.layers.activation import Activation
#export
class Conv1D(ModuleN... | github_jupyter |
> # MaaS Sim tutorial
> ## External functionalities
-----
Example of simulations with various functionalities included
```
%load_ext autoreload
%autoreload 2
import os, sys # add MaaSSim to path (not needed if MaaSSim is already in path)
module_path = os.path.abspath(os.path.join('../..'))
if module_path not in ... | github_jupyter |
```
import json
import random
import warnings
import spotipy
import spotipy.util as util
import requests
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from bs4 import BeautifulSoup
```
# get user's top read songs
```
username = 'virginiakm1988'
scope = 'user-top-read'#'... | github_jupyter |
# ResnetTrick_s192bs32_e200
> Runs at size 192, batch size 32, 200 epochs.
# setup and imports
```
# pip install git+https://github.com/ayasyrev/model_constructor
# pip install git+https://github.com/kornia/kornia
from kornia.contrib import MaxBlurPool2d
from fastai.basic_train import *
from fastai.vision import *
from fastai.sc... | github_jupyter |
# Classify structured data using Keras Preprocessing Layers
## Learning Objectives
* Load a CSV file using [Pandas](https://pandas.pydata.org/).
* Build an input pipeline to batch and shuffle the rows using [tf.data](https://www.tensorflow.org/guide/datasets).
* Map from columns in the CSV to features used to train ... | github_jupyter |
# HEX algorithm **Kopuru Vespa Velutina Competition**
**XGBoost model**
Purpose: Predict the number of Nests in each of Biscay's 112 municipalities for the year 2020.
Output: *(WaspBusters_20210609_batch_XGBy_48019prodigal.csv)*
@authors:
* mario.bejar@student.ie.edu
* pedro.geirinhas@student.ie.edu
* a.berrizbeiti... | github_jupyter |
```
%config IPCompleter.greedy=True
import warnings
warnings.filterwarnings('ignore')
import sklearn
from sklearn.datasets import fetch_20newsgroups
from sklearn.preprocessing import OneHotEncoder
print('sklearn:', sklearn.__version__)
dataset = fetch_20newsgroups(remove=('headers', 'footers', 'quotes'))
X = datase... | github_jupyter |
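As an aside on the imported `OneHotEncoder`, this is how it behaves on a toy categorical column (data invented for illustration):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

colors = np.array([["red"], ["green"], ["red"]])
enc = OneHotEncoder()                         # categories inferred and sorted
onehot = enc.fit_transform(colors).toarray()  # sparse by default, hence toarray()
```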
```
# -*- coding: utf-8 -*-
"""
Converting by EVC.
Details: https://pdfs.semanticscholar.org/cbfe/71798ded05fb8bf8674580aabf534c4dbb8bc.pdf
"""
from __future__ import division, print_function
import os
from shutil imp... | github_jupyter |
# Ipyvuetify
* QuantStack/SocGen (Olivier Borderier) project
* Made by Mario Buikhuizen
* Wraps Vuetify
* Vue based
* Material Design
* Rich set of composable widgets following Material Design spec.
```
import ipyvuetify as v
import ipywidgets as widgets
from threading import Timer
lorum_ipsum = 'Lorem i... | github_jupyter |
# Performance Analysis of Awkward-array vs Numba optimized Awkward-array
## Content:
- [Awkward package performance on large arrays](#Awkward-package-performance-on-large-arrays)
- [Profiling of Awkward package](#Profilling-of-Awkward-package)
- [Using %%timeit](#Awkward-Array-Using-%%timeit)
... | github_jupyter |
```
import re
import numpy as np
import pandas as pd
from pprint import pprint
# Gensim
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
# spacy for lemmatization
import spacy
# Plotting tools
!pip install pyLDAvis
import pyLDAvis
impo... | github_jupyter |
# _*Portfolio Diversification*_
## Introduction
In asset management, there are broadly two approaches: active and passive investment management. Within passive investment management, there are index-tracking funds and there are approaches based on portfolio diversification, which aim at representing a portfolio wit... | github_jupyter |
https://colab.research.google.com/drive/1ENG9UZjOFAB6KDp78oGMfUcFd_o6bHaJ

```
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.recurrent import SimpleRNN
from keras.layers.embeddings import Embedding
from keras.layers import LSTM
import numpy as np
from keras.utils import to_categorical
from keras.layers import RepeatVector
input_data = np.array([[1,2],[3,4]])
output_data = np.array([[3,4],[5,6]])
from keras.models import Model
from keras.layers import Input
from keras.layers import LSTM
from numpy import array
# define model
inputs1 = Input(shape=(2,1))
lstm1 = LSTM(1, activation = 'tanh', return_sequences=False,recurrent_initializer='Zeros',recurrent_activation='sigmoid')(inputs1)
#repvec = RepeatVector(2) (lstm1)
out= Dense(2, activation='linear')(lstm1)
#repvec = RepeatVector(2) (state_h)
#model = Model(inputs=inputs1, outputs=[lstm1, state_h, state_c])
model = Model(inputs=inputs1, outputs=out)
model.summary()
model.compile(optimizer='adam',loss='mean_squared_error')
model.fit(input_data.reshape(2,2,1), output_data,epochs=1000)
print(model.predict(input_data[0].reshape(1,2,1)))
input_t0 = 1
cell_state0 = 0
forget0 = input_t0*model.get_weights()[0][0][1] + model.get_weights()[2][1]
forget1 = 1/(1+np.exp(-(forget0)))
cell_state1 = forget1 * cell_state0
input_t0_1 = input_t0*model.get_weights()[0][0][0] + model.get_weights()[2][0]
input_t0_2 = 1/(1+np.exp(-(input_t0_1)))
input_t0_cell1 = input_t0*model.get_weights()[0][0][2] + model.get_weights()[2][2]
input_t0_cell2 = np.tanh(input_t0_cell1)
input_t0_cell3 = input_t0_cell2*input_t0_2
input_t0_cell4 = input_t0_cell3 + cell_state1
output_t0_1 = input_t0*model.get_weights()[0][0][3] + model.get_weights()[2][3]
output_t0_2 = 1/(1+np.exp(-output_t0_1))
hidden_layer_1 = np.tanh(input_t0_cell4)*output_t0_2
input_t1 = 2
cell_state1 = input_t0_cell4
forget21 = hidden_layer_1*model.get_weights()[1][0][1] + model.get_weights()[2][1] + input_t1*model.get_weights()[0][0][1]
forget_22 = 1/(1+np.exp(-(forget21)))
cell_state2 = cell_state1 * forget_22
input_t1_1 = input_t1*model.get_weights()[0][0][0] + model.get_weights()[2][0] + hidden_layer_1*model.get_weights()[1][0][0]
input_t1_2 = 1/(1+np.exp(-(input_t1_1)))
input_t1_cell1 = input_t1*model.get_weights()[0][0][2] + model.get_weights()[2][2]+ hidden_layer_1*model.get_weights()[1][0][2]
input_t1_cell2 = np.tanh(input_t1_cell1)
input_t1_cell3 = input_t1_cell2*input_t1_2
input_t1_cell4 = input_t1_cell3 + cell_state2
output_t1_1 = input_t1*model.get_weights()[0][0][3] + model.get_weights()[2][3]+ hidden_layer_1*model.get_weights()[1][0][3]
output_t1_2 = 1/(1+np.exp(-output_t1_1))
hidden_layer_2 = np.tanh(input_t1_cell4)*output_t1_2
final_output = hidden_layer_2 * model.get_weights()[3][0] + model.get_weights()[4]
final_output


from keras.models import Model
from keras.layers import Input
from keras.layers import LSTM
from numpy import array
# define model
inputs1 = Input(shape=(2,1))
lstm1 = LSTM(1, activation = 'tanh', return_sequences=True,recurrent_initializer='Zeros',recurrent_activation='sigmoid')(inputs1)
#repvec = RepeatVector(2) (lstm1)
out= Dense(1, activation='linear')(lstm1)
#repvec = RepeatVector(2) (state_h)
#model = Model(inputs=inputs1, outputs=[lstm1, state_h, state_c])
model = Model(inputs=inputs1, outputs=out)
model.summary()
model.compile(optimizer='adam',loss='mean_squared_error')
model.fit(input_data.reshape(2,2,1), output_data.reshape(2,2,1),epochs=1000)
print(model.predict(input_data[0].reshape(1,2,1)))
input_t0 = 1
cell_state0 = 0
forget0 = input_t0*model.get_weights()[0][0][1] + model.get_weights()[2][1]
forget1 = 1/(1+np.exp(-(forget0)))
cell_state1 = forget1 * cell_state0
input_t0_1 = input_t0*model.get_weights()[0][0][0] + model.get_weights()[2][0]
input_t0_2 = 1/(1+np.exp(-(input_t0_1)))
input_t0_cell1 = input_t0*model.get_weights()[0][0][2] + model.get_weights()[2][2]
input_t0_cell2 = np.tanh(input_t0_cell1)
input_t0_cell3 = input_t0_cell2*input_t0_2
input_t0_cell4 = input_t0_cell3 + cell_state1
output_t0_1 = input_t0*model.get_weights()[0][0][3] + model.get_weights()[2][3]
output_t0_2 = 1/(1+np.exp(-output_t0_1))
hidden_layer_1 = np.tanh(input_t0_cell4)*output_t0_2
final_output_1 = hidden_layer_1 * model.get_weights()[3][0] + model.get_weights()[4]
final_output_1
input_t1 = 2
cell_state1 = input_t0_cell4
forget21 = hidden_layer_1*model.get_weights()[1][0][1] + model.get_weights()[2][1] + input_t1*model.get_weights()[0][0][1]
forget_22 = 1/(1+np.exp(-(forget21)))
cell_state2 = cell_state1 * forget_22
input_t1_1 = input_t1*model.get_weights()[0][0][0] + model.get_weights()[2][0] + hidden_layer_1*model.get_weights()[1][0][0]
input_t1_2 = 1/(1+np.exp(-(input_t1_1)))
input_t1_cell1 = input_t1*model.get_weights()[0][0][2] + model.get_weights()[2][2]+ hidden_layer_1*model.get_weights()[1][0][2]
input_t1_cell2 = np.tanh(input_t1_cell1)
input_t1_cell3 = input_t1_cell2*input_t1_2
input_t1_cell4 = input_t1_cell3 + cell_state2
output_t1_1 = input_t1*model.get_weights()[0][0][3] + model.get_weights()[2][3]+ hidden_layer_1*model.get_weights()[1][0][3]
output_t1_2 = 1/(1+np.exp(-output_t1_1))
hidden_layer_2 = np.tanh(input_t1_cell4)*output_t1_2
final_output_2 = hidden_layer_2 * model.get_weights()[3][0] + model.get_weights()[4]
final_output_2


from keras.models import Model
from keras.layers import Input
from keras.layers import LSTM
from numpy import array
# define model
inputs1 = Input(shape=(2,1))
lstm1,state_h,state_c = LSTM(1, activation = 'tanh', return_sequences=True, return_state = True, recurrent_initializer='Zeros',recurrent_activation='sigmoid')(inputs1)
#repvec = RepeatVector(2) (lstm1)
#out= Dense(1, activation='linear')(lstm1)
#repvec = RepeatVector(2) (state_h)
model = Model(inputs=inputs1, outputs=[lstm1, state_h, state_c])
#model = Model(inputs=inputs1, outputs=out)
model.summary()
model.compile(optimizer='adam',loss='mean_squared_error')
#model.fit(input_data.reshape(2,2,1), output_data.reshape(2,2,1),epochs=1)
print(model.predict(input_data[0].reshape(1,2,1)))
input_t0 = 1
cell_state0 = 0
forget0 = input_t0*model.get_weights()[0][0][1] + model.get_weights()[2][1]
forget1 = 1/(1+np.exp(-(forget0)))
cell_state1 = forget1 * cell_state0
input_t0_1 = input_t0*model.get_weights()[0][0][0] + model.get_weights()[2][0]
input_t0_2 = 1/(1+np.exp(-(input_t0_1)))
input_t0_cell1 = input_t0*model.get_weights()[0][0][2] + model.get_weights()[2][2]
input_t0_cell2 = np.tanh(input_t0_cell1)
input_t0_cell3 = input_t0_cell2*input_t0_2
input_t0_cell4 = input_t0_cell3 + cell_state1
output_t0_1 = input_t0*model.get_weights()[0][0][3] + model.get_weights()[2][3]
output_t0_2 = 1/(1+np.exp(-output_t0_1))
hidden_layer_1 = np.tanh(input_t0_cell4)*output_t0_2
print(hidden_layer_1)
input_t1 = 2
cell_state1 = input_t0_cell4
forget21 = hidden_layer_1*model.get_weights()[1][0][1] + model.get_weights()[2][1] + input_t1*model.get_weights()[0][0][1]
forget_22 = 1/(1+np.exp(-(forget21)))
cell_state2 = cell_state1 * forget_22
input_t1_1 = input_t1*model.get_weights()[0][0][0] + model.get_weights()[2][0] + hidden_layer_1*model.get_weights()[1][0][0]
input_t1_2 = 1/(1+np.exp(-(input_t1_1)))
input_t1_cell1 = input_t1*model.get_weights()[0][0][2] + model.get_weights()[2][2]+ hidden_layer_1*model.get_weights()[1][0][2]
input_t1_cell2 = np.tanh(input_t1_cell1)
input_t1_cell3 = input_t1_cell2*input_t1_2
input_t1_cell4 = input_t1_cell3 + cell_state2
output_t1_1 = input_t1*model.get_weights()[0][0][3] + model.get_weights()[2][3]+ hidden_layer_1*model.get_weights()[1][0][3]
output_t1_2 = 1/(1+np.exp(-output_t1_1))
hidden_layer_2 = np.tanh(input_t1_cell4)*output_t1_2
print(hidden_layer_2, input_t1_cell4)


from keras.models import Model
from keras.layers import Input
from keras.layers import LSTM,Bidirectional
from numpy import array
# define model
inputs1 = Input(shape=(2,1))
lstm1,state_fh,state_fc,state_bh,state_bc = Bidirectional(LSTM(1, activation = 'tanh', return_sequences=True, return_state = True, recurrent_initializer='Zeros',recurrent_activation='sigmoid'))(inputs1)
#repvec = RepeatVector(2) (lstm1)
#out= Dense(1, activation='linear')(lstm1)
#repvec = RepeatVector(2) (state_h)
model = Model(inputs=inputs1, outputs=[lstm1, state_fh,state_fc,state_bh,state_bc])
#model = Model(inputs=inputs1, outputs=out)
model.summary()
model.weights
print(model.predict(input_data[0].reshape(1,2,1)))
input_t0 = 1
cell_state0 = 0
forget0 = input_t0*model.get_weights()[0][0][1] + model.get_weights()[2][1]
forget1 = 1/(1+np.exp(-(forget0)))
cell_state1 = forget1 * cell_state0
input_t0_1 = input_t0*model.get_weights()[0][0][0] + model.get_weights()[2][0]
input_t0_2 = 1/(1+np.exp(-(input_t0_1)))
input_t0_cell1 = input_t0*model.get_weights()[0][0][2] + model.get_weights()[2][2]
input_t0_cell2 = np.tanh(input_t0_cell1)
input_t0_cell3 = input_t0_cell2*input_t0_2
input_t0_cell4 = input_t0_cell3 + cell_state1
output_t0_1 = input_t0*model.get_weights()[0][0][3] + model.get_weights()[2][3]
output_t0_2 = 1/(1+np.exp(-output_t0_1))
hidden_layer_1 = np.tanh(input_t0_cell4)*output_t0_2
print(hidden_layer_1)
input_t1 = 2
cell_state1 = input_t0_cell4
forget21 = hidden_layer_1*model.get_weights()[1][0][1] + model.get_weights()[2][1] + input_t1*model.get_weights()[0][0][1]
forget_22 = 1/(1+np.exp(-(forget21)))
cell_state2 = cell_state1 * forget_22
input_t1_1 = input_t1*model.get_weights()[0][0][0] + model.get_weights()[2][0] + hidden_layer_1*model.get_weights()[1][0][0]
input_t1_2 = 1/(1+np.exp(-(input_t1_1)))
input_t1_cell1 = input_t1*model.get_weights()[0][0][2] + model.get_weights()[2][2]+ hidden_layer_1*model.get_weights()[1][0][2]
input_t1_cell2 = np.tanh(input_t1_cell1)
input_t1_cell3 = input_t1_cell2*input_t1_2
input_t1_cell4 = input_t1_cell3 + cell_state2
output_t1_1 = input_t1*model.get_weights()[0][0][3] + model.get_weights()[2][3]+ hidden_layer_1*model.get_weights()[1][0][3]
output_t1_2 = 1/(1+np.exp(-output_t1_1))
hidden_layer_2 = np.tanh(input_t1_cell4)*output_t1_2
print(hidden_layer_2, input_t1_cell4)


input_t0 = 2
cell_state0 = 0
forget0 = input_t0*model.get_weights()[3][0][1] + model.get_weights()[5][1]
forget1 = 1/(1+np.exp(-(forget0)))
cell_state1 = forget1 * cell_state0
input_t0_1 = input_t0*model.get_weights()[3][0][0] + model.get_weights()[5][0]
input_t0_2 = 1/(1+np.exp(-(input_t0_1)))
input_t0_cell1 = input_t0*model.get_weights()[3][0][2] + model.get_weights()[5][2]
input_t0_cell2 = np.tanh(input_t0_cell1)
input_t0_cell3 = input_t0_cell2*input_t0_2
input_t0_cell4 = input_t0_cell3 + cell_state1
output_t0_1 = input_t0*model.get_weights()[3][0][3] + model.get_weights()[5][3]
output_t0_2 = 1/(1+np.exp(-output_t0_1))
hidden_layer_1 = np.tanh(input_t0_cell4)*output_t0_2
print(hidden_layer_1)
input_t1 = 1
cell_state1 = input_t0_cell4
forget21 = hidden_layer_1*model.get_weights()[4][0][1] + model.get_weights()[5][1] + input_t1*model.get_weights()[3][0][1]
forget_22 = 1/(1+np.exp(-(forget21)))
cell_state2 = cell_state1 * forget_22
input_t1_1 = input_t1*model.get_weights()[3][0][0] + model.get_weights()[5][0] + hidden_layer_1*model.get_weights()[4][0][0]
input_t1_2 = 1/(1+np.exp(-(input_t1_1)))
input_t1_cell1 = input_t1*model.get_weights()[3][0][2] + model.get_weights()[5][2]+ hidden_layer_1*model.get_weights()[4][0][2]
input_t1_cell2 = np.tanh(input_t1_cell1)
input_t1_cell3 = input_t1_cell2*input_t1_2
input_t1_cell4 = input_t1_cell3 + cell_state2
output_t1_1 = input_t1*model.get_weights()[3][0][3] + model.get_weights()[5][3]+ hidden_layer_1*model.get_weights()[4][0][3]
output_t1_2 = 1/(1+np.exp(-output_t1_1))
hidden_layer_2 = np.tanh(input_t1_cell4)*output_t1_2
print(hidden_layer_2, input_t1_cell4)


from numpy import array
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import TimeDistributed
from keras.layers import LSTM
# prepare sequence
length = 2
seq = array([(i+1)/float(length) for i in range(length)])
X = seq.reshape(1, length, 1)
y = seq.reshape(1, length, 1)
# define LSTM configuration
n_neurons = length
n_batch = 1
n_epoch = 1000
# create LSTM
model = Sequential()
model.add(LSTM(5, activation = 'tanh',input_shape=(length, 1), return_sequences=True,recurrent_initializer='Zeros',recurrent_activation='sigmoid'))
model.add((Dense(1, activation='linear')))
model.compile(loss='mean_squared_error', optimizer='adam')
print(model.summary())
# train LSTM
model.fit(X, y, epochs=10, batch_size=n_batch, verbose=2)
model.weights
X[0]
input_t0 = 0.5
cell_state0 = 0
forget0 = input_t0*model.get_weights()[0][0][1] + model.get_weights()[2][1]
forget1 = 1/(1+np.exp(-(forget0)))
cell_state1 = forget1 * cell_state0
input_t0_1 = input_t0*model.get_weights()[0][0][0] + model.get_weights()[2][0]
input_t0_2 = 1/(1+np.exp(-(input_t0_1)))
input_t0_cell1 = input_t0*model.get_weights()[0][0][2] + model.get_weights()[2][2]
input_t0_cell2 = np.tanh(input_t0_cell1)
input_t0_cell3 = input_t0_cell2*input_t0_2
input_t0_cell4 = input_t0_cell3 + cell_state1
output_t0_1 = input_t0*model.get_weights()[0][0][3] + model.get_weights()[2][3]
output_t0_2 = 1/(1+np.exp(-output_t0_1))
hidden_layer_1 = np.tanh(input_t0_cell4)*output_t0_2
input_t1 = 1
cell_state1 = input_t0_cell4
forget21 = hidden_layer_1*model.get_weights()[1][0][1] + model.get_weights()[2][1] + input_t1*model.get_weights()[0][0][1]
forget_22 = 1/(1+np.exp(-(forget21)))
cell_state2 = cell_state1 * forget_22
input_t1_1 = input_t1*model.get_weights()[0][0][0] + model.get_weights()[2][0] + hidden_layer_1*model.get_weights()[1][0][0]
input_t1_2 = 1/(1+np.exp(-(input_t1_1)))
input_t1_cell1 = input_t1*model.get_weights()[0][0][2] + model.get_weights()[2][2]+ hidden_layer_1*model.get_weights()[1][0][2]
input_t1_cell2 = np.tanh(input_t1_cell1)
input_t1_cell3 = input_t1_cell2*input_t1_2
input_t1_cell4 = input_t1_cell3 + cell_state2
output_t1_1 = input_t1*model.get_weights()[0][0][3] + model.get_weights()[2][3]+ hidden_layer_1*model.get_weights()[1][0][3]
output_t1_2 = 1/(1+np.exp(-output_t1_1))
hidden_layer_2 = np.tanh(input_t1_cell4)*output_t1_2
final_output = hidden_layer_2 * model.get_weights()[3][0] + model.get_weights()[4]
model.predict(X[0].reshape(1,2,1))
final_output
hidden_layer_1 * model.get_weights()[3][0] + model.get_weights()[4]

y

# train LSTM
model.fit(X, y, epochs=10, batch_size=n_batch, verbose=2)
# evaluate
result = model.predict(X, batch_size=n_batch, verbose=0)
for value in result[0,:,0]:
    print('%.1f' % value)
result.shape
model.weights
model.get_weights()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Neuromatch Academy: Week 1, Day 5, Tutorial 3
# Di... | github_jupyter |
# Model Selection: Dataset 6
```
# Import libraries and modules
import numpy as np
import pandas as pd
import xgboost as xgb
from xgboost import plot_tree
from sklearn.metrics import r2_score, classification_report, confusion_matrix, \
roc_curve, roc_auc_score, plot_confusion_ma... | github_jupyter |
# Waves Approaching a Shoreline
```
import fastfd as ffd
ffd.sparse_lib('scipy')
import holoviews as hv
hv.extension('bokeh')
import numpy as np
import time
length = 50 # wave length
amplitude = 1.25 # wave amplitude
spatial_acc = 6 # spatial derivative accuracy
time_acc = 2 # time derivative accuracy
timestep = 0.1... | github_jupyter |
# Python Classes
Because Python is an [**object-oriented** programming language](https://en.wikipedia.org/wiki/Object-oriented_programming), you can create custom structures for storing data and methods called **classes**. A class represents an object and stores variables related to and functions that operate on that ... | github_jupyter |
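A minimal illustration of that idea, with an invented `Point` class bundling data (attributes) and the functions that operate on it (methods):

```python
class Point:
    """A 2-D point: stores data plus the methods that use it."""

    def __init__(self, x, y):
        self.x = x      # attributes hold the object's data
        self.y = y

    def distance_from_origin(self):
        # a method operates on the object's own attributes
        return (self.x ** 2 + self.y ** 2) ** 0.5

p = Point(3, 4)
d = p.distance_from_origin()
```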
```
#install dependencies
import pandas as pd
from splinter import Browser
from bs4 import BeautifulSoup as bs
from pprint import pprint
import requests
import time
# initializing the browser object
executable_path = {'executable_path': 'chromedriver.exe'}
browser = Browser('chrome', **executable_path, headless=False)... | github_jupyter |
```
from gensim.test.utils import common_texts, get_tmpfile
from gensim.models import KeyedVectors
import numpy as np
from typing import List
word_vectors = KeyedVectors.load_word2vec_format("D:\\nlp\\vectors\\news.lowercased.tokenized.word2vec.300d", binary=False)
result = word_vectors.most_similar(positive=['україна'... | github_jupyter |
> This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python.
# 5.5. Ray tracing: naive Cython
In this example, we will render a sphere with a diffuse and specular material. The principle is to mod... | github_jupyter |
# The Pandas library
**From the Pandas documentation:**
**pandas** is everyone's favorite data analysis library providing fast, flexible, and expressive data structures designed to work with *relational* or table-like data (SQL table or Excel spreadsheet). It is a fundamental high-level building block for doing practi... | github_jupyter |
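A two-line taste of those table-like structures (values invented for illustration):

```python
import pandas as pd

# A DataFrame behaves like an in-memory SQL table or spreadsheet
df = pd.DataFrame({"city": ["Oslo", "Bergen"], "pop": [709000, 291000]})
biggest = df.loc[df["pop"].idxmax(), "city"]  # label-based lookup
```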
<a href="https://colab.research.google.com/github/GiselaCS/Mujeres_Digitales/blob/main/Unsupervised_Learning_Caso_Fraude.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Machine Learning (Unsupervised Method)
# Can we detect patterns among c... | github_jupyter |
```
# change the current dir so the sys know the .py modules
import os
os.chdir('/Users/patrick/OneDrive - University of North Carolina at Chapel Hill/SMART_research/lookie-lookie/python')
import ijson
import base64
import cv2
import random
import requests
import matplotlib.pyplot as plt
import pandas as pd
import nump... | github_jupyter |
## Work
1. Try replacing preproc_x with normalization to the range -1 to 1 using each sample's min/max, then train
2. Try stacking the MLP deeper (e.g. 5-10 layers), train it, and observe the trend of the learning curve
3. (optional) Try training on a GPU (if you have one) and compare the training speed of CPU vs. GPU
```
##
"""
Your code here (optional)
Check hardware resources
"""
import os
from tensorflow import keras
# Try configuring the GPU: os.environ
os.environ["CUDA_VISIBL... | github_jupyter |
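Setting `os.environ` as hinted above controls which GPUs TensorFlow may see; a sketch (the device index `0` is an assumption about your machine):

```python
import os

# Must be set before TensorFlow creates its GPU context
os.environ["CUDA_VISIBLE_DEVICES"] = "0"    # expose only GPU 0
# os.environ["CUDA_VISIBLE_DEVICES"] = ""   # hide all GPUs -> fall back to CPU
```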
```
import argparse
import sys
import os
import random
import time
import datetime
from collections import Counter
import numpy as np
import shutil
import inspect
import gc
import re
import keras
from keras import models
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model
fro... | github_jupyter |
# Exercise notebook :
```
import warnings
warnings.simplefilter('ignore', FutureWarning)
import pandas as pd
from datetime import datetime
df = pd.read_csv('WHO POP TB all.csv')
```
## Exercise 1: Applying methods to a dataframe column
The `iloc` attribute and the <code>head()</code> and <code>tail()</code> methods ... | github_jupyter |
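On a small invented frame (not the WHO TB dataset itself), those accessors behave as follows:

```python
import pandas as pd

df = pd.DataFrame({"country": ["A", "B", "C", "D"], "pop": [1, 2, 3, 4]})

first_two = df.head(2)         # first 2 rows
last_one = df.tail(1)          # last 1 row
third_pop = df["pop"].iloc[2]  # positional access into a column
```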
### From the database to a dictionary
```
# Environment setup
import pandas as pd
import numpy as np
# Useful functions
def convert(ruta):
    s = [str(i) for i in ruta]
    ruta_c = "-".join(s)
    return ruta_c
# Load the CSV with the data
df = pd.read_csv("C:/Users/anabc/Documents/MCD/Primavera202... | github_jupyter |
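The `convert` helper above joins route elements with hyphens; a usage sketch (the function is restated so the example is self-contained, and the sample route is invented):

```python
def convert(ruta):
    s = [str(i) for i in ruta]
    ruta_c = "-".join(s)
    return ruta_c

route_key = convert([12, 7, 3])  # e.g. a dictionary key for a route
```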
# Supplementary Practice Problems
These are similar to programming problems you may encounter in the mid-terms. They are not graded but we will review them in lab sessions.
**1**. (10 points) Normalize the $3 \times 4$ diagonal matrix with diagonal (1, 2, 3) so all rows have mean 0 and standard deviation 1. The matr... | github_jupyter |
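A NumPy sketch of problem 1 under one possible reading: put (1, 2, 3) on the diagonal of a 3×4 zero matrix, then standardize each row:

```python
import numpy as np

# 3x4 "diagonal" matrix with diagonal (1, 2, 3)
A = np.zeros((3, 4))
A[np.arange(3), np.arange(3)] = [1, 2, 3]

# Subtract each row's mean, divide by each row's standard deviation
normed = (A - A.mean(axis=1, keepdims=True)) / A.std(axis=1, keepdims=True)
```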
```
from collections import defaultdict
import time
import json
from pathlib import Path
from multiprocessing.pool import Pool
import numpy as np
import pandas as pd
# Metrics
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import StratifiedShuffleSplit
from keras import initializers,... | github_jupyter |
<table width="100%">
<tr style="border-bottom:solid 2pt #009EE3">
<td class="header_buttons">
<a href="open_h5.zip" download><img src="../../images/icons/download.png" alt="biosignalsnotebooks | download button"></a>
</td>
<td class="header_buttons">
<a href="https://... | github_jupyter |
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ... | github_jupyter |
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
## Deploying a Real-Time Content Based Personalization Model
This notebook provides and example for how a business can use machine learning to automate content based personalization for their customers by using a recommendation... | github_jupyter |
Hello World,
My name is Alex Lord (soon to be LordThorsen) and I'm going to be talking about Flask.
Please go download the presentation @ [https://github.com/rawrgulmuffins/a_guided_tour_of_flask.git](https://github.com/rawrgulmuffins/a_guided_tour_of_flask.git).
* [Introduction](http://localhost:8888/notebooks/a_gu... | github_jupyter |
# Text Analytics
Natural Language Processing (NLP) is a branch of artificial intelligence (AI) that deals with written and spoken language. You can use NLP to build solutions that extract semantic meaning from text or speech, or that formulate meaningful responses in natural language.
Microsoft Azure *cognitive se... | github_jupyter |
### scipy.cluster
```
%matplotlib inline
import matplotlib.pyplot as plt
# Import ndimage to read the image
from scipy import ndimage
# Import cluster for clustering algorithms
from scipy import cluster
# Read the image
image = ndimage.imread("cluster_test_image.jpg")
# Image is 1000x1000 pixels and it has 3 channel... | github_jupyter |
```
import tensorflow as tf
from keras import layers
from keras.models import Model, Sequential
from keras import backend as K
from sklearn.metrics import mean_squared_error
from skimage.measure import compare_ssim as SSIM
import keras
import numpy as np
from sklearn.model_selection import train_test_split
from keras.o... | github_jupyter |
# [Module 3] Training with Pipe Mode using PipeModeDataset
Amazon SageMaker lets you create training jobs that use Pipe input mode. **With Pipe input mode, the training dataset in S3 is streamed directly to the training instances instead of being downloaded to the notebook instance's local disk.** This means training jobs start sooner, finish faster, and require less disk space.
SageMaker TensorFlow makes it easy to take advantage of Pipe input mode on SageMaker through `tf.data.D...
# Merge 2020 and 2021 Results
Merge the gene and figure dataframes from `20200224` and `20210515`. This notebook also pulls in metadata for the papers.
```
import json
import os
import re
import sys
import tempfile
from pathlib import Path, PurePath
from pprint import pprint
import numpy as np
import pandas as pd
im... | github_jupyter |
#Import Python libraries
##rdflib - https://pypi.python.org/pypi/rdflib
```
import os
import rdflib as rdf
#import csv for reading csv files
import csv
```
#Create new RDF graph
```
g = rdf.Graph()
```
#Add namespaces
## Add a namespace for each one in the object model
```
nidm = rdf.Namespace("http://nidm.nidash.... | github_jupyter |
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file... | github_jupyter |
# Computation with Xarray
- Aggregation: calculation of statistics (e.g. sum) along a dimension of an xarray object can be done by dimension name instead of an integer axis number.
- Arithmetic: arithmetic between xarray objects vectorizes based on dimension names, automatically looping (broadcasting) over each distin... | github_jupyter |
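The name-based aggregation described above can be illustrated with a toy stand-in built on plain NumPy (this is not xarray itself, just a sketch of what "reduce by dimension name" means):

```python
import numpy as np

def reduce_by_name(values, dims, dim, func=np.sum):
    """Aggregate `values` along the axis whose *name* is `dim`.

    Mimics xarray-style name-based aggregation on top of a plain
    ndarray plus a tuple of dimension names.
    """
    axis = dims.index(dim)                      # map name -> integer axis
    reduced_dims = dims[:axis] + dims[axis + 1:]
    return func(values, axis=axis), reduced_dims

data = np.arange(6).reshape(2, 3)               # dims ("time", "space")
total, out_dims = reduce_by_name(data, ("time", "space"), "time")
print(total, out_dims)  # [3 5 7] ('space',)
```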
<a href="https://colab.research.google.com/github/sayarghoshroy/Hate-Speech-Detection/blob/master/tweet_processor.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
import xlrd
import re
import pickle
import csv
# Uncomment if y... | github_jupyter |
# Cell type differences and effects of interferon stimulation on immune cells
Demonstrating differential expression between cell types and the effect of interferon stimulation within a cell type (CD4 T cells).
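The core comparison can be sketched on synthetic data before touching the real dataset: a two-group test per gene, here using `scipy.stats.ttest_ind` as an illustrative stand-in for the notebook's actual workflow (the group sizes, effect size, and gene labels below are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic log-expression: 50 control vs 50 stimulated cells, 3 genes.
# Gene 0 is shifted up in stimulated cells; genes 1 and 2 are unchanged.
control = rng.normal(loc=[0.0, 1.0, 2.0], scale=0.5, size=(50, 3))
stimulated = rng.normal(loc=[2.0, 1.0, 2.0], scale=0.5, size=(50, 3))

# Independent two-sample t-test per gene (axis=0 tests each column).
t_stat, p_val = stats.ttest_ind(stimulated, control, axis=0)
for gene, p in zip(["gene0", "gene1", "gene2"], p_val):
    print(gene, f"p={p:.3g}")
```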
```
import pandas as pd
import matplotlib.pyplot as plt
import scanpy.api as sc
import scipy as sp
import it... | github_jupyter |
[View in Colaboratory](https://colab.research.google.com/github/raahatg21/Digit-Recognition-MNIST-Dataset-with-Keras/blob/master/MNIST_1.ipynb)
# MNIST Dataset: Digit Classification
**Simple Neural Network of Fully Connected Layers. Using Regularization, Dropout. 96% Accuracy on Validation Set. 95.8% Accuracy on Test ...
```
# Description: Plot Figures 5, 6, 7, 9 and 10 and Figures S3-S8.
#
# - Climatology of cross-isobath heat transports (HT's) and wind stress.
# - Time series of circumpolarly-integrated HT's and other variables.
# - Time/along-isobath plots of the mean, eddy and total HT's.
# ... | github_jupyter |
```
# Importing useful libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout, GRU, Bidirectional, Conv1D, Flatten, MaxPooling1D
from keras.optimizers import SGD
imp... | github_jupyter |
```
#@title
%%html
<div style="background-color: pink;">
Implementation of Sentiment Analysis Task using Transformers(Bert-Architecture)
</div>
from google.colab import drive
drive.mount('/content/drive')
```
Introduction to BERT and the problem at hand
Exploratory Data Analysis and Preprocessing
Training/Validat... | github_jupyter |
# Sparse Spiking Ensemble - Temporal coding
- Ensemble = Sensory neurons + latent neurons
- Sensory neurons get spikes from sensory inputs
- Sensory inputs are sparse coded (~20% 1s, rest 0s)
- Each neuron also takes input from the whole ensemble (later to be restricted locally)
## New in this notebook
- Use afferent... | github_jupyter |
```
import requests as req
import pandas as pd
import time
import os
# Google developer API key
from config_local import GreaterSchools_api
# read sites
mainfile = os.path.join("..","Project1_AmazonSites.xlsx")
xls = pd.ExcelFile(mainfile)
sites_df=xls.parse('AmazonSites', dtype=str)
school_sites= sites_df[['Site Nam... | github_jupyter |
# Structured RerF Demo
Similar to figure 13 [here](https://arxiv.org/pdf/1506.03410v4.pdf) we create a
distribution of 28x28 pixel images with randomly spaced and sized bars.
In class 0 the bars are oriented horizontally and in class 1 the bars are oriented vertically.
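A possible way to synthesize such a dataset is sketched below (the bar counts and widths are arbitrary choices for illustration, not necessarily those used in the paper):

```python
import numpy as np

def make_bar_image(rng, n_bars=3, size=28, horizontal=True):
    """Return a size x size binary image with randomly placed, sized bars."""
    img = np.zeros((size, size))
    for _ in range(n_bars):
        start = rng.integers(0, size)
        width = rng.integers(1, 4)
        if horizontal:
            img[start:start + width, :] = 1.0   # bar spans full rows
        else:
            img[:, start:start + width] = 1.0   # bar spans full columns
    return img

rng = np.random.default_rng(42)
# Class 0: horizontal bars; class 1: vertical bars.
X = np.stack([make_bar_image(rng, horizontal=(label == 0)) for label in (0, 1)])
y = np.array([0, 1])
print(X.shape)  # (2, 28, 28)
```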
We compare the error-counting estimator $\hat... | github_jupyter |
# Topic Model of [PostNauka](http://postnauka.ru)
## Peer Review (optional)
In this assignment we will apply topic modeling to a collection of text transcripts of video lectures downloaded from the PostNauka website. We will visualize the model and build a prototype topic navigator for the collection. The collection...
```
# As documented in the NRPy+ tutorial module
# Tutorial_SEOBNR_Derivative_Routine.ipynb,
# this module computes partial derivatives
# of the SEOBNRv3 Hamiltonian with respect
# to 12 dynamic variables
# Authors: Zachariah B. Etienne & Tyler Knowles
#          zachetie **at** gmail **dot** com
# Step 1.a: im... | github_jupyter |
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=1
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0], Tr... | github_jupyter |
```
%%capture
!pip install parsivar
# Download the dataset
%%capture
! rm -rf *
! gdown --id 1l3gymRj-or40zAOFA09ETo3kHtA-5PeG
! unzip comments.zip
! rm comments.zip
import requests
from string import punctuation
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
fro... | github_jupyter |
### BASICS OF NETWORKX
##### I place the data in a PostgreSQL database. Use this code to create and insert the data:
create table edges (
fromnode int,
tonode int,
distance numeric
);
insert into edges(fromnode, tonode, distance)
values (1, 2, 1306),
(1, 5, 2161),
(1, 6, 2661),
(2, 3,... | github_jupyter |
```
from google.colab import drive
drive.mount('/content/drive')
import sys
if "/content/drive/My Drive/Machine Learning/lib/" not in sys.path:
sys.path.append("/content/drive/My Drive/Machine Learning/lib/")
from gym.envs.toy_text import CliffWalkingEnv
import plotting
import gym
import math
import numpy as np
im... | github_jupyter |
## Loading of stringer_orientation data
includes some visualizations
```
#@title Data retrieval and loading
import os
data_fname = 'stringer_orientations.npy'
if data_fname not in os.listdir():
!wget -qO $data_fname https://osf.io/ny4ut/download
import numpy as np
dat = np.load('stringer_orientations.npy', allow_pi... | github_jupyter |
## Example 1 (Simple Linear Regression)
Start with a simple linear regression example, $y = ax + b$, where $a$ is the slope and $b$ is the intercept.
```
# imports
import numpy as np
import matplotlib.pyplot as plt
# Generate random data
np.random.seed(0)
... | github_jupyter |
### Sentiment Analysis
We want to do sentiment analysis by using [VaderSentiment](https://github.com/cjhutto/vaderSentiment) ML framework not supported as an MLflow Flavor. The goal of sentiment analysis is to "gauge the attitude, sentiments, evaluations, attitudes and emotions of a speaker/writer based on the computa... | github_jupyter |
```
import os
import numpy as np
import tensorflow as tf
from tensorflow.python.keras.datasets import mnist
from tensorflow.contrib.eager.python import tfe
# enable eager mode
tf.enable_eager_execution()
tf.set_random_seed(0)
np.random.seed(0)
# constants
hidden_dim = 500
batch_size = 128
epochs = 10
num_classes = 10
... | github_jupyter |
## Understanding Waveform Simulation for XENONnT
Nov 30
```
import strax, straxen, wfsim
import numpy as np
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
config = straxen.get_resource('https://raw.githubusercontent.com/XENONnT/'
'strax_auxiliary_files/master/fax_files/fax_con... | github_jupyter |
```
#default_exp utils
```
# Utils
```
#export
from typing import Iterable, TypeVar, Generator
from plum import dispatch
from pathlib import Path
from functools import reduce
function = type(lambda: ())
T = TypeVar('T')
def identity(x: T) -> T:
    """Identity function."""
return x
def simplify(x):
"""Re... | github_jupyter |
```
import tensorflow as tf
import numpy as np
from tensorflow import data
import shutil
from datetime import datetime
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from tensorflow.contrib.learn import learn_runner
from tensorflow.contrib.learn import make_export_strategy
print(tf.__version_... | github_jupyter |
```
import os
import site
import sqlite3
import sys
import logzero
import numpy as np
import pandas as pd
import plotly.graph_objects as go
import yaml
from logzero import logger
from tqdm import tqdm
from tqdm.notebook import tqdm
from yaml import dump, load, safe_load
import dash
import dash_bootstrap_components as ... | github_jupyter |
.. meta::
:description: An implementation of the famous Particle Swarm Optimization (PSO) algorithm which is inspired by the behavior of the movement of particles represented by their position and velocity. Each particle is updated considering the cognitive and social behavior in a swarm.
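The update rule described above (inertia plus cognitive and social terms) can be sketched in a few lines of NumPy; the coefficients and the sphere objective below are illustrative choices, not the library's defaults:

```python
import numpy as np

def pso_minimize(f, dim=2, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()                                 # each particle's best
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()         # swarm's best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive (own best) + social (swarm best) terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, best_val = pso_minimize(lambda p: (p ** 2).sum())  # sphere function
print(best_val)  # close to 0
```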
.. meta::
:keywords: Pa... | github_jupyter |
<table align="left" width="100%"> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="35%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared... | github_jupyter |
# Market Bandit
How well could you invest in the public markets, if you only had one macroeconomic signal *inflation* and could only update your investments once each year?
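The idea can be sketched without any RL library: an epsilon-greedy contextual bandit that keeps a running mean reward per (context, arm) pair. The contexts, arms, and reward values below are invented for illustration only:

```python
import random
from collections import defaultdict

class EpsilonGreedyBandit:
    """Running-mean epsilon-greedy bandit keyed by (context, arm)."""
    def __init__(self, arms, epsilon=0.1, seed=0):
        self.arms, self.epsilon = arms, epsilon
        self.rng = random.Random(seed)
        self.counts = defaultdict(int)
        self.means = defaultdict(float)

    def select(self, context):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)            # explore
        return max(self.arms,
                   key=lambda a: self.means[(context, a)])  # exploit

    def update(self, context, arm, reward):
        key = (context, arm)
        self.counts[key] += 1
        # incremental running mean
        self.means[key] += (reward - self.means[key]) / self.counts[key]

# Toy loop: in a "high inflation" year, the second arm pays more on average.
bandit = EpsilonGreedyBandit(arms=["stocks", "bonds"], epsilon=0.1, seed=1)
payoff = {("high", "stocks"): 0.01, ("high", "bonds"): 0.05}
for _ in range(500):
    arm = bandit.select("high")
    bandit.update("high", arm, payoff[("high", arm)])
```

After enough rounds, exploitation concentrates on the higher-paying arm for that context.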
The following shows how to use a [*contextual bandit*](https://rllib.readthedocs.io/en/latest/rllib-algorithms.html#contextual-bandits-contrib-ba... | github_jupyter |
# 2D Convolutional Neural Networks
```
import time
import gc
import pandas as pd
import numpy as np
import sys
sys.path.append("../src")
from preprocessing import *
from plotting import *
df_db = group_datafiles_byID('../datasets/preprocessed/HT_Sensor_prep_metadata.dat', '../datasets/preprocessed/HT_Sensor_pre... | github_jupyter |
Import Torch Packages
```
import torch
import torch.nn as nn
import torch.optim as optim
```
#### Import Gym Packages
```
import gym
from gym.wrappers import FrameStack
```
#### All Other Packages
```
import numpy as np
import matplotlib.pyplot as plt
from tqdm import trange
import random
from abc import ABC, abst... | github_jupyter |
```
import xgboost as xgb
from xgboost import XGBClassifier
import first
import pandas as pd
import data_io as di
from sklearn import cross_validation, metrics
from sklearn.datasets import load_svmlight_file
from sklearn.grid_search import GridSearchCV
import matplotlib.pylab as plt
%matplotlib inline
from matplotlib.... | github_jupyter |
```
# default_exp models.RNNPlus
```
# RNNPlus
> These are RNN, LSTM and GRU PyTorch implementations created by Ignacio Oguiza - timeseriesAI@gmail.com based on:
The idea of including a feature extractor to the RNN network comes from the solution developed by the UPSTAGE team (https://www.kaggle.com/songwonho,
http... | github_jupyter |
```
import tensorflow
tensorflow.keras.__version__
tensorflow.executing_eagerly()
```
# Using word embeddings
This notebook contains the second code sample found in Chapter 6, Section 1 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the or... | github_jupyter |
```
import json
with open('../input/toxicity.json') as fopen:
x = json.load(fopen)
texts = x['x']
labels = x['y']
!pip3 install bert-tensorflow sentencepiece
from tqdm import tqdm
import json
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
from bert import mod... | github_jupyter |
```
import numpy as np
import pyautogui
import imutils
import cv2
import objc
from AppKit import NSEvent
import sys
import Quartz
import time
import random
import math
try:
import Image
except ImportError:
from PIL import Image
import pytesseract
import mss
import mss.tools
from PIL import Image
import PIL.Im... | github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ... | github_jupyter |
# Environment Setup
```
# set env
from pyalink.alink import *
import sys, os
resetEnv()
useLocalEnv(1, config=None)
```
# Data Preparation
```
# schema of train data
schemaStr = "id string, click string, dt string, C1 string, banner_pos int, site_id string, \
site_domain string, site_category string, app_id string, app_domain s... | github_jupyter |
```
%autosave 0
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display
import ipywidgets as widgets
from matplotlib import animation
from functools import partial
slider_layout = widgets.Layout(width='600px', height='20px')
slider_style = {'description_width': 'initi... | github_jupyter |