Log Conversion: converts the event logs into CSV format to make them easier to load.
%load_ext autoreload
%autoreload 2
%matplotlib inline
The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload
Apache-2.0
01_log_conversion.ipynb
L0D3/P191919
Pyedra's Tutorial
This tutorial is intended to serve as a guide for using Pyedra to analyze asteroid phase curve data.

Imports
The first thing we will do is import the necessary libraries. In general you will need the following:
- `pyedra` (*pyedra*) is the library that we present in this tutorial.
- `pandas` (*pandas*) t...
import pyedra
import pandas as pd
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Load the data
The next thing we have to do is load our data. Pyedra expects a dataframe with three columns: id (the MPC number of the asteroid), alpha ($\alpha$, the phase angle) and v (the reduced magnitude in Johnson's V filter). You must respect the order and names of the columns as given above. In this step...
df = pyedra.datasets.load_carbognani2019()
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
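Before loading the sample dataset, here is a minimal sketch of the expected layout, built by hand with hypothetical values (only the column names, their order and their meaning matter; the tutorial itself loads `load_carbognani2019()`):

import pandas as pd

# Hypothetical values illustrating the required three-column layout.
toy = pd.DataFrame({
    "id": [85, 85, 85],          # MPC number of the asteroid
    "alpha": [5.2, 9.1, 14.3],   # phase angle in degrees
    "v": [8.1, 8.4, 8.8],        # reduced magnitude in Johnson's V filter
})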
Here we show the structure that your data file should have. Note that the file can contain information about many asteroids, which allows you to obtain catalogs of phase-function parameters for large databases.
df
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Fit your data
Pyedra's main objective is to fit a phase function model to our data. Currently the API offers three different models:
- `HG_fit` (H, G model): $V(\alpha)=H-2.5\log_{10}[(1-G)\Phi_{1}(\alpha)+G\Phi_{2}(\alpha)]$
- `Shev_fit` (Shevchenko model): $V(1,\alpha)=V(1,0)-\frac{a}{1+\alpha}+b\cdot\alpha$
- `HG1G2_fit...
HG = pyedra.HG_fit(df)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
We have already created our catalog of H, G parameters for our data set. Let's see what it looks like.
HG
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
**R** is the coefficient of determination of the fit. All pandas DataFrame operations are available. For example, you may be interested in the mean H of your sample. To do so:
HG.H.mean()
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Remember that `HG.H` selects the H column.
HG.H
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
The `PyedraFitDataFrame` can also be filtered, like a canonical pandas dataframe. Let's assume that we want to save the created catalog, but only for those asteroids whose id is less than 300. All we have to do is:
filtered = HG.model_df[HG.model_df['id'] < 300]
filtered
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
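The cell above only filters the catalog; to actually save it, standard pandas I/O applies. A minimal sketch (the filename is hypothetical):

# Persist the filtered catalog; any pandas writer works here.
filtered.to_csv("hg_catalog_id_lt_300.csv", index=False)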
Finally, we want to see our data plotted together with their respective fits. To do this we use the `.plot` method provided by Pyedra. To obtain the plot of the phase-function fits, we simply pass the dataframe containing our data to `.plot`:
HG.plot(df=df)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
If your database is very large and you want a clearer plot, or if you only want to see the fit of one of the asteroids, you can filter your initial dataframe.
asteroid_85 = df[df['id'] == 85]
HG_85 = pyedra.HG_fit(asteroid_85)
HG_85.plot(df=asteroid_85)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
All pandas plots are available if you want to use any of them. For example, we may want to visualize the histogram of one of the parameters:
HG.plot(y='G', kind='hist')
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Or we may want to find out if there is a correlation between parameters:
HG.plot(x='G', y='H', kind='scatter', marker='o', color='black')
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Everything we have done in this section can be extended in an analogous way to the rest of the models, as we will see below.

HG1G2_fit
Now we want to fit the H, G$_1$, G$_2$ model to our data. Use the function `HG1G2_fit` in the following way.
HG1G2 = pyedra.HG1G2_fit(df)
HG1G2
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
**R** is the coefficient of determination of the fit. We can calculate, for example, the median of each of the columns:
HG1G2.median()
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Again, we can filter our catalog. Here we keep the best fits, that is, those whose R is greater than 0.98.
best_fits = HG1G2.model_df[HG1G2.model_df['R'] > 0.98]
best_fits
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
We will now look at the plots.
HG1G2.plot(df=df)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
If we want to visualize the fit for asteroid 522 only:
asteroid_522 = df[df['id'] == 522]
HG1G2_522 = pyedra.HG1G2_fit(asteroid_522)
HG1G2_522.plot(df=asteroid_522)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
To see the correlation between the parameters G$_1$ and G$_2$ we can use the pandas "scatter" plot:
HG1G2.plot(x='G1', y='G2', kind='scatter')
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Shev_fit
If we want to fit the Shevchenko model to our data, we use `Shev_fit`.
Shev = pyedra.Shev_fit(df)
Shev
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
**R** is the coefficient of determination of the fit. We can select a particular column and calculate, for example, its minimum:
Shev.V_lin
Shev.V_lin.min()
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
And, of course, we can plot the resulting fit:
Shev.plot(df=df)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Selecting a subsample:
subsample = df[df['id'] > 100]
Shev_subsample = pyedra.Shev_fit(subsample)
Shev_subsample.plot(df=subsample)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
We can also use any of the pandas plots.
Shev_subsample.plot(y=['b', 'error_b'], kind='density', subplots=True, figsize=(5,5), xlim=(0,2))
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Gaia Data
Below we show how to combine an observation dataset with Gaia DR2 observations. We import the Gaia data with `load_gaia()`.
gaia = pyedra.datasets.load_gaia()
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
We then join both datasets (ours and Gaia's) with `merge_obs`.
merge = pyedra.merge_obs(df, gaia)
merge = merge[['id', 'alpha', 'v']]
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
We can then apply all of the functionality shown above to the new dataframe.
catalog = pyedra.HG_fit(merge)
catalog.plot(df=merge)
_____no_output_____
MIT
docs/source/tutorial.ipynb
milicolazo/Pyedra
Shannon's Entropy of ABA features
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from skimage.measure import shannon_entropy
from morphontogeny.functions.IO import reconstruct_ABA

def level_arr(array, levels=256):
    arr = array - np.nanmin(array)  # set the minimum to zero
    arr = (arr / np.n...
_____no_output_____
MIT
Notebooks/02_analyses/Fig2_Shannon_Entropy.ipynb
BioProteanLabs/SFt_pipeline
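The `level_arr` helper above is cut off. A self-contained sketch of the quantize-then-measure idea it starts, assuming the rescaling continues up to `levels` integer bins (the random image is a stand-in for a reconstructed ABA feature, not the notebook's data):

import numpy as np
from skimage.measure import shannon_entropy

def level_arr_sketch(array, levels=256):
    # rescale to the integers 0..levels-1, ignoring NaNs
    arr = array - np.nanmin(array)             # set the minimum to zero
    arr = arr / np.nanmax(arr) * (levels - 1)  # scale the maximum to levels-1
    return np.round(arr).astype(int)

rng = np.random.default_rng(0)
img = rng.random((64, 64))                     # hypothetical stand-in input
print(shannon_entropy(level_arr_sketch(img)))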
The pandas library
The [pandas library](https://pandas.pydata.org/) was created by [Wes McKinney](http://wesmckinney.com/) in 2010. pandas provides **data structures** and **functions** for manipulating, processing, cleaning and crunching data. In the Python ecosystem pandas is the state-of-the-art tool for working wi...
import pandas as pd
import numpy as np
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
The pandas library has two workhorse data structures: __*Series*__ and __*DataFrame*__.
* one-dimensional `pd.Series` object
* two-dimensional `pd.DataFrame` object

***
The `pd.Series` object
Data generation
# import the random module from numpy
from numpy import random

# set seed for reproducibility
random.seed(123)

# generate 26 random integers between -10 and 10
my_data = random.randint(low=-10, high=10, size=26)

# print the data
my_data
type(my_data)
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
A Series is a one-dimensional array-like object containing an array of data and an associated array of data labels, called its _index_. We create a `pd.Series` object by calling the `pd.Series()` function.
# Uncomment to look up the documentation
# docstring
#?pd.Series
# source
#??pd.Series

# create a pd.Series object
s = pd.Series(data=my_data)
s
type(s)
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***
`pd.Series` attributes
Python objects in general, and the `pd.Series` in particular, offer useful object-specific *attributes*.
* _attribute_ $\to$ `OBJECT.attribute` $\qquad$ _Note that the attribute is called without parentheses_
s.dtypes
s.index
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
We can use the `index` attribute to assign an index to a `pd.Series` object. Consider the letters of the alphabet...
import string
letters = string.ascii_uppercase
letters
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
By providing an array-like object we assign a new index to the `pd.Series` object.
s.index = list(letters)
s.index
s
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***
`pd.Series` methods
Methods are functions that are called using the attribute notation. Hence they are called by appending a dot (`.`) to the Python object, followed by the name of the method and parentheses `()`, enclosing one or more arguments (`arg`) where applicable.
* _method_ $\to$ `OBJECT.method_name(arg1, arg2, ...)`
s.sum()
s.mean()
s.max()
s.min()
s.median()
s.quantile(q=0.5)
s.quantile(q=[0.25, 0.5, 0.75])
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***
Element-wise arithmetic
A very useful feature of `pd.Series` objects is that we may apply arithmetic operations *element-wise*.
s+10
#s*0.1
#10/s
#s**2
#(2+s)*1**3
#s+s
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***
Selection and Indexing
Another main data operation is indexing and selecting particular subsets of the data object. pandas comes with a very [rich set of methods](https://pandas.pydata.org/pandas-docs/stable/indexing.html) for these types of tasks. In its simplest form we index a Series numpy-like, by using the `[]`...
s
s[3]
s[2:6]
s["C"]
s["C":"K"]
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
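One pitfall worth noting: once a Series carries string labels, `[]` accepts both labels and integer positions. The explicit accessors below are standard pandas (not part of the original notebook) and avoid the ambiguity:

s.loc["C"]       # label based
s.iloc[2]        # integer position based
s.loc["C":"K"]   # label slices include both endpoints
s.iloc[2:6]      # position slices exclude the stop index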
***
The `pd.DataFrame` object
The primary pandas data structure is the `DataFrame`. It is a two-dimensional, size-mutable, potentially heterogeneous tabular data structure with both row and column labels. Arithmetic operations align on both row and column labels. Basically, the `DataFrame` can be thought of as a `diction...
df = pd.DataFrame({"id" : range(1,5),
                   "Name" : ["John", "Paul", "George", "Ringo"],
                   "Last Name" : ["Lennon", "McCartney", "Harrison", "Star"],
                   "dead" : [True, False, True, False],
                   "year_born" : [1940, 1942, 1943, 1940],
                   ...
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***
`pd.DataFrame` attributes
df.dtypes

# column labels (axis 1)
df.columns

# row labels (axis 0)
df.index
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***
`pd.DataFrame` methods

**Get a quick overview of the data set**
df.info()
df.describe()
df.describe(include="all")
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
**Change index to the variable `id`**
df df.set_index("id") df
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
Note that nothing changed! For reasons of memory and computation efficiency, `pandas` returns a new object here rather than modifying `df` in place. Hence, if we want to make the change permanent we have to assign/reassign the object to a variable:

df = df.set_index("id")

or, some methods have the `inplace=True` argument: ...
df = df.set_index("id")
df
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
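For completeness, a sketch of the `inplace=True` variant mentioned above; it mutates the dataframe directly and returns `None`. (Left commented out here, because `id` is already the index at this point.)

# equivalent to df = df.set_index("id"), but mutates df and returns None
# df.set_index("id", inplace=True)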
**Arithmetic methods**
df
df.sum(axis=0)
df.sum(axis=1)
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
`groupby` method
[Hadley Wickham 2011: The Split-Apply-Combine Strategy for Data Analysis, Journal of Statistical Software, 40(1)](https://www.jstatsoft.org/article/view/v040i01)

Image source: [Jake VanderPlas 2016, Data Science Handbook](https://jakevdp.github.io/PythonDataScienceHandbook/)
df df.groupby("dead") df.groupby("dead").sum() df.groupby("dead")["no_of_songs"].sum() df.groupby("dead")["no_of_songs"].mean() df.groupby("dead")["no_of_songs"].agg(["mean", "max", "min", "sum"])
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
Family of `apply`/`map` methods
* `apply` works on a row (`axis=0`, default) / column (`axis=1`) basis of a `DataFrame`
* `applymap` works __element-wise__ on a `DataFrame`
* `map` works __element-wise__ on a `Series` (a sketch of the element-wise pair follows the `apply` examples below)
df

# (axis=0, default)
df[["Last Name", "Name"]].apply(lambda x: x.sum())

# (axis=1)
df[["Last Name", "Name"]].apply(lambda x: x.sum(), axis=1)
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
_... maybe a more useful case ..._
df.apply(lambda x: " ".join(x[["Name", "Last Name"]]), axis=1)
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
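Here is the element-wise pair promised above, sketched on the same toy data (the mapping dict is illustrative; note that recent pandas versions supersede `applymap` with `DataFrame.map`):

# map: element-wise on a Series
df["dead"].map({True: "deceased", False: "alive"})

# applymap: element-wise on a DataFrame
df[["Name", "Last Name"]].applymap(str.upper)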
***
Selection and Indexing

**Column index**
df["Name"] df[["Name", "Last Name"]] df.dead
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
**Row index**
In addition to the `[]` operator pandas ships with other indexing operators such as `.loc[]` and `.iloc[]`, among others.
* `.loc[]` is primarily __label based__, but may also be used with a boolean array.
* `.iloc[]` is primarily __integer position based__ (from 0 to length-1 of the axis), but may also be u...
df.head(2)
df.loc[1]
df.iloc[1]
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
**Row and column indices**
`df.loc[row, col]`
df.loc[1, "Last Name"] df.loc[2:4, ["Name", "dead"]]
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
**Logical indexing**
df df["no_of_songs"] > 50 df.loc[df["no_of_songs"] > 50] df.loc[(df["no_of_songs"] > 50) & (df["year_born"] >= 1942)] df.loc[(df["no_of_songs"] > 50) & (df["year_born"] >= 1942), ["Last Name", "Name"]]
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***
Manipulating columns, rows and particular entries

**Add a row to the data set**
from numpy import nan

df.loc[5] = ["Mickey", "Mouse", nan, 1928, nan]
df
df.dtypes
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
_Note that the variable `dead` changed. Its values changed from `True`/`False` to `1.0`/`0.0`. Consequently its `dtype` changed from `bool` to `float64`._

**Add a column to the data set**
# pd.datetime is deprecated; pd.Timestamp provides the same functionality
pd.Timestamp.today()
now = pd.Timestamp.today().year
now
df["age"] = now - df.year_born
df
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
**Change a particular entry**
df.loc[5, "Name"] = "Minnie" df
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
***
Plotting
The plotting functionality in pandas is built on top of matplotlib. It is quite convenient to start the visualization process with basic pandas plotting and to switch to matplotlib to customize the pandas visualization.

`plot` method
# this call causes the figures to be plotted below the code cells
%matplotlib inline

df
df[["no_of_songs", "age"]].plot()
df["dead"].plot.hist()
df["age"].plot.bar()
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
...some notes on plotting with Python
Plotting is an essential component of data analysis. However, the Python visualization world can be a frustrating place. There are many different options and choosing the right one is a challenge. (If you dare take a look at the [Python Visualization Landscape](https://github.com/r...
import matplotlib.pyplot as plt

# create a Figure and Axes object
fig, ax = plt.subplots(figsize=(10,5))

# plot the data and reference the Axes object
df["age"].plot.bar(ax=ax)

# add some customization to the Axes object
ax.set_xticklabels(df["Name"], rotation=0)
ax.set_xlabel("")
ax.set_ylabel("Age", size=14)
ax.s...
_____no_output_____
MIT
05-2021-05-21/notebooks/05-00_The_pandas_library.ipynb
eotp/python-FU-class
Interior-point methods

In the previous seminar:
- Constrained optimization problems over simple sets
- The projected gradient method as a special case of the proximal gradient method
- The conditional gradient (Frank-Wolfe) method and its convergence

Convex optimization problem with equality constraints
\begin{equation*}\be...
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-2, 0, 100000, endpoint=False)
plt.figure(figsize=(10, 6))
for t in [0.1, 0.5, 1, 1.5, 2]:
    plt.plot(x, -t * np.log(-x), label=r"$t = " + str(t) + "$")
plt.legend(fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.xl...
_____no_output_____
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
"Ограниченная" задача\begin{equation*}\begin{split}& \min f_0(x) + \sum_{i=1}^m -t \log(-f_i(x))\\\text{s.t. } & Ax = b\end{split}\end{equation*}- Задача по-прежнему **выпуклая**- Функция $$\phi(x) = -\sum\limits_{i=1}^m \log(-f_i(x))$$ называется *логарифмическим барьером*. Её область определения - множество точек, д...
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.optimize as scopt
import scipy.linalg as sclin

def NewtonLinConstraintsFeasible(f, gradf, hessf, A, x0, line_search, linsys_solver,
                                args=(), disp=False, disp_conv=False,
                                callback=None, tol=1e-6, max_iter...
_____no_output_____
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
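A minimal sketch of the logarithmic barrier $\phi$ defined above, for inequality constraints $f_i(x) \le 0$ (the list of constraint callables is a hypothetical stand-in, not from the notebook):

import numpy as np

def log_barrier(x, constraints):
    # constraints: callables f_i with f_i(x) < 0 strictly inside the domain
    vals = np.array([fi(x) for fi in constraints])
    if np.any(vals >= 0):
        return np.inf            # outside the barrier's domain
    return -np.sum(np.log(-vals))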
Let's check that the gradient is computed correctly.
scopt.check_grad(f, gradf, np.random.rand(n), c, mu)
_____no_output_____
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
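`scipy.optimize.check_grad` returns the 2-norm of the difference between the analytic gradient and a finite-difference estimate; values near zero mean the gradient is correct. Since `f`, `gradf`, `c` and `mu` are defined in a truncated cell, here is a self-contained toy check with stand-in functions:

import numpy as np
import scipy.optimize as scopt

# toy stand-ins for the notebook's f and gradf
f_toy = lambda x: 0.5 * x @ x
grad_toy = lambda x: x

print(scopt.check_grad(f_toy, grad_toy, np.random.rand(5)))  # ~1e-7 or smaller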
Choosing an initial guess that is feasible with respect to the constraints and the domain of the objective function.
x0 = np.zeros(2*n)
x0[:n] = np.random.rand(n)
x0[n:2*n] = b - A.dot(x0[:n])
print(np.linalg.norm(A_lin.dot(x0) - b))
print(np.sum(x0 <= 1e-6))
1.1457157353758233e-13 0
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
Let's check convergence.
hist_conv = []
def cl(x):
    hist_conv.append(x)

res = NewtonLinConstraintsFeasible(f, gradf, hessf, A_lin, x0, backtracking,
                                   elimination_solver, (c, mu), callback=cl,
                                   max_iter=2000, beta1=0.1, rho=0.7)
print("Decrement value = {}".format(res["tol"]))
fstar = f(res["x"], c, mu)
hist_conv_f =...
/Users/alex/anaconda3/envs/cvxpy/lib/python3.6/site-packages/ipykernel_launcher.py:6: RuntimeWarning: invalid value encountered in log
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
Implementation of the barrier method.
def BarrierPrimalLinConstr(f, gradf, hessf, A, c, x0, mu0, rho_mu, linesearch,
                           linsys_solver, tol=1e-8, max_iter=500,
                           disp_conv=False, **kwargs):
    x = x0.copy()
    n = x0.shape[0]
    mu = mu0
    while True:
        res = NewtonLinConstraintsFeasible(f, gradf, hessf, A, x, linesearch, l...
/Users/alex/anaconda3/envs/cvxpy/lib/python3.6/site-packages/ipykernel_launcher.py:6: RuntimeWarning: invalid value encountered in log /Users/alex/anaconda3/envs/cvxpy/lib/python3.6/site-packages/ipykernel_launcher.py:6: RuntimeWarning: invalid value encountered in log
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
Comparing running times.
mu0 = 2
rho_mu = 0.5
n_list = range(3, 10)
n_iters = np.zeros(len(n_list))
times_simplex = np.zeros(len(n_list))
times_barrier_simple = np.zeros(len(n_list))
for i, n in enumerate(n_list):
    print("Current dimension = {}".format(n))
    c, A, b, bounds = generate_KleeMinty_test_problem(n)
    time = %timeit -o -q sco...
_____no_output_____
MIT
Spring2021/int_point.ipynb
Jhomanik/MIPT-Opt
---title: "kNN-algorithm"author: "Palaniappan S"date: 2020-09-05description: "-"type: technical_notedraft: false---
# importing required libraries
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# read the train and test dataset
train_data = pd.read_csv('train-data.csv')
test_data = pd.read_csv('test-data.csv')

print(train_data.head())

# shape of the dataset
print('...
Target on test data [0 0 0 1 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 1 0 0 1 1 0 0 0 1 0 0 1 1 1 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 1 0 1 0 0 0 1 0 1 1 0 1 1 0 0 1 0 0 1 0 1 0 0 1 0 1 0 1 1 0 1 0 0 1 1 0 0 1 0 0 0 1 1 0 0 0 1 0 1 0 1 0 0 0 0 0 ...
MIT
content/python/ml_algorithms/.ipynb_checkpoints/kNN-algorithm-checkpoint.ipynb
Palaniappan12345/mlnotes
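The cell above is truncated; here is a self-contained sketch of the same fit/predict/score flow on synthetic data (all names and shapes are illustrative stand-ins, not the note's CSVs):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# synthetic stand-in for the train/test CSVs
rng = np.random.default_rng(42)
X_train = rng.random((100, 4))
y_train = (X_train[:, 0] > 0.5).astype(int)
X_test = rng.random((25, 4))
y_test = (X_test[:, 0] > 0.5).astype(int)

model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)
print('accuracy_score on test dataset:', accuracy_score(y_test, model.predict(X_test)))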
Tutorial Part 2: Learning MNIST Digit Classifiers
In the previous tutorial, we learned some basics of how to load data into DeepChem and how to use the basic DeepChem objects to load and manipulate this data. In this tutorial, you'll put the parts together and learn how to train a basic image classification model in De...
%tensorflow_version 1.x
!curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import deepchem_installer
%time deepchem_installer.install(version='2.3.0')

from tensorflow.examples.tutorials.mnist import input_data
# TODO: This is deprecated. Let's replace wit...
Validation
class 0:auc=0.9999482812520925
class 1:auc=0.9999327470315621
class 2:auc=0.9999223382455529
class 3:auc=0.9999378924197698
class 4:auc=0.999804920932277
class 5:auc=0.9997608046652174
class 6:auc=0.9999347825797615
class 7:auc=0.9997099080694587
class 8:auc=0.999882187740275
class 9:auc=0.9996286953889618
MIT
examples/tutorials/02_Learning_MNIST_Digit_Classifiers.ipynb
martonlanga/deepchem
_*H2 energy plot comparing full to particle hole transformations*_
This notebook demonstrates using Qiskit Chemistry to plot graphs of the ground state energy of the Hydrogen (H2) molecule over a range of inter-atomic distances using VQE and UCCSD with full and particle hole transformations. It is compared to the same ...
import numpy as np
import pylab
from qiskit_chemistry import QiskitChemistry

# Input dictionary to configure Qiskit Chemistry for the chemistry problem.
qiskit_chemistry_dict = {
    'problem': {'random_seed': 50},
    'driver': {'name': 'PYQUANTE'},
    'PYQUANTE': {'atoms': '', 'basis': 'sto3g'},
    'operator': {'n...
_____no_output_____
Apache-2.0
community/aqua/chemistry/h2_particle_hole.ipynb
Chibikuri/qiskit-tutorials
OT for domain adaptation on empirical distributions
This example introduces domain adaptation in a 2D setting. It makes the problem of domain adaptation explicit and introduces some optimal transport approaches to solve it. Quantities such as optimal couplings, greater coupling coefficients and transported samples are represe...
# Authors: Remi Flamary <remi.flamary@unice.fr>
#          Stanislas Chambon <stan.chambon@gmail.com>
#
# License: MIT License

import matplotlib.pylab as pl
import ot
import ot.plot
_____no_output_____
MIT
_downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb
FlopsKa/pythonot.github.io
generate data
-------------
n_samples_source = 150
n_samples_target = 150

Xs, ys = ot.datasets.make_data_classif('3gauss', n_samples_source)
Xt, yt = ot.datasets.make_data_classif('3gauss2', n_samples_target)

# Cost matrix
M = ot.dist(Xs, Xt, metric='sqeuclidean')
_____no_output_____
MIT
_downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb
FlopsKa/pythonot.github.io
Instantiate the different transport algorithms and fit them
-----------------------------------------------------------
# EMD Transport
ot_emd = ot.da.EMDTransport()
ot_emd.fit(Xs=Xs, Xt=Xt)

# Sinkhorn Transport
ot_sinkhorn = ot.da.SinkhornTransport(reg_e=1e-1)
ot_sinkhorn.fit(Xs=Xs, Xt=Xt)

# Sinkhorn Transport with Group lasso regularization
ot_lpl1 = ot.da.SinkhornLpl1Transport(reg_e=1e-1, reg_cl=1e0)
ot_lpl1.fit(Xs=Xs, ys=ys, Xt=Xt...
_____no_output_____
MIT
_downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb
FlopsKa/pythonot.github.io
Fig 1 : plots source and target samples + matrix of pairwise distance
---------------------------------------------------------------------
pl.figure(1, figsize=(10, 10))

pl.subplot(2, 2, 1)
pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples')
pl.xticks([])
pl.yticks([])
pl.legend(loc=0)
pl.title('Source samples')

pl.subplot(2, 2, 2)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples')
pl.xticks([])
pl.yticks([])
p...
_____no_output_____
MIT
_downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb
FlopsKa/pythonot.github.io
Fig 2 : plots optimal couplings for the different methods
---------------------------------------------------------
pl.figure(2, figsize=(10, 6))

pl.subplot(2, 3, 1)
pl.imshow(ot_emd.coupling_, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nEMDTransport')

pl.subplot(2, 3, 2)
pl.imshow(ot_sinkhorn.coupling_, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nSinkhornTr...
_____no_output_____
MIT
_downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb
FlopsKa/pythonot.github.io
Fig 3 : plot transported samples
--------------------------------
# display transported samples
pl.figure(4, figsize=(10, 4))
pl.subplot(1, 3, 1)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
           label='Target samples', alpha=0.5)
pl.scatter(transp_Xs_emd[:, 0], transp_Xs_emd[:, 1], c=ys,
           marker='+', label='Transp samples', s=30)
pl.title('Transported samples\nEm...
_____no_output_____
MIT
_downloads/b1ca754f39005f3188ba9b4423f688b0/plot_otda_d2.ipynb
FlopsKa/pythonot.github.io
Air Routes
The examples in this notebook demonstrate using the GremlinPython library to connect to and work with a Neptune instance. Using a Jupyter notebook in this way provides a nice way to interact with your Neptune graph database in a familiar and instantly productive environment.

Load the Air Routes dataset
When ...
%run '../util/neptune.py'
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Using the neptune module, we can clear any existing data from the database, and load the air routes graph:
neptune.clear()
neptune.bulkLoad('s3://aws-neptune-customer-samples-${AWS_REGION}/neptune-sagemaker/data/let-me-graph-that-for-you/01-air-routes/', interval=5)
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Establish access to our Neptune instance
Before we can work with our graph we need to establish a connection to it. This is done using the `DriverRemoteConnection` capability as defined by Apache TinkerPop and supported by GremlinPython. The `neptune.py` helper module facilitates creating this connection. Once this cell...
g = neptune.graphTraversal()
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Let's find out a bit about the graph
Let's start off with a simple query just to make sure our connection to Neptune is working. The queries below look at all of the vertices and edges in the graph and create two maps that show the demographic of the graph. As we are using the air routes data set, not surprisingly, the...
vertices = g.V().groupCount().by(T.label).toList()
edges = g.E().groupCount().by(T.label).toList()
print(vertices)
print(edges)
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Find routes longer than 8,400 miles
The query below finds routes in the graph that are longer than 8,400 miles. This is done by examining the `dist` property of the `routes` edges in the graph. Having found some edges that meet our criteria we sort them in descending order by distance. The `where` step filters out the ...
paths = g.V().hasLabel('airport').as_('a') \
        .outE('route').has('dist', gt(8400)) \
        .order().by('dist', Order.decr) \
        .inV() \
        .where(P.lt('a')).by('code') \
        .path().by('code').by('dist').by('code').toList()

for p in paths:
    print(p)
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Draw a bar chart that represents the routes we just found
One of the nice things about using Python to work with our graph is that we can take advantage of the larger Python ecosystem of libraries such as `matplotlib`, `numpy` and `pandas` to further analyze our data and represent it pictorially. So, now that we have ...
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

routes = list()
dist = list()

# Construct the x-axis labels by combining the airport pairs we found
# into strings with a "-" between them. We also build a list containing
# the distance valu...
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Explore the distribution of airports by continent
The next example queries the graph to find out how many airports are in each continent. The query starts by finding all vertices that are continents. Next, those vertices are grouped, which creates a map (or dict) whose keys are the continent descriptions and whose valu...
# Return a map where the keys are the continent names and the values are the
# number of airports in that continent.
m = g.V().hasLabel('continent') \
     .group().by('desc').by(__.out('contains').count()) \
     .order(Scope.local).by(Column.keys) \
     .next()

for c, n in m.items():
    print('%4d %s' %...
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Draw a pie chart representing the distribution by continent
Rather than return the results as text like we did above, it might be nicer to display them as percentages on a pie chart. That is what the code in the next cell does. Rather than return the descriptions of the continents (their names) this time our Gremlin qu...
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np

# Return a map where the keys are the continent codes and the values are the
# number of airports in that continent.
m = g.V().hasLabel('continent').group().by('code').by(__.out().count()).next()

fig, pie1 = plt.subplots()
pie1.pie(m.values() \
    ...
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Find some routes from London to San Jose and draw them
One of the nice things about connected graph data is that it lends itself nicely to visualization that people can get value from looking at. The Python `networkx` library makes it fairly easy to draw a graph. The next example takes advantage of this capability to d...
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import networkx as nx

# Find up to 15 routes from LHR to SJC that make one stop.
paths = g.V().has('airport','code','LHR') \
        .out().out().has('code','SJC').limit(15) \
        .pat...
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
PART 2 - Examples that use iPython Gremlin
This part of the notebook contains examples that use the iPython Gremlin Jupyter extension to work with a Neptune instance using Gremlin.

Configuring iPython Gremlin to work with Neptune
Before we can start to use iPython Gremlin we need to load the Jupyter Kernel extension an...
# Create a string containing the full Web Socket path to the endpoint.
# Replace <neptune-instance-name> with the name of your Neptune instance,
# which will be of the form myinstance.us-east-1.neptune.amazonaws.com
#neptune_endpoint = '<neptune-instance-name>'
import os
neptune_endpoint = os.environ['NEPTUNE_CLUSTER_E...
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Run this cell if you need to reload the Gremlin extension.
Occasionally it becomes necessary to reload the iPython Gremlin extension to make things work. Running this cell will do that for you.
# Re-load the iPython Gremlin Jupyter Kernel extension.
%reload_ext gremlin
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
A simple query to make sure we can connect to the graph. Find all the airports in England that are in London. Notice that when using iPython Gremlin you do not need to use a terminal step such as `next` or `toList` at the end of the query in order to get it to return results. As mentioned earlier in this post, the `%r...
%reset -f
%gremlin g.V().has('airport','region','GB-ENG') \
                .has('city','London').values('desc')
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
You can store the results of a query in a variable just as when using Gremlin Python.
The query below is the same as the previous one except that the results of running the query are stored in the variable 'places'. We can then work with that variable in our code.
%reset -f
places = %gremlin g.V().has('airport','region','GB-ENG') \
                  .has('city','London').values('desc')

for p in places:
    print(p)
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
Treating entire cells as Gremlin
Any cell that begins with `%%gremlin` tells iPython Gremlin to treat the entire cell as Gremlin. You cannot mix Python code into these cells.
%%gremlin
g.V().has('city','London').has('region','GB-ENG').count()
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/Let-Me-Graph-That-For-You/01-Air-Routes.ipynb
JanuaryThomas/amazon-neptune-samples
LOAD DESIRED MODEL
# load the chosen model (assumes the Keras load_model API; the original import cell is not shown)
from tensorflow.keras.models import load_model

model = load_model('./for_old22/reverse_MFCC_Dense_Classifier_l-3_u-512_e-1000_1588062326.h5')
# plot_model(model, to_file='reverse_MFCC_Dense_Classifier_model.png', show_shapes=True, show_layer_names=True)
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
LOAD TEST DATA
# read test dataset from csv
# librispeech
data5_unseen_10 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data5_unseen_10ms_R.csv')
data5_unseen_50 = pd.read_csv('D:/Users/MC/Documents/UNI/MASTER/thesis/MFCC_FEATURES2/reverse_Mel_scale/data5_unseen_50ms_R.csv')
data5_unseen_100 =...
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
GET TOTAL NUMBER OF FILES PER TYPE i.e. get number of entries per dataset (5,6) OR number of entries per IR-length (10,50,100,500,1000)
investigate_differences_between_datasets = 1  # else investigate between IR lengths

# aggregate all data
if investigate_differences_between_datasets:
    L5 = len(data5_unseen_10) + len(data5_unseen_50) + len(data5_unseen_100) + len(data5_unseen_500) + len(data5_unseen_1000)
    L6 = len(data6_unseen_10) + len(data6_uns...
number of music samples: 15800
number of speech samples: 16000
of which 10000 are from Librispeech and 6000 are from Musan
number of rows: 31800
random selection of rows:
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
PREPARING DATA
# dropping unnecessary columns and storing filenames elsewhere
fileNames = data['filename']
data = data.drop(['filename'], axis=1)

# function to reduce label resolution from every 9° to 4 quadrants
def reduce_Resolution(old_data):
    new_data = old_data.iloc[:, -1]
    new_label_list = pd.DataFrame(new_data)
    for i i...
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
MAKE PREDICTIONS AND EVALUATE
# make a prediction for each sample in X and evaluate the entire model to get an idea of the accuracy
predictions = model.predict(X)
final_predictions = np.argmax(predictions, axis=1)
test_loss, test_acc = model.evaluate(X, y)
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
COMPUTE SOME GENERAL STATISTICS
# method to get the difference between elements on a circular scale
def absolute_diff(int1, int2):
    m_min = min(int1, int2)
    m_max = max(int1, int2)
    diff1 = m_max - m_min
    diff2 = m_min + 40 - m_max
    return diff1 if diff1 <= 20 else diff2

## COMPUTE STATISTICS
labels = y
predictions = predictions
# check which errors ...
tolerated error is 27°
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
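A quick sanity check of the circular-difference helper above: on the 40-position scale, positions 1 and 39 are two steps apart across zero, and 20 steps is the largest possible distance.

print(absolute_diff(1, 39))   # 2: wraps around 0 instead of returning 38
print(absolute_diff(5, 25))   # 20: the maximum circular distance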
PLOT STATISTICS
# ERROR OCCURRENCE
x_as = np.array(range(21))
plt.bar(x_as, avg_occuring_errors)
plt.title('reverse model: average error occurrence on unseen data')
plt.ylabel('%')
plt.ylim([0, 0.5])
plt.xlabel('error [°]')
plt.xticks([0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20],
           [0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180])
save_fig_fi...
label: 16 predict: 24 error: 8
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
RANDOM TESTING
# types of errors
# iterate direction by direction and see what types of errors occur
index = 0
while y[index] == 0:
    ax = plt.subplot(111, projection='polar')
    ax.bar(theta, predictions[index, :], width=width, color='b', bottom=0.0, alpha=1)
    plt.show()
    index += 1

dirrection = 90
df = pd.DataFrame(data)...
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
ALL MISCLASSIFICATIONS
error_5 = 0
error_6 = 0
error_7 = 0
for i in range(len(indexes_of_misclassifications)):
    if 0 <= indexes_of_misclassifications[i] < L5:
        error_5 += 1
    elif L5 <= indexes_of_misclassifications[i] < L5 + L6:
        error_6 += 1
    elif L5 + L6 <= indexes_of_misclassifications[i] < L5 + L6 + L7:
        err...
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
GRAVE MISCLASSIFICATIONS i.e. error > 45°
error_5_G = 0
error_6_G = 0
error_7_G = 0
for i in range(len(indexes_of_grave_misclassifications)):
    if 0 <= indexes_of_grave_misclassifications[i] < L5:
        error_5_G += 1
    elif L5 <= indexes_of_grave_misclassifications[i] < L5 + L6:
        error_6_G += 1
    elif L5 + L6 <= indexes_of_grave_misclassificati...
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
ERROR PER DATASET
x_as = np.array(range(21))
plt.bar(x_as, avg_errors_5)
plt.title('reverse model: average error occurrence on unseen data5')
plt.ylabel('%')
plt.ylim([0, 1])
plt.xlabel('error [°]')
plt.xticks([0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20],
           [0, 18, 36, 54, 72, 90, 108, 126, 144, 162, 180])
save_fig_file_path = 'D:/Users/MC/Do...
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
ATTEMPT TO PLOT RADAR CHART
avg_correct = all_correct / sum_correct
x_as = np.array(range(40))

N = 40
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=True)
width = np.pi / 40
fig = plt.figure(dpi=120)
ax = fig.add_subplot(111, polar=True)
ax.plot(theta, avg_correct, '-', linewidth=2)...
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
WHAT FILES CAUSE ERRORS
libri_error = 0
gov_error = 0
for i in range(len(indexes_of_misclassifications)):
    if 0 <= indexes_of_misclassifications[i] < L5:
        pass  # dataset 5 range: not counted here
    elif L5 <= indexes_of_misclassifications[i] < L5 + L6:
        pass  # dataset 6 range: not counted here
    elif L5 + L6 <= indexes_of_misclassifications[i] < L5 + L6 + L7:
        if 'us-gov' in misclassific...
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
WHAT IR LENGTHS CAUSE ERRORS
L_10_error = 0
L_50_error = 0
L_100_error = 0
L_500_error = 0
L_1000_error = 0
for i in range(len(indexes_of_misclassifications)):
    if 0 <= indexes_of_misclassifications[i] < L_10:
        L_10_error += 1
    elif L_10 <= indexes_of_misclassifications[i] < L_10 + L_50:
        L_50_error += 1
    elif L_10 + L_50 ...
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL
TESTS ON DIFF IR LENGTHS
x_as = np.array(range(21))
plt.bar(x_as, avg_errors_10)
plt.ylim([0, 1]);

x_as = np.array(range(21))
plt.bar(x_as, avg_errors_50)
plt.ylim([0, 1]);

x_as = np.array(range(21))
plt.bar(x_as, avg_errors_100)
plt.ylim([0, 1]);

x_as = np.array(range(21))
plt.bar(x_as, avg_errors_500)
plt.ylim([0, 1]);

x_as = np.array(range(21))
plt...
_____no_output_____
MIT
Project4_NN/test_model_unseen_data_reverse.ipynb
Rendsnack/Thesis-SMSL