From the stats, the actual write time in TileDB took under 1 second (the rest was mostly spent parsing the CSV in pandas). The raw uncompressed CSV data was about 870 MB in binary format, which compressed down to about 131 MB in TileDB. There are 18 columns written as attributes, one of which is var-sized (of string ...
A = tiledb.open("taxi_dense_array")
print(A.schema)
ArraySchema( domain=Domain(*[ Dim(name='__tiledb_rows', domain=(0, 6405007), tile=100000, dtype='uint64'), ]), attrs=[ Attr(name='VendorID', dtype='float64', var=False, filters=FilterList([ZstdFilter(level=1), ])), Attr(name='tpep_pickup_datetime', dtype='datetime64[ns]', var=False, filters=FilterList...
MIT
tutorials/notebooks/python/dataframes/df_basics.ipynb
TileDB-Inc/TileDB-Examples
That shows the 18 columns being stored as attributes, along with their types and filters (e.g., zstd compression, which is the default). There is a single dimension `__tiledb_rows`, which corresponds to the row indices. This essentially means that you will be able to slice fast across the row indices. In order to see ...
print(A.nonempty_domain())
((array(0, dtype=uint64), array(6405007, dtype=uint64)),)
Let's reset the stats and perform a **full read** of the array (all rows and all columns). The result is stored directly in a pandas dataframe. Note that ranges with `df` are always *inclusive*.
%%time
tiledb.stats_reset()
df = A.df[0:6405007]
df
tiledb.stats_dump()
TileDB Embedded Version: (2, 1, 3) TileDB-Py Version: 0.7.4 ==== READ ==== - Number of read queries: 1 - Number of attempts until results are found: 1 - Number of attributes read: 18 * Number of fixed-sized attributes read: 17 * Number of var-sized attributes read: 1 - Number of dimensions read: 1 * Number of f...
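The inclusive-range convention differs from Python's half-open slices; a tiny pure-Python comparison (just arithmetic, not the TileDB API) makes the off-by-one explicit:

```python
def inclusive_count(lo, hi):
    """Number of rows returned by an inclusive range [lo, hi], as in A.df[lo:hi]."""
    return hi - lo + 1

# A.df[0:6405007] touches every row of the 6,405,008-row array...
print(inclusive_count(0, 6405007))     # 6405008
# ...whereas a Python slice [0:6405007] would stop one row short:
print(len(range(6405008)[0:6405007]))  # 6405007
```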
This operation fetches the entire array / dataframe from the disk, decompresses all tiles and creates a pandas dataframe with the result. The whole process takes 1.2 seconds in TileDB core (C++) and about 0.7 seconds on the Python wrapper side for buffer conversion. The stats are quite informative. They break down how...
%%time
df = A.df[0:999]
df
CPU times: user 19.1 ms, sys: 137 ms, total: 156 ms Wall time: 74.2 ms
Notice how much faster that operation was, taking only tens of milliseconds. Finally, you can slice any **subset of columns**, without first fetching all the columns into a pandas dataframe.
%%time
df = A.query(attrs=['tpep_dropoff_datetime', 'fare_amount']).df[0:6405007]
df
CPU times: user 423 ms, sys: 614 ms, total: 1.04 s Wall time: 176 ms
Once again, that operation was much faster than fetching the entire dataframe in main memory. The stats also inform you about how many attributes (i.e., columns) were retrieved, which is two in this example. Remember to close the array when you are done.
A.close()
The Sparse Case

Storing the dataframe as a 1D dense array allowed us to rapidly slice on row indexes. *But what if we wished to slice fast on predicates applied to column values*, such as dropoff time and fare amount? For such scenarios, and if you know for a fact that the majority of your workloads involve applying a ...
%%time
tiledb.stats_reset()
tiledb.from_csv("taxi_sparse_array", "yellow_tripdata_2020-01.csv",
                capacity=100000,
                sparse=True,
                index_col=['tpep_dropoff_datetime', 'fare_amount'],
                parse_dates=['tpep_dropoff_datetime', 'tpep_pickup_datetime'],
                ...
/opt/miniconda3/envs/tiledb/lib/python3.8/site-packages/IPython/core/magic.py:187: DtypeWarning: Columns (6) have mixed types.Specify dtype option on import or set low_memory=False. call = lambda f, *a, **k: f(*a, **k) /opt/miniconda3/envs/tiledb/lib/python3.8/site-packages/numpy/lib/arraysetops.py:580: FutureWarning...
Once again, most of the total ingestion time is spent parsing on the pandas side. Notice that the R-tree (which is 2D this time) is slightly larger, as it is the main indexing method in sparse arrays. It is still tiny, though, relative to the entire array size of ~100 MB. Note that you can choose **any** sub...
A = tiledb.open("taxi_sparse_array")
print(A.schema)
ArraySchema( domain=Domain(*[ Dim(name='tpep_dropoff_datetime', domain=(numpy.datetime64('2003-01-01T14:16:59.000000000'), numpy.datetime64('2021-01-02T01:25:01.000000000')), tile=1000 nanoseconds, dtype='datetime64[ns]'), Dim(name='fare_amount', domain=(-1238.0, 4265.0), tile=1000.0, dtype='float64'), ]), ...
Observe that now the array is sparse, having 16 attributes and 2 dimensions. Also notice that, by default, the array **allows duplicates**. This can be turned off by passing `allows_duplicates=False` in `from_csv`, which will return an error if the CSV contains rows with identical coordinates along the array dimensions...
A.nonempty_domain()
The first range corresponds to `tpep_dropoff_datetime` and the second to `fare_amount`. Now let's slice the whole array into a pandas dataframe.
%%time
tiledb.stats_reset()
df = A.query().df[:]
df
tiledb.stats_dump()
TileDB Embedded Version: (2, 1, 3) TileDB-Py Version: 0.7.4 ==== READ ==== - Number of read queries: 1 - Number of attempts until results are found: 1 - Number of attributes read: 16 * Number of fixed-sized attributes read: 15 * Number of var-sized attributes read: 1 - Number of dimensions read: 2 * Number of f...
Notice that this takes longer than the dense case. This is because the sparse case involves more advanced indexing and copying operations than dense. However, the real benefit of sparse dataframe modeling is the ability to **slice rapidly with range conditions on the indexed dimensions**, without having to fetch the en...
%%time
df = A.df[np.datetime64("2020-07-01"):np.datetime64("2020-10-01"), 5.5:12.5]
df
CPU times: user 14.7 ms, sys: 83.8 ms, total: 98.4 ms Wall time: 92.2 ms
This is truly rapid. In the dense case, you would have to load the whole dataframe in main memory and then slice using pandas. You can subset on attributes as follows.
%%time
df = A.query(attrs=['trip_distance']).df[:]
df
CPU times: user 1.65 s, sys: 798 ms, total: 2.45 s Wall time: 1.61 s
By default, TileDB also fetches the coordinate values and sets them as pandas indices. To disable this, you can run:
%%time
df = A.query(dims=False, attrs=['trip_distance']).df[:]
df
CPU times: user 787 ms, sys: 533 ms, total: 1.32 s Wall time: 655 ms
We can also subselect on dimensions:
%%time
df = A.query(dims=['tpep_dropoff_datetime'], attrs=['trip_distance']).df[:]
df
CPU times: user 822 ms, sys: 690 ms, total: 1.51 s Wall time: 662 ms
Finally, you can even choose attributes to act as dataframe indices using the `index_col` argument.
%%time
df = A.query(index_col=['trip_distance'], attrs=['passenger_count', 'trip_distance']).df[:]
df
CPU times: user 1.02 s, sys: 1.3 s, total: 2.32 s Wall time: 811 ms
For convenience, TileDB can also return dataframe results as an **Arrow Table** as follows:
%%time
df = A.query(return_arrow=True, index_col=['trip_distance'], attrs=['passenger_count', 'trip_distance']).df[:]
df
CPU times: user 1 s, sys: 972 ms, total: 1.97 s Wall time: 742 ms
Since we are done, we can close the array.
A.close()
Storing Pandas Dataframes in TileDB Arrays

You can also store a pandas dataframe you already created in main memory into a TileDB array. The following will create a new TileDB array and write the contents of a pandas dataframe.
# First read some data into a pandas dataframe
A = tiledb.open("taxi_sparse_array")
df = A.query(attrs=['passenger_count', 'trip_distance']).df[:]
df

# Create and write into a TileDB array
tiledb.from_pandas("sliced_taxi_sparse_array", df)
Let's inspect the schema.
A2 = tiledb.open("sliced_taxi_sparse_array")
A2.schema
Reading the array back:
A2.df[:]
Lastly, we close the opened arrays.
A.close()
A2.close()
Running SQL Queries

One of the cool things about TileDB is that it offers a powerful integration with [embedded MariaDB](https://docs.tiledb.com/main/solutions/tiledb-embedded/api-usage/embedded-sql). This allows executing arbitrary SQL queries directly on TileDB arrays (both dense and sparse). We took appropri...
import tiledb.sql, pandas as pd
db = tiledb.sql.connect()

%%time
pd.read_sql(sql="SELECT AVG(trip_distance) FROM taxi_dense_array WHERE __tiledb_rows >= 0 AND __tiledb_rows < 1000", con=db)

%%time
pd.read_sql(sql="SELECT AVG(trip_distance) FROM taxi_sparse_array WHERE tpep_dropoff_datetime <= '2019-07-31' AND fare_amoun...
CPU times: user 14.4 ms, sys: 106 ms, total: 121 ms Wall time: 47.6 ms
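To see the shape of such a query without the TileDB/MariaDB stack, here is the same aggregation sketched against an in-memory SQLite table (illustration only — the table name and data here are made up, and SQLite is not the embedded MariaDB the notebook uses):

```python
import sqlite3

# Build a toy table standing in for the taxi array: rows 0..1999,
# trip_distance cycling through 0.0 .. 9.0.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE taxi (row_idx INTEGER, trip_distance REAL)")
db.executemany("INSERT INTO taxi VALUES (?, ?)",
               [(i, float(i % 10)) for i in range(2000)])

# Same WHERE-on-row-range shape as the dense-array query above
(avg,) = db.execute(
    "SELECT AVG(trip_distance) FROM taxi "
    "WHERE row_idx >= 0 AND row_idx < 1000"
).fetchone()
print(avg)  # 4.5 (mean of 0..9 repeated 100 times)
```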
Other backends

So far we have explained how to store TileDB arrays on local disk. TileDB is optimized for [numerous storage backends](https://docs.tiledb.com/main/solutions/tiledb-embedded/backends), including AWS S3, Azure Blob Storage and more. The entire functionality shown above (including SQL queries with emb...
vfs = tiledb.VFS()
vfs.ls("taxi_sparse_array")
Or remove the arrays we created.
vfs.remove_dir("taxi_dense_array")
vfs.remove_dir("taxi_sparse_array")
vfs.remove_dir("sliced_taxi_sparse_array")
You can also remove the CSV file as follows.
vfs.remove_file('yellow_tripdata_2020-01.csv')
Here, we define the functions: expected cost, expected number of policies sold, and expected cost per policy sold. I assume the model for bids is exponential.
# expected cost, expected sold, expected cost per policy sold
def expectedsold(bid):  # expected number of policies sold
    for i in range(0, len(bid)):
        if bid[i] < 0:
            return 0
    e = 0
    for i in range(0, 35):  # 35 because click at 36 = 0, lambda will cause errors
        for r in range(1, 6):
            ...
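The truncated cells here combine click-through rates with the probability of landing in each ad position. Under the stated exponential assumption, that position distribution can be sketched on its own — the function name, the `rate` parameter, and the default of 4 competitors are illustrative, not taken from the notebook:

```python
import math

def position_probs(bid, rate, n_competitors=4):
    """P(our ad lands in position r), r = 1..n_competitors+1, assuming each
    competitor's bid is exponential with mean `rate`.  Position r means
    exactly r-1 competitors bid above us, so the count of competitors
    above us is Binomial(n_competitors, q) with q = P(one competitor outbids us)."""
    q = math.exp(-bid / rate)  # exponential tail: P(competitor bid > our bid)
    return [math.comb(n_competitors, r - 1)
            * q ** (r - 1) * (1 - q) ** (n_competitors - (r - 1))
            for r in range(1, n_competitors + 2)]

probs = position_probs(bid=10.0, rate=8.0)
print(sum(probs))  # 1.0 up to float error: positions 1..5 are exhaustive
```

Raising the bid shrinks `q`, shifting probability mass toward position 1, which is what drives the expected-sales and expected-cost trade-off being optimized below.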
MIT
Computing Bids/Computing Optimal Bids.ipynb
TXhadb/2021-May-Data-Science-Project
Some additional functions
# expected clicks
def expectedclick(bid):  # expected clicks
    e = 0
    for i in range(0, 35):
        for r in range(1, 6):
            if bid[i] < 0:
                e = e + 0
            else:
                e = e + click[i]*theta[r-1]*math.comb(4, r-1)*(np.exp(-bid[i]/competitionrate[i]))**(r-1)*(1-np.exp(-bid[i]/comp...
I compute the gradient of `expectedcost`.
# expected cost derivative
def grad_cost(bid):
    gradient = []
    for i in range(0, 35):
        k = 0
        for r in range(1, 6):
            if bid[i] < 0:
                k = k + 0
            else:
                k = k + click[i]*theta[r-1]*math.comb(4, r-1)*((np.exp(-bid[i]/competitionrate[i]))**(r-1)*(1-np.exp(-bi...
Here we optimize `costpersold()`.
# hessian is the zero matrix if needed
def cons_H(x, v):
    return np.zeros((35, 35))

# nonlinear constraint: the constraint function is bounded between 400 and 1000
nonlinear_constraint = NonlinearConstraint(constraint, 400, 1000)  # , hess=cons_H)

# linear constraint: each bid is between 1 and 50
lincon = LinearConstraint(np.ide...
3761.859313508475 399.9999999999999 9.404648283771191 22.76661467552185 157 True 33.96841100112909 99.99999999999997 0.339684110011291 45.92473888397217 269 True 644.9672914434103 199.99999999999997 3.224836457217052 61.344897985458374 345 True 1889.1481206645906 300.0 6.297160402215302 51.99087572097778 345 True 3761....
Next, we run gradient descent on `expectedcost`.
# code modified from https://stackabuse.com/gradient-descent-in-python-implementation-and-theory/
# Make threshold a -ve value if you want to run exactly
# max_iterations.
def gradient_descent(max_iterations, threshold, w_init,
                     obj_func, grad_func,
                     learning_rate=0.05, momentum=0.8)...
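The helper above is truncated; a minimal self-contained sketch of momentum gradient descent with the same signature (the body is my reconstruction of the standard technique, not the notebook's exact code, and here it works on a single scalar weight) is:

```python
def gradient_descent(max_iterations, threshold, w_init,
                     obj_func, grad_func,
                     learning_rate=0.05, momentum=0.8):
    """Momentum ("heavy ball") gradient descent on a scalar weight.
    Stops early once the step size drops below `threshold`."""
    w = w_init
    velocity = 0.0
    history = [w]
    for _ in range(max_iterations):
        # blend the previous step direction with the fresh gradient
        velocity = momentum * velocity - learning_rate * grad_func(w)
        w_new = w + velocity
        history.append(w_new)
        if abs(w_new - w) < threshold:
            w = w_new
            break
        w = w_new
    return w, history

# Minimize f(w) = (w - 3)^2, whose gradient is 2(w - 3)
w_star, _ = gradient_descent(500, 1e-10, 0.0,
                             lambda w: (w - 3) ** 2,
                             lambda w: 2 * (w - 3))
print(w_star)  # close to 3.0
```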
Now I find the optimized bids assuming the model is uniformly distributed. First, the functions.
def expectedsolduni(bid):  # expected number of policies sold
    for i in range(0, len(bid)):
        if bid[i] < 0:
            return 0
    e = 0
    for i in range(0, 35):  # 35 because click at 36 = 0, lambda will cause errors
        for r in range(1, 6):
            if bid[i] < 0:
                e = e + 0
            else...
Load the training loss traces.
# you will find the training runs in ./Runs/
# these training traces are just csv's exported from tensorboard,
# they are exported to make a figure for the manuscript -- you can just look at the runs in tensorboard
run_csv_folder = 'example_data/training_traces/'
files = os.listdir(run_csv_folder)
files_txt = [i for ...
Wall time Step Value 0 1.574440e+09 0 5138.962891 1 1.574440e+09 1 4729.284668 2 1.574440e+09 2 4470.230469 3 1.574440e+09 3 5048.220703 4 1.574440e+09 4 5750.927246 .. ... ... ... 665 1.574443e+09 133000 894.225952 666 1.5...
Apache-2.0
analysis/004_Plot_network_training.ipynb
chrelli/3DDD_social_mouse_tracker
Plot the training and validation loss
import matplotlib

# Use Liberation Sans as the default sans-serif font
matplotlib.rcParams['font.sans-serif'] = "Liberation Sans"
# Then, "ALWAYS use sans-serif fonts"
matplotlib.rcParams['font.family'] = "sans-serif"
matplotlib.rc('font', family='sans-serif')
matplotlib.rc('text', usetex='false')
matplotlib.rcParams...
134
Introduction to cuML
!nvidia-smi
!wget -nc https://github.com/rapidsai/notebooks-extended/raw/master/utils/rapids-colab.sh
!bash rapids-colab.sh

import sys, os
sys.path.append('/usr/local/lib/python3.6/site-packages/')
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cud...
--2019-09-11 19:48:17-- https://github.com/rapidsai/notebooks-extended/raw/master/utils/rapids-colab.sh Resolving github.com (github.com)... 140.82.118.3 Connecting to github.com (github.com)|140.82.118.3|:443... connected. HTTP request sent, awaiting response... 301 Moved Permanently Location: https://github.com/rapi...
Apache-2.0
rapids/rapids-cuDF-cuml-01.ipynb
martin-fabbri/colab-notebooks
Required Imports
import cudf
import pandas as pd
import numpy as np
import math
from math import cos, sin, asin, sqrt, pi, atan2
from numba import cuda
import time
import os
import matplotlib.pyplot as plt
import sklearn
from sklearn.linear_model import LinearRegression
import cuml
from cuml.linear_model import LinearRegression as Line...
NumPy Version: 1.16.5 Scikit-learn Version: 0.21.3 cuDF Version: 0.10.0a+1233.gf8e8353 cuML Version: 0.10.0a+456.gb96498b
Scikit-Learn Linear Regression: y = 2.0 * x + 1.0
n_rows = 1000000
w = 2.0
x = np.random.normal(loc=0, scale=2, size=(n_rows,))
b = 1.0
y = w * x + b
noise = np.random.normal(loc=0, scale=2, size=(n_rows,))
y_noisy = y + noise
y_noisy[:5]

plt.scatter(x, y_noisy, label='empirical data points')
plt.plot(x, y, color='black', label='true relationship')
plt.legend()

%%t...
CPU times: user 29.8 ms, sys: 0 ns, total: 29.8 ms Wall time: 28.8 ms
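Under the hood, both scikit-learn and cuML solve the same least-squares problem; for a single feature the closed form is slope = cov(x, y) / var(x). A dependency-free sketch on noiseless points (helper name is mine, not a library API) recovers the notebook's true relationship exactly:

```python
def fit_line(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [2.0 * x + 1.0 for x in xs]  # the notebook's y = 2.0 * x + 1.0
print(fit_line(xs, ys))  # (2.0, 1.0)
```

With the Gaussian noise added above, the fitted coefficients would approach (2.0, 1.0) only as n grows.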
Create new data and perform inference
inputs = np.linspace(start=-5, stop=5, num=1000000)
outputs = linear_regression.predict(np.expand_dims(inputs, 1))
Let's now visualize our empirical data points
plt.scatter(x, y_noisy, label='empirical data points')
plt.plot(x, y, color='black', label='true relationship')
plt.plot(inputs, outputs, color='red', label='predicted relationship (cpu)')
plt.legend()

df = cudf.DataFrame({'x': x, 'y': y_noisy})
df.head(5)

%%time
# instantiate and fit model
linear_regression_gpu = Linear...
Dependencies
import pandas as pd
import psycopg2
MIT
Models/Prepare Data For Predictive Modeling.ipynb
mbrady4/ClinicalTrialFinder-DS
Connect to Database
dbname = 'aact'
user = 'postgres'
password = 'lqt38be'
host = 'localhost'
conn = psycopg2.connect(dbname=dbname, user=user, password=password, host=host)
curs = conn.cursor()

# Verifying Connection
query = """SELECT COUNT(*) FROM ctgov.studies;"""
curs.execute(query)
curs.fetchall()
Load Studies Table
query = 'SELECT * FROM ctgov.studies'
studies = pd.read_sql(sql=query, con=conn)
studies.shape
Split into Pred, Test, Val, and Train Sets
studies['overall_status'].value_counts()

active_status = ['Recruiting', 'Active, not recruiting', 'Not yet recruiting',
                 'Enrolling by invitation', 'Available', 'Approved for marketing']
pred_set = studies[studies['overall_status'].isin(active_status)]
pred_set.shape

inactive_status = ['Completed', '...
McDonald's - Nutritional Facts
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import thinkplot
import thinkstats2

result = pd.read_csv('menu.csv')
MIT
src/New S&P Project.ipynb
lucky135/McDonalds-Nutritional-Facts
Analyzing Data Frame
result.head()
result.tail()
result.describe()
result.info()
print("Columns in the data frame : ", result.columns)
print("Shape : ", result.shape)
result.isnull().any()
Hence there are no null values in the data. Let's study how many calories each food category contains, helping health-conscious people select the perfect combination :-)
# Getting total number of calories for each food item in a separate column
result['Total Calories'] = result['Calories'] + result['Calories from Fat']
result['Total Calories'].head()

# Rounding off the calories to the nearest hundred so that the data can be handled and analyzed easily
def roundOff(x):
    if x == 0:
        x = 0
    ...
The graph above shows that food items in the 'Breakfast', 'Chicken & Fish' and 'Smoothies and Shakes' categories contain the maximum amount of calories, whereas 'Snacks & Sides', 'Salads' and 'Beef & Pork' are moderate, while the lowest calories are in the food items that cover th...
pmf = thinkstats2.Pmf(result['Estimated Calories'])
pmf
plt.figure(figsize=(14, 9))
thinkplot.Pmf(pmf, color='green')
From the graph we can see that most of the food items contain about 500 calories, with the range mostly extending between 250 and 1000. Very few food items have more than 1200 calories. Next we analyze the data on the basis of the sugar present in the food items; sugar-free components have been removed.
# Removing the sugar-free food items
sugar_plot = pd.DataFrame(result[result['Sugars'] != 0])
sugar_plot.head()

# Rounding sugar content to the nearest multiple of 5
def roundSugar(x):
    if x < 5:
        x = 5
    else:
        check = x % 5
        if check == 0:
            x = x
        elif check < 3:
            x = x - check
        el...
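The rounding helper above is cut off; from the visible branches it appears to round to the nearest multiple of 5 with a floor of 5 (remainders 1-2 round down, 3-4 round up). A complete sketch of that logic — my reconstruction, not the original cell — is:

```python
def round_to_five(x):
    """Round to the nearest multiple of 5, never below 5: remainders of
    1-2 round down, 3-4 round up (reconstructed from the truncated cell)."""
    if x < 5:
        return 5
    check = x % 5
    if check == 0:
        return x
    if check < 3:
        return x - check
    return x + 5 - check

print([round_to_five(v) for v in (3, 7, 8, 12, 15)])  # [5, 5, 10, 10, 15]
```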
After removing the sugar-free food items, we can see that about 25% of the food items contain about 5 g of sugar. That seems pretty good considering the amount of calories we take in at McDonald's. Now, people who go to the gym need loads of protein, so let's see if they can have a great time at McDonald's an...
# Rounding the protein content to the nearest multiple of 5
def roundProtein(x):
    if x < 5:
        x = 5
    else:
        check = x % 5
        if check == 0:
            x = x
        elif check < 3:
            x = x - check
        else:
            x = x + 5 - check
    return x

result['Protein'] = result['Protein'].apply(roundProtein)
re...
Most of the food items contain 0 to 20 grams of protein. That's not satisfactory considering the amount of calories they provide, but a few food items, mainly fish and meat, give more protein than the others and are preferred. According to a study by the World Health Organisation, 1500 mg of sodium intake is ...
plot = sns.swarmplot(x="Category", y="Sodium", data=result)
plt.setp(plot.get_xticklabels(), rotation=45)
plt.title("Sodium Intake")
plt.show()
As seen in the graph, overall the maximum sodium is consumed by customers during breakfast. Let's see which meals should be avoided occasionally and which meals can be preferred.
x = result[result['Category'] == 'Breakfast']
x

print('List of food items with high sodium intake consumed during breakfast: ')
x[x['Sodium'] > 1500]['Item']

print('List of food items with moderate to low Sodium intake: ')
x[x['Sodium'] <= 1500]['Item']
List of food items with moderate to low Sodium intake:
Analysing the healthy nutritional facts of the menu
health = ['Dietary Fiber', 'Iron (% Daily Value)', 'Vitamin A (% Daily Value)',
          'Vitamin C (% Daily Value)', 'Calcium (% Daily Value)']
for x in health:
    sns.barplot(x='Category', y=x, data=result)
    plt.xticks(rotation=90)
    plt.show()

print("Item with high Dietary Fiber: ", result.Item[result['Dietary Fiber']].max())
p...
Loading neurons from s3
import napari
%gui qt5
import brainlit
from brainlit.utils.ngl_pipeline import NeuroglancerSession
from brainlit.viz.swc import *
import numpy as np
from skimage import io
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/python_jsonschema_objects/__init__.py:53: UserWarning: Schema version http://json-schema.org/draft-04/schema not recognized. Some keywords and features may not be supported. self.schema["$schema"]
Apache-2.0
docs/notebooks/visualization/loading.ipynb
Jwright707/brainl
Loading entire neuron from AWS

`napari.components.viewer_model.ViewerModel.add_swc` does this via the following functions in the `napari.layers.swc.swc` module:
1. `swc.read_s3` to read the s3 file into a pd.DataFrame
2. `swc.read_swc` to read the swc file into a pd.DataFrame
3. `swc.generate_df_subset` creates a smaller...
s3_path = "s3://open-neurodata/brainlit/brain1_segments"
seg_id = 2
mip = 1
df = read_s3(s3_path, seg_id, mip)
df.head()
Downloading: 100%|██████████| 1/1 [00:00<00:00, 16.27it/s]
2. `swc.read_swc`

This function parses the swc file into a pd.DataFrame. Each row is a vertex in the swc file with the following information:
- `sample number`
- `structure identifier`
- `x coordinate`
- `y coordinate`
- `z coordinate`
- `radius of dendrite`
- `sample number of parent`

The coordinates are given in spatial units of micromet...
consen_neuron_path = '2018-08-01_G-002_consensus.swc'
df = read_swc(swc_path=consen_neuron_path)
df.head()
3. `generate_df_subset`

This function parses the swc file into a pd.DataFrame. Each row is a vertex in the swc file with the following information:
- `sample number`
- `structure identifier`
- `x coordinate`
- `y coordinate`
- `z coordinate`
- `radius of dendrite`
- `sample number of parent`

The coordinates are given in the same spatial units ...
# Choose vertices to use for the subneuron
subneuron_df = df[0:3]
vertex_list = subneuron_df['sample'].array

# Define a neuroglancer session
url = "s3://open-neurodata/brainlit/brain1"
mip = 1
ngl = NeuroglancerSession(url, mip=mip)

# Get vertices
seg_id = 2
buffer = [10, 10, 10]
img, bounds, vox_in_img_list = ngl....
Downloading: 100%|██████████| 1/1 [00:00<00:00, 42.07it/s] Downloading: 100%|██████████| 1/1 [00:00<00:00, 42.80it/s] Downloading: 100%|██████████| 1/1 [00:00<00:00, 37.35it/s] Downloading: 0%| | 0/1 [00:00<?, ?it/s] Downloading: 0%| | 0/1 [00:00<?, ?it/s] Downloading: 0%| | 0/1 [00:00<...
4. `swc_to_voxel`

If we want to overlay the swc file with a corresponding image, we need to make sure that they are in the same coordinate space. Because an image is an array of voxels, it makes sense to convert the vertices in the dataframe from spatial units into voxel units. Given the `spacing` (spatial units/voxel) ...
spacing = np.array([0.29875923, 0.3044159, 0.98840415])
origin = np.array([70093.276, 15071.596, 29306.737])
df_voxel = swc_to_voxel(df=df, spacing=spacing, origin=origin)
df_voxel.head()
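The conversion itself is elementwise arithmetic: voxel = round((spatial − origin) / spacing). A minimal dependency-free sketch of one point (the real `swc_to_voxel` operates on whole DataFrame columns; the helper name here is mine):

```python
def to_voxel(point, spacing, origin):
    """Convert one (x, y, z) point from spatial units to voxel indices."""
    return tuple(round((p - o) / s) for p, o, s in zip(point, origin, spacing))

spacing = (0.29875923, 0.3044159, 0.98840415)
origin = (70093.276, 15071.596, 29306.737)

# The origin itself maps to voxel (0, 0, 0):
print(to_voxel(origin, spacing, origin))  # (0, 0, 0)
```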
5. `df_to_graph`

A neuron is a graph with no cycles (a tree). While napari does not support displaying graph objects, it can display multiple paths. The DataFrame already contains all the possible edges in the neuron. Each row in the DataFrame is an edge. For example, from the above we can see that `sample 2` has `paren...
G = df_to_graph(df)
print('Number of nodes:', len(G.nodes))
print('Number of edges:', len(G.edges))
print('\n')
print('Sample 1 coordinates (x,y,z)')
print(G.nodes[1]['x'], G.nodes[1]['y'], G.nodes[1]['z'])
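Building the graph from the parent column is straightforward: every row contributes the edge (parent, sample), except the root (conventionally parent = -1 in swc files). A toy sketch of the idea — the helper and data are illustrative, not the library's code:

```python
def edges_from_parents(rows):
    """rows: iterable of (sample, parent) pairs; parent == -1 marks the root.
    Returns the tree's edge list -- a sketch of what df_to_graph builds."""
    return [(parent, sample) for sample, parent in rows if parent != -1]

# Toy swc fragment: sample 1 is the root, 2 and 3 hang off it, 4 off 3.
rows = [(1, -1), (2, 1), (3, 1), (4, 3)]
edges = edges_from_parents(rows)
print(len(edges))  # 3: a tree on n nodes has n - 1 edges
```

This matches the printout above: 1650 nodes and 1649 edges.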
Number of nodes: 1650 Number of edges: 1649 Sample 1 coordinates (x,y,z) 4713 4470 3857
6. `graph_to_paths`

This function takes in a graph and returns a list of non-overlapping paths. The union of the paths forms the graph. The algorithm works by:
1. Find the longest path in the graph ([networkx.algorithms.dag.dag_longest_path](https://networkx.github.io/documentation/stable/reference/algorithms/generated/netwo...
paths = graph_to_paths(G=G)
print(f"The graph was decomposed into {len(paths)} paths")
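`graph_to_paths` peels off longest paths; a simpler scheme that illustrates the same idea of covering a tree with paths is one root-to-leaf path per leaf. This sketch is my own simplified variant, not the library's algorithm:

```python
def leaf_paths(children, root):
    """Decompose a tree (dict: node -> list of children) into root-to-leaf
    paths.  Unlike graph_to_paths' longest-path peeling the paths here
    overlap near the root, but their union still covers every edge."""
    paths, stack = [], [(root, [root])]
    while stack:
        node, path = stack.pop()
        kids = children.get(node, [])
        if not kids:                 # leaf: the accumulated path is complete
            paths.append(path)
        for kid in kids:
            stack.append((kid, path + [kid]))
    return paths

tree = {1: [2, 3], 3: [4, 5]}        # leaves: 2, 4, 5
for p in leaf_paths(tree, 1):
    print(p)
```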
The graph was decomposed into 179 paths
7. `ViewerModel.add_shapes`

napari displays "layers". The most common layer is the image layer. In order to display the neuron, we use `path` from the [shapes](https://napari.org/tutorials/shapes) layer.
viewer = napari.Viewer(ndisplay=3)
viewer.add_shapes(data=paths, shape_type='path', edge_color='white', name='Skeleton 2')
Loading sub-neuron

The image of the entire brain has dimensions of (33792, 25600, 13312) voxels. G-002 spans a sub-image of (7386, 9932, 5383) voxels. Both are too big to load in napari and overlay the neuron. To circumvent this, we can crop out a smaller region of the neuron, load the sub-neuron, and load the correspon...
# Create an NGL session to get the bounding box
ngl_sess = NeuroglancerSession(mip=1)
img, bbbox, vox = ngl_sess.pull_chunk(2, 300, 1, 1, 1)
bbox = bbbox.to_list()
box = (bbox[:3], bbox[3:])
print(box)

G_sub = get_sub_neuron(G, box)
paths_sub = graph_to_paths(G_sub)
viewer = napari.Viewer(ndisplay=3)
viewer.add_shape...
1D Plug Flow Reactor Model with Surface Chemistry

In this model, we will illustrate the derivation of the governing differential equations and algebraic constraints, calculation of the initial conditions of the variables and their spatial derivatives and use the [scikits.odes.dae](http://scikits-odes.readthedocs.io/en...
from __future__ import print_function, division
import numpy as np
from scikits.odes import dae
import cantera as ct
import matplotlib.pyplot as plt
%matplotlib inline

print('Running Cantera version: ' + ct.__version__)
Running Cantera version: 2.4.0
BSD-3-Clause
reactors/1D_pfr_surfchem.ipynb
santoshshanbhogue/cantera-jupyter
Define gas species, bulk species, surface species and the interface

Here, we use a kinetic mechanism for the chemical vapor deposition of silicon nitride (Si3N4) from SiF4 and NH3. The mechanism, with 25 gas species, 6 surface species and 2 bulk species, is from [Richard S. Larson et al. 1996, SAND96-8211](https://gi...
# import the SiF4 + NH3 reaction mechanism
mech = 'data/SiF4_NH3_mec.cti'

# import the models for gas and bulk
gas, bulk_Si, bulk_N = ct.import_phases(mech, ['gas', 'SiBulk', 'NBulk'])

# import the model for the gas-Si-N interface
gas_Si_N_interface = ct.Interface(mech, 'SI3N4', [gas, bulk_Si, bulk_N])
Case 1: isothermal reactor

Define reactor conditions: temperature, pressure, fuel, and some important parameters.
T0 = 1713                      # Kelvin
p0 = 2 * ct.one_atm / 760.0    # Pa ~2 Torr
gas.TPX = T0, p0, "NH3:6, SiF4:1"
bulk_Si.TP = T0, p0
bulk_N.TP = T0, p0
gas_Si_N_interface.TP = T0, p0

D = 5.08e-2            # diameter of the tube [m]
Ac = np.pi * D**2 / 4  # cross section of the tube [m^2]
mu = 5.7e-5            # kg/(m-s) dynamic viscosity
perim = np.pi * D...
Define a residual function for the IDA solver

For the isothermal tube with laminar flow, since the temperature of the flow and tube is constant, the energy conservation equation can be ignored. The governing equations include conservation of mass and species, the momentum equation, the equation of state, and the algebraic constra...
%%latex
\begin{align}
R[0] &= u\frac{d\rho}{dz} + \rho\frac{du}{dz} - \frac{p'}{A_c}\sum^{K_g}\dot{s}_{k,g}W_{k,g} \\
R[1] &= \rho u A_c\frac{dY_k}{dz} + Y_k p'\sum^{K_g}\dot{s}_{k,g}W_{k,g} - \dot{\omega_k}W_kA_c - \dot{s}_{k,g}W_{k,g} p' \\
R[2] &= 2\rho u \frac{du}{dz} + u^2\frac{d\rho}{dz} + \frac{dP}{d...
The detailed derivation of the DAE system can be found in [my report](https://github.com/yuj056/yuj056.github.io/blob/master/Week1/yuj056_github_io.pdf).
def residual(z, vec, vecp, result): """ we create the residual equations for the problem vec = [u, rho, Yk, p, Zk] vecp = [dudz, drhodz, dYkdz, dpdz, dZkdz] """ # temporary variables u = vec[0] # velocity rho = vec[1] # density Y = vec[2:2+N] # vector of mass fractions of all ...
_____no_output_____
BSD-3-Clause
reactors/1D_pfr_surfchem.ipynb
santoshshanbhogue/cantera-jupyter
Determine the initial values of the spatial derivatives of the unknowns, which are needed as initial conditions for the IDA solver The following linear system is solved with [np.linalg.solve](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.solve.html), a dense linear solver, to ...
%%latex \begin{align} u_0\rho_0' + \rho_0 u_0' - \frac{p'}{A_c}\sum^{K_g}\dot{s}_{k,g}W_{k,g} &= 0\\ \rho_0 u_0 A_c Y_{k,0}' + Y_{k,0} p'\sum^{K_g}\dot{s}_{k,g}W_{k,g} - \dot{\omega_k}W_kA_c - \dot{s}_{k,g}W_{k,g} p' &=0 \\ 2\rho_0 u_0 u_0' + u_0^2\rho_0' + P_0' + \frac{32u_0 \mu}{D^2} &=0\\ -RT\rho_0' ...
_____no_output_____
BSD-3-Clause
reactors/1D_pfr_surfchem.ipynb
santoshshanbhogue/cantera-jupyter
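As a toy illustration of this step (with a hypothetical 2×2 system standing in for the actual coefficient matrix of $[u', \rho', Y_k', P']$), `np.linalg.solve(a, b)` returns the vector `x` satisfying `a @ x = b`:

```python
import numpy as np

# Toy system standing in for the initial-derivative equations:
#   2*x0 + 1*x1 = 5
#   1*x0 + 3*x1 = 10
a = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

vecp0 = np.linalg.solve(a, b)      # initial derivative vector
assert np.allclose(a @ vecp0, b)   # residual of the linear system is ~0
print(vecp0)                       # [1. 3.]
```

In the notebook, the solution vector plays the role of `vecp` at `z = 0` for the IDA solver.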
We assume the spatial derivatives of the site fractions are zero; since the site fractions are algebraic variables, these initial derivative guesses are not critical for the IDA solver.
######## Solve linear system for the initial vecp ########### """ a = coefficient of [u', rho', Yk', P'] b = RHS constant of each conservation equations """ rho0 = gas.density # initial density of the flow u0 = 11.53 # m/s initial velocity of the flow W = gas.molecular_weights W_avg = gas.mean_molecular_weight ...
_____no_output_____
BSD-3-Clause
reactors/1D_pfr_surfchem.ipynb
santoshshanbhogue/cantera-jupyter
Run the IDA solver to compute the unknowns as they vary along the flow direction
solver = dae( 'ida', residual, first_step_size=1e-16, atol=1e-8, # absolute tolerance for solution rtol=1e-8, # relative tolerance for solution # If the given problem is of type DAE, some items of the residual vector # returned by the 'resfn' have to be treated as algebraic equations, and...
_____no_output_____
BSD-3-Clause
reactors/1D_pfr_surfchem.ipynb
santoshshanbhogue/cantera-jupyter
Plot the results
# plot velocity of gas along the flow direction f, ax = plt.subplots(3,2, figsize=(9,9), dpi=96) ax[0,0].plot(times, solution.values.y[:,0], color='C0') ax[0,0].set_xlabel('Distance (m)') ax[0,0].set_ylabel('Velocity (m/s)') # plot gas density along the flow direction ax[0,1].plot(times, solution.values.y[:,1], color=...
_____no_output_____
BSD-3-Clause
reactors/1D_pfr_surfchem.ipynb
santoshshanbhogue/cantera-jupyter
Case 2: Adiabatic reactor Since isothermal reactors are rarely encountered in practice, the model is extended to an adiabatic reactor for more realistic use. Here, the energy balance equation is also solved. The heat flow rate into the system has two components. One is due to the heat flux $q_e$ from the surroundi...
%%latex \begin{align} \rho u A_c c_p \frac{dT}{dz} +A_c \sum_{K_g}\dot{\omega}_k W_k h_k + p'\sum_{K_g}h_k\dot{s}_k W_k &= a_eq_e - p'\sum^{K_b}_{bulk}\dot{\omega}_kh_k\\&=p'q_i + p'\sum^{K_g}_{gas}\dot{s_k}W_kh_k \end{align}
_____no_output_____
BSD-3-Clause
reactors/1D_pfr_surfchem.ipynb
santoshshanbhogue/cantera-jupyter
Since the adiabatic reactor is considered, $q_e = 0$. Similar to the procedure for the isothermal reactor model, add the energy equation into the residual function and calculate the initial value of the spatial derivative of the temperature.
############################### initial conditions ################################################################## # import the SiF4 + NH3 reaction mechanism mech = 'data/SiF4_NH3_mec.cti' # import the models for gas and bulk gas, bulk_Si, bulk_N = ct.import_phases(mech,['gas','SiBulk','NBulk']) # import the model ...
_____no_output_____
BSD-3-Clause
reactors/1D_pfr_surfchem.ipynb
santoshshanbhogue/cantera-jupyter
Lab Exercise 2 for SCB Errors Ex 1-3 * Try to read `absrelerror()` in `measureErrors.py` and use it for the exercises Ex. 4 Round-off errors
# Generate a random nxn matrix and compute A^{-1}*A which should be I analytically def testErrA(n = 10): A = np.random.rand(n,n) Icomp = np.matmul(np.linalg.inv(A),A) Iexact = np.eye(n) absrelerror(Iexact, Icomp)
_____no_output_____
MIT
Lab/L2/Lab2.ipynb
enigne/ScientificComputingBridging
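`absrelerror()` itself is defined in `measureErrors.py`, which is not shown in this excerpt; a minimal sketch consistent with the errors printed by `testErrA()` (the real helper also prints a banner) might be:

```python
import numpy as np

def absrelerror(exact, computed):
    """Print the absolute and relative error (Frobenius norm) between
    an exact and a computed array. Sketch of the measureErrors.py helper."""
    exact = np.asarray(exact, dtype=float)
    computed = np.asarray(computed, dtype=float)
    abserr = np.linalg.norm(exact - computed)
    relerr = abserr / np.linalg.norm(exact)
    print('Absolute error:', abserr)
    print('Relative error:', relerr)
    return abserr, relerr

# Example: identity vs. a slightly perturbed identity
abserr, relerr = absrelerror(np.eye(3), np.eye(3) + 1e-14)
```

Note that for $A = I_n$, the relative error is the absolute error divided by $\|I_n\|_F = \sqrt{n}$, which matches the ratio between the two printed values in the runs below.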
Random matrix $A$ with size $n=10$
testErrA()
*----------------------------------------------------------* This program illustrates the absolute and relative error. *----------------------------------------------------------* Absolute error: 3.6893631945840935e-14 Relative error: 1.1666790810480723e-14
MIT
Lab/L2/Lab2.ipynb
enigne/ScientificComputingBridging
$n=100$
testErrA(100)
*----------------------------------------------------------* This program illustrates the absolute and relative error. *----------------------------------------------------------* Absolute error: 1.1445778429691323e-12 Relative error: 1.1445778429691323e-13
MIT
Lab/L2/Lab2.ipynb
enigne/ScientificComputingBridging
$n=1000$
testErrA(1000)
*----------------------------------------------------------* This program illustrates the absolute and relative error. *----------------------------------------------------------* Absolute error: 6.045719583144339e-11 Relative error: 1.911824397741983e-12
MIT
Lab/L2/Lab2.ipynb
enigne/ScientificComputingBridging
**Note**: The execution time grows almost linearly with $n$ for the sizes above, but for $n=10000$ it will take a much longer time. Ex. 5 Discretization Errors Program that illustrates the concept of discretization. Replacing continuous with discrete, i.e. representing a continuous function on an interval with a finite number of poin...
h = 0.1
_____no_output_____
MIT
Lab/L2/Lab2.ipynb
enigne/ScientificComputingBridging
Discretize and compute the numerical derivatives. Here, the derivative `f'(x)` is computed at a finite number of points on an interval.
# The exact solution N = 400 l = 0 u = 2 x = np.linspace(l, u, N) f_exa = np.exp(x) # check if h is too large or too small if h > 1 or h < 1e-5: h = 0.5 # compute the numerical derivatives xh = np.linspace(l, u, int(abs(u-l)/h)) fprimF = ForwardDiff(np.exp, xh, h);
_____no_output_____
MIT
Lab/L2/Lab2.ipynb
enigne/ScientificComputingBridging
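`ForwardDiff` is imported from the lab's helper module, which is not included in this excerpt; a minimal sketch matching its call signature `ForwardDiff(f, x, h)` is:

```python
import numpy as np

def ForwardDiff(f, x, h):
    """Forward-difference quotient f'(x) ~ (f(x + h) - f(x)) / h.
    Works elementwise for array-valued x or h. Sketch of the lab helper."""
    return (f(x + h) - f(x)) / h

# The truncation error is O(h): halving h roughly halves the error
# of d/dx e^x evaluated at x = 1
err1 = abs(ForwardDiff(np.exp, 1.0, 1e-3) - np.e)
err2 = abs(ForwardDiff(np.exp, 1.0, 5e-4) - np.e)
print(err2 / err1)   # close to 0.5
```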
Use `matplotlib` to visualize the results. Check [https://matplotlib.org/](https://matplotlib.org/) for more features; it is really powerful!
# Plot fig, ax = plt.subplots(1) ax.plot(x, f_exa, color='blue') ax.plot(xh, fprimF, 'ro', clip_on=False) ax.set_xlim([0,2]) ax.set_ylim([1,max(fprimF)]) ax.set_xlabel(r'$x$') ax.set_ylabel('Derivatives') ax.set_title('Discretization Errors') ax.legend(['Exact Derivatives','Calculated Derivatives']) if saveFigure: ...
_____no_output_____
MIT
Lab/L2/Lab2.ipynb
enigne/ScientificComputingBridging
Computer Arithmetic Machine limits for floating-point types can be queried with `np.finfo(float)`
print('machine epsilon in python is: ' + str(np.finfo(float).eps))
machine epsilon in python is: 2.220446049250313e-16
MIT
Lab/L2/Lab2.ipynb
enigne/ScientificComputingBridging
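A quick sanity check of what machine epsilon means: it is the gap between `1.0` and the next representable double, so adding half of it to `1.0` rounds back to exactly `1.0` under round-to-nearest-even:

```python
import numpy as np

eps = np.finfo(float).eps        # 2**-52 for IEEE 754 double precision
print(eps == 2.0**-52)           # True

# 1 + eps is the next representable number after 1.0 ...
print(1.0 + eps > 1.0)           # True
# ... but 1 + eps/2 rounds back to exactly 1.0
print(1.0 + eps / 2 == 1.0)      # True
```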
The overflow limit in Python is given by `np.finfo(float).max` and the underflow limit by `np.finfo(float).tiny`
print('The largest real number in python is: ' + str(np.finfo(float).max)) print('The smallest positive real number in python is: ' + str(np.finfo(float).tiny))
The largest real number in python is: 1.7976931348623157e+308 The smallest positive real number in python is: 2.2250738585072014e-308
MIT
Lab/L2/Lab2.ipynb
enigne/ScientificComputingBridging
Other attributes of `finfo` can be found [here](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.finfo.html) Computation of the derivative The function $f(x) = e^x$ at $x=1$ is used as the test function.* forward difference: $\displaystyle{f'(x)\approx\frac{f(x+h)-f(x)}{h}}$* central difference: $\display...
# choose h from 0.1 to 10^-t, t>=2 t = 15 hx = 10**np.linspace(-1,-t, 30)
_____no_output_____
MIT
Lab/L2/Lab2.ipynb
enigne/ScientificComputingBridging
Compute the numerical derivatives using the three different schemes
# The exact derivative at x=1 x0 = 1 fprimExact = np.exp(1) # Numerical derivative using the three methods fprimF = ForwardDiff(np.exp, x0, hx) fprimC = CentralDiff(np.exp, x0, hx) fprim5 = FivePointsDiff(np.exp, x0, hx) # Relative error felF = abs(fprimExact - fprimF)/abs(fprimExact) felC = abs(fprimExact - fprimC)/...
_____no_output_____
MIT
Lab/L2/Lab2.ipynb
enigne/ScientificComputingBridging
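`CentralDiff` and `FivePointsDiff` also come from the lab's helper module, which is not shown here; minimal sketches consistent with the difference formulas above, together with a check of their convergence orders:

```python
import numpy as np

def CentralDiff(f, x, h):
    """Central difference, O(h^2): (f(x + h) - f(x - h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def FivePointsDiff(f, x, h):
    """Five-point stencil, O(h^4):
    (f(x - 2h) - 8 f(x - h) + 8 f(x + h) - f(x + 2h)) / (12h)."""
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12 * h)

# Halving h should divide the error by ~4 (central) and ~16 (five-point)
for diff in (CentralDiff, FivePointsDiff):
    e1 = abs(diff(np.exp, 1.0, 1e-2) - np.e)
    e2 = abs(diff(np.exp, 1.0, 5e-3) - np.e)
    print(diff.__name__, e1 / e2)
```

The error ratios confirm the orders until rounding error takes over for very small `h`, which is exactly the trade-off the relative-error plot below illustrates.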
Visualize the results
# Plot fig, ax = plt.subplots(1) ax.loglog(hx, felF) ax.loglog(hx, felC) ax.loglog(hx, fel5) ax.autoscale(enable=True, axis='x', tight=True) ax.set_xlabel(r'Step length $h$') ax.set_ylabel('Relative error') ax.legend(['Forward difference','Central difference', 'Five points difference']) if saveFigure: filename = '...
_____no_output_____
MIT
Lab/L2/Lab2.ipynb
enigne/ScientificComputingBridging
Permutation Tests
users = edb.get_uuid_db().find() import pandas as pd from scipy import stats import emission.storage.timeseries.abstract_timeseries as esta from datetime import timedelta, date, tzinfo, datetime import numpy as np # Create a dataframe with columns user_id, number of diary checks, week number, and group. df = pd.DataFr...
_____no_output_____
BSD-3-Clause
tripaware_2017/Cleared Outputs Notebooks/Expanded Trips.ipynb
shankari/e-mission-eval
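The notebook relies on project helpers for the statistics; as a generic, self-contained sketch (with a hypothetical helper name `permutation_test`), a permutation test for a difference in group means shuffles the group labels to build the null distribution:

```python
import numpy as np

def permutation_test(labels, values, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in group means.
    Returns the observed difference and the permutation p-value.
    Generic sketch -- not the project's helper."""
    labels = np.asarray(labels)
    values = np.asarray(values, dtype=float)
    groups = np.unique(labels)
    assert len(groups) == 2, "exactly two groups expected"

    def mean_diff(lab):
        return values[lab == groups[0]].mean() - values[lab == groups[1]].mean()

    observed = mean_diff(labels)
    rng = np.random.default_rng(seed)
    hits = sum(abs(mean_diff(rng.permutation(labels))) >= abs(observed)
               for _ in range(n_perm))
    return observed, hits / n_perm

# Clearly separated synthetic groups -> small p-value
obs, p = permutation_test(['a'] * 5 + ['b'] * 5, [1, 1, 2, 1, 1, 5, 6, 5, 6, 5])
print(obs, p)
```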
Bootstrapping Tests
e_c = df[df['group'] != 'information'] sf.bootstrap_test(e_c['group'], e_c['total'], sf.mean_diff, 100000)
_____no_output_____
BSD-3-Clause
tripaware_2017/Cleared Outputs Notebooks/Expanded Trips.ipynb
shankari/e-mission-eval
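`sf.bootstrap_test` belongs to the project's helper module, which is not shown in this excerpt; a generic sketch of the bootstrap idea (hypothetical helper name `bootstrap_mean_diff_ci`) resamples each group with replacement to form a confidence interval for the mean difference:

```python
import numpy as np

def bootstrap_mean_diff_ci(x, y, n_boot=10000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for mean(x) - mean(y).
    Generic sketch -- not the project's sf.bootstrap_test."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        # resample each group with replacement, keeping group sizes
        diffs[i] = (rng.choice(x, size=x.size).mean()
                    - rng.choice(y, size=y.size).mean())
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# A CI that excludes 0 suggests a real difference between the groups
lo, hi = bootstrap_mean_diff_ci([5, 6, 5, 6, 5], [1, 1, 2, 1, 1])
print(lo, hi)
```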
Mann Whitney U Tests
from scipy.stats import mannwhitneyu control = df[df['group'] == 'control'] # DataFrame.as_matrix() was removed in pandas 1.0; use .to_numpy() instead control_array = control.iloc[:, 1].to_numpy() emotion = df[df['group'] == 'emotion'] emotion_array = emotion.iloc[:, 1].to_numpy() print(mannwhitneyu(emotion_array, control_array))
_____no_output_____
BSD-3-Clause
tripaware_2017/Cleared Outputs Notebooks/Expanded Trips.ipynb
shankari/e-mission-eval
Please cite us if you use the software PyCM Document Version : 2.6----- Table of contents Overview Installation Source Code PyPI Easy Install Docker &nbsp; Usage From Vector Direct CM Activation Threshol...
from pycm import * y_actu = [2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2] y_pred = [0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2] cm = ConfusionMatrix(y_actu, y_pred,digit=5)
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : `digit` (the number of digits to the right of the decimal point in a number) is new in version 0.6 (default value : 5) and is used only for print and save
cm cm.actual_vector cm.predict_vector cm.classes cm.class_stat
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : in versions prior to 0.2, this attribute was named `cm.statistic_result`
cm.overall_stat
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 0.3 Notice : `_` removed from overall statistics names in version 1.6
cm.table cm.matrix cm.normalized_matrix cm.normalized_table
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : `matrix`, `normalized_matrix` & `normalized_table` added in version 1.5 (changed from print style)
import numpy y_actu = numpy.array([2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2]) y_pred = numpy.array([0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2]) cm = ConfusionMatrix(y_actu, y_pred,digit=5) cm
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : `numpy.array` support in versions > 0.7 Direct CM
cm2 = ConfusionMatrix(matrix={0: {0: 3, 1: 0, 2: 0}, 1: {0: 0, 1: 1, 2: 2}, 2: {0: 2, 1: 1, 2: 3}},digit=5) cm2 cm2.actual_vector cm2.predict_vector cm2.classes cm2.class_stat cm2.overall_stat
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 0.8.1 In direct matrix mode `actual_vector` and `predict_vector` are empty Activation threshold `threshold` is added in `version 0.9` for real value prediction. For more information visit Example 3 Notice : new in version 0.9 Load from file `file` is added in `version...
cm = ConfusionMatrix(matrix={0: {0: 3, 1: 0, 2: 0}, 1: {0: 0, 1: 1, 2: 2}, 2: {0: 2, 1: 1, 2: 3}},digit=5,transpose=True) cm.print_matrix()
Predict 0 1 2 Actual 0 3 0 2 1 0 1 1 2 0 2 3
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 1.2 Relabel `relabel` method is added in `version 1.5` in order to change ConfusionMatrix class names.
cm.relabel(mapping={0:"L1",1:"L2",2:"L3"}) cm
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : new in version 1.5 Online help `online_help` function is added in `version 1.1` in order to open each statistics definition in web browser. ```python>>> from pycm import online_help>>> online_help("J")>>> online_help("J", alt_link=True)>>> online_help("SOA1(Landis & Koch)")>>> online_help(2)``` * List ...
online_help()
Please choose one parameter : Example : online_help("J") or online_help(2) 1-95% CI 2-ACC 3-ACC Macro 4-AGF 5-AGM 6-AM 7-ARI 8-AUC 9-AUCI 10-AUNP 11-AUNU 12-AUPR 13-BCD 14-BM 15-Bennett S 16-CBA 17-CEN 18-CSI 19-Chi-Squared 20-Chi-Squared DF 21-Conditional Entropy 22-Cramer V 23-Cross Entropy 24-DOR 25-DP 26-DPI 27-...
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : `alt_link` , new in version 2.4 Parameter recommender This option has been added in `version 1.9` to recommend the most related parameters considering the characteristics of the input dataset. The suggested parameters are selected according to some characteristics of the input such as being balance/im...
cm.imbalance cm.binary cm.recommended_list
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
Notice : also available in HTML report Notice : The recommender system assumes that the input is the result of classification over the whole data rather than just a part of it. If the confusion matrix is the result of test data classification, the recommendation is not valid. Compare In `version 2.0`, a m...
cm2 = ConfusionMatrix(matrix={0:{0:2,1:50,2:6},1:{0:5,1:50,2:3},2:{0:1,1:7,2:50}}) cm3 = ConfusionMatrix(matrix={0:{0:50,1:2,2:6},1:{0:50,1:5,2:3},2:{0:1,1:55,2:2}}) cp = Compare({"cm2":cm2,"cm3":cm3}) print(cp) cp.scores cp.sorted cp.best cp.best_name cp2 = Compare({"cm2":cm2,"cm3":cm3},by_class=True,weight={0:5,1:1,2...
Best : cm3 Rank Name Class-Score Overall-Score 1 cm3 19.05 1.98333 2 cm2 14.65 2.55
MIT
Document/Document.ipynb
GeetDsa/pycm
Acceptable data types ConfusionMatrix 1. `actual_vector` : python `list` or numpy `array` of any stringable objects2. `predict_vector` : python `list` or numpy `array` of any stringable objects3. `matrix` : `dict`4. `digit`: `int`5. `threshold` : `FunctionType (function or lambda)`6. `file` : `File object`7. `sample...
cm.TP
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
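For intuition, the per-class TP values can be counted directly from the original `y_actu`/`y_pred` vectors introduced earlier (a sketch of what pycm computes internally):

```python
y_actu = [2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2]
y_pred = [0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2]

# Per-class true positives: samples where actual == predicted == class
TP = {c: sum(1 for a, p in zip(y_actu, y_pred) if a == p == c)
      for c in sorted(set(y_actu))}
print(TP)   # {0: 3, 1: 1, 2: 3}
```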
TN (True negative) A true negative test result is one that does not detect the condition when the condition is absent (correctly rejected) [[3]](ref3).
cm.TN
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm
FP (False positive) A false positive test result is one that detects the condition when the condition is absent (incorrectly identified) [[3]](ref3).
cm.FP
_____no_output_____
MIT
Document/Document.ipynb
GeetDsa/pycm