# Introduction to Pandas, Python and Jupyter
### 2nd September 2014 Neil D. Lawrence
This notebook introduces some of the tools we will use for data science: Pandas and Python. Python is a general-purpose programming language, designed in the early 1990s with *scripting* in mind, but with a view to ensuring that the scripts that... | github_jupyter
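A minimal sketch of the kind of Pandas usage such an introduction typically starts with (the table contents here are invented for illustration):

```python
import pandas as pd

# Build a small DataFrame from a dict of columns.
df = pd.DataFrame({
    "name": ["Ada", "Grace", "Alan"],
    "year": [1815, 1906, 1912],
})

# Basic inspection: shape, then a boolean-indexed (filtered) view.
print(df.shape)  # (3, 2)
born_after_1900 = df[df["year"] > 1900]
print(born_after_1900["name"].tolist())  # ['Grace', 'Alan']
```

`df[df["year"] > 1900]` is the boolean-indexing idiom that most Pandas work builds on.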
# FAKE NEWS CLASSIFIER

# IMPORTING THE LIBRARIES
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
import scipy as sp
import string
import warnings
warnings.filterwarnings("ignore")
%matplotli... | github_jupyter |
```
-- Library: https://github.com/lehins/Color
-- Demo notebooks: https://github.com/lehins/talks/tree/master/2020-HaskellerZ/Color/Jupyter
import Graphics.Color.Demo
import Graphics.Color.Model as M
import qualified Data.Massiv.Array as A
import Data.Complex
import Control.Monad
:set -XTypeApplications
:t ColorRGB
c ... | github_jupyter |
```
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline
torch.backends... | github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import copy
import itertools
from qiskit import transpile
from qiskit import QuantumRegister, QuantumCircuit, ClassicalRegister
from qiskit import Aer, execute
from qiskit.tools.visualization import plot_histogram
from torch import optim
# ... | github_jupyter |
# HW3 : Neural Networks and Stochastic Gradient Descent
This is the starter notebook for HW3.
### Instructions
The authoritative HW3 instructions are on the course website:
http://www.cs.tufts.edu/comp/135/2020f/hw3.html
Please report any questions to Piazza.
We've tried to make random seeds set explicitly so you... | github_jupyter |
# 2A.ml - Imbalanced dataset
An *imbalanced* dataset means that one class is under-represented in a classification problem. Read [8 Tactics to Combat Imbalanced Classes in Your Machine Learning Dataset](http://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-data... | github_jupyter
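One of the tactics from that article, randomly over-sampling the under-represented class, can be sketched with plain NumPy (class sizes and features here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# An imbalanced binary problem: 90 negatives, 10 positives.
y = np.array([0] * 90 + [1] * 10)
X = rng.normal(size=(100, 3))

# Resample the minority class with replacement up to the majority size.
minority_idx = np.where(y == 1)[0]
majority_idx = np.where(y == 0)[0]
resampled = rng.choice(minority_idx, size=len(majority_idx), replace=True)

X_balanced = np.vstack([X[majority_idx], X[resampled]])
y_balanced = np.concatenate([y[majority_idx], y[resampled]])
print(np.bincount(y_balanced))  # [90 90]
```

Libraries such as `imbalanced-learn` wrap the same idea (and smarter variants like SMOTE) behind a sklearn-style API.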
# 8 Modeling a Drone Swinging in a Trifilar Pendulum
A trifilar pendulum is a common tool for determining the inertia of a rigid body. In the video below a small quadcopter drone is hung from a trifilar pendulum and set into an oscillation about the vertical axis. The frequency (or period) of this oscillation correlat... | github_jupyter |
# Stateful Elasticsearch Feedback Workflow for Metrics Server
In this example we will add statistical performance metrics capabilities by leveraging the Seldon metrics server with persistence through the elasticsearch setup.
Dependencies
* Seldon Core installed
* Ingress provider (Istio or Ambassador)
* Install [Elast... | github_jupyter |
# SageMaker Experiments
This notebook shows how you can use SageMaker Experiment Management Python SDK to organize, track, compare, and evaluate your machine learning (ML) model training experiments.
You can track artifacts for experiments, including data sets, algorithms, hyper-parameters, and metrics. Experiments e... | github_jupyter |
# [Stack Overflow Developer Survey, 2017 | Kaggle](https://www.kaggle.com/stackoverflow/so-survey-2017)
References:
* https://www.kaggle.com/ash316/the-stack-survey
* [Student? Web-Dev? ML Expert? lets Explore all | Kaggle](https://www.kaggle.com/m2skills/student-web-dev-ml-expert-lets-explore-all)
* [Data Analysis - SO Surv... | github_jupyter |
```
# NOTE: installation of modules is required before running this notebook. (See README.md)
# Also, for plotting, matplotlib is required (run "pip3 install matplotlib" for installation)
import matplotlib.pyplot as plt
import numpy as np
from ccpca import CCPCA
from opt_sign_flip import OptSignFlip
from mat_reorder i... | github_jupyter |
```
import os
import sys
import seaborn as sns
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import torch
import pandas as pd
import pickle
from scipy.stats import norm
from utils import compare
from envs import SingleSmallPeakEnv, DiscreteBanditEnv
def filter_df(df, **kwargs):
for k,v in k... | github_jupyter |
# common-lisp-jupyter
A Common Lisp kernel for Jupyter.
All stream output is captured and displayed in the notebook interface.
```
(format t "Hello, World")
(format *error-output* "Goodbye, cruel World.")
```
Evaluation results are displayed directly in the notebook.
```
(+ 2 3 4 5)
```
All Lisp code is value, i... | github_jupyter |
```
import pandas as pd
bx_ratings = pd.read_csv('BX-Book-Ratings.csv')
bx_books = pd.read_csv('BX-Books.csv')
bx_users = pd.read_csv('BX-Users.csv')
bx_ratings.head()
print(len(bx_ratings), len(bx_books), len(bx_users))
print(len(bx_ratings))
#bx_ratings = bx_ratings[ bx_ratings['Book-Rating'] != 0]
print(len(bx_ratings... | github_jupyter
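# A hedged sketch of what typically comes next with these tables: drop the
# implicit (zero) ratings, as the commented-out filter above suggests, then
# join titles onto the remaining explicit ratings. The column names follow the
# Book-Crossing schema used above; the tiny frames here are invented stand-ins
# so the snippet runs on its own.
import pandas as pd
toy_ratings = pd.DataFrame({
    "User-ID": [1, 1, 2, 3],
    "ISBN": ["A", "B", "A", "C"],
    "Book-Rating": [0, 8, 10, 0],
})
toy_books = pd.DataFrame({
    "ISBN": ["A", "B", "C"],
    "Book-Title": ["Alpha", "Beta", "Gamma"],
})
explicit = toy_ratings[toy_ratings["Book-Rating"] != 0]
rated_books = explicit.merge(toy_books, on="ISBN", how="left")
print(len(toy_ratings), len(explicit), len(rated_books))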
- a notebook to save preprocessing model and train/save NN models
- all necessary outputs are stored in MODEL_DIR = output/kaggle/working/model
- put those into dataset, and load it from inference notebook
```
import sys
sys.path.append('../input/iterative-stratification/iterative-stratification-master')
sys.path.a... | github_jupyter |
## Second level GLM analysis
This script performs group level modeling of BOLD response. Script features:
- loads statistical maps from first level GLM analysis
- discards data from excluded subjects
- performs second level GLM analysis
---
**Last update**: 24.07.2020
```
%matplotlib inline
import matplotlib.pyplot ... | github_jupyter |
```
import nltk
import string
from nltk.corpus import gutenberg, brown, wordnet
from neo4j.v1 import GraphDatabase, basic_auth
from nltk.stem import WordNetLemmatizer
# INSERT YOUR NEO4J AUTHENTICATION DETAILS HERE
NEO4J_BOLT_URL = "bolt://localhost:7687"
NEO4J_USERNAME = "neo4j"
NEO4J_PASSWORD = ""
# CHANGE YOUR CUSTO... | github_jupyter |
# Main notebook for battery state estimation
```
import numpy as np
import pandas as pd
import scipy.io
import math
import os
import ntpath
import sys
import logging
import time
import sys
from importlib import reload
import plotly.graph_objects as go
import tensorflow as tf
from tensorflow import keras
from tensorf... | github_jupyter |
<a id=notebook_start></a>
There are countless resources out there, for instance [here](https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/).
# Notebook intro
## navigating the notebook
There are three types of cells:
1. input cells - contain the actual code
2. output cells - display the ... | github_jupyter
# "A Guided Tour of My Projects"
> "A maintained list of my (software) projects."
- author: jhermann
- toc: true
- branch: master
- badges: false
- comments: true
- published: true
- categories: [misc, development]
- image: images/copied_from_nb/img/misc/ball_binary-geralt_pixabay-1280.jpg
> 🚧 *This article is work i... | github_jupyter |
<!--NOTEBOOK_HEADER-->
*This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);
content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*
<!--NAVIGATION-->
< [Working with Pose residues](http://nbviewer.jupyter.org/github/RosettaCommon... | github_jupyter |
# This jupyter notebook contains two examples of
- how to create multiple figures
```
%matplotlib notebook
#%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import MDAnalysis as mda
import pyrexMD.misc as misc
import pyrexMD.core as core
import pyrexMD.analysis.analyze as ana
import pyrexMD.analysi... | github_jupyter |
<a href="https://colab.research.google.com/github/cxbxmxcx/PAIGCP/blob/master/PAIGCP_image_captioning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Versio... | github_jupyter |
(objects_tutorial)=
# Tutorial
We will here write some code to create and manipulate quadratic expressions.
With `sympy` this is not necessary, as all the required functionality is
available within `sympy`; however, this will be a good exercise in
understanding how to build such functionality.
```{admonition} Problem
Consi... | github_jupyter |
https://gpitbjss.atlassian.net/browse/PRMT-1955

### [Hypothesis] Proportion of TPP-EMIS error code 99s will have reduced from February to March

**Refer to notebook PRMT-2057 for updated analysis that contains improvements and April data**

### Hypothesis

**We believe** that transfers resulting in error code 99s from TPP to EMIS

**Will** have reduced in proportion from February to March 2021

**We will know this to be true when** we see that the % of all TPP-EMIS transfers resulting in this error is lower in March than in February

### Scope
- Generate a breakdown of error codes per supplier pathway for February, and a separate one for March
- Identify the total number of TPP-EMIS transfers for both February and March
- Calculate what % of the total number of TPP-EMIS transfers resulted in error code 99, for both February and March

### Acceptance Criteria
- We know whether the proportion of error code 99s has decreased
- We have a Confluence page that shows the two monthly breakdowns - specifically a comparison of the proportions of the error code 99s between TPP-EMIS and EMIS-EMIS

## Import and prep data

```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

# Using data generated from branch PRMT-1742-duplicates-analysis.
# This is needed to correctly handle duplicates.
# Once the upstream pipeline has a fix for duplicate EHRs, then we can go back to using the main output.
transfer_file_location = "s3://prm-gp2gp-data-sandbox-dev/transfers-duplicates-hypothesis/"
transfer_files = [
    "2-2021-transfers.parquet",
    "3-2021-transfers.parquet"
]

transfer_input_files = [transfer_file_location + f for f in transfer_files]
transfers_raw = pd.concat((
    pd.read_parquet(f)
    for f in transfer_input_files
))

# In the data from the PRMT-1742-duplicates-analysis branch, these columns have been added, but contain only empty values.
transfers_raw = transfers_raw.drop(["sending_supplier", "requesting_supplier"], axis=1)

# Given the findings in PRMT-1742 - many duplicate EHR errors are misclassified, the below reclassifies the relevant data
has_at_least_one_successful_integration_code = lambda errors: any((np.isnan(e) or e == 15 for e in errors))
successful_transfers_bool = transfers_raw['request_completed_ack_codes'].apply(has_at_least_one_successful_integration_code)
transfers = transfers_raw.copy()
transfers.loc[successful_transfers_bool, "status"] = "INTEGRATED"

# Correctly interpret certain sender errors as failed.
# This is explained in PRMT-1974. Eventually this will be fixed upstream in the pipeline.
pending_sender_error_codes = [6, 7, 10, 24, 30, 23, 14, 99]
transfers_with_pending_sender_code_bool = transfers['sender_error_code'].isin(pending_sender_error_codes)
transfers_with_pending_with_error_bool = transfers['status'] == 'PENDING_WITH_ERROR'
transfers_which_need_pending_to_failure_change_bool = transfers_with_pending_sender_code_bool & transfers_with_pending_with_error_bool
transfers.loc[transfers_which_need_pending_to_failure_change_bool, 'status'] = 'FAILED'

# Add Integrated Late status
eight_days_in_seconds = 8 * 24 * 60 * 60
transfers_after_sla_bool = transfers['sla_duration'] > eight_days_in_seconds
transfers_with_integrated_bool = transfers['status'] == 'INTEGRATED'
transfers_integrated_late_bool = transfers_after_sla_bool & transfers_with_integrated_bool
transfers.loc[transfers_integrated_late_bool, 'status'] = 'INTEGRATED LATE'

# If the record integrated after 28 days, change the status back to pending.
# This is to handle each month consistently and to always reflect a transfer's status 28 days after it was made.
# TBD how this is handled upstream in the pipeline
twenty_eight_days_in_seconds = 28 * 24 * 60 * 60
transfers_after_month_bool = transfers['sla_duration'] > twenty_eight_days_in_seconds
transfers_pending_at_month_bool = transfers_after_month_bool & transfers_integrated_late_bool
transfers.loc[transfers_pending_at_month_bool, 'status'] = 'PENDING'
transfers_with_early_error_bool = (~transfers.loc[:, 'sender_error_code'].isna()) | (transfers.loc[:, 'intermediate_error_codes'].apply(len) > 0)
transfers.loc[transfers_with_early_error_bool & transfers_pending_at_month_bool, 'status'] = 'PENDING_WITH_ERROR'

# Supplier name mapping
supplier_renaming = {
    "EGTON MEDICAL INFORMATION SYSTEMS LTD (EMIS)": "EMIS",
    "IN PRACTICE SYSTEMS LTD": "Vision",
    "MICROTEST LTD": "Microtest",
    "THE PHOENIX PARTNERSHIP": "TPP",
    None: "Unknown"
}

asid_lookup_file = "s3://prm-gp2gp-data-sandbox-dev/asid-lookup/asidLookup-Mar-2021.csv.gz"
asid_lookup = pd.read_csv(asid_lookup_file)
lookup = asid_lookup[["ASID", "MName", "NACS", "OrgName"]]

transfers = transfers.merge(lookup, left_on='requesting_practice_asid', right_on='ASID', how='left')
transfers = transfers.rename({'MName': 'requesting_supplier', 'ASID': 'requesting_supplier_asid', 'NACS': 'requesting_ods_code', 'OrgName': 'requesting_practice_name'}, axis=1)
transfers = transfers.merge(lookup, left_on='sending_practice_asid', right_on='ASID', how='left')
transfers = transfers.rename({'MName': 'sending_supplier', 'ASID': 'sending_supplier_asid', 'NACS': 'sending_ods_code', 'OrgName': 'sending_practice_name'}, axis=1)

transfers["sending_supplier"] = transfers["sending_supplier"].replace(supplier_renaming.keys(), supplier_renaming.values())
transfers["requesting_supplier"] = transfers["requesting_supplier"].replace(supplier_renaming.keys(), supplier_renaming.values())

# Filter for the transfers relevant to the question and rename month
relevant_pathway_bool = (transfers['sending_supplier'].isin(['TPP', 'EMIS'])) & (transfers['requesting_supplier'] == 'EMIS')
relevant_transfers = transfers.copy().loc[relevant_pathway_bool]
relevant_transfers['Month'] = relevant_transfers['date_requested'].dt.month.replace({1: 'January', 2: 'February', 3: 'March'})

# Combine all error codes into a single unique set of error codes
relevant_transfers['all_errors'] = relevant_transfers.apply(lambda row: np.concatenate((np.append(row["intermediate_error_codes"], row["sender_error_code"]), row['request_completed_ack_codes'])), axis=1)
relevant_transfers['all_errors'] = relevant_transfers['all_errors'].apply(lambda error_list: [error for error in error_list if np.isfinite(error)])
relevant_transfers['unique_errors'] = relevant_transfers['all_errors'].apply(set).apply(list)

# Add in which transfers contain error code 99
relevant_transfers['Contains error code 99'] = relevant_transfers['unique_errors'].apply(lambda error_list: 99 in error_list).astype(int)
# Relabel status for readability
relevant_transfers['Status at 28 days'] = relevant_transfers['status'].apply(lambda x: x.replace('_', ' ').title())
relevant_transfers.loc[relevant_transfers['Status at 28 days'] == "Pending", "Status at 28 days"] = "Pending Without Error"
# Add in supplier pathway
relevant_transfers['Supplier Pathway'] = relevant_transfers['sending_supplier'] + ' to ' + relevant_transfers['requesting_supplier']
```

### A. What proportion of transfers still have Error Code 99?

```
change_in_99 = relevant_transfers.groupby(['Supplier Pathway', 'Month']).agg({'Contains error code 99': ['count', 'sum', 'mean']})
change_in_99 = change_in_99['Contains error code 99'].rename({'count': 'Total Transfers', 'sum': 'Transfers with Error 99', 'mean': '% Transfers with Error 99'}, axis=1)
change_in_99['% Transfers with Error 99'] = change_in_99['% Transfers with Error 99'].multiply(100).round(2)
change_in_99
```

### B. What is the change in status?

```
column_order = ['Integrated', 'Integrated Late', 'Pending Without Error', 'Pending With Error', 'Failed']
status_table = relevant_transfers.pivot_table(index=['Supplier Pathway', 'Month'], columns='Status at 28 days', values='conversation_id', aggfunc='count')
status_table = status_table[column_order]
status_table_percentage = status_table.div(status_table.sum(axis=1), axis=0).multiply(100).round(2)
status_table_percentage.columns = status_table_percentage.columns + ' %'
pd.concat([status_table, status_table_percentage], axis=1)
```

### C. Can we attribute status changes to the reduction of Error code 99?

```
# Create a new field that combines the status and if the transfer contained error 99
contains_99 = {0: '(No 99)', 1: '(Contains 99)'}
relevant_transfers['status and presence of 99'] = relevant_transfers.apply(lambda row: row['Status at 28 days'] + ' ' + contains_99[row['Contains error code 99']], axis=1)
```

#### Number of transfers per status and instance of error code 99s

```
status_and_99_table_count = relevant_transfers.pivot_table(index=['Supplier Pathway', 'Month'], columns='status and presence of 99', values='status', aggfunc='count').fillna(0).astype(int)
new_column_order = ['Integrated (No 99)', 'Integrated (Contains 99)', 'Integrated Late (No 99)', 'Integrated Late (Contains 99)', 'Pending Without Error (No 99)', 'Pending Without Error (Contains 99)', 'Pending With Error (No 99)', 'Pending With Error (Contains 99)', 'Failed (No 99)', 'Failed (Contains 99)']
# Filter out any columns that have no associated transfers
new_column_order = [column_name for column_name in new_column_order if column_name in status_and_99_table_count.columns]
status_and_99_table_count = status_and_99_table_count.loc[:, new_column_order]

status_and_99_table_count
```

#### Percentage of transfers per status and instance of error code 99s

```
status_and_99_table_percentage = status_and_99_table_count.div(status_and_99_table_count.sum(axis=1), axis=0).multiply(100)
status_and_99_table_percentage.round(2)
lm_color_table = dict()
lm_color_table['Integrated (No 99)'] = '#9EE09E'
lm_color_table['Integrated (Contains 99)'] = '#FFF'
lm_color_table['Integrated Late (No 99)'] = '#9EC1CF'
lm_color_table['Integrated Late (Contains 99)'] = '#FFF'
lm_color_table['Pending Without Error (No 99)'] = '#CC99C9'
lm_color_table['Pending With Error (No 99)'] = '#FDFD97'
lm_color_table['Pending With Error (Contains 99)'] = '#FFF'
lm_color_table['Failed (Contains 99)'] = '#FF6663'
lm_color_table['Failed (No 99)'] = '#FEB144'
status_and_99_table_percentage.plot.barh(stacked=True, figsize=(15, 10), color=lm_color_table)
plt.gca().invert_yaxis()
```
 | github_jupyter
[](https://colab.research.google.com/github/eirasf/GCED-AA3/blob/main/lab5/lab5.ipynb)
# Lab5: Reinforcement Learning - Monte Carlo Methods
In this lab we will dig deeper into control methods for reinforcement learning. In particular... | github_jupyter
```
import numpy.random as npr
import statsmodels.api as sm
import scipy
import numpy as np
from sklearn import linear_model
from sklearn.decomposition import PCA
from sklearn.datasets import make_spd_matrix
from scipy import stats
import matplotlib.pyplot as plt
stats.chisqprob = lambda chisq, df: stats.chi2.sf(ch... | github_jupyter |
```
import nibabel as nib
import numpy as np
import matplotlib.pyplot as plt
from copy import deepcopy
from nilearn import image, plotting
from mpl_toolkits.mplot3d import Axes3D
from scipy import ndimage
%matplotlib inline
mni = nib.load('../data/MNI152_T1_1mm_brain.nii.gz')
```
## Cho
```
cho = nib.load('../data/... | github_jupyter |
# Object Detection and Bounding Boxes
:label:`sec_bbox`
In earlier chapters (e.g., :numref:`sec_alexnet`—:numref:`sec_googlenet`) we introduced a variety of image classification models.
In image classification we assume the image contains a single main object, and we only care about recognizing its category.
Often, however, an image contains several objects of interest, and we want to know not only their categories but also their exact positions in the image.
In computer vision, this kind of task is called *object detection* (or *object recognition*).
Object detection is widely used in many fields.
In autonomous driving, for example, we need to locate the vehicles, pedestrians, roads... | github_jupyter
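Chapters like this usually juggle two box representations: corner coordinates `(x1, y1, x2, y2)` and center/width/height `(cx, cy, w, h)`. A NumPy sketch of the conversion (the function names are mine, not necessarily the chapter's):

```python
import numpy as np

def corner_to_center(boxes):
    """(x1, y1, x2, y2) -> (cx, cy, w, h), boxes shaped (n, 4)."""
    x1, y1, x2, y2 = boxes.T
    return np.stack([(x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1], axis=1)

def center_to_corner(boxes):
    """(cx, cy, w, h) -> (x1, y1, x2, y2), boxes shaped (n, 4)."""
    cx, cy, w, h = boxes.T
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)

# Round-tripping a box should reproduce it exactly.
boxes = np.array([[60.0, 45.0, 378.0, 516.0]])
print(corner_to_center(boxes))  # [[219. 280.5 318. 471.]]
assert np.allclose(center_to_corner(corner_to_center(boxes)), boxes)
```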
<a href="https://colab.research.google.com/github/ParsaHejabi/ComputationalIntelligence-ComputerAssignments/blob/main/HW1/COVID19_Iran_linear.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# COVID-19 infections in Iran - Linear Regression
## Downl... | github_jupyter |
# ReEDS Scenarios on PV ICE Tool
To explore different scenarios for future installation projections of PV (or any technology), ReEDS output data can be useful in providing standard scenarios. ReEDS installation projections are used in this journal as input data to the PV ICE tool.
Current sections include:
<ol>
... | github_jupyter |
# N-MNIST Classification
__N-MNIST__ is the neuromorphic version of MNIST digit recognition. The MNIST digits are converted into event-based data using a DVS sensor moving in a repeatable tri-saccadic motion, each saccade about 100 ms long.
The task is to classify each event sequence to its corresponding digit.
<table>
<tr>
... | github_jupyter |
## Imports
```
import pandas
import csv
pandas.set_option('display.max_columns', None) # or 1000
pandas.set_option('display.max_rows', None)
```
## Import Data
```
rad_labels = pandas.read_csv(r"../dataset/radiogenomics_labels.csv")
rad_labels.head()
dataset = pandas.read_csv('../pyradiomics/data/pyradiomics_extrac... | github_jupyter |
# Diversification and Sources of Risk in a Portfolio II - An Illustration with International Markets.
<img style="float: right; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/5/5f/Map_International_Markets.jpg" width="500px" height="300px" />
> So, last class we saw how... | github_jupyter
```
import sys
sys.path.append('/Users/c242587/Desktop/projects/git/ngboost')
```
# Developing NGBoost
As you work with NGBoost, you may want to experiment with distributions or scores that are not yet supported. Here we will walk through the process of implementing a new distribution or score.
## Adding Distributio... | github_jupyter |
### Notes
TODO: in this notebook
- let's try some other libraries, like:
- statsmodels and glmnet
- xgboost and lightgbm
- keras, neon, and mxnet
- let's use automatic ML with TPOT and autosklearn to find the best hyperparameters
- let's counter-balance the high class bias of good vs. bad ... | github_jupyter
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License [2017] Zalando SE, https://tech.zalando.com

<center>

</center>
## FOSSEE
- *Free Open Source Software for Education*
- Increase use of FOSS in education
- Minimise use of ... | github_jupyter |
<a href="https://colab.research.google.com/github/jonkrohn/pytorch/blob/master/notebooks/deep_net_in_pytorch_DEMO.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Deep Neural Network in PyTorch (DEMO)
_Remember to change your Runtime type to GPU o... | github_jupyter |
Notes:
- Important parameters: kernel size, no. of feature maps
- 1-max pooling generally outperforms other types of pooling
- Dropout has little effect
- Gridsearch across kernel size in the range 1-10
- Search no. of filters from 100-600 and dropout of 0.0-0.5
- Explore tanh, relu, linear activation functions
```
mo... | github_jupyter |
# 5HDB Performance Evaluation
```
import matplotlib.pyplot as plt
import numpy as np
import sys
import torch
import pandas as pd
from tqdm import tqdm
sys.path.append('../src')
from vae_lightning_utils import load_vae_model
from ours_lightning_utils import load_our_model
from dataset_utils import get_dataset
```
# L... | github_jupyter |
```
%load_ext rpy2.ipython
%matplotlib inline
import logging
logging.getLogger('fbprophet').setLevel(logging.ERROR)
import warnings
warnings.filterwarnings("ignore")
```
## Python API
Prophet follows the `sklearn` model API. We create an instance of the `Prophet` class and then call its `fit` and `predict` methods.
... | github_jupyter |
```
title.akas.tsv.gz - Contains the following information for titles:
#titleId (string) - a tconst, an alphanumeric unique identifier of the title
#ordering (integer) - a number to uniquely identify rows for a given titleId
#title (string) - the localized title
#region (string) - the region for this ve... | github_jupyter
# GRADIENT
```
import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
matplotlib.rcParams['xtick.direction'] = 'out'
matplotlib.rcParams['ytick.direction'] = 'out'
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, ... | github_jupyter |
```
#Fill the paths below
PATH_FRC = "" # git repo directory path
KADIK = "" # data from http://database.mmsp-kn.de/kadid-10k-database.html
PATH_ZENODO = "" # Data and models are available here: https://zenodo.org/record/5831014#.YdnW_VjMLeo
GAUSS_L2_MODEL = PATH_ZENODO+'/models/gaussian/noise04/l2/'
import sys
import... | github_jupyter |
### Generating names with recurrent neural networks
This time you'll find yourself delving into the heart (and other intestines) of recurrent neural networks on a class of toy problems.
Struggle to find a name for the variable? Let's see how you'll come up with a name for your son/daughter. Surely no human has expert... | github_jupyter |
# Maximum Mean Discrepancy drift detector on CIFAR-10
### Method
The [Maximum Mean Discrepancy (MMD)](http://jmlr.csail.mit.edu/papers/v13/gretton12a.html) detector is a kernel-based method for multivariate 2 sample testing. The MMD is a distance-based measure between 2 distributions *p* and *q* based on the mean emb... | github_jupyter |
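As a concrete reference, the biased squared-MMD estimate with an RBF kernel can be written in a few lines of NumPy (an illustrative sketch, not the detector's actual implementation; the bandwidth `sigma` and sample sizes are chosen arbitrarily here):

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise squared distances via broadcasting, then a Gaussian kernel.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_biased(x, y, sigma=1.0):
    """Biased estimate of MMD^2: E k(x,x') + E k(y,y') - 2 E k(x,y)."""
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 2))
y_same = rng.normal(0.0, 1.0, size=(200, 2))   # same distribution as x
y_shift = rng.normal(1.0, 1.0, size=(200, 2))  # drifted distribution
print(mmd2_biased(x, y_same), mmd2_biased(x, y_shift))
```

A drift detector then compares such a statistic against a threshold calibrated by a permutation test.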

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop//blob/master/tutorials/Certification_Trainings/Public/6.Playground_DataFrames.ipynb)
# Spark DataFr... | github_jupyter |
# CS 1656 – Introduction to Data Science
## Instructor: Xiaowei Jia / Teaching Assistant: Evangelos Karageorgos
### Additional Credits: Xiaoting Li, Tahereh Arabghalizi, Zuha Agha, Anatoli Shein, Phuong Pham
## Recitation 5: Clustering
---
In this recitation you will be learning clustering along with a little bit m... | github_jupyter |
# Exploratory Data Analysis
Brief introduction to Pandas, Matplotlib and Seaborn
_Francesco Mosconi, May 2016_
## 1. Data munging in Pandas
```
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
df = pd.read_csv("titanic-train.csv")
```
## Quick exploration
```
df.head(3)
d... | github_jupyter |
# Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the [orig... | github_jupyter |
```
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy.constants import h, c, k
```
## Constants
```
n_air = 1 # Index of refraction of air.
n_soap = 1.33 # Index of refraction of soap + water mixture.
```
## Single Wavelength
First look at the behavior of a single wavelength of light for var... | github_jupyter |
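For context, the standard thin-film result behind such an exploration: the reflection at the air–soap interface picks up a π phase shift while the back-surface soap–air reflection does not, so at normal incidence the reflected light interferes constructively when

```latex
2\, n_{\text{soap}}\, d = \left(m + \tfrac{1}{2}\right) \lambda , \qquad m = 0, 1, 2, \ldots
```

and destructively when \(2\, n_{\text{soap}}\, d = m \lambda\), where \(d\) is the film thickness. This is the textbook convention; the notebook's own sign choices may differ.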
# 1.3 Dirichlet boundary conditions
This tutorial goes in depth into the mechanisms required to solve the Dirichlet problem
$$
-\Delta u = f \quad \text{ in } \Omega,
$$
with a **nonzero** Dirichlet boundary condition
$$
u|_{\Gamma_D} = g
\quad \text{ on a boundary part } \Gamma_D \subset \partial\Omega.
$$
T... | github_jupyter |
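The mechanism in question is usually the classical lifting of the boundary data: split the solution into a known extension of \(g\) plus an unknown homogeneous part (a standard sketch; the tutorial's own notation may differ):

```latex
u = u_g + u_0 , \qquad u_g|_{\Gamma_D} = g , \qquad u_0|_{\Gamma_D} = 0 ,
```

so that \(u_0\) solves the variational problem

```latex
\int_\Omega \nabla u_0 \cdot \nabla v \, \mathrm{d}x
  = \int_\Omega f \, v \, \mathrm{d}x
  - \int_\Omega \nabla u_g \cdot \nabla v \, \mathrm{d}x
\qquad \text{for all } v \text{ with } v|_{\Gamma_D} = 0 ,
```

which has homogeneous Dirichlet data and is what the discrete solver actually assembles.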
# K-Nearest Neighbors (KNN)
#### by Chiyuan Zhang and Sören Sonnenburg
This notebook illustrates the <a href="http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm">K-Nearest Neighbors</a> (KNN) algorithm on the USPS digit recognition dataset in Shogun. Further, the effect of <a href="http://en.wikipedia.or... | github_jupyter |
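The algorithm itself fits in a few lines of NumPy (a from-scratch sketch, not Shogun's implementation; the two clusters below stand in for digit classes):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Predict by majority vote among the k nearest training points (Euclidean)."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Two well-separated clusters standing in for two classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(knn_predict(X, y, np.array([[0.2, 0.1], [4.8, 5.1]])))  # [0 1]
```

Real toolkits add fast neighbor search (KD-trees, cover trees) so this scales beyond toy data.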
# Sitzung 1
These scripts are intended purely as supplementary material, specifically for those of you who want to get a glimpse of programming. So if you are tired of carrying out repetitive tasks and would rather leave them to a machine, you are in exactly the right place.
<span style... | github_jupyter |
##### Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of th... | github_jupyter |
# IBM Business Automation Workflow recommendation service with IBM Business Automation Insights and Machine Learning
Artificial intelligence can be combined with business processes management in many ways. For example, AI can help transforming unstructured data into data that a process can work with, through technique... | github_jupyter |
# Data Science Tutorial 01 @ Data Science Society
那須野薫(Kaoru Nasuno)/ 東京大学(The University of Tokyo)
This is a tutorial for building up fundamental data science skills.
Using the dataset of the Kaggle competition RECRUIT Challenge, Coupon Purchase Prediction, as its subject,
it aims to give hands-on exposure to basic data science skills and lay a foundation for understanding.
(Achieving high prediction accuracy is not the goal.)
This is still a work in progress; errors will be corrected and material added in response to feedback.... | github_jupyter
```
#Required for accessing openml datasets from Lale
!pip install 'liac-arff>=2.4.0'
```
### Dataset with class imbalance
```
import lale.datasets.openml
import pandas as pd
(train_X, train_y), (test_X, test_y) = lale.datasets.openml.fetch(
'breast-cancer', 'classification', preprocess=True)
import numpy as np
n... | github_jupyter |
# Boolean conditions
Another **class** that is available in Python is the **boolean**. It is used to represent if a condition is verified or not.
A **boolean** can only have 2 possible values: `True` or `False`.
These are possible values that can be assigned to variables and used in your Python code, such as you have... | github_jupyter |
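For instance (plain Python; the values are invented for illustration):

```python
# Booleans come from comparisons and combine with logical operators.
age = 20
is_adult = age >= 18            # True
has_ticket = False

print(type(is_adult))           # <class 'bool'>
print(is_adult and has_ticket)  # False: both conditions must hold
print(is_adult or has_ticket)   # True: one condition suffices
print(not has_ticket)           # True
```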
# Binary classifier - mitigate overfitting
Show some experiments about mitigating overfitting of the model described in chapter 3 (notebook 01).
### Import libraries
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import models, layers, optimizers, losses, metric... | github_jupyter |
```
import os
import glob
import pprint
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (14, 10)
dir_src = './logs'
dir_src2 = './logs_backup'
log_files = sorted(glob.glob('{}/*/*.log'.format(dir_src)))
print(len(log_f... | github_jupyter |
# Optimization with bathymetry and max water depth constraint
## Install packages if running in Colab
```
try:
RunningInCOLAB = 'google.colab' in str(get_ipython())
except NameError:
RunningInCOLAB = False
%%capture
if RunningInCOLAB:
!pip install git+https://gitlab.windenergy.dtu.dk/TOPFARM/PyWake.git
!p... | github_jupyter |
## Introduction
Here, we provide some background on CARMA models and the connection between CARMA and Gaussian Process (GP).
### CARMA
CARMA stands for continuous-time autoregressive moving average, it is the continuous-time version of the better known autoregressive moving average (ARMA) model. In recent years, CARM... | github_jupyter |
## Final Distribution
```
from config import PROJECT_ID, INITIAL_TS, SNAPSHOT_TS, \
CITIZENS_AUDIENCE, ETH_ANALYSIS_DATASET_NAME, ETH_ANALYSIS_DISTRIBUTION_TABLE_NAME, \
CYBERPUNKS_AUDIENCE, \
HACKERS_AUDIENCE, GAS_ANALYSIS_DATASET_NAME, GAS_ANALYSIS_DISTRIBUTION_TABLE_NAME, \
LEADERS_AUDIENCE, LEADERS... | github_jupyter |
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.optimizers import Adam
from keras.layers.normalization import BatchNormalization
from... | github_jupyter |
# Setup
```
from warnings import simplefilter
simplefilter(action='ignore', category=FutureWarning)
from tensorflow.keras import backend as K
from tensorflow.keras.models import load_model
from tensorflow.keras.utils import to_categorical
from sklearn.metrics import f1_score, accuracy_score, confusion_matrix
import... | github_jupyter |
# Efficient Grammar Fuzzing
In the [chapter on grammars](Grammars.ipynb), we have seen how to use _grammars_ for very effective and efficient testing. In this chapter, we refine the previous string-based algorithm into a tree-based algorithm, which is much faster and allows for much more control over the production o... | github_jupyter |
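To make the string-based starting point concrete, here is a minimal sketch of a naive string-replacement fuzzer in the spirit described above; the toy grammar and function name are illustrative, not the chapter's own code:

```python
import random

# A toy grammar: each nonterminal maps to a list of expansion alternatives
GRAMMAR = {
    "<start>": ["<digit><digit>"],
    "<digit>": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
}

def simple_fuzzer(grammar, term="<start>"):
    """Repeatedly replace the first matching nonterminal in the string
    with a randomly chosen expansion until no nonterminals remain."""
    while "<" in term:
        for symbol in grammar:
            if symbol in term:
                term = term.replace(symbol, random.choice(grammar[symbol]), 1)
                break
    return term

random.seed(0)
out = simple_fuzzer(GRAMMAR)
```

The string-search-and-replace in the loop is what makes this approach slow; the tree-based algorithm avoids it by expanding nonterminal nodes directly.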
# KNeighborsClassifier with MinMaxScaler & Power Transformer
This code template is for a classification task using a simple KNeighborsClassifier (based on the k-nearest-neighbors algorithm), with MinMaxScaler for rescaling and PowerTransformer for feature transformation, combined in a pipeline.
### Required Packages
```
!pip i... | github_jupyter |
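A minimal sketch of the pipeline this template describes, using scikit-learn's standard API; the iris dataset here is only for illustration, not the template's own data:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, PowerTransformer

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Rescale to [0, 1], apply a power transform, then classify with k-NN
model = Pipeline([
    ("scaler", MinMaxScaler()),
    ("transform", PowerTransformer()),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
model.fit(X_train, y_train)
score = model.score(X_test, y_test)
```

Putting the scalers inside the `Pipeline` ensures they are fitted only on the training fold, avoiding leakage during cross-validation.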
# AXON: Data Patterns
```
from __future__ import unicode_literals, print_function
from axon.api import loads, dumps
```
``AXON`` represents data by composing several patterns of data structuring and notation.
## Data Structures
There are *atomic values* at the bottom level: *unicode strings*,... | github_jupyter |
# 📝 Exercise M7.01
This notebook aims at building baseline classifiers, against which we'll compare our predictive model. We will also look at how these differ from the baselines we saw in regression.
We will use the adult census dataset, using only the numerical features.
```
import pandas as pd
adult_census... | github_jupyter |
## Classifying Variable Stars
Derive a set of features for a set of light curves of variable stars of known class. Train Machine Learning (ML) algorithms on a sample of this data set and then apply the algorithms to a set of light curves of unknown class to predict what type of variable star they are. Validate classif... | github_jupyter |
```
# Multicopter simulation Ver 0.1 (provisional)
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from tqdm.notebook import tqdm_notebook as tqdm
"""
def rk4(func, t, h, x, *p)
Function that computes one step of the 4th-order Runge-Kutta method
Argument list
func: the derivative function
t: variable representing the current time
h: step size
x: output variable (the value we want to compute)
*p: other required variables, to support a variable number of arguments
Note: in this function, the time is... | github_jupyter |
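Based on the docstring above, one plausible implementation of `rk4` (a sketch; the original function body is not shown in this excerpt):

```python
import math

def rk4(func, t, h, x, *p):
    """One step of the classical 4th-order Runge-Kutta method,
    matching the signature described in the docstring above."""
    k1 = func(t, x, *p)
    k2 = func(t + h / 2, x + h / 2 * k1, *p)
    k3 = func(t + h / 2, x + h / 2 * k2, *p)
    k4 = func(t + h, x + h * k3, *p)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check against dx/dt = x, whose exact solution is exp(t)
x1 = rk4(lambda t, x: x, 0.0, 0.1, 1.0)
assert abs(x1 - math.exp(0.1)) < 1e-6
```

The local truncation error is O(h^5), which is why a single step with h = 0.1 already agrees with exp(0.1) to better than 1e-6.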
<a href="https://colab.research.google.com/github/AguaClara/PF200/blob/master/PF200_Final_Report.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# PF 200, Fall 2019
#### Whitney Denison, Fernando Merino Martinez, Nicole Wang, Jacob Wyrick, Amy You, ... | github_jupyter |
# Transfer Learning of YoloV3 with GluonCV
## Introduction
This is an end-to-end example of GluonCV YoloV3 model training inside of Amazon SageMaker notebook using Script mode and then compiling the trained model using SageMaker Neo runtime. In this demo, we will demonstrate how to finetune a model using the autonomou... | github_jupyter |
# GQN View Interpolation
Loads a trained GQN and performs a sequence of view interpolations.
```
'''imports'''
# stdlib
import os
import sys
import logging
# numerical computing
import numpy as np
import tensorflow as tf
# plotting
import imageio
logging.getLogger("imageio").setLevel(logging.ERROR) # switch off warni... | github_jupyter |
<img style="float: center;" src="images/CI_horizontal.png" width="600">
<center>
<span style="font-size: 1.5em;">
<a href='https://www.coleridgeinitiative.org'>Website</a>
</span>
</center>
Rayid Ghani, Frauke Kreuter, Julia Lane, Adrianne Bradford, Alex Engler, Nicolas Guetta Jeanrenaud, Graham Henke,... | github_jupyter |
# Objectives
- First exploration of the SAE data sources https://www.sae-diffusion.sante.gouv.fr/
- Putting them in context with the census data
- Quick mapping with departmental boundaries
In the absence of fine-grained domain knowledge about the meaning of the different types of intensive-care beds, we... | github_jupyter |
# Overview:
1. *Classify a randomized clinical trial (RCT) abstract into subclasses to make it easier to read and understand.*
2. *Basically, convert a medical abstract into chunks of sentences from particular classes like "Background", "Methods", "Results" and "Conclusion".*
3. *It's a many-to-one text classification problem. Wh... | github_jupyter |
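To make the many-to-one framing concrete: each sentence of an abstract maps to exactly one section label. A toy sketch with hypothetical keyword rules standing in for the trained model (the rules, labels, and sentences here are illustrative only):

```python
# Hypothetical keyword rules in place of a trained classifier
RULES = {
    "Background": ["known", "background"],
    "Methods": ["randomized", "assigned", "measured"],
    "Results": ["significant", "increased"],
    "Conclusion": ["conclude", "suggests"],
}

def label_sentence(sentence):
    """Many-to-one: one sentence in, one section label out."""
    lower = sentence.lower()
    for label, keywords in RULES.items():
        if any(k in lower for k in keywords):
            return label
    return "Other"

abstract = [
    "Little is known about this treatment.",
    "Patients were randomized into two groups.",
    "We conclude the treatment is effective.",
]
labels = [label_sentence(s) for s in abstract]
```

A real model replaces the rule lookup with a learned sentence encoder, but the input/output shape is the same: a sequence of sentences in, one label per sentence out.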
# Create a Pipeline
You can use the Azure ML SDK to run script-based experiments that perform the individual steps required to ingest data, train a model, and register the model. However, in an enterprise environment, the sequence of steps required to build a machine learning solution is typically encapsulated in a pipeline that can run on one or more compute targets, either on demand or on a schedule as part of an automated build process.
In this notebook, you'll bring all of these elements together to create a simple pipeline that preprocesses data, then trains and registers a model.
## Connect to your workspace
First, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering a verification code, and signing in to Azure.
```
impor... | github_jupyter |
# Introducing the Keras Sequential API on Vertex AI Platform
**Learning Objectives**
1. Build a DNN model using the Keras Sequential API
1. Learn how to use feature columns in a Keras model
1. Learn how to train a model with Keras
1. Learn how to save/load, and deploy a Keras model on GCP
1. Learn how to dep... | github_jupyter |
```
import numpy as np
import os
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
from matplotlib import style
import pandas as pd
style.use('fivethirtyeight')
from tensorflow.keras import layers
directory_url='https://storage.googleapis.com/download.tensorflow.org/data/illiad/... | github_jupyter |
```
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
import pathlib
sns.set_style('white')
sns.set_context('talk')
import numpy as np
import pandas as pd
import addict
from tqdm import tqdm
import scipy.integrate
import csv
import datetime
font_size=35
sns.set_style('white')
sns.set_conte... | github_jupyter |
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
bikes_Q1 = pd.read_csv('/home/jupyter-l.fedoseeva-12/prodvin_tems_python/bikes_q1_sample.csv')
```
To start, let's take only the Q1 data; it is already saved in the bikes_Q1 variable. Before doing .resample(), you need to ... | github_jupyter |
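A minimal sketch of the usual preparation step before `.resample()`: the timestamp column has to be parsed as datetime and set as the index. Synthetic stand-in data is used here; the real file and column names may differ:

```python
import pandas as pd

# Synthetic stand-in for the Q1 trip data
bikes_Q1 = pd.DataFrame({
    "started_at": ["2021-01-05 10:00", "2021-01-20 11:30",
                   "2021-02-14 09:15", "2021-03-01 18:45"],
    "ride_id": [1, 2, 3, 4],
})

# Parse timestamps and move them into the index before resampling
bikes_Q1["started_at"] = pd.to_datetime(bikes_Q1["started_at"])
monthly = bikes_Q1.set_index("started_at").resample("MS")["ride_id"].count()
```

`.resample()` raises a `TypeError` on a plain `RangeIndex`, which is why the datetime index is a prerequisite.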
# REINFORCE in TensorFlow (3 pts)
Just like we did before for Q-learning, this time we'll design a TensorFlow network to learn `CartPole-v0` via policy gradient (REINFORCE).
Most of the code in this notebook is taken from approximate Q-learning, so you'll find it more or less familiar and even simpler.
```
import sy... | github_jupyter |
# Chapter 16: Function Basics
## Polymorphism of Python objects
```
def f(a,b,c,d): return a*2+b*3+c*4+d*6
f('as','we','learn','functions')
f(12,36,43,52)
f(*['in','simple','terms','device'])
import pandas as pd
d = {'Statement': ["Calls","def,return","global","nonlocal","yield", "lambda"],"Examples": ["myfunc('spam','eggs', meat=ham)","def adder(a, b=1... | github_jupyter |
# Challenge 02: Minimum Hamming Distance using a Quantum Algorithm
The Hamming distance between two binary strings (with the same number of bits) is defined as the number of positions where the bits differ from each other. For example, the Hamming distance between these $6$-bit strings <span style="color:red">$0$</span>$... | github_jupyter |
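For reference, the classical (non-quantum) computation of the Hamming distance defined above is a one-liner; a minimal sketch:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    assert len(a) == len(b), "strings must have the same number of bits"
    return sum(bit_a != bit_b for bit_a, bit_b in zip(a, b))

# Two 6-bit strings that differ in every position
assert hamming_distance("010101", "101010") == 6
# Identical strings have distance 0
assert hamming_distance("010101", "010101") == 0
```

Finding the *minimum* Hamming distance over many candidate strings is what the quantum algorithm in this challenge targets.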
**Chapter 16 – Natural Language Processing with RNNs and Attention**
_This notebook contains all the sample code in chapter 16._
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/16_nlp_with_rnns_and_attention.ipynb"><img src="https://www.... | github_jupyter |
<a href="https://colab.research.google.com/github/pablo-arantes/making-it-rain/blob/main/AlphaFold2%2BMD.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **Hello there!**
This is a Jupyter notebook for running Molecular Dynamics (MD) simulations u... | github_jupyter |
# Supply Network Design 2
## Objective and Prerequisites
Take your supply chain network design skills to the next level in this example. We’ll show you how – given a set of factories, depots, and customers – you can use mathematical optimization to determine which depots to open or close in order to minimize overall ... | github_jupyter |
<table> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="25%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by <a href="http://abu.lu.... | github_jupyter |
# Python: Introduction to generators
**Goal**: Understanding generators and how to use them!
## Introduction
Before we start talking about ``generators``, let's first understand ``iterators``. An ``iterator`` is an ``object`` that enables a programmer to traverse a container, particularly ``lists``. However, an ``iterator`` perf... | github_jupyter |
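A small sketch contrasting the two ideas: manually driving an iterator with `iter()`/`next()`, and a generator function that produces the same kind of values lazily with `yield`:

```python
# Iterators: any object you can pass to next()
nums = [1, 2, 3]
it = iter(nums)           # obtain an iterator from the container
assert next(it) == 1
assert next(it) == 2      # the iterator remembers its position

# Generators: functions that yield values one at a time
def squares(n):
    for i in range(n):
        yield i * i       # execution pauses here between next() calls

gen = squares(4)
assert list(gen) == [0, 1, 4, 9]
assert list(gen) == []    # a generator is exhausted after one pass
```

Every generator is an iterator, but it is built from ordinary function syntax instead of a class with `__iter__` and `__next__`.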
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=''#0
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0],... | github_jupyter |
# Register TSV Data With Athena
This will create an Athena table in the Glue Catalog (Hive Metastore).
Now that we have a database, we’re ready to create a table that’s based on the `Amazon Customer Reviews Dataset`. We define the columns that map to the data, specify how the data is delimited, and provide the locatio... | github_jupyter |
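A sketch of the kind of DDL this step produces. `CREATE EXTERNAL TABLE` with a tab delimiter is standard Athena/Hive syntax, but the database, column, and S3 location names below are placeholders, not the actual dataset's schema:

```python
# Placeholder names: the real dataset defines its own columns and S3 path
database = "my_db"
table = "amazon_reviews_tsv"
s3_location = "s3://my-bucket/amazon-reviews/"

ddl = f"""
CREATE EXTERNAL TABLE IF NOT EXISTS {database}.{table} (
    marketplace STRING,
    customer_id STRING,
    star_rating INT,
    review_body STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
LINES TERMINATED BY '\\n'
LOCATION '{s3_location}'
TBLPROPERTIES ('skip.header.line.count'='1')
"""
```

The `EXTERNAL` keyword means Athena only registers metadata in the Glue Catalog; the TSV files themselves stay in S3 untouched.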