repo_name | path | license | content
|---|---|---|---|
gojomo/gensim | docs/notebooks/doc2vec-wikipedia.ipynb | lgpl-2.1 | from gensim.corpora.wikicorpus import WikiCorpus
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from pprint import pprint
import multiprocessing
"""
Explanation: Doc2Vec to wikipedia articles
We replicate Document Embedding with Paragraph Vectors (http://arxiv.org/abs/1507.07998).
In this p... |
dwhswenson/contact_map | examples/contact_trajectory.ipynb | lgpl-2.1 | from __future__ import print_function
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from contact_map import ContactTrajectory, RollingContactFrequency
import mdtraj as md
traj = md.load("data/gsk3b_example.h5")
print(traj) # to see number of frames; size of system
"""
Explanation: Contact Tra... |
jskDr/jamespy_py3 | wireless/algorithm_nb/qsort_by_numba.ipynb | mit | from numba import jit, int32
import numpy as np
"""
Explanation: Quick Sort Algorithm Code Implemented by Numba in Python
Implementing the quicksort algorithm using Python's numba package.
Numba speeds up Python code by translating it to C on the fly.
We compare the speed of the Numba implementation against ordinary Python.
When quicksorting an integer array of length 1000 with the algorithm below, the Numba version is 266 times faster.
End of explanati... |
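The notes above describe a Numba-accelerated quicksort; a minimal runnable sketch of the idea (not the notebook's exact code — it falls back to plain Python if numba is not installed):

```python
import numpy as np

try:
    from numba import njit          # optional JIT speed-up
except ImportError:                 # fall back to plain Python
    njit = lambda f: f

@njit
def quicksort(a):
    # In-place iterative quicksort (Lomuto partition) using an
    # explicit stack of (lo, hi) sub-ranges, which is JIT-friendly.
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        stack.append((lo, i - 1))
        stack.append((i + 1, hi))
    return a

arr = np.random.RandomState(0).randint(0, 1000, size=1000)
quicksort(arr)
```

Timing the same function with and without the decorator (e.g. via `%timeit`) reproduces the kind of comparison the notebook makes.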
tensorflow/docs-l10n | site/en-snapshot/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb | apache-2.0 | # Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by app... |
geektoni/shogun | doc/ipython-notebooks/multiclass/KNN.ipynb | bsd-3-clause | import numpy as np
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import loadmat, savemat
from numpy import random
from os import path
import matplotlib.pyplot as plt
%matplotlib inline
import shogun as sg
mat = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.... |
xtr33me/deep-learning | tensorboard/Anna_KaRNNa_Summaries.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is base... |
lesonkorenac/dataquest-projects | 1. Python Introduction/Exploring Gun Deaths in the US/Exploring Gun Deaths in the US.ipynb | mit | census = list(csv.reader(open("census.csv", 'r')))
for index, column in enumerate(census[0]):
print("{} - {}: {}".format(index, column, census[1][index]))
def get_race_count(census, column_indexes):
return sum([int(census[1][index]) for index in column_indexes])
race_percentage = {
"Black": get_race_coun... |
jornvdent/WUR-Geo-Scripting-Course | Lesson 14/Lesson 14 - Assignment.ipynb | gpl-3.0 | from twython import TwythonStreamer
import string, json, pprint
import urllib
from datetime import datetime
from datetime import date
from time import *
import string, os, sys, subprocess, time
import psycopg2
import re
from osgeo import ogr
"""
Explanation: Import modules
End of explanation
"""
# get access to the ... |
tensorflow/docs-l10n | site/en-snapshot/probability/examples/Probabilistic_Layers_VAE.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, sof... |
sorig/shogun | doc/ipython-notebooks/multiclass/naive_bayes.ipynb | bsd-3-clause | %matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
import numpy as np
import pylab as pl
np.random.seed(0)
n_train = 300
models = [{'mu': [8, 0], 'sigma':
np.array([[np.cos(-np.pi/4),-np.sin(-np.pi/4)],
[np.sin(-np.pi/4), np.cos(-np.pi/4)]]).dot... |
yhilpisch/dx | 03_dx_valuation_single_risk.ipynb | agpl-3.0 | from dx import *
from pylab import plt
plt.style.use('seaborn')
"""
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Single-Risk Derivatives Valuation
This part introduces into the modeling and valuation of derivatives instruments (contingent claims... |
tien-le/kaggle-titanic | Titanic - Machine Learning from Disaster - Applying Machine Learning Techniques.ipynb | gpl-3.0 | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import random
"""
Explanation: Titanic: Machine Learning from Disaster - Applying Machine Learning Techniques
Homepage: https://github.com/tien-le/kaggle-titanic
unbelievable ... to achieve 1.000. How did th... |
henchc/Data-on-the-Mind-2017-scraping-apis | 01-APIs/solutions/01-API_solutions.ipynb | mit | import requests # to make the GET request
import json # to parse the JSON response to a Python dictionary
import time # to pause after each API call
import csv # to write our data to a CSV
import pandas # to see our CSV
"""
Explanation: Accessing Databases via Web APIs
In this lesson we'll learn what an API (Ap... |
ibm-cds-labs/spark.samples | notebook/Twitter Sentiment with Watson TA and PI.ipynb | apache-2.0 | !pip install --user python-twitter
!pip install --user watson-developer-cloud
"""
Explanation: Twitter Sentiment analysis with Watson Tone Analyzer and Watson Personality Insights
<img style="max-width: 800px; padding: 25px 0px;" src="https://ibm-watson-data-lab.github.io/spark.samples/Twitter%20Sentiment%20with%20Wa... |
harper/dlnd_thirdproject | seq2seq/sequence_to_sequence_implementation.ipynb | mit | import helper
source_path = 'data/letters_source.txt'
target_path = 'data/letters_target.txt'
source_sentences = helper.load_data(source_path)
target_sentences = helper.load_data(target_path)
"""
Explanation: Character Sequence to Sequence
In this notebook, we'll build a model that takes in a sequence of letters, an... |
liumengjun/cn-deep-learning | language-translation/dlnd_language_translation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you will get to know the field of neural machine translation. Using a dataset made up of English and French sentences, you will train a... |
tpin3694/tpin3694.github.io | regex/match_urls.ipynb | mit | # Load regex package
import re
"""
Explanation: Title: Match URLs
Slug: match_urls
Summary: Match URLs
Date: 2016-05-01 12:00
Category: Regex
Tags: Basics
Authors: Chris Albon
Source: StackOverflow
Preliminaries
End of explanation
"""
# Create a variable containing a text string
text = 'My blog is http://www.chris... |
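The cell above builds a text string to match URLs against; a minimal sketch of such a match, using a deliberately simple (non-exhaustive) pattern and an assumed example string:

```python
import re

# Example text containing two URLs (assumed for illustration).
text = 'My blog is http://www.example.com and my site is https://example.org/about'

# Simple pattern: an http/https scheme followed by non-whitespace.
# Real-world URL matching needs a much more careful pattern.
urls = re.findall(r'https?://[^\s]+', text)
# → ['http://www.example.com', 'https://example.org/about']
```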
jseabold/statsmodels | examples/notebooks/statespace_seasonal.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
plt.rc("figure", figsize=(16,8))
plt.rc("font", size=14)
"""
Explanation: Seasonality in time series data
Consider the problem of modeling time series data with multiple seasonal components with dif... |
abatula/MachineLearningIntro | InstructorNotebooks/Iris_DataSet_Instructor.ipynb | gpl-2.0 | # Print figures in the notebook
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets # Import datasets from scikit-learn
# Import patch for drawing rectangles in the legend
from matplotlib.patches import Rectangle
# Create co... |
encima/Comp_Thinking_In_Python | Session_5/5_IO, Formatting Strings and Functions.ipynb | mit | name = "bob"
print(name)
name * 5
print(name)
print(name * 10) #This output is kinda useless, right?
name = 11
print("Name is equal to " + str(name))
print("Something about: ")
print(name)
name *= 5
print("Name has been multiplied by 5 and is now equal to " + str(name)) #slightly more informative
"""
Explanation: Input, O... |
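Mixing strings and numbers in `print` calls, as the cell above does, requires an explicit conversion; a minimal sketch of two common approaches:

```python
name = 11
name *= 5

# str() makes concatenation with a string work;
# an f-string (Python 3.6+) does the conversion for you.
message_concat = "Name is now " + str(name)
message_fstring = f"Name is now {name}"
print(message_concat)
print(message_fstring)
```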
staeiou/assorted-notebooks | infinite_scream/2017-07-27/infinite_scream.ipynb | mit | !pip install tweepy pandas seaborn
"""
Explanation: Graphing the number of favorites to @infinite_scream over time
By R. Stuart Geiger (@staeiou), Released CC-BY 4.0 & MIT License
Setup
Installing dependencies
End of explanation
"""
import random
import twitter_login # a file containing my API keys
import tweepy
... |
albahnsen/ML_RiskManagement | notebooks/09_StatisticalInference.ipynb | mit | import pandas as pd
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Credit.csv', index_col=0)
data.head(10)
"""
Explanation: 09 - Statistical Inference
by Alejandro Correa Bahnsen & Iván Torroledo
version 1.2, Feb 2018
Part of the class Machine Learning for Risk Management
This notebook is licensed under a Crea... |
kkkddder/dmc | notebooks/week-4/01-tensorflow ANN for regression.ipynb | apache-2.0 | %matplotlib inline
import math
import random
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_boston
import numpy as np
import tensorflow as tf
sns.set(style="ticks", color_codes=True)
"""
Explanation: Lab 4 - Tensorflow ANN for regression
In this lab we wi... |
CopernicusMarineInsitu/INSTACTraining | PythonNotebooks/IndexFilePlots/plot_positions_latest_global.ipynb | mit | datadir = "~/CMEMS_INSTAC/INSITU_GLO_NRT_OBSERVATIONS_013_030/latest/20151201/"
%matplotlib inline
import matplotlib.pyplot as plt
import glob
import os
import netCDF4
import numpy as np
"""
Explanation: In this exercise we will plot all the data locations available for a given day in the latest directory of the glob... |
takanory/python-machine-learning | pydata-tokyo-tutorial.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
df = pd.read_csv("train.csv")
df.head()
"""
Explanation: PyData.Tokyo Tutorial
https://pydata.tokyo/ipynb/tutorial-1/dh.html
df[df.Embarked=='C'] # filter rows where Embarked=='C'
df[df.Embarked=='C']['Survived'] # take the Survived column of the df filtered by Embarked=='C'
df[(d... |
Python4AstronomersAndParticlePhysicists/PythonWorkshop-ICE | notebooks/10_04_Astronomy_Astroplan.ipynb | mit | %matplotlib inline
import numpy as np
import math
import matplotlib.pyplot as plt
import seaborn
from astropy.io import fits
from astropy import units as u
from astropy.coordinates import SkyCoord
plt.rcParams['figure.figsize'] = (12, 8)
plt.rcParams['font.size'] = 14
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['x... |
tacaswell/altair | notebooks/ChartExamples.ipynb | bsd-3-clause | import random
from IPython.display import HTML, display
import numpy as np
import pandas as pd
import altair.api as alt
from altair import html
"""
Explanation: Altair Basic Charting
This notebook seeks to walk you through many of the basic chart types you're going to be building with Altair, such as line charts, ba... |
KIPAC/StatisticalMethods | tutorials/microlensing.ipynb | gpl-2.0 | exec(open('tbc.py').read()) # define TBC and TBC_above
import numpy as np
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
import scipy.stats as st
%matplotlib inline
import incredible as cr
from corner import corner
TBC()
# dat = np.loadtxt('../ignore/phot.dat') # edit path if needed
t = dat... |
MarkPinches/Metrum-Institute | MI250 Lab1 simple regression example.ipynb | gpl-3.0 | from pymc3 import Model, Normal, Uniform, NUTS, sample, find_MAP, traceplot, summary, df_summary, trace_to_dataframe
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Introduction
This notebook is a port to pymc3 of the example given in the Metrum Institutes ... |
esa-as/2016-ml-contest | LiamLearn/K-fold_CV_F1_score__MATT.ipynb | apache-2.0 | import pandas as pd
training_data = pd.read_csv('../training_data.csv')
"""
Explanation: 'Grouped' k-fold CV
A quick demo by Matt
In cross-validating, we'd like to drop out one well at a time. LeaveOneGroupOut is good for this:
End of explanation
"""
X = training_data.drop(['Formation', 'Well Name', 'Depth','Facies'... |
amehrjou/amehrjou.github.io | markdown_generator/publications.ipynb | mit | !cat publications.tsv
"""
Explanation: Publications markdown generator for academicpages
Takes a TSV of publications with metadata and converts them for use with academicpages.github.io. This is an interactive Jupyter notebook (see more info here). The core python code is also in publications.py. Run either from the m... |
QuantCrimAtLeeds/PredictCode | notebooks/kernel_estimation.ipynb | artistic-2.0 | data = np.random.normal(loc=2.0, scale=1.5, size=20)
kernel = scipy.stats.gaussian_kde(data)
fig, ax = plt.subplots(figsize=(10,5))
x = np.linspace(-1, 5, 100)
var = 2 * 1.5 ** 2
y = np.exp(-(x-2)**2/var) / np.sqrt(var * np.pi)
ax.plot(x, y, color="red", linewidth=1)
y = kernel(x)
ax.plot(x, y, color="blue", linew... |
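The snippet above compares a `scipy.stats.gaussian_kde` estimate against the true normal density, but its imports live in an earlier cell; a self-contained sketch of the same idea, with a basic sanity check that the estimate integrates to one:

```python
import numpy as np
import scipy.stats

np.random.seed(0)
data = np.random.normal(loc=2.0, scale=1.5, size=2000)
kernel = scipy.stats.gaussian_kde(data)

x = np.linspace(-4.0, 8.0, 200)
y = kernel(x)              # estimated density on a grid
area = np.trapz(y, x)      # a density should integrate to ~1
```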
RoebideBruijn/datascience-intensive-course | exercises/naive_bayes/Mini_Project_Naive_Bayes.ipynb | mit | %matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from six.moves import range
import seaborn as sns
# Setup Pandas
pd.set_option('display.width', 500)
pd.set_option('display.max_columns'... |
DistrictDataLabs/yellowbrick | examples/uricod/ShoeSizeToHeight.ipynb | apache-2.0 | from sklearn.model_selection import train_test_split, KFold
from sklearn.linear_model import LinearRegression, Ridge, SGDRegressor, ElasticNet
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from yellowbrick.features ... |
quantopian/research_public | notebooks/lectures/Maximum_Likelihood_Estimation/questions/notebook.ipynb | apache-2.0 | # Useful Libraries
import pandas as pd
import math
import matplotlib.pyplot as plt
import numpy as np
import scipy
import scipy.stats as stats
"""
Explanation: Exercises: Maximum Likelihood Estimation
By Christopher van Hoecke, Max Margenot, and Delaney Mackenzie
Lecture Link :
https://www.quantopian.com/lectures/maxi... |
tensorflow/fairness-indicators | g3doc/tutorials/Facessd_Fairness_Indicators_Example_Colab.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under... |
mne-tools/mne-tools.github.io | 0.12/_downloads/plot_spm_faces_dataset.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import matplotlib.pyplot as plt
import mne
from mne.datasets import spm_face
from mne.preprocessing import ICA, create_eog_epochs
from mne impor... |
quoniammm/mine-tensorflow-examples | fastAI/deeplearning2/DCGAN.ipynb | mit | %matplotlib inline
import importlib
import utils2; importlib.reload(utils2)
from utils2 import *
from tqdm import tqdm
"""
Explanation: Generative Adversarial Networks in Keras
End of explanation
"""
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train.shape
n = len(X_t... |
bgroveben/python3_machine_learning_projects | learn_kaggle/machine_learning/pipelines.ipynb | mit | import pandas as pd
from sklearn.model_selection import train_test_split
data = pd.read_csv('input/melbourne_data.csv')
cols_to_use = ['Rooms', 'Distance', 'Landsize', 'BuildingArea', 'YearBuilt']
X = data[cols_to_use]
y = data.Price
train_X, test_X, train_y, test_y = train_test_split(X, y)
"""
Explanation: Pipelines... |
tridesclous/tridesclous | example/example_olfactory_bulb_dataset.ipynb | mit | %matplotlib inline
import time
import numpy as np
import matplotlib.pyplot as plt
import tridesclous as tdc
from tridesclous import DataIO, CatalogueConstructor, Peeler
"""
Explanation: tridesclous example with olfactory bulb dataset
End of explanation
"""
#download dataset
localdir, filenames, params = tdc.downlo... |
tuanavu/python-cookbook-3rd | notebooks/ch01/05_implementing_a_priority_queue.ipynb | mit | import heapq
class PriorityQueue:
def __init__(self):
self._queue = []
self._index = 0
def push(self, item, priority):
heapq.heappush(self._queue, (-priority, self._index, item))
self._index += 1
def pop(self):
return heapq.heappop(self._queue)[-1]
"""
Explanation... |
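Restating the class above as a runnable demo — the negated priority makes higher priorities pop first, and the monotonically increasing index breaks ties in insertion order:

```python
import heapq

class PriorityQueue:
    def __init__(self):
        self._queue = []
        self._index = 0
    def push(self, item, priority):
        # (-priority, index, item): heapq pops the smallest tuple,
        # so the largest priority (smallest -priority) comes out first.
        heapq.heappush(self._queue, (-priority, self._index, item))
        self._index += 1
    def pop(self):
        return heapq.heappop(self._queue)[-1]

q = PriorityQueue()
q.push('low', 1)
q.push('high', 5)
q.push('mid', 3)
order = [q.pop(), q.pop(), q.pop()]
# → ['high', 'mid', 'low']
```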
matmodlab/matmodlab2 | notebooks/Hyperfit.ipynb | bsd-3-clause | %load_ext autoreload
%autoreload 2
from numpy import *
import numpy as np
from bokeh.plotting import *
from pandas import read_excel
from matmodlab2.fitting.hyperopt import *
output_notebook()
"""
Explanation: Hyperelastic Model Fitting
End of explanation
"""
# uniaxial data
udf = read_excel('Treloar_hyperelastic_da... |
FedericoMuciaccia/SistemiComplessi | src/heatmap_and_range.ipynb | mit | roma = pandas.read_csv("../data/Roma_towers.csv")
coordinate = roma[['lat', 'lon']].values
heatmap = gmaps.heatmap(coordinate)
gmaps.display(heatmap)
# TODO: write that behind these two simple lines lies an entire afternoon of cursing
colosseo = (41.890183, 12.492369)
import gmplot
from gmplot import G... |
dnc1994/MachineLearning-UW | ml-classification/blank/module-5-decision-tree-assignment-2-blank.ipynb | mit | import graphlab
"""
Explanation: Implementing binary decision trees
The goal of this notebook is to implement your own binary decision tree classifier. You will:
Use SFrames to do some feature engineering.
Transform categorical variables into binary variables.
Write a function to compute the number of misclassified e... |
lucasmaystre/choix | notebooks/intro-pairwise.ipynb | mit | import choix
import networkx as nx
import numpy as np
%matplotlib inline
np.set_printoptions(precision=3, suppress=True)
"""
Explanation: Introduction using pairwise-comparison data
This notebook provides a gentle introduction to the choix library.
We consider the case of pairwise-comparison outcomes between items fr... |
tatjanus/cianparser | cian_parser2.0.ipynb | bsd-2-clause | import requests
import re
from bs4 import BeautifulSoup
import pandas as pd
import time
import numpy as np
def html_stripper(text):
return re.sub('<[^<]+?>', '', str(text))
"""
Explanation: Looking back at my previous notebook, I felt a strong urge to redo and restructure everything.
The previous version was essentially... |
adityaka/misc_scripts | python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/04_03/Final/.ipynb_checkpoints/Indexing-checkpoint.ipynb | bsd-3-clause | import pandas as pd
import numpy as np
produce_dict = {'veggies': ['potatoes', 'onions', 'peppers', 'carrots'],'fruits': ['apples', 'bananas', 'pineapple', 'berries']}
produce_df = pd.DataFrame(produce_dict)
produce_df
"""
Explanation: Indexing and Selection
| Operation | Syntax | Result ... |
AlexDaciuk/Algoritmos | Random_Forest.ipynb | gpl-3.0 | import base64
token = base64.b64decode("Njk4ZGVjMWE5Y2YyNDQ5ZmNhY2FkOWU4NDdjMDk5NWU1NTZhMDk5Yw====").decode("utf-8")
! rm -rf tp-datos-2c2020 datos
! git clone https://{token}@github.com/AlexDaciuk/tp-datos-2c2020.git
! mv tp-datos-2c2020 datos
from datos.preproc import preprocessing
from sklearn.preprocessing impor... |
dkirkby/bossdata | examples/nb/StackingWithSpeclite.ipynb | mit | %pylab inline
import speclite
print(speclite.version.version)
import bossdata
print(bossdata.__version__)
finder = bossdata.path.Finder()
mirror = bossdata.remote.Manager()
"""
Explanation: Examples of Stacking BOSS Spectra using Speclite
Examples of using the speclite package to perform basic operations on spectra... |
GuillaumeDec/machine-learning | tutorials/deep-lstm-time-series.ipynb | gpl-3.0 | from __future__ import print_function
import os
import mxnet as mx
from mxnet import nd, autograd
import numpy as np
from exceptions import ValueError
import warnings
from collections import defaultdict
# we use cpus here
ctx = mx.cpu(0)
warnings.filterwarnings('ignore', category=DeprecationWarning, module='.*/IPytho... |
karlstroetmann/Artificial-Intelligence | Python/2 Constraint Solver/N-Queens-Problem-CSP.ipynb | gpl-2.0 | def create_csp(n):
S = range(1, n+1)
Variables = { f'V{i}' for i in S }
Values = set(S)
DifferentCols = { f'V{i} != V{j}' for i in S
for j in S
if i < j
}
Differ... |
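In the CSP above, variable `V_i` holds the column of the queen in row `i`, and the truncated cell adds pairwise inequality constraints. A brute-force sketch of the same model (not the notebook's solver) that checks the column and diagonal constraints directly:

```python
from itertools import permutations

def count_nqueens(n):
    # A permutation of columns already satisfies the "different
    # columns" constraints, so only the diagonals remain to check:
    # two queens in rows i < j share a diagonal iff
    # |col_i - col_j| == j - i.
    count = 0
    for cols in permutations(range(1, n + 1)):
        if all(abs(cols[i] - cols[j]) != j - i
               for i in range(n) for j in range(i + 1, n)):
            count += 1
    return count
```

The known solution counts (2 for n=4, 10 for n=5) make this easy to sanity-check.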
ES-DOC/esdoc-jupyterhub | notebooks/ncc/cmip6/models/noresm2-mm/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mm', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-MM
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balan... |
AlphaGit/deep-learning | language-translation/dlnd_language_translation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going... |
jerkos/cobrapy | documentation_builder/milp.ipynb | lgpl-2.1 | cone_selling_price = 7.
cone_production_cost = 3.
popsicle_selling_price = 2.
popsicle_production_cost = 1.
starting_budget = 100.
"""
Explanation: Mixed-Integer Linear Programming
Ice Cream
This example was originally contributed by Joshua Lerman.
An ice cream stand sells cones and popsicles. It wants to maximize its... |
dmytroKarataiev/MachineLearning | creating_customer_segments/customer_segments.ipynb | mit | # Import libraries necessary for this project
import numpy as np
import pandas as pd
import renders as rs
from IPython.display import display # Allows the use of display() for DataFrames
# Show matplotlib plots inline (nicely formatted in the notebook)
%matplotlib inline
# Load the wholesale customers dataset
try:
... |
jbocharov-mids/W207-Machine-Learning | reference/firstname_lastname_p1.ipynb | apache-2.0 | # This tells matplotlib not to try opening a new window for each plot.
%matplotlib inline
# Import a bunch of libraries.
import time
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
from sklearn.pipeline import Pipeline
from sklearn.datasets import fetch_mldata
from skle... |
mtasende/Machine-Learning-Nanodegree-Capstone | notebooks/dev/.ipynb_checkpoints/n16_hallucinating_with_predictor-checkpoint.ipynb | mit | # Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10... |
mne-tools/mne-tools.github.io | 0.22/_downloads/7bbeb6a728b7d16c6e61cd487ba9e517/plot_morph_volume_stc.ipynb | bsd-3-clause | # Author: Tommy Clausner <tommy.clausner@gmail.com>
#
# License: BSD (3-clause)
import os
import nibabel as nib
import mne
from mne.datasets import sample, fetch_fsaverage
from mne.minimum_norm import apply_inverse, read_inverse_operator
from nilearn.plotting import plot_glass_brain
print(__doc__)
"""
Explanation: M... |
lionell/laboratories | math_modelling/lab3/lab3.ipynb | mit | def fmap(fs, x):
return np.array([f(*x) for f in fs])
def runge_kutta4_system(fs, x, y0):
h = x[1] - x[0]
y = np.ndarray((len(x), len(y0)))
y[0] = y0
for i in range(1, len(x)):
k1 = h * fmap(fs, [x[i - 1], *y[i - 1]])
k2 = h * fmap(fs, [x[i - 1] + h/2, *(y[i - 1] + k1/2)])
k... |
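The truncated cell above implements classical fourth-order Runge-Kutta for a system of ODEs; a complete sketch of the same scheme (a reconstruction, not the notebook's exact code), verified against the exact solution of y' = -y:

```python
import numpy as np

def rk4_system(fs, x, y0):
    # fs: list of callables f(x, *y), one per equation
    # x:  uniform grid of evaluation points
    # y0: initial values, one per equation
    h = x[1] - x[0]
    y = np.zeros((len(x), len(y0)))
    y[0] = y0

    def F(xi, yi):
        return np.array([f(xi, *yi) for f in fs])

    for i in range(1, len(x)):
        k1 = h * F(x[i - 1], y[i - 1])
        k2 = h * F(x[i - 1] + h / 2, y[i - 1] + k1 / 2)
        k3 = h * F(x[i - 1] + h / 2, y[i - 1] + k2 / 2)
        k4 = h * F(x[i - 1] + h, y[i - 1] + k3)
        y[i] = y[i - 1] + (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

x = np.linspace(0.0, 2.0, 21)
y = rk4_system([lambda x, y: -y], x, [1.0])  # exact: y(x) = exp(-x)
```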
sdss/marvin | docs/sphinx/jupyter/dap_spaxel_queries.ipynb | bsd-3-clause | from marvin import config
from marvin.tools.query import Query
config.mode='remote'
"""
Explanation: DAP Zonal Queries (or Spaxel Queries)
Marvin allows you to perform queries on individual spaxels within and across the MaNGA dataset.
End of explanation
"""
config.setRelease('MPL-5')
f = 'emline_gflux_ha_6564 > 25'
... |
quantumlib/Cirq | docs/tutorials/google/identifying_hardware_changes.ipynb | apache-2.0 | try:
import cirq
except ImportError:
!pip install --quiet cirq --pre
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import cirq
import cirq_google as cg
"""
Explanation: Identifying Hardware Changes
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href... |
obulpathi/datascience | scikit/Chapter 9/Summary.ipynb | apache-2.0 | from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import cross_val_score
digits = load_digits()
X, y = digits.data / 16., digits.target
cross_val_score(LogisticRegression(), X, y, cv=5)
from sklearn.grid_search import GridSearchCV
from sklearn.... |
ES-DOC/esdoc-jupyterhub | notebooks/csiro-bom/cmip6/models/sandbox-1/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-1', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CSIRO-BOM
Source ID: SANDBOX-1
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, En... |
mespe/SolRad | exploration/ozone.ipynb | mit | ozone_daily['site'][ozone_daily['site'].isin([2778, 2783])]
"""
Explanation: Although these two sites are listed in the Location file, they are not found in the 'ozone'
dataset. Check the "MSA name" column in Location.xlsx; I think they are not used
for monitoring air-quality parameters.
End of explanation
"""
... |
mjirik/pyseg_base | examples/pretrain_model.ipynb | bsd-3-clause | from imcut import pycut
import numpy as np
import scipy.ndimage
import matplotlib.pyplot as plt
from datetime import datetime
def make_data(sz=32, offset=0, sigma=80):
seeds = np.zeros([sz, sz, sz], dtype=np.int8)
seeds[offset + 12, offset + 9 : offset + 14, offset + 10] = 1
seeds[offset + 20, offset + 18 ... |
jbwhit/jupyter-best-practices | notebooks/03-Git-and-Autoreload.ipynb | mit | df = pd.read_csv("../data/coal_prod_cleaned.csv")
df.head()
df.shape
df.columns
qgrid_widget = qgrid.show_grid(
df[["Year", "Mine_State", "Labor_Hours", "Production_short_tons"]],
show_toolbar=True,
)
qgrid_widget
df2 = df.groupby('Mine_State').sum()
df3 = df.groupby('Mine_State').sum()
df2.loc['Wyoming', ... |
karlstroetmann/Formal-Languages | Ply/Symbolic-Calculator.ipynb | gpl-2.0 | import ply.lex as lex
"""
Explanation: A Simple Symbolic Calculator
This file shows how a simple symbolic calculator can be implemented using Ply. The grammar for the language implemented by this parser is as follows:
$$
\begin{array}{lcl}
\texttt{stmnt} & \rightarrow & \;\texttt{IDENTIFIER} \;\texttt{':='}\; \te... |
google/starthinker | colabs/cm_user_editor.ipynb | apache-2.0 | !pip install git+https://github.com/google/starthinker
"""
Explanation: CM360 Bulk User Editor
A tool for rapidly bulk editing Campaign Manager profiles, roles, and sub accounts.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in comp... |
eusebioaguilera/scalablemachinelearning | Lab04/ML_lab4_ctr_student.ipynb | gpl-3.0 | labVersion = 'cs190_week4_v_1_3'
"""
Explanation: Click-Through Rate Prediction Lab
This lab covers the steps for creating a click-through rate (CTR) prediction pipeline. You will work with the Criteo Labs dataset that was used for a recent Kaggle competition.
This lab will cover:
Part 1: Featurize categorical da... |
PLOS/allofplos | allofplos/allofplos_basics.ipynb | mit | import datetime
from allofplos.plos_regex import (validate_doi, show_invalid_dois, find_valid_dois)
from allofplos.samples.corpus_analysis import (get_random_list_of_dois, get_all_local_dois,
get_all_plos_dois)
from allofplos.corpus.plos_corpus import (get_uncorrected_proo... |
wiki-ai/editquality | ipython/reverted_detection_demo.ipynb | mit | # Magical ipython notebook stuff puts the result of this command into a variable
revids_f = !wget http://quarry.wmflabs.org/run/65415/output/0/tsv?download=true -qO-
revids = [int(line) for line in revids_f[1:]]
len(revids)
"""
Explanation: Basic damage detection in Wikipedia
This notebook demonstrates the basic con... |
zzsza/bigquery-tutorial | tutorials/02-Utils/02. Connect Datalab.ipynb | mit | import google.datalab.bigquery as bq
# Create the query
query_string = '''
#standardSQL
SELECT corpus AS title, COUNT(*) AS unique_words
FROM `publicdata.samples.shakespeare`
GROUP BY title
ORDER BY unique_words DESC
LIMIT 10
'''... |
herruzojm/udacity-deep-learning | autoencoder/Convolutional_Autoencoder_Solution.ipynb | mit | %matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: C... |
tcstewar/testing_notebooks | spike trains - poisson and regular.ipynb | gpl-2.0 | class PoissonSpikingApproximate(object):
def __init__(self, size, seed, dt=0.001):
self.rng = np.random.RandomState(seed=seed)
self.dt = dt
self.value = 1.0 / dt
self.size = size
self.output = np.zeros(size)
def __call__(self, t, x):
self.output[:] = 0
p =... |
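The class above is cut off mid-implementation; a simplified sketch of the same idea — in each timestep, each unit fires with probability rate·dt, which approximates a Poisson process when rate·dt is small (the names and details here are assumptions, not the notebook's code):

```python
import numpy as np

class PoissonSpikes:
    """Approximate Poisson spiking: each unit fires in a timestep
    with probability rate * dt (valid when rate * dt << 1)."""
    def __init__(self, size, seed, dt=0.001):
        self.rng = np.random.RandomState(seed=seed)
        self.dt = dt
        self.size = size

    def step(self, rate):
        # Spikes are scaled by 1/dt so their time-average equals the rate.
        fired = self.rng.rand(self.size) < rate * self.dt
        return fired / self.dt

gen = PoissonSpikes(size=1000, seed=1)
# 1000 units * 100 steps * (100 Hz * 0.001 s) ≈ 10000 spikes expected
total = sum(int((gen.step(rate=100.0) > 0).sum()) for _ in range(100))
```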
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera | Improving Deep Neural Networks/Initialization.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['f... |
WNoxchi/Kaukasos | quantum/openfermion-forest-demo-codealong.ipynb | mit | from openfermion.ops import QubitOperator
from forestopenfermion import pyquilpauli_to_qubitop, qubitop_to_pyquilpauli
"""
Explanation: OpenFermion – Forest demo
Wayne H Nixalo – 2018/6/26
A codealong of Forest-OpenFermion_demo.ipynb
Generating and simulating circuits with OpenFermion Forest
The QubitOperator datast... |
griffinfoster/fundamentals_of_interferometry | 2_Mathematical_Groundwork/2_6_cross_correlation_and_auto_correlation.ipynb | gpl-2.0 | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
"""
Explanation: Outline
Glossary
2. Mathematical Groundwork
Previous: 2.5 Convolution
Next: 2.7 Fourier Theorems
Import standard modules:
End of explanation
"""
... |
ffyu/Build_Model_from_Scratch | 6_Principal_Component_Analysis.ipynb | mit | import numpy as np
class PCA():
def __init__(self, num_components):
self.num_components = num_components
self.U = None
self.S = None
def fit(self, X):
# perform pca
m = X.shape[0]
X_mean = np.mean(X, axis=0)
X -= X_mean
cov = X.T.dot(X) * 1.0 ... |
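The `fit` method above is truncated at the covariance computation; a compact sketch of the same PCA-by-eigendecomposition approach (a reconstruction under assumptions, not the original code):

```python
import numpy as np

def pca_fit(X, num_components):
    # Center the data, eigendecompose the covariance matrix,
    # and keep the components with the largest eigenvalues.
    X = X - X.mean(axis=0)
    cov = X.T @ X / X.shape[0]
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(vals)[::-1][:num_components]
    return vals[order], vecs[:, order]          # variances, components

rng = np.random.RandomState(0)
X = rng.randn(500, 3) @ np.diag([3.0, 1.0, 0.1])  # anisotropic cloud
var, comps = pca_fit(X, 2)
```

Projecting with `X @ comps` then gives the reduced representation.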
tensorflow/docs-l10n | site/zh-cn/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb | apache-2.0 | # Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by app... |
Aniruddha-Tapas/Applied-Machine-Learning | Miscellaneous/Gesture-Phase-Detection.ipynb | mit | %matplotlib inline
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn import cross_validation, metrics
from sklearn import preprocessing
import matplotlib
import matplotlib.pyplot as plt
# read .csv from provided dataset
csv_filename1="a1_raw.csv"
csv_filename2="a... |
deepchem/deepchem | examples/tutorials/Introduction_to_Gaussian_Processes.ipynb | mit | %pip install --pre deepchem
"""
Explanation: Introduction to Gaussian Processes
In the world of cheminformatics and machine learning, models are often trees (random forest, XGBoost, etc.) or artifical neural networks (deep neural networks, graph convolutional networks, etc.). These models are known as "Frequentist" mo... |
huizhuzhao/jupyter_notebook | RNNLM.ipynb | mit | import csv
import itertools
import operator
import numpy as np
import nltk
import sys
from datetime import datetime
from utils import *
import matplotlib.pyplot as plt
%matplotlib inline
# Download NLTK model data (you need to do this once)
nltk.download("book")
"""
Explanation: Recurrent Neural Networks Tutorial, P... |
emredjan/emredjan.github.io | code/plot_lm.ipynb | mit | %matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
from statsmodels.graphics.gofplots import ProbPlot
plt.style.use('seaborn') # pretty matplotlib plots
plt.rc('font', size=14)
plt.rc('figure', titlesize=18)
plt.rc('a... |
GoogleCloudPlatform/cloudml-samples | notebooks/keras/cascade.ipynb | apache-2.0 | !gsutil cp gs://cloud-samples-data/air/fruits360/fruits360-combined.zip .
!ls
!unzip -qn fruits360-combined.zip
"""
Explanation: Cascade (HD-CNN Model Derivative)
Objective
This notebook demonstrates building a hierarchical image classifier based on an HD-CNN derivative which uses cascading classifiers to predict the class ... |
rickiepark/tfk-notebooks | tensorflow_for_beginners/3. Linear Regression.ipynb | mit | import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Import matplotlib to draw plots. %matplotlib inline embeds figures inside the Jupyter notebook instead of opening a new window.
End of explanation
"""
x_raw = ...
x = ...
"""
Explanation: Import TensorFlow under the name tf.
Create a session object using tf.Session():
sess = tf.Session()
Create some arbitrary sample data ... |
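The notebook goes on to fit a line to sample data with TensorFlow 1.x sessions. As a point of comparison, ordinary least squares for a single-feature line y = w·x + b has a closed form that fits in a few lines of plain Python — an illustrative aside, not the notebook's TensorFlow code:

```python
def fit_line(xs, ys):
    # Closed-form least squares: w = cov(x, y) / var(x), b = mean_y - w * mean_x.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    w = cov_xy / var_x
    b = mean_y - w * mean_x
    return w, b
```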
nadvamir/deep-learning | dcgan-svhn/DCGAN_Exercises.ipynb | mit | %matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
"""
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a De... |
minxuancao/shogun | doc/ipython-notebooks/neuralnets/autoencoders.ipynb | gpl-3.0 | %pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from scipy.io import loadmat
from modshogun import RealFeatures, MulticlassLabels, Math
# load the dataset
dataset = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))
Xall = dataset['data']
# the usps ... |
nbelaid/nbelaid.github.io | dev/_trush/mooc_python-machine-learning/Assignment+1.ipynb | mit | import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
#print(cancer.DESCR) # Print the data set description
"""
Explanation: You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter no... |
celiasmith/syde556 | SYDE 556 Lecture 1 Introduction.ipynb | gpl-2.0 | from IPython.display import YouTubeVideo
YouTubeVideo('U_Q6Xjz9QHg', width=720, height=400, loop=1, autoplay=0, playlist='U_Q6Xjz9QHg')
"""
Explanation: SYDE 556/750: Simulating Neurobiological Systems
Accompanying Readings: Chapter 1
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('jHx... |
UWPRG/Python | tutorials/MetaD countours.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
unbiasedCVs = np.genfromtxt('NVT_monitor/COLVAR',comments='#');
biasedCVs = np.genfromtxt('MetaD/COLVAR',comments='#');
unbiasedCVsHOT = np.genfromtxt('NVT_monitor/hot/COLVAR',comments='#');
"""
Explanation: Jim's notebook on contour plots, showing projection of 2... |
unmrds/cc-python | Name_Data.ipynb | apache-2.0 | # http://api.census.gov/data/2010/surname
import requests
import json
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: An Introductory Python Workflow: US Census Surname Data
This notebook provides working examples of many of the concepts introduced earlier:
Importing modules or libraries to exten... |
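The Census API returns JSON as an array of arrays whose first row holds the column names. Converting such a response into records can be sketched with the standard library — the sample rows below are made up for illustration, not real surname counts:

```python
import json

def rows_to_records(payload):
    # First row is the header; remaining rows are data rows.
    header, *rows = payload
    return [dict(zip(header, row)) for row in rows]

# Hypothetical response shaped like the surname endpoint's output.
sample = json.loads('[["NAME","COUNT"],["SMITH","1000"],["JONES","900"]]')
records = rows_to_records(sample)
```

From here, `pd.DataFrame(records)` would give the tabular form the notebook works with.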
jrbourbeau/cr-composition | notebooks/legacy/learning-curve.ipynb | mit | import sys
sys.path.append('/home/jbourbeau/cr-composition')
print('Added to PYTHONPATH')
import argparse
from collections import defaultdict
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import seaborn.apionly as sns
from sklearn.metrics import ac... |
nicococo/tilitools | lectures/optimization_solution.ipynb | mit | from functools import partial
from scipy.optimize import check_grad, minimize
import numpy as np
import cvxopt as cvx
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Exercise: Optimization
We will implement various optimization algorithms and examine their performance on a range of tasks.
First-o... |
simkovic/matustools | Statformulas.ipynb | mit | model = """
data {
int<lower=0> N; //nr subjects
real<lower=0> k;
real<lower=0> t;
}generated quantities{
real<lower=0> y;
y=gamma_rng(k,1/t);
}
"""
smGammaGen = pystan.StanModel(model_code=model)
model = """
data {
int<lower=0> N; //nr subjects
real<lower=0> y[N];
}parameters{
real<low... |
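The first Stan model above just draws y ~ Gamma(k, rate = 1/t), i.e. shape k and scale t. The same generator can be mimicked in plain Python with random.gammavariate — a sanity-check sketch, not a replacement for the Stan program:

```python
import random

def gamma_draws(k, t, n, seed=0):
    # random.gammavariate takes (shape, scale); Stan's gamma_rng(k, 1/t)
    # uses rate 1/t, which is equivalent to scale t, so E[y] = k * t.
    rng = random.Random(seed)
    return [rng.gammavariate(k, t) for _ in range(n)]
```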
martinjrobins/hobo | examples/plotting/residuals-variance-diagnostics.ipynb | bsd-3-clause | import pints
import pints.toy as toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt
# Use the toy logistic model
model = toy.LogisticModel(initial_population_size=1500)
real_parameters = [0.000025, 10]
times = np.linspace(0, 1000, 1000)
org_values = model.simulate(real_parameters, times)
# Add ... |
quoniammm/mine-tensorflow-examples | fastAI/deeplearning2/spelling_bee_RNN.ipynb | mit | %matplotlib inline
import importlib
import utils2; importlib.reload(utils2)
from utils2 import *
np.set_printoptions(4)
PATH = 'data/spellbee/'
limit_mem()
from sklearn.model_selection import train_test_split
"""
Explanation: Spelling Bee
This notebook starts our deep dive (no pun intended) into NLP by introducing s... |
UWPRG/Python | tutorials/PEP8_compliance_tips.ipynb | mit | %%bash
ipython profile create blake
mkdir /Users/houghb/.ipython/profile_blake/static/
mkdir /Users/houghb/.ipython/profile_blake/static/custom/
touch /Users/houghb/.ipython/profile_blake/static/custom/custom.css
"""
Explanation: Tips to make it easier to comply with the PEP8 style guide
Read the style guide here. Pl... |
brain-research/guided-evolutionary-strategies | Guided_Evolutionary_Strategies_Demo_TensorFlow2.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the L... |
ydkahin/jupyter-notebooks | notebooks/quora-views-challenge/quora_views_challenge-partiii-EDA_and_feature_engineering.ipynb | mit | import pandas as pd
import json
json_data = open('../views/sample/input00.in') # Edit this to where you have put the input00.in file
data = []
for line in json_data:
data.append(json.loads(line))
data.remove(9000)
data.remove(1000)
df = pd.DataFrame(data)
df['anonymous'] = df['anonymous'].map({False: 0, True:1}... |
pgmpy/pgmpy_notebook | notebooks/7. Parameterizing with Continuous Variables.ipynb | mit | from IPython.display import Image
"""
Explanation: Parameterizing with Continuous Variables
End of explanation
"""
import numpy as np
from scipy.special import beta
# Two-variable Dirichlet distribution with alpha = (1, 2)
def drichlet_pdf(x, y):
    # Density proportional to x^(a1-1) * y^(a2-1), normalized by B(alpha)
    return (np.power(x, 0)*np.power(y, 1))/beta(1, 2)
from pgmpy.facto... |
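For a two-variable Dirichlet with alpha = (1, 2) the support is the simplex x + y = 1, so the density reduces to a function of x alone: p(x) = x^0 (1 - x)^1 / B(1, 2) = 2(1 - x). A self-contained check of that density using only math.gamma — independent of the excerpted code above:

```python
import math

def beta_fn(a, b):
    # Beta function via the gamma function: B(a, b) = G(a) G(b) / G(a + b).
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def dirichlet12_pdf(x):
    # Dirichlet((1, 2)) density on the simplex, written in terms of x (y = 1 - x).
    return (x ** 0 * (1 - x) ** 1) / beta_fn(1, 2)
```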