| repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (15 classes) | content (string, 335-154k chars) |
|---|---|---|---|
jamesfolberth/NGC_STEM_camp_AWS | notebooks/data8_notebooks/lab07/lab07.ipynb | bsd-3-clause | # Run this cell, but please don't change it.
# These lines import the Numpy and Datascience modules.
import numpy as np
from datascience import *
# These lines do some fancy plotting magic.
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.s... |
Danghor/Formal-Languages | ANTLR4-Python/Calculator/Calculator.ipynb | gpl-2.0 | !cat -n Program.g4
"""
Explanation: Embedded Actions in <span style="font-variant:small-caps;">Antlr</span> Grammars
The pure grammar is stored in the file Grammar.g4.
End of explanation
"""
!cat -n Calculator.g4
"""
Explanation: The grammar shown above has no semantic actions (with the exception of the skip action... |
phoebe-project/phoebe2-docs | development/tutorials/undo_redo.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: Advanced: Undo/Redo
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
... |
ES-DOC/esdoc-jupyterhub | notebooks/hammoz-consortium/cmip6/models/sandbox-3/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-3', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-3
Topic: Atmos
Sub-Topics: Dynamic... |
sf-wind/caffe2 | caffe2/python/tutorials/Basics.ipynb | apache-2.0 | # We'll also import a few standard python libraries
from matplotlib import pyplot
import numpy as np
import time
# These are the droids you are looking for.
from caffe2.python import core, workspace
from caffe2.proto import caffe2_pb2
# Let's show all plots inline.
%matplotlib inline
"""
Explanation: Caffe2 Basic Co... |
fluxcapacitor/source.ml | jupyterhub.ml/notebooks/train_deploy/zz_under_construction/zz_old/Conferences/ODSC/MasterClass/Mar-01-2017/SparkMLTensorflowAI-HybridCloud-ContinuousDeployment.ipynb | apache-2.0 | import numpy as np
import os
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter
import time
# make things wide
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from IPython.display import clear_output, Image, display, HTML... |
mne-tools/mne-tools.github.io | 0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb | bsd-3-clause | import mne
import numpy as np
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
data_path = sample.data_path()
evokeds = mne.read_evokeds(data_path + '/MEG/sample/sample_audvis-ave.fif')
left_auditory = evokeds[0].apply_baseline()
fwd = mne.read_forward_solution(
dat... |
Kaggle/learntools | notebooks/computer_vision/raw/tut1.ipynb | apache-2.0 | #$HIDE_INPUT$
# Imports
import os, warnings
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
# Reproducibility
def set_seed(seed=31415):
np.random.seed(seed)
tf.random.set_seed(see... |
kaushik94/sympy | examples/notebooks/Sylvester_resultant.ipynb | bsd-3-clause | x = sym.symbols('x')
"""
Explanation: Resultant
If $p$ and $q$ are two polynomials over a commutative ring with identity which can be factored into linear factors,
$$p(x)= a_0 (x - r_1) (x- r_2) \dots (x - r_m) $$
$$q(x)=b_0 (x - s_1)(x - s_2) \dots (x - s_n)$$
then the resultant $R(p,q)$ of $p$ and $q$ is defined as:... |
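The definition in the row above can be checked directly: the resultant equals the determinant of the Sylvester matrix built from the two coefficient lists. A minimal pure-Python sketch (independent of the notebook's sympy session, whose full code is not shown here):

```python
def det(m):
    """Determinant by cofactor expansion along the first row (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def sylvester(p, q):
    """Sylvester matrix of polynomials given as coefficient lists, highest degree first."""
    m, n = len(p) - 1, len(q) - 1
    rows = [[0] * i + p + [0] * (n - 1 - i) for i in range(n)]   # n shifted copies of p
    rows += [[0] * i + q + [0] * (m - 1 - i) for i in range(m)]  # m shifted copies of q
    return rows

def resultant(p, q):
    return det(sylvester(p, q))

# p = (x-1)(x-2) and q = (x-2)(x-3) share the root x = 2, so R(p, q) = 0
print(resultant([1, -3, 2], [1, -5, 6]))  # 0
# x^2 - 1 and x^2 - 4 share no root, so the resultant is nonzero
print(resultant([1, 0, -1], [1, 0, -4]))  # 9
```

By the product formula, a zero resultant is exactly the condition for a common root.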
gVallverdu/cookbook | intro_folium.ipynb | gpl-2.0 | import folium
"""
Explanation: Folium examples
Germain Salvato Vallverdu germain.vallverdu@gmail.com
This notebook shows simple examples of Folium package in order to draw marker on a map.
Colors from flatui colors.
End of explanation
"""
carte = folium.Map(location=[45.5236, -122.6750], zoom_start=12)
marker = foli... |
dipanjanS/text-analytics-with-python | New-Second-Edition/Ch10 - The Promise of Deep Learning/Ch10a - Deep Transfer Learning for NLP - Text Classification with Universal Embeddings.ipynb | apache-2.0 | !pip install tensorflow-hub
"""
Explanation: Sentiment Analysis - Text Classification with Universal Embeddings
Textual data, in spite of being highly unstructured, can be classified into two major types of documents.
- Factual documents which typically depict some form of statements or facts with no specific feelings... |
smharper/openmc | examples/jupyter/mgxs-part-ii.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-dark')
import openmoc
import openmc
import openmc.mgxs as mgxs
import openmc.data
from openmc.openmoc_compatible import get_openmoc_geometry
%matplotlib inline
"""
Explanation: This IPython Notebook illustrates the use of the openmc.mgxs modu... |
bbglab/adventofcode | 2018/ferran/day12/subterranean_sustainability.ipynb | mit | initial = '.##..#.#..##..##..##...#####.#.....#..#..##.###.#.####......#.......#..###.#.#.##.#.#.###...##.###.#'
r = ! cat input.txt | tr '\n' ';'
r = dict(list(map(lambda x: tuple(x.split(' => ')), r[0].split(';')[:-1])))
def evolve(state, rules, time):
s = state
for t in range(1, time + 1):
n = len(... |
ES-DOC/esdoc-jupyterhub | notebooks/cas/cmip6/models/fgoals-f3-h/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'fgoals-f3-h', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CAS
Source ID: FGOALS-F3-H
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Ra... |
GoogleCloudPlatform/asl-ml-immersion | notebooks/launching_into_ml/labs/2_first_model.ipynb | apache-2.0 | PROJECT = !gcloud config get-value project
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
"""
Explanation: First BigQuery ML models for Taxifare Prediction
Learning Objectives
* Choose the correct BigQuery ML model type and specify options
... |
BrownDwarf/ApJdataFrames | notebooks/Devor2008.ipynb | mit | import pandas as pd
from astropy.io import ascii, votable, misc
"""
Explanation: ApJdataFrames
Devor et al. 2008
Title: IDENTIFICATION, CLASSIFICATIONS, AND ABSOLUTE PROPERTIES OF 773 ECLIPSING BINARIES FOUND IN THE TRANS-ATLANTIC EXOPLANET SURVEY
Authors: Jonathan Devor, David Charbonneau, Francis T O'Donovan, Georg... |
4dsolutions/Python5 | S_Train.ipynb | mit | from IPython.display import YouTubeVideo
YouTubeVideo("1VXDejQcAWY")
"""
Explanation: Oregon Curriculum Network <br />
Discovering Math with Python
All Aboard the S Train!
Those of us exploring the geometry of thinking laid out in Synergetics (subtitled explorations in the geometry of thinking) will be familiar with t... |
steven-murray/halomod | devel/einasto_profile.ipynb | mit | %pylab inline
from halomod import HaloModel
from scipy.interpolate import InterpolatedUnivariateSpline as spline
hm = HaloModel(profile_model="Einasto")
"""
Explanation: Einasto Profile
In this notebook we visually test the Einasto profile (and do some timing etc.)
End of explanation
"""
_ = hm.profile.rho(hm.r,hm.... |
dennys-bd/Coursera-Machine-Learning-Specialization | Course 2 - ML, Regression/week-2-multiple-regression-assignment-2-blank.ipynb | mit | import graphlab
"""
Explanation: Regression Week 2: Multiple Regression (gradient descent)
In the first notebook we explored multiple regression using graphlab create. Now we will use graphlab along with numpy to solve for the regression weights with gradient descent.
In this notebook we will cover estimating multiple... |
KaiSzuttor/espresso | doc/tutorials/08-visualization/08-visualization.ipynb | gpl-3.0 | from matplotlib import pyplot
import espressomd
import numpy
espressomd.assert_features("LENNARD_JONES")
# system parameters (10000 particles)
box_l = 10.7437
density = 0.7
# interaction parameters (repulsive Lennard-Jones)
lj_eps = 1.0
lj_sig = 1.0
lj_cut = 1.12246
lj_cap = 20
# integration parameters
system = esp... |
tuanavu/coursera-university-of-washington | machine_learning/4_clustering_and_retrieval/assigment/week2/1_nearest-neighbors-lsh-implementation_graphlab.ipynb | mit | import numpy as np
import graphlab
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import norm
from sklearn.metrics.pairwise import pairwise_distances
import time
from copy import copy
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Locality Sensitive Hashing
Locality Sensitive Hashing... |
landlab/landlab | notebooks/tutorials/flow_direction_and_accumulation/PriorityFlood_realDEMs.ipynb | mit | import sys, time, os
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
from landlab.components import FlowAccumulator, PriorityFloodFlowRouter, ChannelProfiler
from landlab.io.netcdf import read_netcdf
from landlab.utils import get_watershed_mask
from landlab import imshowhs_grid, imshow_gri... |
keoghdata/bradlib | install.ipynb | gpl-3.0 | module_name = 'bradlib'
"""
Explanation: Script to copy files to Anaconda paths so that scripts can be imported and used
End of explanation
"""
from distutils.sysconfig import get_python_lib #; print(get_python_lib())
path_main = get_python_lib()
path_main
path_main.split('Anaconda3')
"""
Explanation: find main path for in... |
wmvanvliet/neuroscience_tutorials | eeg-erp/adept.ipynb | bsd-2-clause | from mne.io import read_raw_bdf
raw = read_raw_bdf('data/magic-trick-raw.bdf', preload=True)
print(raw)
"""
Explanation: <img src="images/charmeleon.png" alt="Adept" width="200">
Data preprocessing
Welcome to the next level!
In the previous level, you have learned some programming basics, culminating in your successful... |
frankbearzou/Data-analysis | Star Wars survey/Star Wars survey.ipynb | mit | star_wars = pd.read_csv('star_wars.csv', encoding="ISO-8859-1")
star_wars.head()
star_wars.columns
"""
Explanation: Data Exploration
End of explanation
"""
star_wars = star_wars.dropna(subset=['RespondentID'])
"""
Explanation: Data Cleaning
Remove invalid rows, i.e. those whose first column, RespondentID, is NaN.
End of explanatio... |
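The cleaning step above keeps only rows with a valid RespondentID; a minimal sketch (on toy data, not the survey itself) of what `dropna(subset=...)` does:

```python
import numpy as np
import pandas as pd

# toy frame standing in for the survey: one row has a missing RespondentID
df = pd.DataFrame({
    'RespondentID': [1.0, np.nan, 3.0],
    'answer': ['Yes', 'No', 'Yes'],
})
cleaned = df.dropna(subset=['RespondentID'])
print(len(df), len(cleaned))  # 3 2
```

Only rows whose RespondentID is NaN are dropped; missing values in other columns are left alone.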
slundberg/shap | notebooks/overviews/Be careful when interpreting predictive models in search of causal insights.ipynb | mit | # This cell defines the functions we use to generate the data in our scenario
import numpy as np
import pandas as pd
import scipy.stats
import sklearn
import xgboost
class FixableDataFrame(pd.DataFrame):
""" Helper class for manipulating generative models.
"""
def __init__(self, *args, fixed={}, **kwargs)... |
anandha2017/udacity | nd101 Deep Learning Nanodegree Foundation/DockerImages/24_embeddings_and_word2vec/notebooks/01-embeddings/Skip-Grams-Solution.ipynb | mit | import time
import numpy as np
import tensorflow as tf
import utils
"""
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language p... |
landlab/landlab | notebooks/tutorials/component_tutorial/component_tutorial.ipynb | mit | from landlab.components import LinearDiffuser
from landlab.plot import imshow_grid
from landlab import RasterModelGrid
import matplotlib as mpl
import matplotlib.cm as cm
from matplotlib.pyplot import figure, show, plot, xlabel, ylabel, title
import numpy as np
"""
Explanation: <a href="http://landlab.github.io"><img ... |
fantasycheng/udacity-deep-learning-project | tutorials/dcgan-svhn/DCGAN.ipynb | mit | %matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
"""
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a De... |
sdpython/ensae_teaching_cs | _doc/notebooks/td2a_ml/td2a_pipeline_tree_selection_correction.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
"""
Explanation: 2A.ml - Pipeline for reducing a random forest - correction
The Lasso model can select variables; a random forest produces a prediction as the average of regression trees. This ... |
okkhoy/pyDataAnalysis | ml-foundation/recommendation/Song recommender.ipynb | mit | import graphlab
"""
Explanation: Building a song recommender
Fire up GraphLab Create
End of explanation
"""
song_data = graphlab.SFrame('song_data.gl/')
"""
Explanation: Load music data
End of explanation
"""
song_data.head()
"""
Explanation: Explore data
Music data shows how many times a user listened to a song... |
rashikaranpuria/Machine-Learning-Specialization | Regression/Assignment_four/week-4-ridge-regression-assignment-1-blank.ipynb | mit | import graphlab
"""
Explanation: Regression Week 4: Ridge Regression (interpretation)
In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization.... |
wzxiong/DAVIS-Machine-Learning | homeworks/HW2.ipynb | mit | import numpy as np
import pandas as pd
# dataset path
data_dir = "."
"""
Explanation: STA 208: Homework 2
This is based on the material in Chapters 3, 4.4 of 'Elements of Statistical Learning' (ESL), in addition to lectures 4-6. Chunzhe Zhang came up with the dataset and the analysis in the second section.
Instructi... |
laserson/phip-stat | notebooks/phip_modeling/phip-kinetic-computations.ipynb | apache-2.0 | df = pd.read_csv('/Users/laserson/lasersonlab/larman/libraries/T7-Pep_InputCountsComplete46M.csv', header=None, index_col=0)
counts = df.values.ravel()
sns.distplot(counts)
"""
Explanation: PhIP-Seq kinetics computations
Reaction summary
IP reaction (1 mL)
* IgG
* MW of IgG = 150 kDa
* 2 µg IgG = 13.3 pmol =... |
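The stated conversion (2 µg of IgG at a molecular weight of about 150 kDa is roughly 13.3 pmol) checks out; a quick sketch of the arithmetic:

```python
# sanity-check the stated conversion from mass to amount of substance
MW_IGG = 150_000   # g/mol (~150 kDa)
mass_g = 2e-6      # 2 µg
moles = mass_g / MW_IGG
print(round(moles * 1e12, 1))  # 13.3 (pmol)
```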
anachlas/w210_vendor_recommendor | vendor recommender - EDA.ipynb | gpl-3.0 | import google.datalab.bigquery as bq
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn import cross_validation as cv
fro... |
bmeaut/python_nlp_2017_fall | course_material/09_Morphology_lab/09_Morphology_lab.ipynb | mit | import os
# Note that the actual output of `ls` is not printed!
print('Exit code:', os.system('ls -a'))
files = os.listdir('.')
print('Should have printed:\n\n{}'.format('\n'.join(files if len(files) <= 3 else files[:3] + ['...'])))
"""
Explanation: 9. Morphology — Lab exercises
XFST / foma
XFST provides two formalis... |
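As the comment in the row notes, `os.system()` only returns the exit status and discards stdout; a sketch of capturing the output instead with the standard `subprocess` module:

```python
import subprocess

# subprocess.run captures stdout, unlike os.system, which only returns the exit status
result = subprocess.run(['ls', '-a'], capture_output=True, text=True)
print('Exit code:', result.returncode)
print(result.stdout)
```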
tensorflow/docs-l10n | site/pt-br/tutorials/images/transfer_learning_with_hub.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under... |
martinjrobins/hobo | examples/sampling/slice-rank-shrinking-mcmc.ipynb | bsd-3-clause | import matplotlib.pyplot as plt
import numpy as np
import pints
import pints.toy
# Define target
log_pdf = pints.toy.MultimodalGaussianLogPDF(modes = [[0, 0], [10, 10], [10, 0]])
# Plot target
levels = np.linspace(-3,12,20)
num_points = 100
x = np.linspace(-5, 15, num_points)
y = np.linspace(-5, 15, num_points)
X, Y ... |
ES-DOC/esdoc-jupyterhub | notebooks/inm/cmip6/models/sandbox-2/ocean.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-2', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: INM
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, ... |
cjcardinale/climlab | docs/source/courseware/PolarAmplification.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import climlab
from climlab import constants as const
"""
Explanation: Polar amplification in simple models
End of explanation
"""
ebm = climlab.GreyRadiationModel(num_lev=1, num_lat=90)
insolation = climlab.radiation.AnnualMeanInsolation(domains=... |
yhat/ggplot | docs/how-to/Customizing Colors.ipynb | bsd-2-clause | ggplot(aes(x='carat', y='price', color='clarity'), data=diamonds) +\
geom_point() +\
scale_color_brewer(type='qual')
ggplot(aes(x='carat', y='price', color='clarity'), data=diamonds) + \
geom_point() + \
scale_color_brewer(type='seq')
ggplot(aes(x='carat', y='price', color='clarity'), data=diamonds) +... |
ffmmjj/intro_to_data_science_workshop | 04-Exemplo - Análise de sobreviventes do Titanic.ipynb | apache-2.0 | import pandas as pd
raw_data = pd.read_csv('datasets/titanic.csv')
raw_data.head()
raw_data.info()
"""
Explanation: Analysis of Titanic survivors
The Titanic survivors dataset is widely used as a teaching example to illustrate concepts of data cleaning and exploration.
Let's start by importing ... |
google/starthinker | colabs/dbm.ipynb | apache-2.0 | !pip install git+https://github.com/google/starthinker
"""
Explanation: DV360 Report
Create a DV360 report.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https:... |
turbomanage/training-data-analyst | courses/machine_learning/deepdive2/time_series_prediction/solutions/3_modeling_bqml.ipynb | apache-2.0 | PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
%env
PROJECT = PROJECT
REGION = REGION
%%bash
sudo python3 -m pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo python3 -m pip install google-cloud-bigquery==1.6.1
"""
E... |
bjornaa/roppy | examples/flux_feie_shetland.ipynb | mit | # Imports
The class depends on `numpy` and is part of `roppy`. To read the data, `netCDF4` is needed.
The graphics package `matplotlib` is not required for `FluxSection` but is used for visualisation in this notebook.
# Imports
import numpy as np
import matplotlib.pyplot as plt
from netCDF4 import Dataset
imp... |
RyanAlberts/Springbaord-Capstone-Project | Statistics_Exercises/sliderule_dsi_inferential_statistics_exercise_3.ipynb | mit | %matplotlib inline
import pandas as pd
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import bokeh.plotting as bkp
from mpl_toolkits.axes_grid1 import make_axes_locatable
# read in readmissions data provided
hospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv')
"""
... |
asharel/ml | LAB2/Recursos/Red_Dimension.ipynb | gpl-3.0 | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn.feature_selection as FS
data = pd.read_csv("./wine_dataset.csv", delimiter=";")
data.head()
"""
Explanation: Dimensionality Reduction Practical
Filter Methods
Wrapper Methods
Extraction Methods:
LDA
PCA
End of explanation
... |
giacomov/astromodels | examples/Priors_for_Bayesian_analysis.ipynb | bsd-3-clause | from astromodels import *
# Create a point source named "pts1"
pts1 = PointSource('pts1',ra=125.23, dec=17.98, spectral_shape=powerlaw())
# Create the model
my_model = Model(pts1)
"""
Explanation: Priors for Bayesian analysis
Astromodels supports the definition of priors for all parameters in your model. You can use... |
Serulab/Py4Bio | notebooks/Chapter 12 - Python and Databases.ipynb | mit | !curl https://raw.githubusercontent.com/Serulab/Py4Bio/master/samples/samples.tar.bz2 -o samples.tar.bz2
!mkdir samples
!tar xvfj samples.tar.bz2 -C samples
!wget https://raw.githubusercontent.com/Serulab/Py4Bio/master/code/ch12/PythonU.sql
!apt-get -y install mysql-server
!/etc/init.d/mysql start
!mysql -e 'create da... |
sdss/marvin | docs/sphinx/jupyter/saving_and_restoring.ipynb | bsd-3-clause | # let's grab the H-alpha emission line flux map
from marvin.tools.maps import Maps
mapfile = '/Users/Brian/Work/Manga/analysis/v2_0_1/2.0.2/SPX-GAU-MILESHC/8485/1901/manga-8485-1901-MAPS-SPX-GAU-MILESHC.fits.gz'
maps = Maps(filename=mapfile)
haflux = maps.getMap('emline_gflux', channel='ha_6564')
print(haflux)
"""
Exp... |
rcurrie/tumornormal | treehouse.ipynb | apache-2.0 | import os
import json
import numpy as np
import pandas as pd
import tensorflow as tf
import keras
import matplotlib.pyplot as pyplot
# fix random seed for reproducibility
np.random.seed(42)
# See https://github.com/h5py/h5py/issues/712
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"
"""
Explanation: Classify Treehous... |
conversationai/unintended-ml-bias-analysis | archive/unintended_ml_bias/Bias_fuzzed_test_set.ipynb | apache-2.0 | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import pandas as pd
import urllib
import matplotlib.pyplot as plt
%matplotlib inline
COMMENTS = '../data/toxicity_annotated_comments.tsv'
ANNOTATIONS = '../data/toxicity_annotations.tsv'
comments = pd.read_cs... |
carthach/essentia | src/examples/tutorial/example_truepeakdetector.ipynb | agpl-3.0 | import essentia.standard as es
import numpy as np
import matplotlib
matplotlib.use('nbagg')
import matplotlib.pyplot as plt
import ipywidgets as wg
from IPython.display import Audio
from essentia import array as esarr
plt.rcParams["figure.figsize"] =(9, 5)
"""
Explanation: TruePeakDetector use example
This algorithm... |
mne-tools/mne-tools.github.io | stable/_downloads/7ba58cd4e9bc2622d60527d21fc13577/decoding_spatio_temporal_source.ipynb | bsd-3-clause | # Author: Denis A. Engemann <denis.engemann@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Jean-Remi King <jeanremi.king@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import... |
bkimo/discrete-math-with-python | lab2-bubble-sort.ipynb | mit | def bubbleSort(alist):
for i in range(0, len(alist)-1):
for j in range(0, len(alist)-1-i):
if alist[j] > alist[j+1]:
alist[j], alist[j+1] = alist[j+1], alist[j]
alist = [54,26,93,17,77,31,44,55,20]
bubbleSort(alist)
print(alist)
"""
Explanation: Algorithm Complexity: Array and... |
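The implementation above always makes n-1 passes; a common refinement (a sketch, not part of the original lab) exits early once a pass makes no swaps, which gives linear-time behaviour on already-sorted input:

```python
def bubble_sort_early_exit(alist):
    """Bubble sort that stops as soon as a full pass makes no swaps."""
    for i in range(len(alist) - 1):
        swapped = False
        for j in range(len(alist) - 1 - i):
            if alist[j] > alist[j + 1]:
                alist[j], alist[j + 1] = alist[j + 1], alist[j]
                swapped = True
        if not swapped:  # no swaps in this pass: the list is sorted
            break

data = [54, 26, 93, 17, 77, 31, 44, 55, 20]
bubble_sort_early_exit(data)
print(data)  # [17, 20, 26, 31, 44, 54, 55, 77, 93]
```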
anugrah-saxena/pycroscopy | jupyter_notebooks/BE_Processing.ipynb | mit | !pip install -U numpy matplotlib Ipython ipywidgets pycroscopy
# Ensure python 3 compatibility
from __future__ import division, print_function, absolute_import
# Import necessary libraries:
# General utilities:
import sys
import os
# Computation:
import numpy as np
import h5py
# Visualization:
import matplotlib.pyp... |
keras-team/keras-io | examples/nlp/ipynb/active_learning_review_classification.ipynb | apache-2.0 | import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import re
import string
tfds.disable_progress_bar()
"""
Explanation: Review Classification using Active Learning
Author: Darshan Deshpande<br>
Date created: 2021/... |
Kaggle/learntools | notebooks/sql_advanced/raw/tut1.ipynb | apache-2.0 | #$HIDE_INPUT$
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "hacker_news" dataset
dataset_ref = client.dataset("hacker_news", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Constru... |
Who8MyLunch/ipynb_widget_canvas | notebooks/02 - Canvas Widget Example.ipynb | mit | ll ../widget_canvas/
fname = '../widget_canvas/widget_canvas.js'
f = os.path.abspath(fname)
js = IPython.display.Javascript(filename=f) # data=None, url=None, filename=None, lib=None
print('inject!')
IPython.display.display(js)
from __future__ import print_function, unicode_literals, division, absolute_import
im... |
zerothi/ts-tbt-sisl-tutorial | TB_03/run.ipynb | gpl-3.0 | graphene = sisl.geom.graphene().tile(2, axis=0)
H = sisl.Hamiltonian(graphene)
H.construct([[0.1, 1.43], [0., -2.7]])
"""
Explanation: This example will setup the required electronic structures for usage in TBtrans.
We will continue with the graphene nearest neighbour tight-binding model and perform simple transport c... |
google/jax | docs/notebooks/Writing_custom_interpreters_in_Jax.ipynb | apache-2.0 | import numpy as np
import jax
import jax.numpy as jnp
from jax import jit, grad, vmap
from jax import random
"""
Explanation: Writing custom Jaxpr interpreters in JAX
JAX offers several composable function transformations (jit, grad, vmap,
etc.) that enable writing concise, accelerated code.
Here we show how to add ... |
AstroHackWeek/AstroHackWeek2016 | day2-machine-learning/machine-learning-on-SDSS.ipynb | mit | ## get the data locally ... I put this on a gist
!curl -k -O https://gist.githubusercontent.com/anonymous/53781fe86383c435ff10/raw/4cc80a638e8e083775caec3005ae2feaf92b8d5b/qso10000.csv
!curl -k -O https://gist.githubusercontent.com/anonymous/2984cf01a2485afd2c3e/raw/964d4f52c989428628d42eb6faad5e212e79b665/star1000.csv... |
mbeyeler/opencv-machine-learning | notebooks/09.01-Understanding-perceptrons.ipynb | mit | import numpy as np
class Perceptron(object):
def __init__(self, lr=0.01, n_iter=10):
"""Constructor
Parameters
----------
lr : float
Learning rate.
n_iter : int
Number of iterations after which the algorithm should
terminate.
... |
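The constructor shown in the row is truncated; a minimal self-contained sketch of such a class (an assumption about the book's implementation, following the classic Rosenblatt update rule) and a run on a tiny separable dataset:

```python
import numpy as np

class Perceptron:
    """Minimal Rosenblatt perceptron; a sketch, not the book's exact class."""
    def __init__(self, lr=0.01, n_iter=10):
        self.lr = lr
        self.n_iter = n_iter

    def fit(self, X, y):
        # w_[0] is the bias; the rest are feature weights
        self.w_ = np.zeros(1 + X.shape[1])
        for _ in range(self.n_iter):
            for xi, target in zip(X, y):
                update = self.lr * (target - self.predict(xi))
                self.w_[1:] += update * xi
                self.w_[0] += update
        return self

    def predict(self, X):
        return np.where(np.dot(X, self.w_[1:]) + self.w_[0] >= 0.0, 1, -1)

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
y = np.array([-1, -1, 1, 1])
clf = Perceptron(lr=0.1, n_iter=20).fit(X, y)
print(clf.predict(X))  # [-1 -1  1  1]
```

Because the toy data is linearly separable, the perceptron convergence theorem guarantees the update rule settles on a separating hyperplane.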
sebastianmarkow/san-francisco-crime-kaggle | prediction.ipynb | mit | import datetime
import gc
import zipfile
import matplotlib as mpl
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn as sk
from pandas.tseries.holiday import USFederalHolidayCalendar
from sklearn.cross_validation import KFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier... |
andrzejkrawczyk/python-course | part_1/08.Funkcje.ipynb | apache-2.0 | def foo():
pass
def suma(a, b):
return a + b
print(foo())
print(foo)
print(suma(5, 10))
def suma(a, b, c=5, d=10):
return a + b + c + d
print(suma(1, 2))
def suma(a, b, c=5, d=10):
return a + b + c + d
print(suma(1, 2, 3))
def suma(a, b, c=5, d=10):
return a + b + c + d
print(suma(1, 2, d=5,... |
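The row above demonstrates default and keyword arguments; a minimal sketch of how the calls resolve:

```python
def suma(a, b, c=5, d=10):
    return a + b + c + d

print(suma(1, 2))        # defaults used:      1 + 2 + 5 + 10 = 18
print(suma(1, 2, 3))     # c overridden:       1 + 2 + 3 + 10 = 16
print(suma(1, 2, d=5))   # d set by keyword:   1 + 2 + 5 + 5  = 13
```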
jmhsi/justin_tinker | data_science/courses/deeplearning1/nbs/lesson1.ipynb | apache-2.0 | %matplotlib inline
"""
Explanation: Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task: 'Do... |
ToqueWillot/M2DAC | FDMS/TME2/TME2_Paul_Willot.ipynb | gpl-2.0 | %matplotlib inline
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import copy
from sklearn.datasets import fetch_mldata
from sklearn import cross_validation
from sklearn import base
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet... |
aoool/behavioral-cloning | data.ipynb | mit | import os
import zipfile
if not (os.path.isdir("data_raw") and os.path.exists("data_raw.csv")):
zip_ref = zipfile.ZipFile("data_raw.zip", 'r')
zip_ref.extractall(".")
zip_ref.close()
"""
Explanation: Behavioral Cloning
Here the driving data collected using the driving simulator will be explored and augmen... |
GoogleCloudPlatform/training-data-analyst | quests/data-science-on-gcp-edition1_tf2/07_sparkml_and_bqml/logistic_regression.ipynb | apache-2.0 | BUCKET='cs358-bucket' # CHANGE ME
import os
os.environ['BUCKET'] = BUCKET
# Create spark session
from __future__ import print_function
from pyspark.sql import SparkSession
from pyspark import SparkContext
sc = SparkContext('local', 'logistic')
spark = SparkSession \
.builder \
.appName("Logistic regression... |
kubeflow/kfp-tekton-backend | components/gcp/dataproc/submit_hadoop_job/sample.ipynb | apache-2.0 | %%capture --no-stderr
KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
"""
Explanation: Name
Data preparation using Hadoop MapReduce on YARN with Cloud Dataproc
Label
Cloud Dataproc, GCP, Cloud Storage, Hadoop, YARN, Apache, MapReduce
Summary
A ... |
GoogleCloudPlatform/tensorflow-without-a-phd | tensorflow-rnn-tutorial/01_Keras_stateful_RNN_playground.ipynb | apache-2.0 | # using Tensorflow 2
%tensorflow_version 2.x
import math
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
print("Tensorflow version: " + tf.__version__)
#@title Data formatting and display utilites [RUN ME]
def dumb_minibatch_sequencer(data, batch_size, sequence_size, nb_epochs):
""... |
plipp/informatica-pfr-2017 | nbs/5/1-Marvel-World-SNA-Intro.ipynb | mit | import networkx as nx
import csv
G = nx.Graph(name="Hero Network")
with open('../../data/hero-network.csv', 'r') as data:
reader = csv.reader(data)
for row in reader:
G.add_edge(*row)
nx.info(G)
G.order() # number of nodes
G.size() # number of edges
"""
Explanation: Social Network Analysis
Analysis... |
ljvmiranda921/pyswarms | docs/examples/tutorials/visualization.ipynb | mit | # Import modules
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
# Import PySwarms
import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx
from pyswarms.utils.plotters import (plot_cost_history, plot_contour, plot_surface)
"""
Explanation: Visualization
PySwarm... |
dereneaton/ipyrad | newdocs/API-analysis/cookbook-sharing.ipynb | gpl-3.0 | %load_ext autoreload
%autoreload 2
%matplotlib inline
"""
Explanation: <h2><span style="color:gray">ipyrad-analysis toolkit:</span> sharing</h2>
Calculate and plot pairwise locus sharing and pairwise missingness
End of explanation
"""
# conda install -c conda-forge seaborn
import ipyrad
import ipyrad.analysis as i... |
jorisvandenbossche/2015-EuroScipy-pandas-tutorial | pandas_intro_example-names.ipynb | bsd-2-clause | # !curl -O http://www.ssa.gov/oact/babynames/names.zip
# !mkdir -p data/names
# !mv names.zip data/names/
# !cd data/names/ && unzip names.zip
"""
Explanation: Example: Names in the Wild
This example is drawn from Wes McKinney's excellent book on the Pandas library, O'Reilly's Python for Data Analysis.
We'll be takin... |
statsmodels/statsmodels.github.io | v0.12.1/examples/notebooks/generated/ordinal_regression.ipynb | bsd-3-clause | import numpy as np
import pandas as pd
import scipy.stats as stats
from statsmodels.miscmodels.ordinal_model import OrderedModel
"""
Explanation: Ordinal Regression
End of explanation
"""
url = "https://stats.idre.ucla.edu/stat/data/ologit.dta"
data_student = pd.read_stata(url)
data_student.head(5)
data_student.d... |
letsgoexploring/economicData | cross-country-production/python/cross_country_production_data.ipynb | mit | # Set the current value of the PWT data file
current_pwt_file = 'pwt100.xlsx'
# Import data from local source or download if not present
if os.path.exists('../xslx/pwt100.xlsx'):
info = pd.read_excel('../xslx/'+current_pwt_file,sheet_name='Info',header=None)
legend = pd.read_excel('../xslx/'+current_pwt_file,s... |
scollins83/deep-learning | first-neural-network/Your_first_neural_network.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code... |
datascience-practice/data-quest | python_introduction/intermediate/Modules.ipynb | mit | import math
"""
Explanation: 3: The math module
Instructions
Use the sqrt() function within the math module to assign the square root of 16.0 to a.
Use the ceil() function within the math module to assign the ceiling of 111.3 to b.
Use the floor() function within the math module to assign the floor of 89.9 to c.
End o... |
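The three exercises above can be sketched in a few lines (a minimal sketch; the names `a`, `b`, and `c` follow the stated instructions):

```python
import math

# Square root of 16.0, ceiling of 111.3, floor of 89.9,
# exactly as the exercise instructions describe.
a = math.sqrt(16.0)
b = math.ceil(111.3)
c = math.floor(89.9)
```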
eds-uga/csci1360e-su17 | lectures/L6.ipynb | mit | x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]
"""
Explanation: Lecture 6: Conditionals and Exceptions
CSCI 1360E: Foundations for Informatics and Analytics
Overview and Objectives
In this lecture, we'll go over how to make "decisions" over the course of your code depending on the values certain variables take. We'll al... |
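The kind of value-dependent decision the lecture describes can be sketched with the list above (an illustration only, not part of the original notebook):

```python
x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]

def classify(value):
    # Branch on the magnitude of the value with if/elif/else.
    if value > 50:
        return "large"
    elif value > 20:
        return "medium"
    else:
        return "small"

labels = [classify(v) for v in x]
```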
aje/POT | notebooks/plot_gromov_barycenter.ipynb | mit | # Author: Erwan Vautier <erwan.vautier@gmail.com>
# Nicolas Courty <ncourty@irisa.fr>
#
# License: MIT License
import numpy as np
import scipy as sp
import scipy.ndimage as spi
import matplotlib.pylab as pl
from sklearn import manifold
from sklearn.decomposition import PCA
import ot
"""
Explanation: Gromov... |
hannorein/rebound | ipython_examples/Resonances_of_Jupiters_moons.ipynb | gpl-3.0 | import rebound
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
sim = rebound.Simulation()
sim.units = ('AU', 'days', 'Msun')
# We can add Jupiter and four of its moons by name, since REBOUND is linked to the HORIZONS database.
labels = ["Jupiter", "Io", "Europa","Ganymede","Callisto"]
sim.add(l... |
SBRG/ssbio | docs/notebooks/Complex - Testing.ipynb | mit | import ecolime
import ecolime.flat_files
"""
Explanation: README
Notebook to test the Complex class as well as parsing code from cobrame/ecolime
From COBRAme/ECOLIme...
Flat files / ProcessData
End of explanation
"""
# First load the list of complexes which tells you complexes + subunit stoichiometry
# Converts the ... |
yuvrajsingh86/DeepLearning_Udacity | sentiment-network/Sentiment_Classification_Projects.ipynb | mit | def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()... |
gcgruen/homework | data-databases-homework/.ipynb_checkpoints/Homework_2_Gruen-checkpoint.ipynb | mit | import pg8000
conn = pg8000.connect(user="postgres", password="12345", database="homework2")
"""
Explanation: Homework 2: Working with SQL (Data and Databases 2016)
This homework assignment takes the form of an IPython Notebook. There are a number of exercises below, with notebook cells that need to be completed in or... |
sot/aca_stats | fit_acq_prob_model-2018-04-poly-spline-warmpix.ipynb | bsd-3-clause | from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table
from astropy.time import Time
import tables
from scipy import stats
import tables3_api
from scipy.interpolate import CubicSpline
%matplotlib inline
"""
Explanation: Fit the poly-spline-warmpix acquisiti... |
gagneurlab/concise | nbs/legacy/01-simulated-data.ipynb | mit | ## Concise extensions of keras:
import concise
import concise.layers as cl
import concise.initializers as ci
import concise.regularizers as cr
import concise.metrics as cm
from concise.preprocessing import encodeDNA, encodeSplines
from concise.data import attract, encode
## layers:
cl.ConvDNA
cl.ConvDNAQuantitySpline... |
hongguangguo/shogun | doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb | gpl-3.0 | %pylab inline
%matplotlib inline
# import all Shogun classes
from modshogun import *
"""
Explanation: Kernel hypothesis testing in Shogun
By Heiko Strathmann - <a href="mailto:heiko.strathmann@gmail.com">heiko.strathmann@gmail.com</a> - <a href="github.com/karlnapf">github.com/karlnapf</a> - <a href="herrstrathmann.de... |
jdorvi/MonteCarlos_SLC | .ipynb_checkpoints/Distribution_Fit_MC-checkpoint.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import gridspec
import scipy
import scipy.stats as stats
import pandas as pd
import numpy as np
from ipywidgets import interact, interact_manual
import os
"""
Explanation: Fitting Distributions to a Dataset
End of explanation
"""
continuous_distribut... |
jrg365/gpytorch | examples/04_Variational_and_Approximate_GPs/GP_Regression_with_Uncertain_Inputs.ipynb | mit | import math
import torch
import tqdm
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
"""
Explanation: GP Regression with Uncertain Inputs
Introduction
In this notebook, we're going to demonstrate one way of dealing with uncertainty in our training data. Let's... |
gammapy/PyGamma15 | tutorials/naima/naima_mcmc.ipynb | bsd-3-clause | import naima
import numpy as np
from astropy.io import ascii
import astropy.units as u
%matplotlib inline
import matplotlib.pyplot as plt
hess_spectrum = ascii.read('RXJ1713_HESS_2007.dat', format='ipac')
fig = naima.plot_data(hess_spectrum)
"""
Explanation: SED fitting with naima
In this notebook we will carry out a... |
feffenberger/StatisticalMethods | lessons/3.PDFCharacterization.ipynb | gpl-2.0 | from straightline_utils import *
%matplotlib inline
from matplotlib import rcParams
rcParams['savefig.dpi'] = 100
(x,y,sigmay) = get_data_no_outliers()
plot_yerr(x, y, sigmay)
"""
Explanation: PHYS366: Statistical Methods in Astrophysics
Lesson 3: Inference in Practice: PDF Characterization
Goals for this session:
Li... |
kyleabeauchamp/mdtraj | examples/rmsd-drift.ipynb | lgpl-2.1 | import mdtraj.testing
crystal_fn = mdtraj.testing.get_fn('native.pdb')
trajectory_fn = mdtraj.testing.get_fn('frame0.xtc')
crystal = md.load(crystal_fn)
trajectory = md.load(trajectory_fn, top=crystal) # load the xtc. the crystal structure defines the topology
trajectory
"""
Explanation: Find two files that are dist... |
tabakg/potapov_interpolation | commensurate_roots.ipynb | gpl-3.0 | import numpy as np
import numpy.linalg as la
import Potapov_Code.Time_Delay_Network as networks
import matplotlib.pyplot as plt
%pylab inline
import sympy as sp
from sympy import init_printing
init_printing()
from fractions import gcd
## To identify commensurate delays, we must use a decimal and NOT a binary repr... |
sueiras/training | tensorflow/02-text/20newsgroups_keras_model.ipynb | gpl-3.0 | from __future__ import print_function
import os
import sys
import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.layers import Dense, Input, GlobalMaxPooling1D
from keras.layers import Conv1D, MaxPooli... |
tensorflow/docs-l10n | site/ja/probability/examples/Factorial_Mixture.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, sof... |
tzk/EDeN_examples | classification.ipynb | gpl-2.0 | from eden.util import load_target
y = load_target( 'http://www.bioinf.uni-freiburg.de/~costa/bursi.target' )
"""
Explanation: Classification
Consider a binary classification problem. The data and target files are available online. The domain of the problem is chemoinformatics. Data is about toxicity of 4K small molecu... |
ESO-python/ESOPythonTutorials | notebooks/nov_2_2015.ipynb | bsd-3-clause | %%bash
find . -name "*.c" | xargs sed -i bck "/#include<malloc\.h>/d"
%%bash
cat ./isis/abs/allocate.cbck
"""
Explanation: Finding a string in files and removing it
Removing #include<malloc.h> from all C files in a directory tree
End of explanation
"""
from astropy import units as u, constants as const
class SnickersBa... |
ThunderShiviah/code_guild | interactive-coding-challenges/recursion_dynamic/fibonacci/fibonacci_challenge.ipynb | mit | def fib_recursive(n):
# TODO: Implement me
pass
num_items = 10
cache = [None] * (num_items + 1)
def fib_dynamic(n):
# TODO: Implement me
pass
def fib_iterative(n):
# TODO: Implement me
pass
"""
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on ... |
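One way to fill in the three stubs above (a sketch only — the notebook deliberately leaves them as TODOs for the reader):

```python
def fib_recursive(n):
    # Plain recursion: exponential time, fine for small n.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_dynamic(n, cache=None):
    # Memoized recursion using a cache list, as the notebook's
    # pre-built `cache` variable hints at.
    if cache is None:
        cache = [None] * (n + 1)
    if n < 2:
        return n
    if cache[n] is None:
        cache[n] = fib_dynamic(n - 1, cache) + fib_dynamic(n - 2, cache)
    return cache[n]

def fib_iterative(n):
    # Bottom-up iteration in constant space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```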
bgroveben/python3_machine_learning_projects | learn_kaggle/machine_learning/cross_validation.ipynb | mit | import pandas as pd
data = pd.read_csv('input/melbourne_data.csv')
cols_to_use = ['Rooms', 'Distance', 'Landsize', 'BuildingArea', 'YearBuilt']
X = data[cols_to_use]
y = data.Price
"""
Explanation: Cross-Validation
Machine learning is an iterative process.
You will face choices about predictive variables to use, what... |
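The train/test splitting that cross-validation rests on can be sketched without any ML library (a stdlib-only illustration; the original notebook presumably continues with scikit-learn):

```python
def kfold_indices(n, k):
    """Yield (train, test) index lists for k roughly equal folds."""
    # Distribute the remainder over the first n % k folds.
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx = list(range(n))
    start = 0
    for size in sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size
```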