| repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (15 classes) | content (string, 335-154k chars) |
|---|---|---|---|
ecabreragranado/OpticaFisicaII | Experimento de Young/Biprisma de Fresnel_Ejercicio.ipynb | gpl-3.0 | import warnings
warnings.filterwarnings('ignore')
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('bmh')
import ipywidgets as widgets
from IPython.display import display
import io
import base64
from IPython.display import clear_output
# Fixed data
###################
D = 3
Lambda =... |
artzers/MachineLearning | Deconv/Deconvolution.ipynb | mit | import itertools
ta=[1,2,3]
tb=[4,5,6]
#tc=[(i,j) for i,j in zip(ta,tb)]
#print tc
#import itertools
#for i in itertools.product('ABCD', repeat = 2):
# print i,
for i in itertools.product(range(1,4),range(4,7)):  # Cartesian product
print(i,)
print(' ')
import numpy as np
a=np.arange(10)
print(a)
a[ta]*=2
print(a)
from scipy.sparse im... |
azubiolo/itstep | it_step/ml_from_scratch/8_svm_lab/svm.ipynb | mit | k_classes = 2
X = [[1., 1.5, 0.2], [1., 0.3, 1.2], [1, 1.6, 0.4], [1., 1.3, 0.25], [1., 0.5, 1.12]]
Y = [1, 2, 1, 1, 2]
"""
Explanation: Support Vector Machines
Course recap
This lab consists of implementing the Support Vector Machines (SVM) algorithm.
Given a training set $ D = \left\{ \left(x^{(i)}, y^{(i)}\right), ... |
akallio1/science-days-2017 | tieteen-paivat-2017.ipynb | mit | # Initialize the machine learning environment (program libraries)
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
from time import time
import numpy as np
from sklearn import random_projection, decomposition, manifold
import matplotlib.pyplot as plt
import seaborn as sns
from keras.datasets import mnist
from... |
imatge-upc/activitynet-2016-cvprw | notebooks/01 Checking Downloaded Videos.ipynb | mit | import os
import json
DOWNLOAD_DIR = '/imatge/amontes/work/datasets/ActivityNet/v1.3/videos'
videos = os.listdir(DOWNLOAD_DIR)
videos_ids = []
for video in videos:
videos_ids.append(video.split('.mp4')[0])
"""
Explanation: Checking Downloaded Videos
Because all the videos are hosted on YouTube, not all the videos ... |
iannesbitt/ml_bootcamp | Deep Learning/Tensorflow Basics.ipynb | mit | import tensorflow as tf
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Tensorflow Basics
Remember to reference the video for full explanations; this is just a notebook for code reference.
You can import the library:
End of explanation
"""
hello = tf.constant('Hello... |
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session01/Day4/IntroToMachineLearning.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Introduction to Machine Learning:
Examples of Unsupervised and Supervised Machine-Learning Algorithms
Version 0.1
Broadly speaking, machine-learning methods constitute a diverse collection of data-driven algorithms designed to class... |
rebeccabilbro/viz | animation/lorenz_ipywidgets.ipynb | mit | %matplotlib inline
from ipywidgets import interact, interactive
from IPython.display import clear_output, display, HTML
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import cnames
from matplotlib import animation
""... |
igabr/Metis_Projects_Chicago_2017 | 04-Project-Fletcher/Phases/Phase_4/Phase_4_Notebook.ipynb | mit | gabr_tweets = unpickle_object("gabr_ibrahim_tweets_LDA_Complete.pkl")
gabr_tweets[0]['gabr_ibrahim'].keys() #just to refresh our mind of the keys in the sub-dictionary
"""
Explanation: So far, we have two databases:
2nd degree connection database where all handles have valid LDA Analysis.
A database with my tweet... |
darkomen/TFG | medidas/18082015/Análisis de datos Ensayo 1.ipynb | cc0-1.0 | # Import the libraries used
import numpy as np
import pandas as pd
import seaborn as sns
# Show the version used of each library
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Open the csv file with the data... |
jbwhit/WSP-312-Tips-and-Tricks | notebooks/02-diff.ipynb | mit | # uncomment the bottom line in this cell, change the final line of
# the loaded script to `mpld3.display()` (instead of show).
# %load http://mpld3.github.io/_downloads/linked_brush.py
"""
Explanation: Interactive Notebook Possibilities
http://mpld3.github.io/examples/linked_brush.html
End of explanation
"""
impor... |
dnc1994/MachineLearning-UW | ml-classification/blank/module-4-linear-classifier-regularization-assignment-blank.ipynb | mit | from __future__ import division
import graphlab
"""
Explanation: Logistic Regression with L2 regularization
The goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following:
Extract features from Amazon product reviews.
Convert an SFrame into a... |
chloeyangu/BigDataAnalytics | The Airbnb Scoop/Source Code/2. Data Preparation Part 1 (Listings).ipynb | mit | import pymongo
from pymongo import MongoClient
"""
Explanation: From Command Line - Import CSV file (Raw Data) into MongoDB
mongoimport --db airbnb --type csv --file listings_new.csv -c listings_new
mongoimport --db airbnb --type csv --file barcelona_attractions.csv -c attractions
End of explanation
"""
client = Mon... |
ComputationalModeling/spring-2017-danielak | past-semesters/fall_2016/day-by-day/day06-modeling-radioactivity-day1/radioactivity_modeling.ipynb | agpl-3.0 | # put your code here! add additional cells if necessary.
"""
Explanation: Why is my banana glowing?
(modeling a system that evolves in time)
Student names
Work in pairs, and put the names of both people in your group here! (If you're in a group of 3, just move your chairs so you can work together.)
Learning Goal... |
sofmonk/aima-python | csp.ipynb | mit | from csp import *
"""
Explanation: Constraint Satisfaction Problems (CSPs)
This IPy notebook acts as supporting material for topics covered in Chapter 6 Constraint Satisfaction Problems of the book Artificial Intelligence: A Modern Approach. We make use of the implementations in csp.py module. Even though this noteboo... |
sintefmath/Splipy | doc/Tutorial/Basic manipulation.ipynb | gpl-3.0 | import splipy as sp
import numpy as np
import matplotlib.pyplot as plt
import splipy.curve_factory as curve_factory
"""
Explanation: Basic Manipulation
Splipy implements all affine transformations like translate (move), rotate, scale etc. These should be available as operators where this makes sense. To start, we nee... |
jhconning/Dev-II | notebooks/SFM.ipynb | bsd-3-clause | ppf(Tbar=100, Kbar=100, Lbar=400)
"""
Explanation: The Specific Factors or Ricardo-Viner Model
Background
The SF model is a workhorse model in trade, growth, political economy and development. We will see variants of the model used to describe rural to urban migration, the Lewis model and other dual sector models of ... |
spatialfrog/pi_weather | Weather_data_collection.ipynb | gpl-3.0 | import csv
import os
import sys
import time
from datetime import datetime
from sense_hat import SenseHat
sense = SenseHat()
sense.clear()
"""
Explanation: Collect Data
The goal is to collect data from the SenseHat.
End of explanation
"""
sense.get_temperature()
sense.get_humidity()
sense.get_compass()
sense.get_tempera... |
PyLCARS/PythonUberHDL | myHDL_ComputerFundamentals/Counters/.ipynb_checkpoints/CountersInMyHDL-checkpoint.ipynb | bsd-3-clause | from myhdl import *
from myhdlpeek import Peeker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
import random
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information myhdl, myhdlpeek, numpy, ... |
volodymyrss/3ML | docs/notebooks/The 3ML workflow.ipynb | bsd-3-clause | from threeML import *
"""
Explanation: The 3ML workflow
Generally, an analysis in 3ML is performed in 3 steps:
Load the data: one or more datasets are loaded and then listed in a DataList object
Define the model: a model for the data is defined by including one or more PointSource, ExtendedSource or ParticleSource in... |
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/migration/UJ14 AutoML for vision with Vertex AI Video Classification.ipynb | apache-2.0 | ! pip3 install -U google-cloud-aiplatform --user
"""
Explanation: Vertex SDK: AutoML video classification model
Installation
Install the latest (preview) version of Vertex SDK.
End of explanation
"""
! pip3 install google-cloud-storage
"""
Explanation: Install the Google cloud-storage library as well.
End of explan... |
jordan-melendez/buqeyemodel | docs/notebooks/truncation_recap.ipynb | mit | df0 = 0
Q = 0.33
# Must be 2d array, with orders spanning the last axis (columns)
coeffs = np.array(
[[1.0, 1.0, 1.0], # Set 1, orders 0, 1, 2
[1.0, 0.5, 0.1], # Set 2, orders 0, 1, 2
[1.0, 0.1, 0.1] # Set 3, orders 0, 1, 2
]
)
# The truncation model accepts *partial sums*,
# i.e., order-by-orde... |
henchc/Rediscovering-Text-as-Data | 07-Textual-Similarity/01-Textual-Similarity.ipynb | mit | !wget https://ndownloader.figshare.com/files/3686778 -P data/
%%capture
!unzip data/3686778 -d data/
"""
Explanation: Textual Similarity
This notebook is designed to reproduce several findings from Andrew Piper's article "Novel Devotions: Conversional Reading, Computational Modeling, and the Modern Novel" (<i>New Lit... |
chrinide/optunity | notebooks/basic-cross-validation.ipynb | bsd-3-clause | import optunity
import optunity.cross_validation
"""
Explanation: Basic: cross-validation
This notebook explores the main elements of Optunity's cross-validation facilities, including:
standard cross-validation
using strata and clusters while constructing folds
using different aggregators
We recommend perusing the <... |
NEONScience/NEON-Data-Skills | tutorials/Python/Hyperspectral/uncertainty-and-validation/hyperspectral_validation_py/hyperspectral_validation_py.ipynb | agpl-3.0 | import h5py
import csv
import numpy as np
import os
import gdal
import matplotlib.pyplot as plt
import sys
from math import floor
import time
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
"""
Explanation: syncID: 84457ead9b964c8d916eacde9f271ec7
title: "Assessing Spectrometer Accuracy using Vali... |
phobson/statsmodels | examples/notebooks/glm_formula.ipynb | bsd-3-clause | from __future__ import print_function
import statsmodels.api as sm
import statsmodels.formula.api as smf
star98 = sm.datasets.star98.load_pandas().data
formula = 'SUCCESS ~ LOWINC + PERASIAN + PERBLACK + PERHISP + PCTCHRT + \
PCTYRRND + PERMINTE*AVYRSEXP*AVSALK + PERSPENK*PTRATIO*PCTAF'
dta = star98[['NABOVE... |
edwardd1/phys202-2015-work | assignments/assignment05/InteractEx03.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
"""
Explanation: Interact Exercise 3
Imports
End of explanation
"""
def soliton(x, t, c, a):
"""Return phi(x, t) for a soliton wave with co... |
Intel-Corporation/tensorflow | tensorflow/lite/g3doc/performance/quantization_debugger.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under... |
tpin3694/tpin3694.github.io | sql/aliases.ipynb | mit | # Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
"""
Explanation: Title: Using Aliases
Slug: aliases
Summary: Using Aliases in SQL.
Date: 2017-01-16 12:00
Category: SQL
Tags: Basics
Authors: Chris Albon
Note: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks l... |
jhillairet/scikit-rf | doc/source/examples/metrology/One Port Tiered Calibration.ipynb | bsd-3-clause | !ls {"oneport_tiered_calibration/"}
"""
Explanation: One Port Tiered Calibration
Intro
A one-port network analyzer can be used to measure a two-port device, provided that the device is reciprocal. This is accomplished by performing two calibrations, which is why it's called a tiered calibration.
First, the VNA is cali... |
RNAer/Calour | doc/source/notebooks/microbiome_step_by_step.ipynb | bsd-3-clause | import calour as ca
"""
Explanation: Microbiome experiment step-by-step analysis
This is a jupyter notebook example of how to load, process and plot data from a microbiome experiment using Calour.
Setup
Import the calour module
End of explanation
"""
ca.set_log_level(11)
"""
Explanation: (optional) Set the level of... |
materialsvirtuallab/matgenb | notebooks/2018-07-24-Adsorption on solid surfaces.ipynb | bsd-3-clause | # Import statements
from pymatgen import Structure, Lattice, MPRester, Molecule
from pymatgen.analysis.adsorption import *
from pymatgen.core.surface import generate_all_slabs
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer
from matplotlib import pyplot as plt
%matplotlib inline
# Note that you must provide y... |
saravanakumar-periyasamy/deep-learning | image-classification/dlnd_image_classification.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hoo... |
leriomaggio/numpy_euroscipy2015 | extra_torch_tensor.ipynb | mit | import torch
"""
Explanation: Original Notebook
Introduction to PyTorch Tensor
Reference: "What is PyTorch?" by Soumith Chintala
What is PyTorch?
It’s a Python-based scientific computing package targeted at two sets of
audiences:
A replacement for NumPy to use the power of GPUs
a deep learning research platform that ... |
InsightLab/data-science-cookbook | 2019/09-clustering/Notebook_KMeans_Assignment.ipynb | mit | # import libraries
# linear algebra
import numpy as np
# data processing
import pandas as pd
# data visualization
from matplotlib import pyplot as plt
# load the data with pandas
dataset = pd.read_csv('dataset.csv', header=None)
dataset = np.array(dataset)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.show()
... |
google/starthinker | colabs/dv360_editor.ipynb | apache-2.0 | !pip install git+https://github.com/google/starthinker
"""
Explanation: DV360 Bulk Editor
Allows bulk editing DV360 through Sheets and BigQuery.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may o... |
johnnyliu27/openmc | examples/jupyter/mgxs-part-i.ipynb | mit | from IPython.display import Image
Image(filename='images/mgxs.png', width=350)
"""
Explanation: This IPython Notebook introduces the use of the openmc.mgxs module to calculate multi-group cross sections for an infinite homogeneous medium. In particular, this Notebook introduces the following features:
General equ... |
nbokulich/short-read-tax-assignment | ipynb/mock-community/find-expected-gapless.ipynb | bsd-3-clause | from tax_credit import mock_quality
from os.path import expanduser, join
"""
Explanation: Mock community quality control
This notebook maps observed mock community sequences, which are technically from unknown organisms, to "trueish" taxonomies, i.e., the most likely taxonomic match given a list of expected sequences ... |
google/earthengine-api | python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb | apache-2.0 | from google.colab import auth
auth.authenticate_user()
"""
Explanation: <table class="ee-notebook-buttons" align="left"><td>
<a target="_blank" href="http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb">
<img src="https://www.t... |
deepmind/deepmind-research | nowcasting/Open_sourced_dataset_and_model_snapshot_for_precipitation_nowcasting.ipynb | apache-2.0 | !pip -q install tensorflow~=2.5.0 numpy~=1.19.5 matplotlib~=3.2.2 tensorflow_hub~=0.12.0 cartopy~=0.19.0
# Workaround for cartopy crashes due to the shapely installed by default in
# google colab kernel (https://github.com/anitagraser/movingpandas/issues/81):
!pip uninstall -y shapely
!pip install shapely --no-binary ... |
JakeColtman/BayesianSurvivalAnalysis | PyMC Part 1 Done.ipynb | mit | running_id = 0
output = [[0]]
with open("E:/output.txt") as file_open:
for row in file_open.read().split("\n"):
cols = row.split(",")
if cols[0] == output[-1][0]:
output[-1].append(cols[1])
output[-1].append(True)
else:
output.append(cols)
output = out... |
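The row-grouping loop above (collect consecutive file rows that share a first column into one record) is the classic `itertools.groupby` pattern; a self-contained sketch of the same idea on in-memory lines, with made-up sample data:

```python
from itertools import groupby

# Consecutive rows sharing a first column are merged into one record,
# mirroring the loop above but without touching the filesystem.
rows = [line.split(",") for line in "1,a\n1,b\n2,c".split("\n")]
grouped = [[key] + [r[1] for r in grp]
           for key, grp in groupby(rows, key=lambda r: r[0])]
print(grouped)  # [['1', 'a', 'b'], ['2', 'c']]
```

`groupby` only merges *adjacent* equal keys, which matches the file-order assumption the original loop makes.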
angelmtenor/deep-learning | dcgan-svhn/DCGAN.ipynb | mit | %matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
"""
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a De... |
mlperf/training_results_v0.5 | v0.5.0/google/cloud_v3.8/resnet-tpuv3-8/code/resnet/model/models/samples/outreach/blogs/segmentation_blogpost/image_segmentation.ipynb | apache-2.0 | !pip install kaggle
import os
import glob
import zipfile
import functools
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['axes.grid'] = False
mpl.rcParams['figure.figsize'] = (12,12)
from sklearn.model_selection import train_test_split
import matplotlib.image as mpimg
import... |
ShubhamDebnath/Coursera-Machine-Learning | Course 1/Logistic Regression with a Neural Network mindset.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
"""
Explanation: Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a ... |
kkkddder/dmc | notebooks/week-6/01-training a RNN model in Keras.ipynb | apache-2.0 | import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
from time import gmtime, strftime
import os
import re
import pickle
import random
import sys
... |
lujinhong/lujinhong.github.io | _posts/tensorflow-keras的基本使用方式.ipynb | mit | fashion_mnist = keras.datasets.fashion_mnist
(x_train_all,y_train_all),(x_test,y_test) = fashion_mnist.load_data()
x_valid,x_train = x_train_all[:5000],x_train_all[5000:]
y_valid,y_train = y_train_all[:5000],y_train_all[5000:]
print(x_train.shape,y_train.shape)
print(x_valid.shape,y_valid.shape)
print(x_test.shape,y_t... |
Weenkus/Machine-Learning-University-of-Washington | Regression/examples/week-3-polynomial-regression-assignment-blank.ipynb | mit | import graphlab
"""
Explanation: Regression Week 3: Assessing Fit (polynomial regression)
In this notebook you will compare different regression models in order to assess which model fits best. We will be using polynomial regression as a means to examine this topic. In particular you will:
* Write a function to take a... |
google/android-management-api-samples | notebooks/codelab_kiosk.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the Lic... |
ES-DOC/esdoc-jupyterhub | notebooks/ec-earth-consortium/cmip6/models/sandbox-1/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-1', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dyn... |
robertoalotufo/ia898 | 2S2018/04 Gerando imagens sinteticas.ipynb | mit | import numpy as np
"""
Explanation: Creating synthetic images
Synthetic images are widely used in algorithm testing and in the generation of
image patterns.
We will learn to generate the pixel values of an image from a mathematical equation
very efficiently, without the need to use... |
google/data-driven-discretization-1d | notebooks/burgers-super-resolution.ipynb | apache-2.0 | ! pip install -q -U xarray matplotlib
! rm -rf data-driven-discretization-1d
! git clone https://github.com/google/data-driven-discretization-1d.git
! pip install -q -e data-driven-discretization-1d
# install the seaborn bug-fix from https://github.com/mwaskom/seaborn/pull/1602
! pip install -U -q git+git://github.com/... |
Charleo85/ml_project | resource/scribe/sample.ipynb | mit | import numpy as np
import numpy.matlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
import math
import random
import time
import os
import pickle
import tensorflow as tf #built with TensorFlow version 0.9
"""
Explanation: Scribe: Realistic Handwriting with TensorFlow
<img src="static... |
antoniomezzacapo/qiskit-tutorial | community/aqua/artificial_intelligence/svm_classical.ipynb | apache-2.0 | from datasets import *
from qiskit_aqua.utils import split_dataset_to_data_and_labels, map_label_to_class_name
from qiskit_aqua.input import get_input_instance
from qiskit_aqua import run_algorithm
"""
Explanation: SVM with a classical RBF kernel
We have shown here a QSVM_Kernel notebook with the classification proble... |
ProfessorKazarinoff/staticsite | content/code/ENGR213/Problem_4C2.ipynb | gpl-3.0 | d = 351
tf = 9.78
tw = 6.86
bf = 171
ys = 300
E = 200*10**3 #Elastic modulus in MPa
"""
Explanation: Problem 4.C2 in Beer and Johnson
Below is an engineering mechanics problem that can be solved with Python. Follow along to see how to solve the problem with code.
Problem
Given:
An I-beam (also called a W-shape for wid... |
wilomaku/IA369Z | dev/Autoencoderxclass.ipynb | gpl-3.0 | ## Functions
import sys,os
import copy
path = os.path.abspath('../dev/')
if path not in sys.path:
sys.path.append(path)
import bib_mri as FW
import numpy as np
import scipy as scipy
import scipy.misc as misc
import matplotlib as mpl
import matplotlib.pyplot as plt
from numpy import genfromtxt
import platform
imp... |
google-research/google-research | micronet_challenge/EfficientNetCounting.ipynb | apache-2.0 | # Copyright 2019 MicroNet Challenge Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by a... |
ThomasProctor/Slide-Rule-Data-Intensive | TaxicabProject/Code/Feature Selection.ipynb | mit | import pandas as pd
import sqlalchemy as sqla
import numpy as np
#import matplotlib
import matplotlib.pyplot as plt
import statsmodels.api as sm
#%matplotlib qt
%matplotlib inline
engine = sqla.create_engine('postgresql://postgres:postgres@localhost:5432/TaxiData',echo=False)
columntypelist=pd.read_sql_query("SELE... |
quoniammm/mine-tensorflow-examples | gan/gan_mnist/Intro_to_GANs_Solution.ipynb | mit | %matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
"""
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generativ... |
igor-sokolov/dataminingcapstone | Capstone project 6.ipynb | mit | basePath = 'dataminingcapstone-001'
hygienePath = 'Hygiene'
import os
workingDir = os.path.join(os.curdir, basePath, hygienePath)
reviewsPath = os.path.join(workingDir, 'hygiene.dat')
labelsPath = os.path.join(workingDir, 'hygiene.dat.labels')
"""
Explanation: Task 6: Hygiene Prediction
End of explanation
"""
N = 546
with o... |
adolfoguimaraes/machinelearning | Projects/02_RecommenderSystem_Movies.ipynb | mit | # Imports needed for this section
import pandas as pd
idx = pd.IndexSlice
# Preparing the dataset
links = pd.read_csv("../datasets/movielens/links.csv", index_col=['movieId'])
movies = pd.read_csv("../datasets/movielens/movies.csv", sep=",", index_col=['movieId'])
ratings = pd.read_csv("../datasets/movielens/ratin... |
andre-martini/advanced-comp-2017 | 04-model-performance/lecture.ipynb | gpl-3.0 | %config InlineBackend.figure_format='retina'
%matplotlib inline
# Silence warnings
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
warnings.simplefilter(action="ignore", category=UserWarning)
warnings.simplefilter(action="ignore", category=RuntimeWarning)
import numpy as np
np.random.se... |
nslatysheva/data_science_blogging | polished_prediction/polished_prediction.ipynb | gpl-3.0 | import wget
import pandas as pd
# Import the dataset
data_url = 'https://raw.githubusercontent.com/nslatysheva/data_science_blogging/master/datasets/wine/winequality-red.csv'
dataset = wget.download(data_url)
dataset = pd.read_csv(dataset, sep=";")
# Take a peak at the first few columns of the data
first_5_columns = ... |
zomansud/coursera | ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb | mit | import graphlab
from __future__ import division
import numpy as np
graphlab.canvas.set_target('ipynb')
"""
Explanation: Exploring precision and recall
The goal of this second notebook is to understand precision-recall in the context of classifiers.
Use Amazon review data in its entirety.
Train a logistic regression m... |
empet/PSCourse | CryptographicHashFunctions.ipynb | bsd-3-clause | import hashlib
mes = hashlib.md5()  # declare mes as an empty hash object
mes.update(b'anul1CTI@yahoogroups.com')  # update the hash object by feeding it a byte string
s = mes.hexdigest()
print('the MD5 hash of the email address, in hex, is', s)
print('the length in bits of the hash value is:', len(s)*4)
"""
Explanation... |
jonbruner/tensorflow-basics | save-load/save.ipynb | mpl-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
import tensorflow as tf
sess = tf.InteractiveSession()
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf... |
kunaltyagi/SDES | notes/python/p_norvig/logic/Sicherman Dice.ipynb | gpl-3.0 | def sicherman():
"""The set of pairs of 6-sided dice that have the same
distribution of sums as a regular pair of dice."""
return {pair for pair in pairs(all_dice())
if pair != regular_pair
and sums(pair) == regular_sums}
# TODO: pairs, all_dice, regular_pair, sums, regular_sums
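One hedged guess at the helpers named in the TODO above (`pairs`, `all_dice`, `regular_pair`, `sums`, `regular_sums` are the names the comment asks for; these bodies are a sketch, not the notebook's actual definitions):

```python
from collections import Counter
from itertools import combinations_with_replacement

def pairs(collection):
    """All unordered pairs (A, B) drawn from a collection of dice."""
    return {(A, B) for A in collection for B in collection if A <= B}

def all_dice(sides=6, biggest_face=11):
    """All sorted 6-sided dice whose smallest face is 1 (the lowest
    roll of a pair must still be 2, as with regular dice)."""
    return {(1,) + rest for rest in
            combinations_with_replacement(range(1, biggest_face + 1), sides - 1)}

def sums(pair):
    """Distribution of pair sums, as a Counter of {sum: number_of_ways}."""
    A, B = pair
    return Counter(a + b for a in A for b in B)

regular_die = (1, 2, 3, 4, 5, 6)
regular_pair = (regular_die, regular_die)
regular_sums = sums(regular_pair)
```

With these definitions, the known Sicherman pair `((1, 2, 2, 3, 3, 4), (1, 3, 4, 5, 6, 8))` satisfies `sums(pair) == regular_sums`; note that the exhaustive search in `sicherman()` over all ~4.5 million pairs is slow in pure Python.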
""... |
anhaidgroup/py_entitymatching | notebooks/guides/step_wise_em_guides/Performing Matching Using a ML Matcher.ipynb | bsd-3-clause | # Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
"""
Explanation: Introduction
This IPython notebook illustrates how to performing matching with a ML matcher. In particular we show examples with a decision tree matcher, but the same principles apply to all of the other ML... |
prashantas/MyDataScience | Python/MnistDigitsKeras.ipynb | bsd-2-clause | batch_size = 128
nb_classes =10
nb_epochs = 10
# convert class vectors to binary class matrices for softmax layer
Y_train = keras.utils.np_utils.to_categorical(y_train,nb_classes)
Y_test = keras.utils.np_utils.to_categorical(y_test,nb_classes)
## for example, 6's label is now [0,0,0,0,0,0,1,0,0,0]
print(Y_train.shape)
""... |
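`to_categorical` turns each integer label into a one-hot row; a minimal NumPy equivalent of that conversion (the function name here is illustrative, not the Keras API):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Integer labels -> one-hot rows, mimicking
    keras.utils.np_utils.to_categorical."""
    return np.eye(num_classes)[np.asarray(labels)]

print(to_one_hot([6], 10))  # label 6 -> a single 1 in column 6
```

Indexing an identity matrix by the label array is an idiomatic one-liner for this; each row has exactly one 1.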
mne-tools/mne-tools.github.io | 0.19/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb | bsd-3-clause | # Authors: Denis Engemann <denis.engemann@gmail.com>
# Luke Bloy <luke.bloy@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
from mne.filter import next_fast_len
import mne
print(__doc__)
data_path = mne.datasets.opm.data_path()
subject = 'OPM_s... |
cuttlefishh/emp | methods/figure-data/fig-3/Fig3_data_files.ipynb | bsd-3-clause | # read in nestedness output for all samples
fig3a = pd.read_csv('../../../data/nestedness/nest_phylum_allsamples.csv')
fig3a.head()
"""
Explanation: Figure 3 csv data generation
Figure data consolidation for Figure 3, which shows patterns of nestedness in beta diversity
Figure 3a: phyla occupancy plot, all samples
E... |
tiagogiraldo/Machine_Learning_Nanodegree_Projects | boston_housing.ipynb | gpl-3.0 | # Import libraries necessary for this project
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.cross_validation import ShuffleSplit
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
f... |
OpenDataPolicingNC/Traffic-Stops | il/data/New-IL-Data-Review.ipynb | mit | # 2004 --- 2017
! head ../../IL-New-Data/ILtrafficstops-2016-10-03.csv
"""
Explanation: New IL Data Review
Old Data Summary
Simple schema:
Just Agency, Gender, Race, Search (T/F), Contraband (T/F), and StopPurpose
Only Year (not full date)
No officers
Date range:
* 2005 --- 2014
* 23m stops
* https://opendatapolicin... |
google-research/agent-based-epidemic-sim | agent_based_epidemic_sim/learning/covid_ens_simulation.ipynb | apache-2.0 | import itertools
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
import pandas as pd
from collections import namedtuple
from enum import Enum, IntEnum
from dataclasses import dataclass
import matplotlib.cm as cm
import sklearn
from sklearn import metrics
# Configure plot style sheet
plt.style.u... |
lukemans/Hello-world | t81_558_class10_lstm.ipynb | apache-2.0 | from sklearn import preprocessing
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df,name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = "{}... |
ES-DOC/esdoc-jupyterhub | notebooks/ipsl/cmip6/models/sandbox-2/ocean.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-2', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection... |
quantumlib/ReCirq | docs/qaoa/binary_paintshop.ipynb | apache-2.0 | # @title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed unde... |
srodriguex/coursera_data_management_and_visualization | Week_4.ipynb | mit | %pylab inline
# This package is very useful to data analysis in Python.
import pandas as pd
# This package makes nice looking graphics.
import seaborn as sn
# Read the csv file to a dataframe object.
df = pd.read_csv('data/gapminder.csv')
# Convert all number values to float.
df = df.convert_objects(convert_numeric... |
sjsrey/giddy | notebooks/RankMarkov.ipynb | bsd-3-clause | import libpysal as ps
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import pandas as pd
import geopandas as gpd
"""
Explanation: Full Rank Markov and Geographic Rank Markov
Author: Wei Kang weikang9009@gma... |
rbiswas4/ObsCond | examples/CheckFiltCalc.ipynb | gpl-3.0 | from brightness import mCalcs, atmTransName
df.head()
"""
Explanation: def atmTransName(airmass):
"""
return filename for atmospheric transmission with aerosols for airmass
closest to input
Parameters
----------
airmass : airmass
"""
l = np.arange(1.0, 2.51, 0.1)
idx = np.abs(l - airmass).argmin()
a = np.... |
rasbt/pattern_classification | machine_learning/scikit-learn/ensemble_classifier.ipynb | gpl-3.0 | from sklearn import datasets
iris = datasets.load_iris()
X, y = iris.data[:, 1:3], iris.target
from sklearn import cross_validation
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
import numpy as np
np.random.seed(123... |
google/data-pills | pills/CM/[DATA_PILL]_[CM]_Frequency_Analysis_(ADH).ipynb | apache-2.0 | # The Developer Key is used to retrieve a discovery document containing the
# non-public Full Circle Query v2 API. This is used to build the service used
# in the samples to make API requests. Please see the README for instructions
# on how to configure your Google Cloud Project for access to the Full Circle
# Query v2... |
GoogleCloudPlatform/ml-design-patterns | 02_data_representation/embeddings.ipynb | apache-2.0 | import shutil
import os
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import callbacks, layers, models, utils
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow_hub import KerasLay... |
bashtage/statsmodels | examples/notebooks/distributed_estimation.ipynb | bsd-3-clause | import numpy as np
from scipy.stats.distributions import norm
from statsmodels.base.distributed_estimation import DistributedModel
def _exog_gen(exog, partitions):
"""partitions exog data"""
n_exog = exog.shape[0]
n_part = np.ceil(n_exog / partitions)
ii = 0
while ii < n_exog:
jj = int(m... |
anhaidgroup/py_entitymatching | notebooks/guides/step_wise_em_guides/.ipynb_checkpoints/Performing Blocking Using Rule-Based Blocking-checkpoint.ipynb | bsd-3-clause | # Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
"""
Explanation: Introduction
This IPython notebook illustrates how to perform blocking using rule-based blocker.
First, we need to import py_entitymatching package and other libraries as follows:
End of explanation
"""
#... |
jaakla/getdelficomments | Welcome_To_Colaboratory.ipynb | unlicense | seconds_in_a_day = 24 * 60 * 60
seconds_in_a_day
"""
Explanation: <a href="https://colab.research.google.com/github/jaakla/getdelficomments/blob/master/Welcome_To_Colaboratory.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<p><img alt="Colaboratory... |
turbomanage/training-data-analyst | courses/machine_learning/deepdive/02_tensorflow/labs/e_traineval.ipynb | apache-2.0 | import tensorflow as tf
import shutil
print(tf.__version__)
"""
Explanation: Introducing tf.estimator.train_and_evaluate()
Learning Objectives
- Introduce new type of input function (serving_input_reciever_fn()) which supports remote access to our model via REST API
- Use the tf.estimator.train_and_evaluate() method t... |
kwinkunks/timefreak | stft.ipynb | apache-2.0 | import numpy as np
from scipy.fftpack import fft, ifft, rfft, irfft, fftfreq, rfftfreq
import scipy.signal
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: STFT and ISTFT
I'd like to make my own spectrogram, so that I can play with Gabor logons, AKA Heisenberg boxes.
End of explanation
"""
def stf... |
domino14/macondo | notebooks/deprecated/superleaves.ipynb | gpl-3.0 | from itertools import combinations
import numpy as np
import pandas as pd
import seaborn as sns
from string import ascii_uppercase
import time as time
%matplotlib inline
maximum_superleave_length = 5
ev_calculator_max_length = 5
log_file = 'log_games.csv'
"""
Explanation: How to use
maximum_superleave_length indic... |
savioabuga/arrows | arrows.ipynb | mit | from arrows.preprocess import load_df
"""
Explanation: arrows: Yet Another Twitter/Python Data Analysis
Geospatially, Temporally, and Linguistically Analyzing Tweets about Top U.S. Presidential Candidates with Pandas, TextBlob, Seaborn, and Cartopy
Hi, I'm Raj. For my internship this summer, I've been using data scien... |
hpparvi/Parviainen-2017-WASP-80b | notebooks/01_broadband_analysis/E1_data_preparation.ipynb | mit | %pylab inline
%run __init__.py
import astropy.io.fits as pf
import pandas as pd
import seaborn as sb
from glob import glob
from os.path import basename, splitext, join
from astropy.table import Table
from exotk.utils.misc import fold
from src.extcore import TC, P, TZERO, DDATA
"""
Explanation: WASP-80b broadband ana... |
jorisvandenbossche/DS-python-data-analysis | notebooks/visualization_02_seaborn.ipynb | bsd-3-clause | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: <p><font size="6"><b>Visualisation: Seaborn </b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@g&#... |
staeiou/wiki-stat-notebooks | retention_20180712/wiki_edit_counts.ipynb | mit | import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.ticker import ScalarFormatter
%matplotlib inline
matplotlib.style.use('ggplot')
# Data by Erik Zachte at https://stats.wikimedia.org/EN/TablesWikipediaEN.htm
counts = pd.read_csv("edit_counts.tsv", sep="\t")
# Convert dates to dat... |
tensorflow/examples | courses/udacity_intro_to_tensorflow_for_deep_learning/l09c06_nlp_subwords.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under... |
AllenDowney/ModSimPy | notebooks/chap13.ipynb | mit | # Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
"""
Explanation: Modeling and Simulati... |
samstav/scipy_2015_sklearn_tutorial | notebooks/04.2 Model Complexity and GridSearchCV.ipynb | cc0-1.0 | from figures import plot_kneighbors_regularization
plot_kneighbors_regularization()
"""
Explanation: Parameter selection, Validation & Testing
Most models have parameters that influence how complex a model they can learn. Remember using KNeighborsRegressor.
If we change the number of neighbors we consider, we get a sm... |
tensorflow/docs-l10n | site/ko/tutorials/estimator/linear.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under... |
YuriyGuts/kaggle-quora-question-pairs | notebooks/feature-magic-frequencies.ipynb | mit | from pygoose import *
"""
Explanation: Feature: Question Occurrence Frequencies
This is a "magic" (leaky) feature published by Jared Turkewitz that doesn't rely on the question text. Questions that occur more often in the training and test sets are more likely to be duplicates.
Imports
This utility package imports num... |
amkatrutsa/MIPT-Opt | Spring2017-2019/15-ConjGrad/Seminar15.ipynb | mit | import numpy as np
n = 100
# Random
# A = np.random.randn(n, n)
# A = A.T.dot(A)
# Clustered eigenvalues
A = np.diagflat([np.ones(n//4), 10 * np.ones(n//4), 100*np.ones(n//4), 1000* np.ones(n//4)])
U = np.random.rand(n, n)
Q, _ = np.linalg.qr(U)
A = Q.dot(A).dot(Q.T)
A = (A + A.T) * 0.5
print("A is normal matrix: ||AA*... |
briennakh/BIOF509 | Wk08/Wk08_Numpy_model_package_survey_inclass_exercises.ipynb | mit | import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
n = 20
x = np.random.random((n,1))
y = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))
plt.plot(x, y, 'b.')
plt.show()
"""
Explanation: Week 8 - Implementing a model in numpy and a survey of machine learning packages for python
This week we... |
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/06_structured/4_preproc_tft.ipynb | apache-2.0 | %%bash
conda update -y -n base -c defaults conda
source activate py2env
pip uninstall -y google-cloud-dataflow
conda install -y pytz
pip install apache-beam[gcp]==2.9.0
pip install apache-beam[gcp] tensorflow_transform==0.8.0
%%bash
pip freeze | grep -e 'flow\|beam'
"""
Explanation: <h1> Preprocessing using tf.trans... |