```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
# Unsupervised Learning
The diabetes example is a supervised learning task: we wanted to predict an output value from a higher-dimensional dataset. Sometimes, we do not have any output variable that we need to predict. Instead, w... | github_jupyter |
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top: 1em;"><ul class="toc-item"></ul></div>
# ConvNets for CIFAR10 with TensorFlow and Keras <a class="tocSkip">
```
import os
import numpy as np
np.random.seed(123)
print("NumPy:{}".format(np.__version__))
import tensorflow as t... | github_jupyter |
# Jupyter cheatsheet
[28 tips and tricks](https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/)
# Extensions
**jupyter-notebook-gist** lets you create a gist from the Jupyter Notebook UI.
## jupyter cms
- Search dialog on dashboard, editor, and notebook screens to search over filenames and .ipynb ... | github_jupyter |
# Agenda
* Convolution Block and its components in Keras
* Implement a CNN on the Fashion-MNIST dataset
# Convolutional Layer
## A reminder about Model vs. Sequential
**Option 1: Sequential**
```
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.models import Sequential
# option 1
model = Sequenti... | github_jupyter |
<h1 align="center">Case 1</h1>
<h3 align="center">Predict who is most likely to click on an ad in your advertising campaign.
</h3>
```
#Importing Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarni... | github_jupyter |
# DataFrames as RDD
In the first chapter we learned about Pandas DataFrames and how great they are to work with for data science. Let's see how we can make similar objects with Spark.
## Spark SQL
In the previous sheet, we used a basic SparkContext to access our data. But what if we wanted a more robust Spark interfac... | github_jupyter |
<a href="https://colab.research.google.com/github/marongkang/MLeveryday/blob/main/MLEveryday13.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Multilayer Perceptron (MLP)
Implemented with PyTorch
# Overview
$z_2=XW_1$
$a_2=f(z_2)$
$z_3=a_2W_2$
$\hat{y... | github_jupyter |
##### Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at... | github_jupyter |
# <center> Classification Project Presentation - SS60</center>
## <center>Walmart TripType Classification
## <center> 노승환, 백형렬, 이수정
# <center> Process </center>
# 0. Environment setup
# 1. Model performance
# 2. Feature descriptions
# 3. Preprocessing and EDA
# 4. Classification with LightGBM
# 5. Submitting to Kaggle
# 6. Limitations and future work
# <center>0. Environment Setup</center>
```
from sklearn.preprocessing import LabelEncoder
from s... | github_jupyter |
**Chapter 4 – Training Linear Models**
_This notebook contains all the sample code and solutions to the exercises in chapter 4._
# Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figur... | github_jupyter |
# BigData School Homework (Андрей Самошин)
**Given:** user logs with GPS data.
We need to clean the data, keep only the users whose logs are labeled, and predict part of the logs. This is a multi-class classification task, because each log can be assigned exactly one class label.
... | github_jupyter |
# Predict Boston Housing Prices
This python program predicts the price of houses in Boston using a machine learning algorithm called a Linear Regression.
# Linear Regression
Linear regression is a linear approach to modeling the relationship between a scalar response (or dependent variable) and one or more explanator... | github_jupyter |
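The relationship described above can be sketched with ordinary least squares on synthetic data (NumPy only; the slope and intercept values below are made up for illustration and are not taken from the Boston dataset):

```python
import numpy as np

# Fit price ≈ slope * rooms + intercept by ordinary least squares.
rng = np.random.default_rng(42)
rooms = rng.uniform(3, 9, 200)
price = 9.1 * rooms - 34.7 + rng.normal(0.0, 1.0, 200)  # "true" line plus noise
A = np.column_stack([rooms, np.ones_like(rooms)])       # design matrix [x, 1]
(slope, intercept), *_ = np.linalg.lstsq(A, price, rcond=None)
print(slope, intercept)  # close to the true values 9.1 and -34.7
```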
# Limited Dependent Variable Models
If you come here without expecting Japanese, please click [Google translated version](https://translate.google.com/translate?hl=&sl=ja&tl=en&u=https%3A%2F%2Fpy4etrics.github.io%2F21_TruncregTobitHeckit.html) in English or the language of your choice.
---
```
import numpy as np
import pandas as pd
from sc... | github_jupyter |
# NumPy Fundamentals for Data Science and Machine Learning
---
<iframe src="https://github.com/sponsors/pabloinsente/card" title="Sponsor pabloinsente" height="225" width="600" style="border: 0;"></iframe>
---
***Note***: If you prefer to read with a **white background and black font**, you can see this article in ... | github_jupyter |
# The Knapsack Problem with Integer Weights
Given $N$ items, each with a fixed weight $w_\alpha \geq 0$ and value $c_\alpha \geq 0$, the knapsack problem asks which items to put in the knapsack so as to maximize the total value $\displaystyle C (= \sum _ {\alpha = 1} ^ {N} c_{\alpha}x_{\alpha})$ while keeping the total weight $\displaystyle W (= \sum _ {\alpha = 1} ^ {N} w_{\alpha}x_{\alpha})$ below a given limit $W_{limit}$... | github_jupyter |
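A minimal dynamic-programming sketch of this problem for small integer weights (the item data below is invented for illustration):

```python
def knapsack(weights, values, w_limit):
    """0/1 knapsack by dynamic programming over integer weight capacities."""
    dp = [0] * (w_limit + 1)  # dp[w] = best total value achievable with capacity w
    for w_a, c_a in zip(weights, values):
        # Iterate capacities backwards so each item is used at most once.
        for w in range(w_limit, w_a - 1, -1):
            dp[w] = max(dp[w], dp[w - w_a] + c_a)
    return dp[w_limit]

print(knapsack([2, 3, 4], [3, 4, 6], 6))  # items of weight 2 and 4 give value 9
```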
# Calculating the invariant mass
In this example the calculation of the __invariant mass__ with the CMS open data is learned. The invariant mass is an important concept for particle physicists to find out new particles.
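As a sketch, the invariant mass of a two-particle system can be computed from the summed four-momenta, $M^2 = (E_1+E_2)^2 - |\vec{p}_1+\vec{p}_2|^2$ (the particle values below are invented, units of GeV assumed):

```python
import numpy as np

def invariant_mass(E1, p1, E2, p2):
    """Invariant mass of a two-particle system from energies and momentum vectors."""
    E = E1 + E2
    p = np.asarray(p1) + np.asarray(p2)
    return np.sqrt(max(E**2 - p.dot(p), 0.0))

# Two 45 GeV electrons emitted back to back (electron mass negligible):
m = invariant_mass(45.0, [0.0, 0.0, 45.0], 45.0, [0.0, 0.0, -45.0])
print(m)  # 90.0
```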
The following CSV files include already calculated values for invariant masses:
- dielectron.c... | github_jupyter |
# Simplify network topology and consolidate intersections
Author: [Geoff Boeing](https://geoffboeing.com/)
- [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/)
- [GitHub repo](https://github.com/gboeing/osmnx)
- [Examples, demos, tutorials](https://github.com/gboeing/osmnx-example... | github_jupyter |
# Case Study and Problem Analysis
___
The case study used here is the Binary Knapsack, a combinatorial problem whose goal is to maximize the profit of a knapsack (sack) without exceeding its maximum capacity.
In this case, we use data with 23 items (which can be chang... | github_jupyter |
<img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
<br></br>
<br></br>
## *Data Science Unit 4 Sprint 3 Assignment 2*
# Convolutional Neural Networks (CNNs)
# Assignment
- <a href="#p1">Part 1:</a> Pre-Trained Model
- <a href="#p2">Pa... | github_jupyter |
```
from glob import glob
from time import sleep
from baselines.bench import load_results
from matplotlib import pylab as plt
import numpy as np
import argparse
import os
import pandas as pd
my_dir = '/workspace7/Unity3D/gabriele/Animal-AI/animal-ppo-value/RUNS/exp_reason_202'
exps1 = glob(my_dir+'*')
d = exps1[0]
df... | github_jupyter |
```
# Import that good good
import sys
import os
sys.path.append('/Users/kolbt/Desktop/ipython/diam_files')
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import math
from IPython.display import display
from collections import OrderedDict
pd.options.display.max_rows = 2
import matplotlib.colors... | github_jupyter |
## Multi-class single label classification
The natural extension of binary classification is a multi-class classification task.
We first approach multi-class single-label classification, which makes the assumption that each example is assigned
to one and only one label.
For illustration purposes, we use the Iris flow... | github_jupyter |
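To illustrate the one-and-only-one-label assumption without extra dependencies, here is a tiny nearest-centroid classifier on made-up 2-D data (not the Iris dataset itself):

```python
import numpy as np

def fit_centroids(X, y):
    """One centroid per class; each training example carries exactly one label."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    labels = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[dists.argmin(axis=0)]  # exactly one label per row

X = np.array([[0.0, 0], [0, 1], [5, 5], [5, 6], [10, 0], [11, 0]])
y = np.array([0, 0, 1, 1, 2, 2])
preds = predict(fit_centroids(X, y), np.array([[0.2, 0.5], [5.0, 5.4], [10.5, 0.1]]))
print(preds)  # [0 1 2]
```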
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ... | github_jupyter |
```
import pandas as pd
from viasegura import ModelLabeler
import numpy as np
import os
from pathlib import Path
import tensorflow as tf
from tqdm.notebook import tqdm
```
### Descriptive space for each section
***
#### Description
Viasegura is a library for labeling some of the road-safety attributes ... | github_jupyter |
# Check GPU status
Make sure to use GPU runtime mode (Runtime -> Change runtime type -> python3 + GPU).
```
# Check nvidia and nvcc cuda compiler
!nvidia-smi
!/usr/local/cuda/bin/nvcc --version
```
# Mount Google Drive
```
# link to google drive
from google.colab import drive
drive.mount('/content/gdrive/')
#chec... | github_jupyter |
# Inference and Validation
Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen... | github_jupyter |
```
import requests
import urllib.parse as urlparse
from urllib.parse import urlencode
deep_learning_frameworks = {
'tensorflow': {'github': 'tensorflow/tensorflow', 'comment': ''},
'pytorch': {'github': 'pytorch/pytorch', 'comment': ''},
'torch': {'github': 'torch/torch7', 'comment': ''},
... | github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Plot style
sns.set()
%pylab inline
pylab.rcParams['figure.figsize'] = (6, 4)
%%html
<style>
.pquote {
text-align: left;
margin: 40px 0 40px auto;
width: 70%;
font-size: 1.5em;
font-style: italic;
display: block;
line-height: 1.... | github_jupyter |
# Assigning different types of data to a variable
```
x = 3
print(type(x)) # type(variable) gives the type of that data. Here 3 is "int" type (integer)
a = -2
print(type(a))
B = 4234.24
print(type(B))
c = -342.45
print(type(c))
# same variable name can be used in different cells.
c = 2 + 3j
print(type(c))
comp = ... | github_jupyter |

Now that we have interactive reports exposing different aspects of our data, we’re ready to make our first prediction. This forms our fourth agile sprint.
When making predictions, we take what we know about the past and use it to infer what will happen in the future. In doin... | github_jupyter |
### Importing the Required Libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import clear_output
from time import sleep
import os
train_data = pd.read_csv('training.csv')
test_data = pd.read_csv('test.csv')
lookid_data = pd.read_csv('IdLookupTable.csv')
train_d... | github_jupyter |
```
import nltk
# nltk.download()
# The book module contains all the data
# you will need as you read this chapter.
from nltk.book import *
text1
```
# 1.3 Searching text
## Concordance
A concordance view shows us every occurrence of a given word, together with some context
```
text1.concordance("monstrous")
```
#... | github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or ... | github_jupyter |
```
# Copyright 2021 Google LLC
# Use of this source code is governed by an MIT-style
# license that can be found in the LICENSE file or at
# https://opensource.org/licenses/MIT.
# Author(s): Kevin P. Murphy (murphyk@gmail.com) and Mahmoud Soliman (mjs@aucegypt.edu)
```
<a href="https://opensource.org/licenses/MIT" t... | github_jupyter |
# Network Training
Having implemented and tested all the components of the final networks in steps 1-3, we are now ready to train the network on a large dataset (ImageNet).
```
import os
# os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID";
# The GPU id to use, usually either "0" or "1";
os.environ["CUDA_VISIBLE_DEVICES"]... | github_jupyter |
```
import seaborn as sns
import glob
import pickle
import os
import pandas as pd
import re
import matplotlib.pyplot as plt
from FairnessExperiment import *
experiment_dir = f"/home/author/fairness/deco/experiments/race.exp_{model_number}/"
with open(os.path.join("/home/author/fairness/deco/experiments/", f"model_info_... | github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Gena/basic_image.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https... | github_jupyter |
# Knife-edge in elevated duct
example from [Lytaev M. S. Nonlocal Boundary Conditions for Split-Step Padé Approximations of the Helmholtz Equation With Modified Refractive Index
//IEEE Antennas and Wireless Propagation Letters. – 2018. – Vol. 17. – N. 8. – pp. 1561-1565.](https://ieeexplore.ieee.org/document/8409980)
... | github_jupyter |
# Chapter 2 - Data Types and Formatting
<h2 id="Topics-Covered:">Topics Covered:</h2>
<ul>
<li><a href="#Numerics" target="_blank">Numerics </a></li>
<li><a href="#Boolean" target="_blank">Boolean </a></li>
<li><a href="#Strings" target="_blank">Strings </a></li>
<li><a href="#Numerical-precision" target="_blank"... | github_jupyter |
# Multiclass Support Vector Machine exercise
*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course we... | github_jupyter |
# DAT257x: Reinforcement Learning Explained
## Lab 4: Dynamic Programming
### Exercise 4.3 Policy Iteration
Policy Iteration calculates the optimal policy for an MDP, given its full definition. The full definition of an MDP is the set of states, the set of available actions for each state, the set of rewards, the d... | github_jupyter |
# Regression loss (MAE vs MSE)
### Dataset
We use [Concrete Compressive Strength Data Set](<http://archive.ics.uci.edu/ml/datasets/concrete+compressive+strength>) to compare various losses we can use for regression. In this notebook, we are primarily looking at (MAE vs MSE). For other losses, you can refer to thi... | github_jupyter |
# SQL
<br>
<img src="img/sql_example.png" width="700" />
<br>
Structured Query Language (SQL) is the standard declarative query language for relational databases. Many of SQL's original features were inspired by relational algebr... | github_jupyter |
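A minimal, self-contained illustration using Python's built-in sqlite3 module (the table and data are invented for the example):

```python
import sqlite3

# A declarative query: we state WHAT we want, not HOW to fetch it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, age INTEGER)")
con.executemany("INSERT INTO users VALUES (?, ?)", [("Ana", 30), ("Bob", 25)])
rows = con.execute("SELECT name FROM users WHERE age > 26").fetchall()
con.close()
print(rows)  # [('Ana',)]
```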
# Iteration and Algorithms
This chapter revisits iteration and introduces some algorithms.
## Reassignment
A first assignment to a variable creates that variable and binds the value to it.
```
x = 5
print(x)
```
A reassignment binds a new value to the variable.
```
x = 7
print(x)
```
The variable $y$ is assig... | github_jupyter |
# Deep-nilmtk Results exploration
The current notebook illustrates an example of appliances
```
import pickle
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from nilmtk.losses import rmse, f1_score
sns.set_style("whitegrid")
from scipy.interpolate import Univariat... | github_jupyter |
# 0. Importable Globals and Metadata
Global data and helper functions to be imported into each notebook.
The functions here are outlined more clearly in 1.Globals_and_Metadata.ipynb
```
import requests
import requests_cache
from pprint import pprint
import numpy as np
import pandas as pd
import matplotlib.pyplot as p... | github_jupyter |
# BE 240 Lecture 4
# Sub-SBML
## Modeling diffusion, shared resources, and compartmentalized systems
## _Ayush Pandey_
```
# This notebook is designed to be converted to a HTML slide show
# To do this in the command prompt type (in the folder containing the notebook):
# jupyter nbconvert BE240_Lecture4_Sub-SBML.ipy... | github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from copy import deepcopy
from mpl_toolkits.axes_grid1 import make_axes_locatable
import matplotlib as mpl
import json
from keras.models import load_model
import pandas as pd
from scipy.stats import pearsonr
from keras.utils.generic_uti... | github_jupyter |
# Computer Vision for Medical Imaging: Part 2. Model Lineage and Model Registry
This notebook is part 2 of a 4-part series of techniques and services offer by SageMaker to build a model which predicts if an image of cells contains cancer. This notebook gives an overview of how to track model lineage, how to create a mo... | github_jupyter |
```
import matplotlib.pyplot as plt
import cv2
import numpy as np
%matplotlib inline
# Show a random images from CAT category
rand = np.random.randint(0,9373)
cat_rand=cv2.imread("CATS_DOGS/train/CAT/{}.jpg".format(str(rand)))
cat_rand=cv2.cvtColor(cat_rand,cv2.COLOR_BGR2RGB)
plt.imshow(cat_rand)
# Show a random i... | github_jupyter |
# Allen Cahn equation
* Physical space
\begin{align}
u_{t} = \epsilon u_{xx} + u - u^{3}
\end{align}
* Discretized with Chebyshev differentiation matrix (D)
\begin{align}
u_t = (\epsilon D^2 + I)u - u^{3}
\end{align}
# Imports
```
import numpy as np
import matplotlib.pyplot as plt
from rkstiff.grids import construc... | github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Reducer/image_reductions.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" hre... | github_jupyter |
Use Case 9
==========
Problem Definition:
-------------------
A climate scientist wishes to analyse potential correlations between *Ozone* and *Cloud* ECVs.
Required Toolbox Features:
--------------------------
* Access to and ingestion of ESA CCI Ozone and Cloud data (Atmosphere Mole Content of Ozone and Cloud Cov... | github_jupyter |
# pomegranate / sklearn GMM comparison
authors: <br>
Nicholas Farn (nicholasfarn@gmail.com) <br>
Jacob Schreiber (jmschreiber91@gmail.com)
<a href="https://github.com/scikit-learn/scikit-learn">sklearn</a> is a very popular machine learning package for Python which implements a wide variety of classical machine learn... | github_jupyter |
# Tutorial on building configuration file for Keras classification models
This tutorial is aimed to help users of the open-sourced library **DeepPavlov** to understand the structure of configuration files for **classification** models implemented in DeepPavlov on **Keras** (with tensorflow backend).
Let's take a look... | github_jupyter |
# ***Introduction to Radar Using Python and MATLAB***
## Andy Harrison - Copyright (C) 2019 Artech House
<br/>
# Planar Array Antennas
***
An extension of the linear array antenna is the planar arrays. Planar arrays are formed by placing the radiating elements in a grid and may take on various configurations, as sho... | github_jupyter |
```
#@title Preparing the data
# Importing libraries
import pandas as pd
import numpy as np
pd.set_option("display.precision", 4)
# Metrics
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
# Loading the data
df = pd.read_csv('Data.csv')
X = df.iloc[:, :-1].values
y = df.iloc[:, -1].value... | github_jupyter |
<center>
<h1 style="text-align:center"> Generalized Algebraic Data Types </h1>
<h2 style="text-align:center"> CS3100 Fall 2019 </h2>
</center>
## Simple language
Consider this simple language of integers and booleans
```
type value =
| Int of int
| Bool of bool
type expr =
| Val of value
| Plus of expr * e... | github_jupyter |
# Querying Wikidata
* by [R. Stuart Geiger](http://stuartgeiger.com), released [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/)
[Wikidata](https://wikidata.org) is a free linked database that serves as a central storage for the structured data in Wikipedia and other Wikimedia projects. Their [query service](... | github_jupyter |
# Annotated_Transformer_English_to_Chinese_Translator
```python
Created on :2020/09/19 23:27:28
@author :Caihao (Chris) Cui
@file :Annotated_Transformer_English_to_Chinese_Translator.ipynb
@content :Transformer English-to-Chinese.
@version :0.1.0
```
In this notebook, I will build a Transformer ... | github_jupyter |
# Dataset Versioning and Endpoint Deployment
<a href="https://colab.research.google.com/github/VertaAI/modeldb/blob/master/client/workflows/demos/dataset-registry-endpoint.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This notebook combines the c... | github_jupyter |
## Is this just magic? What is Numba doing to make code run quickly?
When you add the `jit` decorator (or function call), Numba examines the code in the function and then tries to compile it using the LLVM compiler. LLVM takes Numba's translation of the Python code and compiles it into something like assembly code, w... | github_jupyter |
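As a sketch of the idea, here is a small loop compiled with Numba's `njit` (the no-Python-mode variant of `jit`); if Numba is not installed, the decorator below falls back to a no-op so the code still runs, just without compilation:

```python
import numpy as np

try:
    from numba import njit  # compiled to machine code via LLVM when available
except ImportError:
    njit = lambda f: f      # fallback: plain Python, same results, just slower

@njit
def total(arr):
    # A simple loop that LLVM can compile down to tight machine code.
    s = 0.0
    for x in arr:
        s += x
    return s

print(total(np.arange(1_000_000, dtype=np.float64)))  # same value as arr.sum()
```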
# Text Vectorization
```
# Import python libs
import sqlite3 as sqlite # work with sqlite databases
import os # used to set working directory
import pandas as pd # process data with pandas dataframe
import numpy as np
# Setup pandas display options
pd.options.display.max_colwid... | github_jupyter |
```
#Packages
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# OpenTNSIM
import opentnsim.core as core
import opentnsim.graph_module as graph_module
import opentnsim.plot as plot
import opentnsim.model as model
```
### Making updated mixin classes
```
class VesselProperties:
"""Mixin cla... | github_jupyter |
# Control Flow Graph
The code in this notebook helps with obtaining the control flow graph of python functions.
**Prerequisites**
* This notebook needs some understanding on advanced concepts in Python, notably
* classes
## Control Flow Graph
The class `PyCFG` allows one to obtain the control flow graph.
```... | github_jupyter |
## Notebook to test Optimal controller on MS
### Notes
- Change picked peak file to whatever is most up-to-date
- Change MS from Simulator to real :-)
- Fix paths
```
import os
%load_ext autoreload
%autoreload 2
import xml.etree.ElementTree
import os,glob,sys
import numpy as np
import pandas as pd
import seaborn as ... | github_jupyter |
# Implementing a new model with Jack
In this tutorial, we focus on the minimal steps required to implement a new model from scratch using Jack.
We will implement a simple Bi-LSTM baseline for extractive question answering.
The architecture is as follows:
- Words of question and support are embedded using random embed... | github_jupyter |
# RE19-linguistic-classification: performance evaluation
This notebook evaluates the performance of a F and Q requirements classifiers using different linguistic features.
## 0. Set up (optional)
Run the following install functions if running Jupyter on a cloud environment like Colaboratory, which does not allow you... | github_jupyter |
# Hyper-parameter tuning
First, let's fetch the "titanic" dataset directly from OpenML.
```
import pandas as pd
```
In this dataset, missing values are stored as the character `"?"`. We will tell pandas about this when reading the CSV file.
```
df = pd.read_csv(
"https://www.openml.org/data/get_csv... | github_jupyter |
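A small sketch of the `na_values` idea with an inline CSV (the data below is invented; the notebook itself fetches the real file from OpenML):

```python
import pandas as pd
from io import StringIO

raw = "age,fare\n22,7.25\n?,71.28\n38,?\n"
tiny = pd.read_csv(StringIO(raw), na_values="?")  # treat "?" as a missing value
print(tiny.isna().sum().tolist())  # one missing value per column: [1, 1]
```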
# Scene Classification
## Overview
1. Preprocess
- Import pkg
- Import data
- Show a sample
- Shuffle split data
2. Analysis Features
3. Build Model
4. Train & Cross Validation
5. Predict
Reference:
- https://challenger.ai/competitions
- https://github.com/jupyter/notebook/issues/2287
## 1. Preproces... | github_jupyter |
# Steepest Descent
Copyright (C) 2020 Andreas Kloeckner
<details>
<summary>MIT License</summary>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation ... | github_jupyter |
#### Reference: <br>
* S. A. Niaki, E. Haghighat, X. Li, T. Campbell, and R. Vaziri, "Physics-Informed Neural Network for Modelling the Thermochemical Curing Process of Composite-Tool Systems During Manufacture", arXiv:2011.13511 (2020), https://arxiv.org/abs/2011.13511
```
# imports
import numpy as np
from numpy impo... | github_jupyter |
# Laws of probability
```
# Run this cell; do not change it.
import numpy as np
# Make printing of numbers a bit neater.
np.set_printoptions(precision=4, suppress=True)
import matplotlib.pyplot as plt
# Make the plots look more fancy.
plt.style.use('fivethirtyeight')
```
There are two important laws of probability th... | github_jupyter |
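Assuming the two laws in question are the addition rule (for mutually exclusive events) and the multiplication rule (for independent events), both can be checked by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
rolls = rng.integers(1, 7, size=(100_000, 2))  # 100k throws of two dice

# Addition rule (mutually exclusive events): P(1 or 2 on die 1) = 1/6 + 1/6
p_one_or_two = np.mean((rolls[:, 0] == 1) | (rolls[:, 0] == 2))

# Multiplication rule (independent events): P(1 on die 1 AND die 2) = 1/6 * 1/6
p_double_one = np.mean((rolls[:, 0] == 1) & (rolls[:, 1] == 1))

print(p_one_or_two, p_double_one)  # close to 1/3 and 1/36
```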
```
# Render our plots inline
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (15, 5)
```
# 1.1 Reading data from a csv file
You can read data from a CSV file using the `read_csv` function. By default, it assumes that the fields are comma-separated.
We're goi... | github_jupyter |
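A minimal sketch of `read_csv` on an inline string (the file contents are invented for illustration):

```python
import pandas as pd
from io import StringIO

csv_text = "name,age\nAda,36\nGrace,45\n"
demo = pd.read_csv(StringIO(csv_text))  # fields are comma-separated by default
print(demo.shape)  # (2, 2)
```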
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
from keras import applications
model = applications.resnet50.ResNet50(weights='imagenet', include_top=False, pooling='avg')
from keras.preprocessing import image
import numpy as np
from keras.applications.resnet50 import preprocess_input
# load image s... | github_jupyter |
```
import networkx as nx
from networkx.readwrite import gexf
import pandas as pd
import matplotlib.pyplot as plt
G = gexf.read_gexf('data/outputs/tags-tags.gexf')
print(nx.__version__)
print(nx.info(G))
```
# EDA on graph
1. Basic info and average degree
2. check if whole network is connected, is it a maximal connec... | github_jupyter |
# GPU
```
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
print(gpu_info)
```
# CFG
```
CONFIG_NAME = 'config32.yml'
debug = False
from google.colab import drive, auth
# Mount Google Drive
drive.mount('/content/drive')
# Set up Google Cloud credentials
auth.authenticate_user()
def get_github_secret():
import json
... | github_jupyter |
```
from mxnet import nd
from mxnet.gluon import nn
class MLP(nn.Block):
    # Declare layers with model parameters; here we declare two fully connected layers.
def __init__(self, **kwargs):
        # Call the constructor of the MLP parent class Block to perform the necessary
        # initialization. That way, other arguments (such as the model parameters
        # `params` introduced in a later section) can also be specified when
        # constructing an instance.
super(MLP, self).__init__(**kwargs)
        # Hidden layer.
self.hidden = nn.D... | github_jupyter |
### Initial set-up
Getting up to speed with the current version
```
!ls ../ | grep script
# !dvc fetch ../../data/default_env.pickle.dvc
# !dvc checkout ../../data/default_env.pickle.dvc
!dvc repro -s gen_env
!dvc dag > dvc_dag.out
```
### Check out gen_env
```
import sys
sys.path.append('../src')
import yaml
import... | github_jupyter |
# NLP model for detecting signs of depression from Google search phrases
This notebook file is for training a model to be used in this [hackathon](https://hackon.hackerearth.com/). It will take a phrase as input and use traditional NLP and sentiment analysis methods to analyse whether the individual who used the phrase ... | github_jupyter |
```
# https://www.tensorflow.org/tutorials/text/text_generation
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import numpy as np
import os
import time
root = r'D:\Users\Arkady\Verint\Coursera_2019_Tensorflow_Specialization\Course3_NLP'
path_to_file = tf.ker... | github_jupyter |
# Regression Week 4: Ridge Regression (interpretation)
In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:
* Use ... | github_jupyter |
## Train a model with Mushroom data using XGBoost algorithm
### Model is trained with XGBoost installed in notebook instance
### In the later examples, we will train using SageMaker's XGBoost algorithm
```
# Install xgboost in notebook instance.
#### Command to install xgboost
#!conda install -y -c conda-forge xgboo... | github_jupyter |
<a href="https://colab.research.google.com/github/afnf33/emoTale/blob/master/models/Korean_multisentiment_classifier_KoBERT.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Mount Google Drive
from google.colab import drive
drive.mount('/content/dri... | github_jupyter |
```
!pip install --quiet "git+https://github.com/optimizacion-2-2021-1-gh-classroom/practica-2-segunda-parte-caroacostatovany.git#egg=mex&subdirectory=src"
#!pip uninstall mex -y
import mex
help(mex)
import numpy as np
import sys
np.set_printoptions(threshold=sys.maxsize)
import scipy.io as sio
mat = sio.loadmat('AFIRO... | github_jupyter |
```
# https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/
from keras.models import Sequential
from keras.layers import Dense
import numpy
# fix random seed for reproducibility
numpy.random.seed(7)
#load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.data", delimiter=",")
... | github_jupyter |
## INTRODUCTION
- It’s a Python-based scientific computing package targeted at two sets of audiences:
    - A replacement for NumPy that uses the power of GPUs
    - A deep learning research platform that provides maximum flexibility and speed
- Pros:
    - Interactively debugging PyTorch. Many users who have used both fr... | github_jupyter |
# Image Overlay #
This overlays the frame with a small 256-tone image.
First you download the bit file.
```
from pynq.overlays.video import *
from pynq.lib.video import *
base = VideoOverlay("video.bit")
hdmi_in = base.video.hdmi_in
hdmi_out = base.video.hdmi_out
```
Then start up the PRControl, the video will not ... | github_jupyter |
# tidy3d
gdsfactory simulation plugin for tidy3d
[tidy3D is a FDTD web based software](https://simulation.cloud/)
```
# basic ipython configuration (reload source code automatically and plots inline)
%load_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import gdsp... | github_jupyter |
# Usage Examples
Version: pyecharts 0.5.11
Before we begin, one point must be emphasized: the string-encoding difference between Python 2.x and Python 3.x. In Python 3.x you can treat strings as Unicode by default, but in Python 2.x that is not the case, owing to the confusing definition of its bytes object. pyecharts processes strings and files as Unicode, so if you are using Python 2.x, be sure to insert this line at the top:
from __future__ import unicode_literals
Now let's start using pyecharts properly, here directly with the official sample data:
## Bar chart - Bar
render()
By default this will generate, in the root directory, a rende... | github_jupyter |
___
<a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a>
___
<center><em>Copyright Pierian Data</em></center>
<center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>
# BONUS - Multivariate Time Series with RNN
---
----
# PLEASE... | github_jupyter |
```
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from matplotlib import rcParams
import numpy as np
from scipy.spatial.distance import pdist, cosine
from scipy import stats
from itertools import combinations, permutations
%matplotlib inline
np.random.seed(100)
rcParams['font.sans-ser... | github_jupyter |
# Welcome to fastai
```
from fastai.vision import *
from fastai.gen_doc.nbdoc import *
from fastai.core import *
from fastai.basic_train import *
```
The fastai library simplifies training fast and accurate neural nets using modern best practices. It's based on research in to deep learning best practices undertaken a... | github_jupyter |
## 01 IO, Dimensional Reduction, and Clustering
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons Licence" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" title='This work is licensed under a Creative Commons Attribution 4.0 International Lice... | github_jupyter |
```
%matplotlib inline
import pandas as pd
import seaborn as sn
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
```
# Alternative sea-salt correction methodology for Ireland
Our current sea-salt correction methodology for Mg, Ca and SO4 assumes (i) that all chloride is marine, and (ii) tha... | github_jupyter |
# TME 8: Split
> Instructions: the file TME8_Sujet.ipynb is to be submitted on the Moodle site for the course https://moodle-sciences.upmc.fr/moodle-2019/course/view.php?id=4248. If you are working in pairs, rename it TME8_nom1_nom2.ipynb.
Don't forget to save your notebook frequently!!
```
from PIL import Image
from p... | github_jupyter |
# Using Korean fonts in Matplotlib
## 1. Import the required packages.
```
# Render plots inline in the notebook
%matplotlib inline
# Import the required packages and libraries
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.font_manager as fm
# Work around minus signs rendering as broken glyphs with non-default fonts
mpl.rcParams['axes.unicode_minus'] = False
```
## 2. Create some arbitrary data to plot... | github_jupyter |
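A minimal sketch of the font setup this notebook is building toward. The family name 'NanumGothic' below is an assumption for illustration — substitute whatever Korean-capable font `font_manager` actually finds on your system:

```python
import matplotlib as mpl
import matplotlib.font_manager as fm

# List a few of the font family names matplotlib can see on this machine
names = sorted({f.name for f in fm.fontManager.ttflist})
print(names[:5])

# Point matplotlib at a CJK-capable family (hypothetical name) and keep
# minus signs as ASCII so they don't render as missing glyphs
mpl.rcParams['font.family'] = 'NanumGothic'
mpl.rcParams['axes.unicode_minus'] = False
```

Setting an unavailable family does not raise immediately; matplotlib only warns when it falls back at draw time, so checking `ttflist` first is the safer order.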
https://docs.google.com/document/d/15GMxvJYAUO-b96c18QmfcF278IVVBBtCFk8nayDp9oY/edit

```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import glob
import os
import scipy as sp
from scipy import stats

from tools.plt import color2d  # from the 'srcole/tools' repo
from matplotlib import cm
pd.options.display.max_rows = 1000
pd.options.display.max_columns = 100
```

### Load dataframes

```
# Load cities info
df_cities = pd.read_csv('/gh/data2/yelp/city_pop.csv', index_col=0)
df_cities.head()
# Load restaurants
df_restaurants = pd.read_csv('/gh/data2/yelp/food_by_city/df_restaurants.csv', index_col=0)
df_restaurants.head()
# Load categories by restaurant
df_categories = pd.read_pickle('/gh/data2/yelp/food_by_city/df_categories_sparse.pkl')
df_categories.head()
# These are used for the 'category' input to the search function
df_categories_info = pd.read_json('/gh/data2/yelp/categories.json')
df_categories_info.head()
```

# Cuisines by city

```
# New dataframe: for each cuisine, compute the average rating, average cost, and number of restaurants
all_cuisines = df_categories.keys()
cuisine_dict = {'cuisine': [],
                'avg_rating': [],
                'avg_cost': [],
                'N': []}
for k in all_cuisines:
    df_temp = df_restaurants[df_categories[k] == 1]
    cuisine_dict['cuisine'].append(k)
    cuisine_dict['avg_rating'].append(df_temp['rating'].mean())
    cuisine_dict['avg_cost'].append(df_temp['cost'].mean())
    cuisine_dict['N'].append(len(df_temp))
df_cuisine = pd.DataFrame.from_dict(cuisine_dict)

# Determine cuisines of interest
# Only look at cuisines with more than 1000 restaurants
min_N = 1000
category_counts = df_categories.sum()
categories_keep = category_counts[category_counts > min_N]
cuisines_rmv = ['bars', 'beer_and_wine', 'beerbar', 'breweries', 'butcher', 'cafes', 'catering',
                'chickenshop', 'cocktailbars', 'convenience', 'cosmetics', 'customcakes',
                'deptstores', 'divebars', 'drugstores', 'eventplanning', 'farmersmarket', 'fooddeliveryservices',
                'foodstands', 'gastropubs', 'gourmet', 'grocery', 'healthmarkets', 'importedfood', 'intlgrocery',
                'karaoke', 'lounges', 'markets', 'meats', 'musicvenues', 'personalchefs', 'pubs',
                'restaurants', 'salvadoran', 'seafoodmarkets', 'servicestations', 'sportsbars', 'streetvendors',
                'tapasmallplates', 'venues', 'wine_bars', 'wineries']
categories_keep.drop(cuisines_rmv, inplace=True)
categories_keep = categories_keep.keys()
df_categories.loc[:, categories_keep]
```

# IN THIS NOTEBOOK AND OTHERS, NEED TO FIX HOW USE DF_CAT IN BELOW DF NOW THAT ITS SPARSE

```
# Set up dataframe for restaurants with categories of interest
restaurant_have_category = df_categories.loc[:, categories_keep].sum(axis=1).to_dict()
df_restaurants_keep_idx = [k for k in restaurant_have_category.keys() if restaurant_have_category[k]]
df_restaurants_temp = df_restaurants.loc[df_restaurants_keep_idx].reset_index(drop=True)
df_categories_temp = df_categories.loc[df_restaurants_keep_idx, categories_keep].reset_index(drop=True)
df_restaurants_temp = df_restaurants_temp.merge(df_categories_temp, left_index=True, right_index=True)
# Compute fraction of each cuisine by city
df_city_cuisines = df_restaurants_temp.groupby('city').mean()
df_state_cuisines = df_restaurants_temp.groupby('state').mean()
```

# Explore features by city
* rating, review_count, cost, has_delivery, has_pickup
* each cuisine

```
df_city_cuisines.head(5)
```

# Highest average rating
* The cities with the highest average ratings are the most popular cities, because Yelp returns the top restaurants in each city

```
df_city_cuisines.sort_values('rating', ascending=False, inplace=True)

N = 60
plt.figure(figsize=(30, 5))
plt.bar(np.arange(N), df_city_cuisines['rating'].values[:N], color='k', ecolor='.5')
plt.xticks(np.arange(N), df_city_cuisines.index[:N])
plt.ylabel('Average rating', size=20)
plt.xlabel('City', size=20)
plt.xticks(size=15, rotation='vertical')
# plt.yticks([10**3, 10**4, 10**5], size=15)
plt.ylim((3.5, 4.5))
plt.xlim((-1, N))
```

# boba

```
c = 'bubbletea'
df_city_cuisines.sort_values(c, ascending=False, inplace=True)

N = 60
plt.figure(figsize=(30, 5))
plt.bar(np.arange(N), df_city_cuisines[c].values[:N], color='k', ecolor='.5')
plt.xticks(np.arange(N), df_city_cuisines.index[:N])
plt.ylabel('Fraction of restaurants are\n' + c, size=20)
plt.xlabel('City', size=20)
plt.xticks(size=15, rotation='vertical')
plt.xlim((-1, N))
c = 'mexican'
df_state_cuisines.sort_values(c, ascending=False, inplace=True)

plt.figure(figsize=(30, 5))
plt.bar(np.arange(len(df_state_cuisines)), df_state_cuisines[c].values, color='k', ecolor='.5')
plt.xticks(np.arange(len(df_state_cuisines)), df_state_cuisines.index)
plt.ylabel('Fraction of restaurants are\n' + c, size=20)
plt.xlabel('State', size=20)
plt.xticks(size=15, rotation='vertical')
plt.xlim((-1, len(df_state_cuisines)))
c = 'italian'
df_state_cuisines.sort_values(c, ascending=False, inplace=True)

plt.figure(figsize=(30, 5))
plt.bar(np.arange(len(df_state_cuisines)), df_state_cuisines[c].values, color='k', ecolor='.5')
plt.xticks(np.arange(len(df_state_cuisines)), df_state_cuisines.index)
plt.ylabel('Fraction of restaurants are\n' + c, size=20)
plt.xlabel('State', size=20)
plt.xticks(size=15, rotation='vertical')
plt.xlim((-1, len(df_state_cuisines)))
```
| github_jupyter |
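The `groupby('city').mean()` step used in the Yelp notebook above works because the category columns are 0/1 flags, so their mean within a city is the fraction of that city's restaurants carrying the category. A toy check of that behaviour with made-up data (not the Yelp set):

```python
import pandas as pd

# Made-up restaurants: city label plus 0/1 category flags
df = pd.DataFrame({
    "city": ["SD", "SD", "SD", "LA"],
    "mexican": [1, 1, 0, 1],
    "bubbletea": [0, 1, 0, 0],
})
# Mean of a 0/1 column per city == fraction of restaurants with that category
frac = df.groupby("city").mean()
print(frac)
print(frac.loc["SD", "mexican"])  # 2 of the 3 SD restaurants are mexican
```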
```
import matplotlib
%matplotlib inline
# matplotlib.use('agg')
# coding: utf-8
import csv
import poppy
# get_ipython().run_line_magic('pylab', 'inline --no-import-all')
# matplotlib.rcParams['image.origin'] = 'lower'
print(poppy.__version__)
from poppy.sub_sampled_optics import Subapertures, ShackHartmannWavefront... | github_jupyter |
# Optimization Examples
[](https://colab.research.google.com/github/slickml/slick-ml/blob/master/examples/optimization.ipynb)
### Google Colab Configuration
```
!git clone https://github.com/slickml/slick-ml.git
%cd slick-ml
!pip install -r re... | github_jupyter |