Table of Contents
1 Dimensionality Reduction
 1.1 The Problem
  1.1.1 Multi-Collinearity
 1.2 Sparsity
2 Principal Component Analysis
 2.1 Important Points
3 Singular Value Decomposition
 3.1 Measuring the Quality of the Reconstruction
 3.2 ...
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

reviews = pd.read_csv("mcdonalds-yelp-negative-reviews.csv", encoding='latin-1')
reviews = open("poor_amazon_toy_reviews.txt", encoding='latin-1')

#text = reviews["review"].values
text = reviews.readlines()

vectorizer = CountVectorizer(ng...
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
Principal Component Analysis If you have an original matrix $Z$, you can decompose this matrix into two smaller matrices $X$ and $Q$. Important Points: - Multiplying a vector by a matrix typically changes the direction of the vector. For instance: Lazy Programmer - Tutorial to PCA However, there are eigenva...
# what is the shape of our features?
features.shape

from sklearn.decomposition import PCA
pca = PCA(n_components=4)
Z = pca.fit_transform(features)

# what is the shape of Z?
Z.shape

# what will happen if we take the correlation matrix and covariance matrix of our new reduced features?
import numpy as np
covariances = ...
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
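The notebook's point that PCA produces decorrelated components can be checked directly; a minimal sketch on synthetic data (the variable names below are illustrative, not the notebook's `features`):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# two highly correlated features plus one independent noise feature
base = rng.normal(size=(500, 1))
features = np.hstack([base,
                      base + 0.05 * rng.normal(size=(500, 1)),
                      rng.normal(size=(500, 1))])

pca = PCA(n_components=2)
Z = pca.fit_transform(features)

# covariance of the transformed features is (near-)diagonal:
# the off-diagonal entries vanish up to floating-point error
cov = np.cov(Z.T)
print(np.round(cov, 6))
```

This is exactly what the covariance-matrix check in the cell above should show: the multi-collinearity present in the raw features disappears after the PCA transform.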
Singular Value Decomposition Given an input matrix $A$, we want to try to represent it instead as three smaller matrices $U$, $\Sigma$, and $V$. Instead of **$n$ original terms**, we want to represent each document as **$r$ concepts** (also referred to as **latent dimensions**, or **latent factors**): Minin...
# create sample data
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import svd

x = np.linspace(1, 20, 20)  # create the first dimension
x = np.concatenate((x, x))
y = x + np.random.normal(0, 1, 40)  # create the second dimension
z = x + np.random.normal(0, 2, 40)  # create the third dimension
a = x + np...
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
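Measuring the quality of the reconstruction can be made concrete with NumPy's SVD: keep the top r singular values and measure the Frobenius-norm error. A sketch on toy data (not the notebook's variables):

```python
import numpy as np

rng = np.random.RandomState(1)
A = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 10))  # rank-3 matrix
A += 0.01 * rng.normal(size=A.shape)                     # plus small noise

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def reconstruct(r):
    # rank-r approximation: keep the top r singular triplets
    return (U[:, :r] * s[:r]) @ Vt[:r]

# the error drops sharply once r reaches the true rank
for r in (1, 2, 3):
    err = np.linalg.norm(A - reconstruct(r))
    print(r, round(err, 4))
```

The Eckart–Young theorem guarantees the rank-r truncation is the best rank-r approximation in Frobenius norm, which is why the truncated SVD is the standard tool for this kind of compression.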
GloVe Global vectors for word representation: GloVe: Global Vectors for Word Representation
!pip3 install gensim

# import GloVe embeddings into a word2vec format that is consumable by Gensim
from gensim.scripts.glove2word2vec import glove2word2vec

glove_input_file = 'glove.6B.100d.txt'
word2vec_output_file = 'glove.6B.100d.txt.word2vec'
glove2word2vec(glove_input_file, word2vec_output_file)

from gensim.models...
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
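Once loaded, GloVe vectors are typically queried by cosine similarity; the classic king − man + woman ≈ queen analogy can be sketched with toy 2-d vectors (the vectors below are made up for illustration — they are not real GloVe embeddings):

```python
import numpy as np

# toy embeddings: axis 0 encodes gender, axis 1 encodes royalty (illustrative values)
emb = {
    "man":   np.array([ 1.0, 0.0]),
    "woman": np.array([-1.0, 0.0]),
    "king":  np.array([ 1.0, 1.0]),
    "queen": np.array([-1.0, 1.0]),
}

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman lands closest to queen
target = emb["king"] - emb["man"] + emb["woman"]
best = max((w for w in emb if w != "king"), key=lambda w: cosine_sim(emb[w], target))
print(best)  # queen
```

With real GloVe vectors the same query is usually run through Gensim's `most_similar(positive=..., negative=...)`, which performs this arithmetic internally.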
Using spaCy word2vec embeddings
import en_core_web_md
import spacy
from scipy.spatial.distance import cosine

nlp = en_core_web_md.load()
words = ["woman", "king", "man", "queen", "puppy", "kitten", "cat", "quarterback", "football",
         "stadium", "touchdown", "dog", "government", "tax", "federal", "judicial", "elections", "avo...
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
Using GloVe
%matplotlib inline
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
from sklearn.decomposition import TruncatedSVD
import matplotlib.pyplot as plt

dimension_model = PCA(n_components=2)
reduced_vectors = dimension_model.fit_transform(vectors)

for i, vector in enumerate(reduced_vectors):
    x = ...
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
Clustering Text
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=4)
cluster_assignments = kmeans.fit_predict(reduced_vectors)

for cluster_assignment, word in zip(cluster_assignments, words):
    print(f"{word} assigned to cluster {cluster_assignment}")

color_map = {0: "r", 1: "b", 2: "g", 3: "y"}

plt....
_____no_output_____
MIT
week8/Dimensionality Reduction and Clustering.ipynb
yixinouyang/dso-560-nlp-and-text-analytics
์šฉ์–ด ์ •์˜
# Hypothesis setting
# A hypothesis test is a statistical method that uses sample data to evaluate a hypothesis about a population.
1. First, we state a hypothesis about a population. Usually the hypothesis concerns the value of a population parameter.
2. Before we select a sample, we use the hypothesis to predict the characteristi...
_____no_output_____
MIT
08_OSK.ipynb
seokyeongheo/slow_statistics
Problems
1. Identify the four steps of a hypothesis test as presented in this chapter. 1) State the hypothesis: state the null and alternative hypotheses. 2) Set the alpha level and the confidence interval. 3) Collect data and compute sample statistics. 4) Make a decision. 2. Define the alpha level and the critical region for a hypothesis test. For the independent and dependent variables...
_____no_output_____
MIT
08_OSK.ipynb
seokyeongheo/slow_statistics
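The four steps listed above can be run end to end with SciPy; a sketch using a one-sample t-test (the sample data are simulated here purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(42)

# 1) State the hypothesis: H0: population mean = 50, H1: mean != 50
# 2) Set the alpha level before looking at the sample
alpha = 0.05

# 3) Collect data and compute the sample statistic
sample = rng.normal(loc=55, scale=10, size=100)
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

# 4) Make a decision: reject H0 if the p-value falls below alpha
reject = p_value < alpha
print(round(t_stat, 3), round(p_value, 6), reject)
```

Here the sample is drawn from a population with mean 55, so the test rejects H0; had the true mean been 50, the p-value would exceed alpha about 95% of the time.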
Source: https://qiskit.org/documentation/tutorials/circuits/01_circuit_basics.html Circuit Basics
import numpy as np
from qiskit import QuantumCircuit
%matplotlib inline
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
Create a Quantum Circuit acting on a quantum register of three qubits
circ = QuantumCircuit(3)
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
After you create the circuit with its registers, you can add gates (“operations”) to manipulate the registers. As you proceed through the tutorials you will find more gates and circuits; below is an example of a quantum circuit that makes a three-qubit GHZ state |ψ⟩ = (|000⟩ + |111⟩)/√2. To create such a state, we start wit...
# Add a H gate on qubit 0, putting this qubit in superposition.
circ.h(0)
# Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting
# the qubits in a Bell state.
circ.cx(0, 1)
# Add a CX (CNOT) gate on control qubit 0 and target qubit 2, putting
# the qubits in a GHZ state.
circ.cx(0, 2)
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
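The same GHZ construction can be reproduced with plain NumPy by multiplying gate matrices, which is a useful sanity check when Qiskit is unavailable. This is a sketch; qubit ordering follows Qiskit's little-endian convention (qubit 0 is the least significant bit):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def cx(control, target, n=3):
    # CNOT as a permutation of computational-basis states
    U = np.zeros((2**n, 2**n))
    for i in range(2**n):
        j = i ^ (1 << target) if (i >> control) & 1 else i
        U[j, i] = 1
    return U

state = np.zeros(8)
state[0] = 1.0                               # |000>
state = np.kron(I2, np.kron(I2, H)) @ state  # H on qubit 0
state = cx(0, 1) @ state                     # CNOT, control 0 -> target 1
state = cx(0, 2) @ state                     # CNOT, control 0 -> target 2

print(np.round(state, 3))  # amplitude 1/sqrt(2) on |000> and |111>, zero elsewhere
```

The final vector has weight only on indices 0 (|000⟩) and 7 (|111⟩), matching the GHZ state produced by the circuit above.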
Visualize circuit
circ.draw('mpl')
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
Simulating Circuits To simulate a circuit we use the quant-info module in Qiskit. This simulator returns the quantum state, which is a complex vector of dimension 2ⁿ, where n is the number of qubits (so be careful using this as it will quickly get too large to run on your machine). There are two stages to the simulat...
from qiskit.quantum_info import Statevector

# Set the initial state of the simulator to the ground state using from_int
state = Statevector.from_int(0, 2**3)

# Evolve the state by the quantum circuit
state = state.evolve(circ)

# Draw using LaTeX
state.draw('latex')
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
Visualization Below, we use the visualization function to plot the qsphere and a Hinton diagram representing the real and imaginary components of the state density matrix ρ.
state.draw('qsphere')
state.draw('hinton')
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
Unitary representation of a circuit Qiskit’s quant_info module also has an operator method which can be used to make a unitary operator for the circuit. This calculates the 2ⁿ×2ⁿ matrix representing the quantum circuit.
from qiskit.quantum_info import Operator

U = Operator(circ)
# Show the results
U.data
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
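A quick property worth verifying on any circuit operator is unitarity, U U† = I; sketched here with a NumPy stand-in for `U.data` (the Hadamard-based 8×8 matrix below is illustrative, not the GHZ circuit's operator):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = np.kron(H, np.eye(4))  # an 8x8 unitary: H on one qubit, identity on the rest

# a matrix is unitary iff its conjugate transpose is its inverse
print(np.allclose(U @ U.conj().T, np.eye(8)))  # True
```

The same check applied to `Operator(circ).data` confirms that every gate-only circuit (no measurements) is reversible.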
OpenQASM backendThe simulators above are useful because they provide information about the state output by the ideal circuit and the matrix representation of the circuit. However, a real experiment terminates by measuring each qubit (usually in the computational |0โŸฉ,|1โŸฉ basis). Without measurement, we cannot gain infor...
# Create a Quantum Circuit meas = QuantumCircuit(3, 3) meas.barrier(range(3)) # map the quantum measurement to the classical bits meas.measure(range(3), range(3)) # The Qiskit circuit object supports composition. # Here the meas has to be first and front=True (putting it before) # as compose must put a smaller circuit...
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
This circuit adds a classical register, and three measurements that are used to map the outcome of qubits to the classical bits.To simulate this circuit, we use the qasm_simulator in Qiskit Aer. Each run of this circuit will yield either the bitstring 000 or 111. To build up statistics about the distribution of the bit...
# Adding the transpiler to reduce the circuit to QASM instructions # supported by the backend from qiskit import transpile # Use Aer's qasm_simulator from qiskit.providers.aer import QasmSimulator backend = QasmSimulator() # First we have to transpile the quantum circuit # to the low-level QASM instructions used by ...
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
Once you have a result object, you can access the counts via the function get_counts(circuit). This gives you the aggregated binary outcomes of the circuit you submitted.
counts = result_sim.get_counts(qc_compiled)
print(counts)
{'000': 503, '111': 521}
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
Approximately 50 percent of the time, the output bitstring is 000. Qiskit also provides a function plot_histogram, which allows you to view the outcomes.
from qiskit.visualization import plot_histogram
plot_histogram(counts)
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
The estimated outcome probabilities Pr(000) and Pr(111) are computed by taking the aggregate counts and dividing by the number of shots (times the circuit was repeated). Try changing the shots keyword in the execute function and see how the estimated probabilities change.
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
_____no_output_____
MIT
qiskit/circuit_basics.ipynb
jonhealy1/learning-quantum
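The probability estimate described above is simply counts divided by shots; a short sketch using the counts printed earlier in the notebook:

```python
# counts from the simulated run above (1024 shots total)
counts = {'000': 503, '111': 521}
shots = sum(counts.values())

# estimated outcome probabilities: aggregate counts / number of shots
probs = {bitstring: n / shots for bitstring, n in counts.items()}
print(probs)
```

Increasing `shots` tightens these estimates around the ideal 0.5/0.5 distribution, at the standard Monte-Carlo rate of 1/√shots.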
Linear algebra
import numpy as np
np.__version__
_____no_output_____
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Matrix and vector products Q1. Predict the results of the following code.
import numpy as np

x = [1, 2]
y = [[4, 1], [2, 2]]
print(np.dot(x, y))
print(np.dot(y, x))
print(np.matmul(x, y))
print(np.inner(x, y))
print(np.inner(y, x))
[8 5] [6 6] [8 5] [6 6] [6 6]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q2. Predict the results of the following code.
x = [[1, 0], [0, 1]]
y = [[4, 1], [2, 2], [1, 1]]
print(np.dot(y, x))
print(np.matmul(y, x))
[[4 1] [2 2] [1 1]] [[4 1] [2 2] [1 1]]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q3. Predict the results of the following code.
x = np.array([[1, 4], [5, 6]])
y = np.array([[4, 1], [2, 2]])
print(np.vdot(x, y))
print(np.vdot(y, x))
print(np.dot(x.flatten(), y.flatten()))
print(np.inner(x.flatten(), y.flatten()))
print((x * y).sum())
30 30 30 30 30
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q4. Predict the results of the following code.
x = np.array(['a', 'b'], dtype=object)
y = np.array([1, 2])
print(np.inner(x, y))
print(np.inner(y, x))
print(np.outer(x, y))
print(np.outer(y, x))
abb abb [['a' 'aa'] ['b' 'bb']] [['a' 'b'] ['aa' 'bb']]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Decompositions Q5. Get the lower-trianglular `L` in the Cholesky decomposition of x and verify it.
x = np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]], dtype=np.int32)
L = np.linalg.cholesky(x)
print(L)
assert np.array_equal(np.dot(L, L.T.conjugate()), x)
[[ 2. 0. 0.] [ 6. 1. 0.] [-8. 5. 3.]]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q6. Compute the qr factorization of x and verify it.
x = np.array([[12, -51, 4], [6, 167, -68], [-4, 24, -41]], dtype=np.float32)
q, r = np.linalg.qr(x)
print("q=\n", q, "\nr=\n", r)
assert np.allclose(np.dot(q, r), x)
q= [[-0.85714287 0.39428571 0.33142856] [-0.42857143 -0.90285712 -0.03428571] [ 0.2857143 -0.17142858 0.94285715]] r= [[ -14. -21. 14.] [ 0. -175. 70.] [ 0. 0. -35.]]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q7. Factor x by Singular Value Decomposition and verify it.
x = np.array([[1, 0, 0, 0, 2], [0, 0, 3, 0, 0], [0, 0, 0, 0, 0], [0, 2, 0, 0, 0]], dtype=np.float32)
U, s, V = np.linalg.svd(x, full_matrices=False)
print("U=\n", U, "\ns=\n", s, "\nV=\n", V)
assert np.allclose(np.dot(U, np.dot(np.diag(s), V)), x)
U= [[ 0. 1. 0. 0.] [ 1. 0. 0. 0.] [ 0. 0. 0. -1.] [ 0. 0. 1. 0.]] s= [ 3. 2.23606801 2. 0. ] V= [[ 1. 0. 0.] [ 0. 1. 0.] [ 0. 0. 1.]]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Matrix eigenvalues Q8. Compute the eigenvalues and right eigenvectors of x. (Name them eigenvals and eigenvecs, respectively)
x = np.diag((1, 2, 3))
eigenvals = np.linalg.eig(x)[0]
eigenvals_ = np.linalg.eigvals(x)
assert np.array_equal(eigenvals, eigenvals_)
print("eigenvalues are\n", eigenvals)
eigenvecs = np.linalg.eig(x)[1]
print("eigenvectors are\n", eigenvecs)
eigenvalues are [ 1. 2. 3.] eigenvectors are [[ 1. 0. 0.] [ 0. 1. 0.] [ 0. 0. 1.]]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q9. Predict the results of the following code.
print(np.array_equal(np.dot(x, eigenvecs), eigenvals * eigenvecs))
True
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Norms and other numbers Q10. Calculate the Frobenius norm and the condition number of x.
x = np.arange(1, 10).reshape((3, 3))
print(np.linalg.norm(x, 'fro'))
print(np.linalg.cond(x, 'fro'))
16.8819430161 4.56177073661e+17
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q11. Calculate the determinant of x.
x = np.arange(1, 5).reshape((2, 2))
out1 = np.linalg.det(x)
out2 = x[0, 0] * x[1, 1] - x[0, 1] * x[1, 0]
assert np.allclose(out1, out2)
print(out1)
-2.0
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q12. Calculate the rank of x.
x = np.eye(4)
out1 = np.linalg.matrix_rank(x)
out2 = np.linalg.svd(x)[1].size
assert out1 == out2
print(out1)
4
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q13. Compute the sign and natural logarithm of the determinant of x.
x = np.arange(1, 5).reshape((2, 2))
sign, logdet = np.linalg.slogdet(x)
det = np.linalg.det(x)
assert sign == np.sign(det)
assert logdet == np.log(np.abs(det))
print(sign, logdet)
-1.0 0.69314718056
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Q14. Return the sum along the diagonal of x.
x = np.eye(4)
out1 = np.trace(x)
out2 = x.diagonal().sum()
assert out1 == out2
print(out1)
4.0
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
Solving equations and inverting matrices Q15. Compute the inverse of x.
x = np.array([[1., 2.], [3., 4.]])
out1 = np.linalg.inv(x)
assert np.allclose(np.dot(x, out1), np.eye(2))
print(out1)
[[-2. 1. ] [ 1.5 -0.5]]
MIT
Linear_algebra_Solutions.ipynb
suryasuresh06/cvg1
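In practice, explicitly inverting a matrix in order to solve a linear system is usually avoided; `np.linalg.solve` is both faster and more numerically stable. A short comparison on the same `x` (the right-hand side `b` is chosen for illustration):

```python
import numpy as np

x = np.array([[1., 2.], [3., 4.]])
b = np.array([1., 1.])

# solve x @ v = b directly, without forming the inverse
v_solve = np.linalg.solve(x, b)
v_inv = np.linalg.inv(x) @ b

assert np.allclose(v_solve, v_inv)
print(v_solve)  # [-1.  1.]
```

Both routes agree here, but for ill-conditioned or large systems the explicit inverse loses precision, so `solve` (or an appropriate factorization) is the idiomatic choice.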
่‡ช็„ถ่ฏญ่จ€ๅค„็†ๅฎžๆˆ˜โ€”โ€”ๅ‘ฝๅๅฎžไฝ“่ฏ†ๅˆซ ่ฟ›ๅ…ฅModelArts็‚นๅ‡ปๅฆ‚ไธ‹้“พๆŽฅ๏ผšhttps://www.huaweicloud.com/product/modelarts.html ๏ผŒ ่ฟ›ๅ…ฅModelArtsไธป้กตใ€‚็‚นๅ‡ปโ€œ็ซ‹ๅณไฝฟ็”จโ€ๆŒ‰้’ฎ๏ผŒ่พ“ๅ…ฅ็”จๆˆทๅๅ’Œๅฏ†็ ็™ปๅฝ•๏ผŒ่ฟ›ๅ…ฅModelArtsไฝฟ็”จ้กต้ขใ€‚ ๅˆ›ๅปบModelArts notebookไธ‹้ข๏ผŒๆˆ‘ไปฌๅœจModelArtsไธญๅˆ›ๅปบไธ€ไธชnotebookๅผ€ๅ‘็Žฏๅขƒ๏ผŒModelArts notebookๆไพ›็ฝ‘้กต็‰ˆ็š„Pythonๅผ€ๅ‘็Žฏๅขƒ๏ผŒๅฏไปฅๆ–นไพฟ็š„็ผ–ๅ†™ใ€่ฟ่กŒไปฃ็ ๏ผŒๅนถๆŸฅ็œ‹่ฟ่กŒ็ป“ๆžœใ€‚็ฌฌไธ€ๆญฅ๏ผšๅœจModelArtsๆœๅŠกไธป็•Œ้ขไพๆฌก็‚นๅ‡ปโ€œๅผ€ๅ‘็Žฏๅขƒโ€ใ€โ€œๅˆ›ๅปบโ€![create_nb_create_button](./img/cr...
from modelarts.session import Session

session = Session()
if session.region_name == 'cn-north-1':
    bucket_path = 'modelarts-labs/notebook/DL_nlp_ner/ner.tar.gz'
elif session.region_name == 'cn-north-4':
    bucket_path = 'modelarts-labs-bj4/notebook/DL_nlp_ner/ner.tar.gz'
else:
    print("Please switch the region to cn-north-1 (Beijing 1) or cn-north-4 (Beijing 4)")
...
Successfully download file modelarts-labs/notebook/DL_nlp_ner/ner.tar.gz from OBS to local ./ner.tar.gz total 375220 drwxrwsrwx 4 ma-user ma-group 4096 Sep 6 13:34 . drwsrwsr-x 22 ma-user ma-group 4096 Sep 6 13:03 .. drwxr-s--- 2 ma-user ma-group 4096 Sep 6 13:33 .ipynb_checkpoints -rw-r----- 1...
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Extract the archive downloaded from OBS, then delete the archive.
# Extract
!tar xf ./ner.tar.gz
# Delete
!rm ./ner.tar.gz
!ls -la
total 68 drwxrwsrwx 5 ma-user ma-group 4096 Sep 6 13:35 . drwsrwsr-x 22 ma-user ma-group 4096 Sep 6 13:03 .. drwxr-s--- 2 ma-user ma-group 4096 Sep 6 13:33 .ipynb_checkpoints drwxr-s--- 8 ma-user ma-group 4096 Sep 6 00:24 ner -rw-r----- 1 ma-user ma-group 45114 Sep 6 13:33 ner.ipynb drwx--S--- 2 ma-...
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Import Python libraries
import os
import json
import numpy as np
import tensorflow as tf
import codecs
import pickle
import collections

from ner.bert import modeling, optimization, tokenization
_____no_output_____
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Define paths and parameters
data_dir = "./ner/data"
output_dir = "./ner/output"
vocab_file = "./ner/chinese_L-12_H-768_A-12/vocab.txt"
data_config_path = "./ner/chinese_L-12_H-768_A-12/bert_config.json"
init_checkpoint = "./ner/chinese_L-12_H-768_A-12/bert_model.ckpt"
max_seq_length = 128
batch_size = 64
num_train_epoc...
_____no_output_____
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Define the processor class to fetch the data and print the labels
tf.logging.set_verbosity(tf.logging.INFO)
from ner.src.models import InputFeatures, InputExample, DataProcessor, NerProcessor

processors = {"ner": NerProcessor}
processor = processors["ner"](output_dir)
label_list = processor.get_labels()
print("labels:", label_list)
labels: ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'X', '[CLS]', '[SEP]']
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
The labels above mean: - O: not part of a named entity - B-PER: first character of a person name - I-PER: non-first character of a person name - B-ORG: first character of an organization name - I-ORG: non-first character of an organization name - B-LOC: first character of a location name - I-LOC: non-first character of a location name - X: unknown - [CLS]: sentence start - [SEP]: sentence end. Load the pre-trained parameters
data_config = json.load(codecs.open(data_config_path)) train_examples = processor.get_train_examples(data_dir) num_train_steps = int(len(train_examples) / batch_size * num_train_epochs) num_warmup_steps = int(num_train_steps * 0.1) data_config['num_train_steps'] = num_train_steps data_config['num_warmup_step...
ๆ˜พ็คบ้…็ฝฎไฟกๆฏ: attention_probs_dropout_prob:0.1 directionality:bidi hidden_act:gelu hidden_dropout_prob:0.1 hidden_size:768 initializer_range:0.02 intermediate_size:3072 max_position_embeddings:512 num_attention_heads:12 num_hidden_layers:12 pooler_fc_size:768 pooler_num_attention_heads:12 pooler_num_fc_layers:3 pooler_size_p...
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
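The BIO labels above can be decoded into entity spans once the model emits a tag per character; a minimal sketch with a toy tag sequence (this helper is illustrative, not part of the notebook's code):

```python
def bio_decode(tokens, tags):
    """Collect (entity_text, entity_type) spans from a BIO tag sequence."""
    entities, current, etype = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:                       # flush the previous entity
                entities.append(("".join(current), etype))
            current, etype = [token], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == etype:
            current.append(token)             # continue the current entity
        else:
            if current:                       # O / mismatched tag ends the entity
                entities.append(("".join(current), etype))
            current, etype = [], None
    if current:
        entities.append(("".join(current), etype))
    return entities

tokens = list("周杰伦在北京")
tags = ["B-PER", "I-PER", "I-PER", "O", "B-LOC", "I-LOC"]
print(bio_decode(tokens, tags))  # [('周杰伦', 'PER'), ('北京', 'LOC')]
```

The same decoding step is what turns the per-character predictions of the BERT+BiLSTM+CRF model into the person/organization/location spans shown during online testing.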
่ฏปๅ–ๆ•ฐๆฎ๏ผŒ่Žทๅ–ๅฅๅ‘้‡
def convert_single_example(ex_index, example, label_list, max_seq_length, tokenizer, output_dir, mode): label_map = {} for (i, label) in enumerate(label_list, 1): label_map[label] = i if not os.path.exists(os.path.join(output_dir, 'label2id.pkl')): with codecs.ope...
INFO:tensorflow:Writing example 0 of 20864 INFO:tensorflow:Writing example 5000 of 20864 INFO:tensorflow:Writing example 10000 of 20864 INFO:tensorflow:Writing example 15000 of 20864 INFO:tensorflow:Writing example 20000 of 20864
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Add a BiLSTM+CRF layer as the downstream model
learning_rate = 5e-5 dropout_rate = 1.0 lstm_size=1 cell='lstm' num_layers=1 from ner.src.models import BLSTM_CRF from tensorflow.contrib.layers.python.layers import initializers def create_model(bert_config, is_training, input_ids, input_mask, segment_ids, labels, num_labels, use_one_hot_emb...
_____no_output_____
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Create the model and start training
model_fn = model_fn_builder( bert_config=bert_config, num_labels=len(label_list) + 1, init_checkpoint=init_checkpoint, learning_rate=learning_rate, num_train_steps=num_train_steps, num_warmup_steps=num_warmup_steps, use_one_hot_embeddings=False) def file_based_in...
INFO:tensorflow:***** Running training ***** INFO:tensorflow: Num examples = 20864 INFO:tensorflow: Batch size = 64 INFO:tensorflow: Num steps = 1630 INFO:tensorflow:Using config: {'_model_dir': './ner/output', '_tf_random_seed': None, '_save_summary_steps': 1000, '_save_checkpoints_steps': 1000, '_save_checkpoints_...
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
ๅœจ้ชŒ่ฏ้›†ไธŠ้ชŒ่ฏๆจกๅž‹
eval_examples = processor.get_dev_examples(data_dir) eval_file = os.path.join(output_dir, "eval.tf_record") filed_based_convert_examples_to_features( eval_examples, label_list, max_seq_length, tokenizer, eval_file) data_config['eval.tf_record_path'] = eval_file data_config['num_eval_size'] = len(eval_ex...
INFO:tensorflow:Writing example 0 of 4631 INFO:tensorflow:***** Running evaluation ***** INFO:tensorflow: Num examples = 4631 INFO:tensorflow: Batch size = 64 INFO:tensorflow:Calling model_fn. INFO:tensorflow:*** Features *** INFO:tensorflow: name = input_ids, shape = (?, 128) INFO:tensorflow: name = input_mask, sh...
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
Evaluate on the test set
token_path = os.path.join(output_dir, "token_test.txt") if os.path.exists(token_path): os.remove(token_path) with codecs.open(os.path.join(output_dir, 'label2id.pkl'), 'rb') as rf: label2id = pickle.load(rf) id2label = {value: key for key, value in label2id.items()} predict_examples = processor.get_test_e...
INFO:tensorflow:Writing example 0 of 68 INFO:tensorflow:***** Running prediction***** INFO:tensorflow: Num examples = 68 INFO:tensorflow: Batch size = 64 INFO:tensorflow:Calling model_fn. INFO:tensorflow:*** Features *** INFO:tensorflow: name = input_ids, shape = (?, 128) INFO:tensorflow: name = input_mask, shape =...
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
ๅœจ็บฟๅ‘ฝๅๅฎžไฝ“่ฏ†ๅˆซ็”ฑไปฅไธŠ่ฎญ็ปƒๅพ—ๅˆฐๆจกๅž‹่ฟ›่กŒๅœจ็บฟๆต‹่ฏ•๏ผŒๅฏไปฅไปปๆ„่พ“ๅ…ฅๅฅๅญ๏ผŒ่ฟ›่กŒๅ‘ฝๅๅฎžไฝ“่ฏ†ๅˆซใ€‚่พ“ๅ…ฅโ€œๅ†่งโ€๏ผŒ็ป“ๆŸๅœจ็บฟๅ‘ฝๅๅฎžไฝ“่ฏ†ๅˆซใ€‚่‹ฅไธ‹่ฟฐ็จ‹ๅบๆœชๆ‰ง่กŒๆˆๅŠŸ๏ผŒๅˆ™่กจ็คบ่ฎญ็ปƒๅฎŒๆˆๅŽ๏ผŒGPUๆ˜พๅญ˜่ฟ˜ๅœจๅ ็”จ๏ผŒ้œ€่ฆrestart kernel๏ผŒ็„ถๅŽๆ‰ง่กŒ %run ๅ‘ฝไปคใ€‚้‡Šๆ”พ่ต„ๆบๅ…ทไฝ“ๆต็จ‹ไธบ๏ผš่œๅ• > Kernel > Restart ![้‡Šๆ”พ่ต„ๆบ](./img/้‡Šๆ”พ่ต„ๆบ.png)
%run ner/src/terminal_predict.py
checkpoint path:./ner/output/checkpoint going to restore checkpoint INFO:tensorflow:Restoring parameters from ./ner/output/model.ckpt-1630 {1: 'O', 2: 'B-PER', 3: 'I-PER', 4: 'B-ORG', 5: 'I-ORG', 6: 'B-LOC', 7: 'I-LOC', 8: 'X', 9: '[CLS]', 10: '[SEP]'} ่พ“ๅ…ฅๅฅๅญ: ไธญๅ›ฝ็”ท็ฏฎไธŽๅง”ๅ†…็‘žๆ‹‰้˜ŸๅœจๅŒ—ไบฌไบ”ๆฃตๆพไฝ“่‚ฒ้ฆ†ๅฑ•ๅผ€ๅฐ็ป„่ต›ๆœ€ๅŽไธ€ๅœบๆฏ”่ต›็š„ไบ‰ๅคบ๏ผŒ่ตต็ปงไผŸ12ๅˆ†4ๅŠฉๆ”ป3ๆŠขๆ–ญใ€ๆ˜“ๅปบ่”11ๅˆ†8็ฏฎๆฟใ€ๅ‘จ็ฆ8ๅˆ†...
Apache-2.0
notebook/DL_nlp_bert_ner/nlp_ner.ipynb
YMJS-Irfan/ModelArts-Lab
100 pandas puzzles Inspired by [100 Numpy exercises](https://github.com/rougier/numpy-100), here are 100* short puzzles for testing your knowledge of [pandas'](http://pandas.pydata.org/) power. Since pandas is a large library with many different specialist features and functions, these exercises focus mainly on the fund...
import pandas as pd
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**2.** Print the version of pandas that has been imported.
pd.__version__
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**3.** Print out all the *version* information of the libraries that are required by the pandas library.
pd.show_versions()
INSTALLED VERSIONS ------------------ commit : 2cb96529396d93b46abab7bbc73a208e708c642e python : 3.8.8.final.0 python-bits : 64 OS : Windows OS-release : 10 Version : 10.0.22000 machine : AMD64 processor : Intel64 Family 6 Model 142 Stepping 10, Gen...
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
DataFrame basics A few of the fundamental routines for selecting, sorting, adding and aggregating data in DataFrames. Difficulty: *easy*. Note: remember to import numpy using: ```python import numpy as np``` Consider the following Python dictionary `data` and Python list `labels`: ```python data = {'animal': ['cat', 'cat', 's...
import numpy as np raw_data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'], 'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3], 'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1], 'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']} la...
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**5.** Display a summary of the basic information about this DataFrame and its data (*hint: there is a single method that can be called on the DataFrame*).
df.info()  # or df.describe() for the summary statistics
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**6.** Return the first 3 rows of the DataFrame `df`.
df.iloc[:3,:]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**7.** Select just the 'animal' and 'age' columns from the DataFrame `df`.
df[['animal', 'age']]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**8.** Select the data in rows `[3, 4, 8]` *and* in columns `['animal', 'age']`.
df.iloc[[3, 4, 8]][['animal','age']]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**9.** Select only the rows where the number of visits is greater than 3.
df[df['visits'] > 3]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**10.** Select the rows where the age is missing, i.e. it is `NaN`.
df[df['age'].isna()]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**11.** Select the rows where the animal is a cat *and* the age is less than 3.
df[(df['animal'] == 'cat') & (df['age'] < 3)]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**12.** Select the rows where the age is between 2 and 4 (inclusive).
df[df['age'].between(2, 4)]
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**13.** Change the age in row 'f' to 1.5.
df.loc['f','age'] = 1.5 df
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**14.** Calculate the sum of all visits in `df` (i.e. find the total number of visits).
df['visits'].sum()
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**15.** Calculate the mean age for each different animal in `df`.
df.groupby('animal').agg({'age':'mean'})
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**16.** Append a new row 'k' to `df` with your choice of values for each column. Then delete that row to return the original DataFrame.
df.loc['k'] = ['dog', 5.5, 2, 'no']  # append a new row 'k'
df = df.drop('k')                    # then delete it to restore the original DataFrame
df
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**17.** Count the number of each type of animal in `df`.
df['animal'].value_counts()
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**18.** Sort `df` first by the values in the 'age' in *descending* order, then by the value in the 'visits' column in *ascending* order (so row `i` should be first, and row `d` should be last).
df.sort_values(by = ['age', 'visits'], ascending=[False, True])
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**19.** The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of boolean values: 'yes' should be `True` and 'no' should be `False`.
df['priority'] = df['priority'].map({'yes': True, 'no': False})
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**20.** In the 'animal' column, change the 'snake' entries to 'python'.
df['animal'] = df['animal'].replace('snake', 'python') df
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**21.** For each animal type and each number of visits, find the mean age. In other words, each row is an animal, each column is a number of visits and the values are the mean ages (*hint: use a pivot table*).
df.pivot_table(index = 'animal', columns = 'visits', values = 'age', aggfunc = 'mean').fillna(0)
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
DataFrames: beyond the basics Slightly trickier: you may need to combine two or more methods to get the right answer. Difficulty: *medium*. The previous section was a tour through some basic but essential DataFrame operations. Below are some ways that you might need to cut your data, but for which there is no single "out of...
nan = np.nan data = [[0.04, nan, nan, 0.25, nan, 0.43, 0.71, 0.51, nan, nan], [ nan, nan, nan, 0.04, 0.76, nan, nan, 0.67, 0.76, 0.16], [ nan, nan, 0.5 , nan, 0.31, 0.4 , nan, nan, 0.24, 0.01], [0.49, nan, nan, 0.62, 0.73, 0.26, 0.85, nan, nan, nan], [ nan, nan, 0.41,...
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**27.** A DataFrame has a column of groups 'grps' and a column of integer values 'vals': ```python df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'), 'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})``` For each *group*, find the sum of the three greatest values. You should end up with the answ...
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'), 'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]}) # write a solution to the question here
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
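One possible solution to puzzle 27 (one approach among several): group by 'grps', take the three largest values per group, and sum them:

```python
import pandas as pd

df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'),
                   'vals': [12, 345, 3, 1, 45, 14, 4, 52, 54, 23, 235, 21, 57, 3, 87]})

# per-group sum of the three greatest values
answer = df.groupby('grps')['vals'].apply(lambda s: s.nlargest(3).sum())
print(answer.to_dict())  # {'a': 409, 'b': 156, 'c': 345}
```

An equivalent route is `df.sort_values('vals', ascending=False).groupby('grps').head(3)` followed by a grouped sum.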
**28.** The DataFrame `df` constructed below has two integer columns 'A' and 'B'. The values in 'A' are between 1 and 100 (inclusive). For each group of 10 consecutive integers in 'A' (i.e. `(0, 10]`, `(10, 20]`, ...), calculate the sum of the corresponding values in column 'B'.The answer should be a Series as follows:...
df = pd.DataFrame(np.random.RandomState(8765).randint(1, 101, size=(100, 2)), columns = ["A", "B"]) # write a solution to the question here
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
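Puzzle 28 can be approached by binning 'A' into the ten intervals with `pd.cut` and summing 'B' within each bin (one possible approach, not the only one):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.RandomState(8765).randint(1, 101, size=(100, 2)),
                  columns=["A", "B"])

# bins (0, 10], (10, 20], ..., (90, 100] covering every value of A
bins = pd.cut(df["A"], np.arange(0, 101, 10))
answer = df.groupby(bins, observed=False)["B"].sum()
print(answer)
```

`observed=False` keeps empty bins in the result, so the output is always a 10-row Series indexed by the intervals.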
DataFrames: harder problems These might require a bit of thinking outside the box... but all are solvable using just the usual pandas/NumPy methods (and so avoid using explicit `for` loops). Difficulty: *hard*. **29.** Consider a DataFrame `df` where there is an integer column 'X': ```python df = pd.DataFrame({'X': [7,...
df = pd.DataFrame(np.random.RandomState(30).randint(1, 101, size=(8, 8)))
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
**31.** You are given the DataFrame below with a column of group IDs, 'grps', and a column of corresponding integer values, 'vals'. ```python df = pd.DataFrame({"vals": np.random.RandomState(31).randint(-30, 30, size=15), "grps": np.random.RandomState(31).choice(["A", "B"], 15)})``` Create a new column ...
import numpy as np

def float_to_time(x):
    return str(int(x)) + ":" + str(int(x % 1 * 60)).zfill(2) + ":" + str(int(x * 60 % 1 * 60)).zfill(2)

def day_stock_data():
    # NYSE is open from 9:30 to 4:00
    time = 9.5
    price = 100
    results = [(float_to_time(time), price)]
    while time < 16:
        elapsed = np.ra...
_____no_output_____
MIT
100-pandas-puzzles.ipynb
LouisNodskov/100-pandas-puzzles
Sample data from a 1D Gaussian mixture model
n_samples = 200
n_components = 3

X, y, true_params = sample_1d_gmm(n_samples=n_samples, n_components=n_components, random_state=1)
plot_scatter_1d(X)
_____no_output_____
MIT
doc/example_notebooks/Single view, gaussian mixture model.ipynb
idc9/mvmm
Fit a Gaussian mixture model
# fit a Gaussian mixture model with 3 components (the true number)
# mvmm.single_view.gaussian_mixture.GaussianMixture() is similar to sklearn.mixture.GaussianMixture()
gmm = GaussianMixture(n_components=3, n_init=10)  # 10 random initializations
gmm.fit(X)

# plot parameter estimates
plot_...
_____no_output_____
MIT
doc/example_notebooks/Single view, gaussian mixture model.ipynb
idc9/mvmm
Model selection with BIC
# set up the base estimator for the grid search
# here we add some custom arguments
base_estimator = GaussianMixture(reg_covar=1e-6,
                                 init_params_method='rand_pts',  # initialize cluster means from random data points
                                 n_init=10,
                                 abs_tol=1e-8,
                                 rel_tol=1e-8,
                                 ...
10
          bic         aic
0  516.654319  510.057684
1  353.053751  336.562164
2  167.420223  141.033684
3  172.957731  136.676240
4  181.186136  135.009693
5  198.034419  141.963024
6  201.967577  136.001230
7  226.684487  150.823187
8  230.178894  144.422643
9  244.147315  148.496112
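Model selection then just picks the candidate with the lowest BIC. The criterion itself is simple to compute by hand; here is a sketch with *hypothetical* log-likelihoods and parameter counts (a 1D GMM with k components has k means, k variances, and k−1 free weights, i.e. 3k−1 parameters):

```python
import numpy as np

def bic(log_likelihood, n_params, n_samples):
    # BIC = k * ln(n) - 2 * ln(L); lower is better
    return n_params * np.log(n_samples) - 2 * log_likelihood

# hypothetical fits for 1..4 components on n = 200 samples
n = 200
log_liks = np.array([-350.0, -250.0, -180.0, -178.0])   # improves, then flattens out
n_params = np.array([2, 5, 8, 11])                      # 3k - 1 for k = 1..4

scores = bic(log_liks, n_params, n)
best_k = int(np.argmin(scores)) + 1   # -> 3: extra components stop paying for themselves
```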
MIT
doc/example_notebooks/Single view, gaussian mixture model.ipynb
idc9/mvmm
read the data (only **Germany**)
ger <- read.dta("/...your folder/DAYPOLLS_GER.dta")
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
create the date (based on Stata)
ger$date <- seq(as.Date("1957-09-16"),as.Date("2013-09-22"), by="day")
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
subset the data
geroa <- ger[ger$date >= "2000-01-01",]
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
reduce the data
geroar <- cbind(geroa$poll_p1_ipo, geroa$poll_p4_ipo)
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
create the daily time series data
geroar <- zoo(geroar, geroa$date)
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
name the columns (not needed for the date)
colnames(geroar) <- c("CDU/CSU", "FDP")
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
search for the index of the date when the scandal happened
which(time(geroar)=="2010-12-02")
which(time(geroar)=="2011-02-16")
_____no_output_____
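For reference, the same date-to-position lookup can be sketched in pandas, assuming a plain daily index starting 2000-01-01 (R's `which()` is 1-based while pandas positions are 0-based, so R's 3989 corresponds to 3988 here):

```python
import pandas as pd

# daily index mirroring the post-2000 subset used above
idx = pd.date_range("2000-01-01", "2013-09-22", freq="D")

loc = idx.get_loc(pd.Timestamp("2010-12-02"))
print(loc)        # 0-based position; R's which() would report loc + 1
print(idx[loc])   # the matched date itself
```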
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
create values for vline, one for each panel
v.panel <- function(x, ...){
  lines(x, ...)
  panel.number <- parent.frame()$panel.number
  abline(v = vlines[panel.number], col = "red", lty=2)
}
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
plot **CDU/CSU** after 2000
plot(geroar$CDU, main="CDU/CSU after 2000", xlab="Time", ylab="Approval Rate")
abline(v=time(geroar$CDU)[3989], lty=2, col="red")
abline(v=time(geroar$CDU)[4065], lty=2, col="red")
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
plot **FDP** after 2000
plot(geroar$FDP, main="FDP after 2000", xlab="Time", ylab="Approval Rate")
abline(v=time(geroar$CDU)[3989], lty=2, col="red")
abline(v=time(geroar$CDU)[4065], lty=2, col="red")
_____no_output_____
MIT
R Files/German(Overall).R.ipynb
tzuliu/Do-Scandals-Matter-An-Interrupted-Time-Series-Design-on-Three-Cases
CS-109B Introduction to Data Science

Lab 5: Convolutional Neural Networks

**Harvard University**
**Spring 2019**
**Lab instructor:** Eleni Kaxiras
**Instructors:** Pavlos Protopapas and Mark Glickman
**Authors:** Eleni Kaxiras, Pavlos Protopapas, Patrick Ohiomoba, and Davis Sontag
# RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text
HTML(styles)
_____no_output_____
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
Learning Goals

In this lab we will look at Convolutional Neural Networks (CNNs), and their building blocks.

By the end of this lab, you should:
- know how to put together the building blocks used in CNNs - such as convolutional layers and pooling layers - in `keras` with an example.
- have a good understanding of how image...
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (5,5)

import numpy as np
from scipy.optimize import minimize

import tensorflow as tf
import keras
from keras import layers
from keras import models
from keras import utils
from keras.layers import Dense
from keras.models import Sequential
from keras.lay...
1.12.0
2.1.6-tf
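As a warm-up for the convolutional layers used below, here is a minimal numpy sketch of a "valid" 2D cross-correlation — the per-channel core of what a `Conv2D` layer computes, with bias and activation omitted. A k×k kernel over an n×n input yields an (n−k+1)×(n−k+1) output, which is why a 3×3 kernel maps 150×150 inputs to 148×148 in the model summary later in this lab.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2D cross-correlation: slide the kernel over the image with stride 1."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge = np.array([[1., 0., -1.]] * 3)   # simple vertical-edge kernel
out = conv2d_valid(image, edge)
print(out.shape)   # (3, 3): 5 - 3 + 1 in each dimension
```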
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
Prologue: `keras-vis` Visualization Toolkit

`keras-vis` is a high-level toolkit for visualizing and debugging your trained keras neural net models. Currently supported visualizations include:
- Activation maximization
- **Saliency maps**
- Class activation maps

All visualizations by default support N-dimensional image inp...
img = plt.imread('data/picasso.png')
img.shape
img[1,:,1]
print(type(img[50][0][0]))

# let's see the image
imgplot = plt.imshow(img)
_____no_output_____
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
Visualizing the channels
R_img = img[:,:,0]
G_img = img[:,:,1]
B_img = img[:,:,2]

plt.subplot(221)
plt.imshow(R_img, cmap=plt.cm.Reds)
plt.subplot(222)
plt.imshow(G_img, cmap=plt.cm.Greens)
plt.subplot(223)
plt.imshow(B_img, cmap=plt.cm.Blues)
plt.subplot(224)
plt.imshow(img)
plt.show()
_____no_output_____
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
More on preprocessing data below!

If you want to learn more: [Image Processing with Python and Scipy](http://prancer.physics.louisville.edu/astrowiki/index.php/Image_processing_with_Python_and_SciPy)

Part 3: Putting the Parts together to make a small ConvNet Model

Let's put all the parts together to make a convnet for ...
# Load data and preprocess
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()  # load MNIST data
train_images.shape
train_images.max(), train_images.min()
train_images = train_images.reshape((60000, 28, 28, 1))  # Reshape to get third dimension
train_images = train_images.astype('float32') / 255...
_____no_output_____
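The reshape-and-rescale step above can be exercised on dummy data (a stand-in for the real MNIST arrays): add the trailing channel axis that `Conv2D` expects, then scale the 0–255 pixel values into [0, 1].

```python
import numpy as np

# dummy stand-in for MNIST: 100 grayscale 28x28 images with values 0..255
images = np.random.RandomState(0).randint(0, 256, size=(100, 28, 28)).astype('uint8')

# add the trailing channel dimension expected by Conv2D, then scale to [0, 1]
images = images.reshape((100, 28, 28, 1)).astype('float32') / 255

print(images.shape)              # (100, 28, 28, 1)
print(images.min(), images.max())
```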
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
The next step is to feed the last output tensor (of shape (3, 3, 64)) into a densely connected classifier network like those you're already familiar with: a stack of Dense layers. These classifiers process vectors, which are 1D, whereas the current output is a 3D tensor. First we have to flatten the 3D outputs to 1D, a...
mnist_cnn_model.add(layers.Flatten())
mnist_cnn_model.add(layers.Dense(64, activation='relu'))
mnist_cnn_model.add(layers.Dense(10, activation='softmax'))
mnist_cnn_model.summary()

# Compile model
mnist_cnn_model.compile(optimizer='rmsprop',
                        loss='categorical_crossentropy',
                        metrics=['accura...
Epoch 1/5
60000/60000 [==============================] - 21s 343us/step - loss: 0.1780 - acc: 0.9456
Epoch 2/5
60000/60000 [==============================] - 21s 352us/step - loss: 0.0479 - acc: 0.9854
Epoch 3/5
60000/60000 [==============================] - 25s 419us/step - loss: 0.0341 - acc: 0.9896
Epoch 4/5
60000/6...
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
A densely connected network (MLP) running MNIST usually has a test accuracy of 97.8%, whereas our basic convnet has a test accuracy of 99.03%: we decreased the error rate by about 56% (relative) with only 5 epochs. Not bad! But why does this simple convnet work so well, compared to a densely connected model? The answer is ab...
# TODO: set your base dir to your correct local location
base_dir = 'data/cats_and_dogs_small'

import os, shutil

# Set up directory information
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')

train_cats_dir = os.path.join(tr...
total training cat images: 1000
total training dog images: 1000
total validation cat images: 500
total validation dog images: 500
total test cat images: 500
total test dog images: 500
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
So you do indeed have 2,000 training images, 1,000 validation images, and 1,000 test images. Each split contains the same number of samples from each class: this is a balanced binary-classification problem, which means classification accuracy will be an appropriate measure of success. Building the network
from keras import layers
from keras import models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
m...
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_4 (Conv2D)            (None, 148, 148, 32)      896
________________________________________________________...
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
For the compilation step, you'll go with the RMSprop optimizer. Because you ended the network with a single sigmoid unit, you'll use binary crossentropy as the loss.
from keras import optimizers

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
_____no_output_____
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
The steps for getting it into the network are roughly as follows:

1. Read the picture files.
2. Decode the JPEG content to RGB grids of pixels.
3. Convert these into floating-point tensors.
4. Rescale the pixel values (between 0 and 255) to the [0, 1] interval (as you know, neural networks prefer to deal with small input v...
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        train_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')

val...
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
Let's look at the output of one of these generators: it yields batches of 150×150 RGB images (shape (20, 150, 150, 3)) and binary labels (shape (20,)). There are 20 samples in each batch (the batch size). Note that the generator yields these batches indefinitely: it loops endlessly over the images in the target folder...
for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break
data batch shape: (20, 150, 150, 3)
labels batch shape: (20,)
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1
Let's fit the model to the data using the generator. You do so using the `.fit_generator` method, the equivalent of `.fit` for data generators like this one. It expects as its first argument a Python generator that will yield batches of inputs and targets indefinitely, like this one does. Because the data is being gene...
history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=5,  # TODO: should be 30
      validation_data=validation_generator,
      validation_steps=50)

# It's good practice to always save your models after training.
model.save('cats_and_dogs_small_1.h5')
Epoch 1/5
100/100 [==============================] - 55s 549ms/step - loss: 0.6885 - acc: 0.5320 - val_loss: 0.6711 - val_acc: 0.6220
Epoch 2/5
100/100 [==============================] - 56s 558ms/step - loss: 0.6620 - acc: 0.5950 - val_loss: 0.6500 - val_acc: 0.6170
Epoch 3/5
100/100 [==============================] -...
MIT
docs/lectures/lecture8/lab5/cs109b-lab5-cnn-solutions.ipynb
rahuliem/2019-CS109B-1