Finally, sieve the multiples of 7, starting at 49 (the first multiple of 7 that hasn't already been crossed out!)
```python
primes[49::7] = [None] * len(primes[49::7])  # The right side is a list of Nones, of the necessary length.
print(primes)  # What have we done?
```
*Source: P3wNT Notebook 3.ipynb, from MartyWeissman/Python-for-number-theory (license: gpl-3.0)*
What's left? A lot of Nones and the prime numbers up to 100. We have successfully sieved out all the nonprime numbers in the list, using just four sieving steps (and setting 0 and 1 to None manually). But there's a lot of room for improvement, from beginning to end! The format of the end result is not so nice. We ...
```python
def isprime_list(n):
    ''' Return a list of length n+1 with Trues at prime indices and Falses at composite indices. '''
    flags = [True] * (n+1)  # A list [True, True, True,...] to start.
    flags[0] = False  # Zero is not prime. So its flag is set to False.
    flags[1] = False  # One is not prime. ...
```
If you look carefully at the list of booleans, you will notice a True value at the 2nd index, the 3rd index, the 5th index, the 7th index, etc. The indices where the values are True are precisely the prime indices. Since booleans take the smallest amount of memory of any data type (one bit of memory per boolean), yo...
```python
def where(L):
    ''' Take a list of booleans as input and output the list of indices where True occurs. '''
    return [n for n in range(len(L)) if L[n]]
```
Combined with the isprime_list function, we can produce long lists of primes.
```python
print(where(isprime_list(100)))
```
Let's push it a bit further. How many primes are there between 1 and 1 million? We can figure this out in three steps: Create the isprime_list. Use where to get the list of primes. Find the length of the list of primes. But it's better to do it in two steps. Create the isprime_list. Sum the list! (Note that True ...
```python
sum(isprime_list(1000000))  # The number of primes up to a million!
%timeit isprime_list(10**6)  # 1000 ms = 1 second.
%timeit sum(isprime_list(10**6))
```
This isn't too bad! It takes a fraction of a second to identify the primes up to a million, and a smaller fraction of a second to count them! But we can do a little better. The first improvement is to take care of the even numbers first. If we count carefully, then the sequence 4,6,8,...,n (ending at n-1 if n is o...
```python
def isprime_list(n):
    ''' Return a list of length n+1 with Trues at prime indices and Falses at composite indices. '''
    flags = [True] * (n+1)  # A list [True, True, True,...] to start.
    flags[0] = False  # Zero is not prime. So its flag is set to False.
    flags[1] = False  # One is not prime. ...
```
Another modest improvement is the following. In the code above, the program counts the terms in sequences like 9,15,21,27,..., in order to set them to False. This is accomplished with the length command len(flags[p*p::2*p]). But that length computation is a bit too intensive. A bit of algebraic work shows that the ...
```python
def isprime_list(n):
    ''' Return a list of length n+1 with Trues at prime indices and Falses at composite indices. '''
    flags = [True] * (n+1)  # A list [True, True, True,...] to start.
    flags[0] = False  # Zero is not prime. So its flag is set to False.
    flags[1] = False  # One is not prime. ...
```
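The cells above are truncated here, but the pieces can be assembled into a minimal sketch of the optimized sieve. Two assumptions are worth flagging: the even numbers above 2 are crossed out in one slice assignment, and the slice `flags[p*p::2*p]` has exactly `(n - p*p) // (2*p) + 1` terms (the standard count for an arithmetic progression), so no `len` call is needed:

```python
def isprime_list(n):
    ''' Return a list of length n+1 with Trues at prime indices and Falses elsewhere.
        (A sketch reconstructed from the truncated cells above.) '''
    flags = [True] * (n+1)
    flags[0] = False  # Zero is not prime.
    flags[1] = False  # One is not prime.
    # Cross out the even numbers greater than 2 in a single step.
    flags[4::2] = [False] * ((n - 4) // 2 + 1)
    # Cross out the odd multiples of each odd prime p, starting at p*p.
    p = 3
    while p * p <= n:
        if flags[p]:
            flags[p*p::2*p] = [False] * ((n - p*p) // (2*p) + 1)
        p += 2
    return flags

print(sum(isprime_list(1000000)))  # 78498 primes up to a million.
```

Since `True` counts as 1 and `False` as 0, summing the flags counts the primes directly.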
That should be pretty fast! It should be under 100 ms (one tenth of one second!) to determine the primes up to a million, and on a newer computer it should be under 50ms. We have gotten pretty close to the fastest algorithms that you can find in Python, without using external packages (like SAGE or sympy). See the r...
```python
primes = where(isprime_list(1000000))
len(primes)   # Our population size. A statistician might call it N.
primes[-1]    # The last prime in our list, just before one million.
type(primes)  # What type is this data?
print(primes[:100])  # The first hundred prime numbers.
```
To carry out serious analysis, we will use the method of list comprehension to place our population into "bins" for statistical analysis. Our first type of list comprehension has the form [x for x in LIST if CONDITION]. This produces the list of all elements of LIST satisfying CONDITION. It is similar to list slicin...
```python
redprimes = [p for p in primes if p%4 == 1]   # Note the [x for x in LIST if CONDITION] syntax.
blueprimes = [p for p in primes if p%4 == 3]
print('Red primes:', redprimes[:20])    # The first 20 red primes.
print('Blue primes:', blueprimes[:20])  # The first 20 blue primes.
print("There are {} red primes and {} blue primes,...
```
This is pretty close! It seems like prime numbers are about evenly distributed between red and blue. Their remainder after division by 4 is about as likely to be 1 as it is to be 3. In fact, it is proven that asymptotically the ratio between the number of red primes and the number of blue primes approaches 1. Howev...
```python
def primes_upto(x):
    return len([p for p in primes if p <= x])  # List comprehension recovers the primes up to x.

primes_upto(1000)  # There are 168 primes between 1 and 1000.
```
Now we graph the prime counting function. To do this, we use a list comprehension, and the visualization library called matplotlib. For graphing a function, the basic idea is to create a list of x-values, a list of corresponding y-values (so the lists have to be the same length!), and then we feed the two lists into ...
```python
import matplotlib                 # A powerful graphics package.
import numpy                      # A math package.
import matplotlib.pyplot as plt   # A plotting subpackage in matplotlib.
```
Now let's graph the function $y = x^2$ over the domain $-2 \leq x \leq 2$ for practice. As a first step, we use numpy's linspace function to create an evenly spaced set of 11 x-values between -2 and 2.
```python
x_values = numpy.linspace(-2, 2, 11)  # The argument 11 is the *number* of terms, not the step size!
print(x_values)
type(x_values)
```
You might notice that the format looks a bit different from a list. Indeed, if you check type(x_values), it's not a list but something else called a numpy array. Numpy is a package that excels with computations on large arrays of data. On the surface, it's not so different from a list. The numpy.linspace command is...
```python
[1,2,3] + [1,2,3]
x_values + x_values
y_values = x_values * x_values  # How is multiplication interpreted on numpy arrays?
print(y_values)
```
Now we use matplotlib to create a simple line graph.
```python
%matplotlib inline
plt.plot(x_values, y_values)
plt.title('The graph of $y = x^2$')  # The dollar signs surround the formula, in LaTeX format.
plt.ylabel('y')
plt.xlabel('x')
plt.grid(True)
plt.show()
```
Let's analyze the graphing code a bit more. See the official pyplot tutorial for more details.

```python
%matplotlib inline
plt.plot(x_values, y_values)
plt.title('The graph of $y = x^2$')  # The dollar signs surround the formula, in LaTeX format.
plt.ylabel('y')
plt.xlabel('x')
plt.grid(True)
plt.show()
```

The first line c...
```python
x_values = numpy.linspace(0, 1000000, 1001)  # The numpy array [0, 1000, 2000, 3000, ..., 1000000]
pix_values = numpy.array([primes_upto(x) for x in x_values])  # [FUNCTION(x) for x in LIST] syntax
```
We created an array of x-values as before. But the creation of an array of y-values (here, called pix_values to stand for $\pi(x)$) probably looks strange. We have done two new things! We have used a list comprehension [primes_upto(x) for x in x_values] to create a list of y-values. We have used numpy.array(LIST) sy...
```python
type(numpy.array([1,2,3]))  # For example.
```
Now we have two numpy arrays: the array of x-values and the array of y-values. We can make a plot with matplotlib.
```python
len(x_values) == len(pix_values)  # These better be the same, or else matplotlib will be unhappy.
%matplotlib inline
plt.plot(x_values, pix_values)
plt.title('The prime counting function')
plt.ylabel('$\pi(x)$')
plt.xlabel('x')
plt.grid(True)
plt.show()
```
In this range, the prime counting function might look nearly linear. But if you look closely, there's a subtle downward bend. This is more pronounced in smaller ranges. For example, let's look at the first 10 x-values and y-values only.
```python
%matplotlib inline
plt.plot(x_values[:10], pix_values[:10])  # Look closer to 0.
plt.title('The prime counting function')
plt.ylabel('$\pi(x)$')
plt.xlabel('x')
plt.grid(True)
plt.show()
```
It still looks almost linear, but there's a visible downward bend here. How can we see this bend more clearly? If the graph were linear, its equation would have the form $\pi(x) = mx$ for some fixed slope $m$ (since the graph does pass through the origin). Therefore, the quantity $\pi(x)/x$ would be constant if the ...
```python
m_values = pix_values[1:] / x_values[1:]  # We start at 1, to avoid a division by zero error.
%matplotlib inline
plt.plot(x_values[1:], m_values)
plt.title('The ratio $\pi(x) / x$ as $x$ varies.')
plt.xlabel('x')
plt.ylabel('$\pi(x) / x$')
plt.grid(True)
plt.show()
```
That is certainly not constant! The decay of $\pi(x) / x$ is not so different from $1 / \log(x)$, in fact. To see this, let's overlay the graphs. We use the numpy.log function, which computes the natural logarithm of its input (and allows an entire array as input).
```python
%matplotlib inline
plt.plot(x_values[1:], m_values, label='$\pi(x)/x$')  # The same as the plot above.
plt.plot(x_values[1:], 1 / numpy.log(x_values[1:]), label='$1 / \log(x)$')  # Overlay the graph of 1 / log(x).
plt.title('The ratio of $\pi(x) / x$ as $x$ varies.')
plt.xlabel('x')
plt.ylabel('$\pi(x) / x$')
plt.grid(T...
```
The shape of the decay of $\pi(x) / x$ is very close to $1 / \log(x)$, but it looks like there is an offset. In fact, there is, and it is pretty close to $1 / \log(x)^2$. And that is close, but again there's another little offset, this time proportional to $2 / \log(x)^3$. This goes on forever, if one wishes to appr...
```python
%matplotlib inline
plt.plot(x_values[1:], m_values * numpy.log(x_values[1:]))  # Should get closer to 1.
plt.title('The ratio $\pi(x) / (x / \log(x))$ approaches 1... slowly')
plt.xlabel('x')
plt.ylabel('$\pi(x) / (x / \log(x))$')
plt.ylim(0.8, 1.2)
plt.grid(True)
plt.show()
```
Comparing the graph to the theoretical result, we find that the ratio $\pi(x) / (x / \log(x))$ approaches $1$ (the theoretical result) but very slowly (see the graph above!). A much stronger result relates $\pi(x)$ to the "logarithmic integral" $li(x)$. The Riemann hypothesis is equivalent to the statement $$\left\ver...
```python
from mpmath import li
print(primes_upto(1000000))  # The number of primes up to 1 million.
print(li(1000000))           # The logarithmic integral of 1 million.
```
Not too shabby!

**Prime gaps.** As a last bit of data analysis, we consider the prime gaps. These are the numbers that occur as differences between consecutive primes. Since all primes except 2 are odd, all prime gaps are even except for the 1-unit gap between 2 and 3. There are many unsolved problems about prime gaps; t...
```python
len(primes)  # The number of primes up to 1 million.
primes_allbutlast = primes[:-1]   # This excludes the last prime in the list.
primes_allbutfirst = primes[1:]   # This excludes the first (i.e., with index 0) prime in the list.
primegaps = numpy.array(primes_allbutfirst) - numpy.array(primes_allbutlast)  # Numpy is fa...
```
What have we done? It is useful to try out this method on a short list.
```python
L = [1,3,7,20]  # A nice short list.
print(L[:-1])
print(L[1:])
```
Now we have two lists of the same length. The gaps in the original list L are the differences between terms of the same index in the two new lists. One might be tempted to just subtract, e.g., with the command L[1:] - L[:-1], but subtraction is not defined for lists. Fortunately, by converting the lists to numpy arra...
```python
L[1:] - L[:-1]  # This will give a TypeError. You can't subtract lists!
numpy.array(L[1:]) - numpy.array(L[:-1])  # That's better. See the gaps in the list [1,3,7,20] in the output.
```
Now let's return to our primegaps data set. It contains all the gap-sizes for primes up to 1 million.
```python
print(len(primes))
print(len(primegaps))  # This should be one less than the number of primes.
```
As a last example of data visualization, we use matplotlib to produce a histogram of the prime gaps.
```python
max(primegaps)  # The largest prime gap that appears!
%matplotlib inline
plt.figure(figsize=(12, 5))  # Makes the resulting figure 12in by 5in.
plt.hist(primegaps, bins=range(1,115))  # Makes a histogram with one bin for each possible gap from 1 to 114.
plt.ylabel('Frequency')
plt.xlabel('Gap size')
plt.grid(True)
pl...
```
Approximation of the J-function taken from [1] with $$ J(\mu) \approx \left(1 - 2^{-H_1\cdot (2\mu)^{H_2}}\right)^{H_3} $$ and its inverse function can be easily found as $$ \mu = J^{-1}(I) \approx \frac{1}{2}\left(-\frac{1}{H_1}\log_2\left(1-I^{\frac{1}{H_3}}\right)\right)^{\frac{1}{H_2}} $$ with $H_1 = 0.3073$, $H_2=...
```python
import numpy as np  # Needed for np.log2 below.

H1 = 0.3073
H2 = 0.8935
H3 = 1.1064

def J_fun(mu):
    I = (1 - 2**(-H1*(2*mu)**H2))**H3
    return I

def invJ_fun(I):
    if I > (1-1e-10):
        return 100
    mu = 0.5*(-(1/H1) * np.log2(1 - I**(1/H3)))**(1/H2)
    return mu
```
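As a quick sanity check (not part of the notebook), the two formulas above are exact algebraic inverses of each other, so a round trip should recover $\mu$ up to floating-point error:

```python
import numpy as np

H1, H2, H3 = 0.3073, 0.8935, 1.1064

def J_fun(mu):
    return (1 - 2**(-H1*(2*mu)**H2))**H3

def invJ_fun(I):
    if I > (1 - 1e-10):
        return 100  # saturate instead of overflowing near I = 1
    return 0.5*(-(1/H1) * np.log2(1 - I**(1/H3)))**(1/H2)

# Round trip: invJ_fun(J_fun(mu)) should equal mu away from saturation.
for mu in [0.1, 1.0, 4.0]:
    assert abs(invJ_fun(J_fun(mu)) - mu) < 1e-8
```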
*Source: SC468/LDPC_Optimization_AWGN.ipynb, from kit-cel/wt (license: gpl-2.0)*
The following function solves the optimization problem that returns the best $\lambda(Z)$ for a given BI-AWGN channel quality $E_s/N_0$, corresponding to a $\mu_c = 4\frac{E_s}{N_0}$, for a regular check node degree $d_{\mathtt{c}}$, and for a maximum variable node degree $d_{\mathtt{v},\max}$. This optimization proble...
```python
def find_best_lambda(mu_c, v_max, dc):
    # quantization of EXIT chart
    D = 500
    I_range = np.arange(0, D, 1)/D

    # Linear Programming model, maximize target expression
    model = pulp.LpProblem("Finding best lambda problem", pulp.LpMaximize)

    # definition of variables, v_...
```
As an example, we consider the case of the optimization carried out in the lecture after 10 iterations, where we have $\mu_c = 3.8086$ and $d_{\mathtt{c}} = 14$ with $d_{\mathtt{v},\max}=16$.
```python
best_lambda = find_best_lambda(3.8086, 16, 14)
print(np.poly1d(best_lambda, variable='Z'))
```
In the following, we provide an interactive widget that allows you to choose the parameters of the optimization yourself and get the best possible $\lambda(Z)$. Additionally, the EXIT chart is plotted to visualize the good fit of the obtained degree distribution.
```python
def best_lambda_interactive(mu_c, v_max, dc):
    # get lambda and rho polynomial from optimization and from c_avg, respectively
    p_lambda = find_best_lambda(mu_c, v_max, dc)

    # if optimization successful, compute rate and show plot
    if not p_lambda:
        print('Optimization infeasible, no solution...
```
Now, we carry out the optimization over a wide range of check node degrees $d_{\mathtt{c}}$ for a given channel quality $\mu_c$ and find the largest possible rate.
```python
def find_best_rate(mu_c, dv_max, dc_max):
    c_range = np.arange(3, dc_max+1)
    rates = np.zeros_like(c_range, dtype=float)

    # loop over all c_avg, add progress bar
    f = widgets.FloatProgress(min=0, max=np.size(c_range))
    display(f)
    for index, dc in enumerate(c_range):
        f.value += 1
        ...
```
Running a binary search to find a code with a given target rate for the AWGN channel.
```python
target_rate = 0.7
dv_max = 16
dc_max = 22
T_Delta = 0.01
mu_c = 10
Delta_mu = 10
while Delta_mu >= T_Delta:
    print('Running optimization for mu_c = %1.5f, corresponding to Es/N0 = %1.2f dB' % (mu_c, 10*np.log10(mu_c/4)))
    rate = find_best_rate(mu_c, dv_max, dc_max)
    if rate > target_rate:
        mu_...
```
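The truncated loop above follows a standard interval-halving pattern. A self-contained sketch of that pattern, with a hypothetical smooth stand-in for `find_best_rate` (the real one calls the LP optimizer and is monotone in the channel quality):

```python
import numpy as np

def find_best_rate(mu_c):
    # Hypothetical stand-in: a smooth, increasing rate-vs-channel-quality curve.
    return 1 - np.exp(-mu_c / 4)

target_rate = 0.7
T_Delta = 0.01     # stop once the step size is this small
mu_c = 10.0        # start from a good channel quality
Delta_mu = 10.0
while Delta_mu >= T_Delta:
    rate = find_best_rate(mu_c)
    Delta_mu /= 2
    if rate > target_rate:
        mu_c -= Delta_mu  # target met: try a worse channel
    else:
        mu_c += Delta_mu  # target missed: require a better channel
print(mu_c)  # converges near the threshold where the achievable rate equals 0.7
```

Each iteration halves the step, so after the loop the result is within `T_Delta` of the threshold.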
Software Engineering for Data Scientists: Manipulating Data with Python (CSE 599 B1)

Today's Objectives:
1. Opening & Navigating the IPython Notebook
2. Simple Math in the IPython Notebook
3. Loading data with pandas
4. Cleaning and Manipulating data with pandas
5. Visualizing data with pandas

1. Opening and Navigating the...
```python
import pandas
```
*Source: PreFall2018/02-Python-and-Data/Lecture-Python-and-Data.ipynb, from UWSEDS/LectureNotes (license: bsd-2-clause)*
Because we'll use it so much, we often import under a shortened name using the import ... as ... pattern:
```python
import pandas as pd
```
Now we can use the read_csv command to read the comma-separated-value data.

Viewing Pandas Dataframes:
- The head() and tail() methods show us the first and last rows of the data.
- The shape attribute shows us the number of elements.
- The columns attribute gives us the column names.
- The index attribute gives us the index name...
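The inspection methods above can be sketched on a tiny hypothetical CSV (held in memory here, so the example is self-contained; the lecture reads a real data file instead):

```python
import io
import pandas as pd

# A tiny in-memory CSV standing in for the data file used in the lecture.
csv_text = "name,trips\nAlice,3\nBob,5\nCarol,2\n"
df = pd.read_csv(io.StringIO(csv_text))

print(df.head(2))        # the first two rows
print(df.shape)          # (3, 2): rows by columns
print(list(df.columns))  # ['name', 'trips']
```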
```python
from IPython.display import Image
Image('split_apply_combine.png')
```
So, for example, we can use this to find the average length of a ride as a function of time of day. The simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.): <data object>.groupby(<grouping ...
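On a tiny stand-in table (hypothetical data, not the lecture's ride dataset), the split-apply-combine pattern looks like this:

```python
import pandas as pd

# Hypothetical trips, with the hour of day and the ride duration in minutes.
trips = pd.DataFrame({
    'hour':     [7, 7, 8, 8, 8],
    'duration': [10.0, 14.0, 9.0, 11.0, 13.0],
})

# Split by hour, apply mean() to each group, combine the results into one series.
mean_by_hour = trips.groupby('hour')['duration'].mean()
print(mean_by_hour)  # hour 7 -> 12.0, hour 8 -> 11.0
```

Swapping `mean()` for `median()`, `sum()`, `count()`, etc. gives the other aggregations mentioned above.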
```python
%matplotlib inline
```
Now we can simply call the plot() method of any series or dataframe to get a reasonable view of the data. Adjusting the Plot Style: the default formatting is not very nice; I often make use of the Seaborn library for better plotting defaults. First you'll have to run `conda install seaborn`, and then you can do this:
```python
import seaborn
seaborn.set()
```
Steps 4 & 5: Sample data from setting similar to data and record classification accuracy
```python
accuracy = np.zeros((len(S), len(classifiers), 2), dtype=np.dtype('float64'))
for idx1, s in enumerate(S):
    s0 = s/2
    s1 = s/2
    g0 = 1 * (np.random.rand(r, r, s0) > 1-p0)
    g1 = 1 * (np.random.rand(r, r, s1) > 1-p1)
    mbar0 = 1.0*np.sum(g0, axis=(0,1))
    mbar1 = 1.0*np.sum(g1, axis=(0,1))
    X = np.arra...
```
*Source: code/classification_simulation.ipynb, from Upward-Spiral-Science/grelliam (license: apache-2.0)*
Step 6: Plot Accuracy versus N
```python
font = {'weight': 'bold', 'size': 14}
import matplotlib
matplotlib.rc('font', **font)

plt.figure(figsize=(8,5))
plt.errorbar(S, accuracy[:,0,0], yerr=accuracy[:,0,1]/np.sqrt(S), hold=True, label=names[0])
plt.errorbar(S, accuracy[:,1,0], yerr=accuracy[:,1,1]/np.sqrt(S), color='green', hold=True, label...
```
Step 7: Apply technique to data
```python
# Initializing dataset names
dnames = list(['../data/KKI2009'])
print("Dataset: " + ", ".join(dnames))

# Getting graph names
fs = list()
for dd in dnames:
    fs.extend([root+'/'+file for root, dir, files in os.walk(dd) for file in files])
fs = fs[1:]

def loadGraphs(filenames, rois, printer=False):
    A = np.zeros...
```
Affine layer: forward. Open the file cs231n/layers.py and implement the affine_forward function. Once you are done you can test your implementation by running the following:
```python
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3

input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)

x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.p...
```
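The forward pass being tested here has a standard one-line core: flatten each example to a row vector, then apply a single matrix multiply plus bias. A minimal sketch (the conventional formulation, not necessarily the exact code in cs231n/layers.py):

```python
import numpy as np

def affine_forward(x, w, b):
    # x: (N, d_1, ..., d_k); w: (D, M) with D = prod(d_i); b: (M,)
    N = x.shape[0]
    out = x.reshape(N, -1).dot(w) + b  # flatten, multiply, add bias -> (N, M)
    cache = (x, w, b)                  # saved for the backward pass
    return out, cache

# Same shapes as the test cell above.
x = np.linspace(-0.1, 0.5, num=2*4*5*6).reshape(2, 4, 5, 6)
w = np.linspace(-0.2, 0.3, num=120*3).reshape(120, 3)
b = np.linspace(-0.3, 0.1, num=3)
out, _ = affine_forward(x, w, b)
print(out.shape)  # (2, 3)
```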
*Source: solutions/levin/assignment2/FullyConnectedNets.ipynb, from machinelearningnanodegree/stanford-cs231 (license: mit)*
Affine layer: backward Now implement the affine_backward function and test your implementation using numeric gradient checking.
```python
# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)

dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
d...
```
ReLU layer: forward Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:
```python
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0.,          0.,          0.,          0.,        ],
                        [ 0.,          0.,          0.04545455,  0.13636364,],
                        [ 0.22727273,  0.31818182,  0...
```
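The ReLU forward pass is just an elementwise maximum with zero; a minimal sketch (the standard formulation, assuming the cache simply stores the input for the backward pass):

```python
import numpy as np

def relu_forward(x):
    # Elementwise max(0, x); cache the input so relu_backward can mask the gradient.
    out = np.maximum(0, x)
    cache = x
    return out, cache

x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
print(out[1, 2])  # 0.04545455..., matching correct_out in the test cell above
```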
ReLU layer: backward Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:
```python
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)

dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)

_, cache = relu_forward(x)
dx = relu_backward(dout, cache)

# The error should be around 1e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
```
"Sandwich" layers There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py. For now take a look at the affine_rel...
```python
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward

x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)

out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)

dx_num = eval_numerical_gradient_array(lambd...
```
Loss layers: Softmax and SVM You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py. You can make sure that the implementations are correct by running the followin...
```python
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)

dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)

# Test svm_loss function. Loss should be around 9 and dx error should be 1...
```
Two-layer network In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations. Open the file cs231n/classifiers/fc_net.py and com...
```python
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)

std = 1e-2
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)

print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() ...
```
Solver In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class. Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After do...
```python
# model = TwoLayerNet()
# solver = None
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least  #
# 50% accuracy on the validation set.                                        #
##########################################...
```
Multilayer network Next you will implement a fully-connected network with an arbitrary number of hidden layers. Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py. Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout ...
```python
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))

for reg in [0, 3.14]:
    print('Running check with reg = ', reg)
    model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
                              reg=reg, weight_scale=5e-2, dtype=np.float64)
    loss, grads = m...
```
As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
```python
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
    'X_train': data['X_train'][:num_train],
    'y_train': data['y_train'][:num_train],
    'X_val': data['X_val'],
    'y_val': data['y_val'],
}
learning_rate = 1e-2
weight_scale = 1e-2
model = FullyConnectedNet([100, 100], ...
```
Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
```python
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
    'X_train': data['X_train'][:num_train],
    'y_train': data['y_train'][:num_train],
    'X_val': data['X_val'],
    'y_val': data['y_val'],
}
learning_rate = 1e-2
weight_scale = 6e-2
model = FullyConnectedNet([100, 100, 100, 100],...
```
Inline question: Did you notice anything about the comparative difficulty of training the three-layer net vs training the five-layer net? Answer: It's much harder to find the right weight initialization and learning rate for the five-layer net. As the network grows deeper, we tend to have more dead activations, and thus ki...
```python
from cs231n.optim import sgd_momentum

N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)

config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)

expected_next_w = np.a...
```
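For reference, the classic momentum rule this cell tests can be sketched as follows (the standard formulation; the exact defaults in cs231n/optim.py may differ slightly):

```python
import numpy as np

def sgd_momentum(w, dw, config=None):
    # v <- momentum * v - learning_rate * dw;  w <- w + v
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))

    v = config['momentum'] * v - config['learning_rate'] * dw  # accumulate a velocity
    next_w = w + v                                              # step along the velocity
    config['velocity'] = v
    return next_w, config

w, dw = np.zeros(3), np.ones(3)
config = {'learning_rate': 0.1, 'momentum': 0.9}
w, config = sgd_momentum(w, dw, config)  # first step: w = -0.1
w, config = sgd_momentum(w, dw, config)  # velocity builds up: w = -0.29
```

Because the velocity accumulates, repeated steps in the same gradient direction grow longer, which is exactly why momentum converges faster than plain SGD in the comparison below.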
Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
```python
num_train = 4000
small_data = {
    'X_train': data['X_train'][:num_train],
    'y_train': data['y_train'][:num_train],
    'X_val': data['X_val'],
    'y_val': data['y_val'],
}

solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
    print('running with ', update_rule)
    model = FullyConnectedNet([100, 100, 100, 100, 100],...
```
RMSProp and Adam RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients. In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your i...
```python
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop

N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)

config = {'learning_rate': 1e-2, 'cache': cache}
ne...
```
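The RMSProp rule described above (a running average of squared gradients scaling each step) can be sketched like this; it is the standard formulation, with config keys mirroring the test cell, not necessarily the file's exact defaults:

```python
import numpy as np

def rmsprop(w, dw, config=None):
    # cache <- decay * cache + (1 - decay) * dw^2;  w <- w - lr * dw / (sqrt(cache) + eps)
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('decay_rate', 0.99)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('cache', np.zeros_like(w))

    config['cache'] = config['decay_rate'] * config['cache'] + (1 - config['decay_rate']) * dw**2
    next_w = w - config['learning_rate'] * dw / (np.sqrt(config['cache']) + config['epsilon'])
    return next_w, config

w, dw = np.zeros(2), np.ones(2)
next_w, cfg = rmsprop(w, dw, {'learning_rate': 0.01})
print(next_w)  # each step is roughly lr / sqrt((1 - decay)) times smaller than lr * dw
```

Adam adds a momentum-like first-moment average and bias correction on top of the same second-moment idea.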
Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:
```python
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
    print('running with ', update_rule)
    model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
    solver = Solver(model, small_data,
                    num_epochs=5, batch_size=100,
                    update_rule=upda...
```
Train a good model! Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net. If you are careful it should be possible to get accuracies above 55%, but we don't require...
```python
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might  #
# find batch normalization and dropout useful. Store your best model in the   #
# best_model variable.                                                        #
...
```
Test your model. Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
```python
X_test = data['X_test']
y_test = data['y_test']
X_val = data['X_val']
y_val = data['y_val']

y_test_pred = np.argmax(best_model.loss(X_test), axis=1)
y_val_pred = np.argmax(best_model.loss(X_val), axis=1)
print('Validation set accuracy: ', (y_val_pred == y_val).mean())
print('Test set accuracy: ', (y_test_pred == y_test)...
```
Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inpu...
```python
def model_inputs(real_dim, z_dim):
    inputs_real = 
    inputs_z = 
    
    return inputs_real, inputs_z
```
gan_mnist/Intro_to_GANs_Exercises.ipynb
ktmud/deep-learning
mit
Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero ...
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): ''' Build the generator network. Arguments --------- z : Input tensor for the generator out_dim : Shape of the generator output n_units : Number of units in hidden layer reuse : Reuse the variables...
gan_mnist/Intro_to_GANs_Exercises.ipynb
ktmud/deep-learning
mit
Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a...
def discriminator(x, n_units=128, reuse=False, alpha=0.01): ''' Build the discriminator network. Arguments --------- x : Input tensor for the discriminator n_units: Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak pa...
gan_mnist/Intro_to_GANs_Exercises.ipynb
ktmud/deep-learning
mit
Hyperparameters
# Size of input image to discriminator input_size = 784 # 28x28 MNIST images flattened # Size of latent vector to generator z_size = 100 # Sizes of hidden layers in generator and discriminator g_hidden_size = 128 d_hidden_size = 128 # Leak factor for leaky ReLU alpha = 0.01 # Label smoothing smooth = 0.1
gan_mnist/Intro_to_GANs_Exercises.ipynb
ktmud/deep-learning
mit
Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Th...
tf.reset_default_graph() # Create our input placeholders input_real, input_z = model_inputs(input_size, z_size) # Generator network here g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha) # g_model is the generator output # Discriminator network here d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha) d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True, alpha=alpha)
gan_mnist/Intro_to_GANs_Exercises.ipynb
ktmud/deep-learning
mit
Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_wit...
# Calculate losses d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth))) d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake))) d_loss = d_loss_real + d_loss_fake g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
gan_mnist/Intro_to_GANs_Exercises.ipynb
ktmud/deep-learning
mit
Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the gener...
# Optimizers learning_rate = 0.002 # Get the trainable_variables, split into G and D parts t_vars = tf.trainable_variables() g_vars = [var for var in t_vars if var.name.startswith('generator')] d_vars = [var for var in t_vars if var.name.startswith('discriminator')] d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
gan_mnist/Intro_to_GANs_Exercises.ipynb
ktmud/deep-learning
mit
Training
batch_size = 100 epochs = 100 samples = [] losses = [] saver = tf.train.Saver(var_list = g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) ...
gan_mnist/Intro_to_GANs_Exercises.ipynb
ktmud/deep-learning
mit
Training loss Here we'll check out the training losses for the generator and discriminator.
%matplotlib inline import matplotlib.pyplot as plt fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator') plt.plot(losses.T[1], label='Generator') plt.title("Training Losses") plt.legend()
gan_mnist/Intro_to_GANs_Exercises.ipynb
ktmud/deep-learning
mit
These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
_ = view_samples(-1, samples)
gan_mnist/Intro_to_GANs_Exercises.ipynb
ktmud/deep-learning
mit
It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise. It looks like 1, 9, and 8 show up first. Then it learns 5 and 3. Sampling from the generator We can also get completely new images from the generator by us...
saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha), ...
gan_mnist/Intro_to_GANs_Exercises.ipynb
ktmud/deep-learning
mit
Questions: Exercise 1a What are the leakage factors of the aquifer system?
print('The leakage factors of the aquifers are:') print(ml.aq.lab)
notebooks/timml_notebook1_sol.ipynb
Hugovdberg/timml
mit
Exercise 1b What is the head at the well?
print('The head at the well is:') print(w.headinside())
notebooks/timml_notebook1_sol.ipynb
Hugovdberg/timml
mit
Exercise 1c Create a contour plot of the head in the three aquifers. Use a window with lower left-hand corner $(x,y)=(-3000,-3000)$ and upper right-hand corner $(x,y)=(3000,3000)$. Notice that the heads in the three aquifers are almost equal at three times the largest leakage factor.
ml.contour(win=[-3000, 3000, -3000, 3000], ngr=50, layers=[0, 1, 2], levels=10, legend=True, figsize=figsize)
notebooks/timml_notebook1_sol.ipynb
Hugovdberg/timml
mit
Exercise 1d Create a contour plot of the head in aquifer 1 with labels along the contours. Labels are added when the labels keyword argument is set to True. The number of decimal places can be set with the decimals keyword argument, which is zero by default.
ml.contour(win=[-3000, 3000, -3000, 3000], ngr=50, layers=[1], levels=np.arange(30, 45, 1), labels=True, legend=['layer 1'], figsize=figsize)
notebooks/timml_notebook1_sol.ipynb
Hugovdberg/timml
mit
Exercise 1e Create a contour plot with a vertical cross-section below it. Start three pathlines from $(x,y)=(-2000,-1000)$ at levels $z=-120$, $z=-60$, and $z=-10$. Try a few other starting locations.
win=[-3000, 3000, -3000, 3000] ml.plot(win=win, orientation='both', figsize=figsize) ml.tracelines(-2000 * ones(3), -1000 * ones(3), [-120, -60, -10], hstepmax=50, win=win, orientation='both') ml.tracelines(0 * ones(3), 1000 * ones(3), [-120, -50, -10], hstepmax=50, win=win, orientation='b...
notebooks/timml_notebook1_sol.ipynb
Hugovdberg/timml
mit
Exercise 1f Add an abandoned well that is screened in both aquifer 0 and aquifer 1, located at $(x, y) = (100, 100)$, and create a contour plot of all aquifers near the well (from (-200,-200) to (200,200)). What are the discharge and the head at the abandoned well? Note that you have to solve the model again!
ml = ModelMaq(kaq=[10, 20, 5], z=[0, -20, -40, -80, -90, -140], c=[4000, 10000]) w = Well(ml, xw=0, yw=0, Qw=10000, rw=0.2, layers=1) Constant(ml, xr=10000, yr=0, hr=20, layer=0) Uflow(ml, slope=0.002, angle=0) wabandoned = Well(ml, xw=100, yw=100, Qw=0, rw=0.2, layers=[0, 1]) ml.solve() ml...
notebooks/timml_notebook1_sol.ipynb
Hugovdberg/timml
mit
We will use the pymongo library for Python. We load it below.
import pymongo from pymongo import MongoClient
mongo/sesion4.ipynb
dsevilla/bdge
mit
The connection is started with MongoClient on the host described in the docker-compose.yml file (mongo).
client = MongoClient("mongo",27017) client client.list_database_names()
mongo/sesion4.ipynb
dsevilla/bdge
mit
Format: 7zipped Files: badges.xml UserId, e.g.: "420" Name, e.g.: "Teacher" Date, e.g.: "2008-09-15T08:55:03.923" comments.xml Id PostId Score Text, e.g.: "@Stu Thompson: Seems possible to me - why not try it?" CreationDate, e.g.:"2008-09-06T08:07:10.730" UserId posts.xml Id PostTypeId 1: Question 2: Answer Paren...
db = client.stackoverflow db = client['stackoverflow'] db
mongo/sesion4.ipynb
dsevilla/bdge
mit
Databases are composed of a set of collections. Each collection groups together a set of objects (documents) of the same type, although, as we saw in the theory sessions, each document may have a different set of attributes.
posts = db.posts posts
mongo/sesion4.ipynb
dsevilla/bdge
mit
Importing the CSV files. For now we create a separate collection for each one. Later we will study how to optimize access using aggregation.
import os import os.path as path from urllib.request import urlretrieve def download_csv_upper_dir(baseurl, filename): file = path.abspath(path.join(os.getcwd(),os.pardir,filename)) if not os.path.isfile(file): urlretrieve(baseurl + '/' + filename, file) baseurl = 'http://neuromancer.inf.um.es:8080/es...
mongo/sesion4.ipynb
dsevilla/bdge
mit
The collection API in Python can be found here: https://api.mongodb.com/python/current/api/pymongo/collection.html. Most books and references show the use of Mongo from JavaScript, since the MongoDB shell accepts that language. The syntax changes a little with respect to Python, and can be followed at ...
( db.posts.create_index([('Id', pymongo.HASHED)]), db.comments.create_index([('Id', pymongo.HASHED)]), db.users.create_index([('Id', pymongo.HASHED)]) )
mongo/sesion4.ipynb
dsevilla/bdge
mit
Map-Reduce MongoDB includes two APIs for processing and searching documents: the Map-Reduce API and the aggregation API. We will look at Map-Reduce first. Manual: https://docs.mongodb.com/manual/aggregation/#map-reduce
from bson.code import Code map = Code( ''' function () { emit(this.OwnerUserId, 1); } ''') reduce = Code( ''' function (key, values) { return Array.sum(values); } ''') results = posts.map_reduce(map, reduce, "posts_by_userid") posts_by_userid = db.posts_by_userid list(posts_by_userid.find())
mongo/sesion4.ipynb
dsevilla/bdge
mit
A keyword can be added to specify which elements we want to work on (query): the map_reduce function can take a series of keyword arguments, the same ones specified in the documentation: query: restricts the data that is processed; sort: sorts the input documents by some key; limit: limits ...
db.posts.distinct('Score')
mongo/sesion4.ipynb
dsevilla/bdge
mit
EXERCISE (solved): Using the Map-Reduce API, build a 'post_comments' collection in which a 'Comments' field is added to each Post with the list of all the comments that refer to that Post. We will go through the solution of this exercise so that it serves as an example for the following ones to implement. First, an ope...
from bson.code import Code comments_map = Code(''' function () { emit(this.PostId, { type: 'comment', comments: [this]}); } ''') comments_reduce = Code(''' function (key, values) { comments = []; values.forEach(function(v) { if ('comments' in v) comments = comments.concat(v.comments) ...
mongo/sesion4.ipynb
dsevilla/bdge
mit
This shows that, in general, the data schema in MongoDB would not look like this from the start. After the first map/reduce step, we have to build the final collection that associates each Post with its comments. Since we built the post_comments collection indexed by the Post Id earlier, we can use a...
posts_map = Code(""" function () { this.comments = []; emit(this.Id, this); } """) posts_reduce = Code(""" function (key, values) { comments = []; // The set of comments obj = {}; // The object to return values.forEach(function(v) { if (v['type'] === 'comment') comments = comments.concat(v.comme...
mongo/sesion4.ipynb
dsevilla/bdge
mit
Aggregation Framework Aggregation framework: https://docs.mongodb.com/manual/reference/operator/aggregation/. And here is an interesting presentation on the topic: https://www.mongodb.com/presentations/aggregation-framework-0?jmp=docs&_ga=1.223708571.1466850754.1477658152 <video style="width:100%;" src="https://docs.m...
respuestas = db['posts'].aggregate( [ {'$project' : { 'Id' : True }}, {'$limit': 20} ]) list(respuestas)
mongo/sesion4.ipynb
dsevilla/bdge
mit
Lookup!
respuestas = posts.aggregate( [ {'$match': { 'Score' : {'$gte': 40}}}, {'$lookup': { 'from': "users", 'localField': "OwnerUserId", 'foreignField': "Id", 'as': "owner"} } ]) list(respuestas)
mongo/sesion4.ipynb
dsevilla/bdge
mit
$lookup generates an array with all the results. The $arrayElemAt operator accesses the first element.
respuestas = db.posts.aggregate( [ {'$match': { 'Score' : {'$gte': 40}}}, {'$lookup': { 'from': "users", 'localField': "OwnerUserId", 'foreignField': "Id", 'as': "owner"} }, { '$project' : { 'Id' : True, 'Sc...
mongo/sesion4.ipynb
dsevilla/bdge
mit
$unwind can also be used. It "unfolds" each row into one row per element of the array. In this case, since we know the array contains only one element, there will be only one row per original row, but without the array. Finally, the desired field can be projected.
respuestas = db.posts.aggregate( [ {'$match': { 'Score' : {'$gte': 40}}}, {'$lookup': { 'from': "users", 'localField': "OwnerUserId", 'foreignField': "Id", 'as': "owner"} }, { '$unwind': '$owner'}, { '$project' : { ...
mongo/sesion4.ipynb
dsevilla/bdge
mit
Example of the RQ4 query As an example of a complex query with the Aggregation Framework, here is a possible solution to query RQ4:
RQ4 = db.posts.aggregate( [ { "$match" : {"PostTypeId": 2}}, {'$lookup': { 'from': "posts", 'localField': "ParentId", 'foreignField': "Id", 'as': "question" } }, { '$unwind' : '$question' }, { ...
mongo/sesion4.ipynb
dsevilla/bdge
mit
The explanation is as follows: only the answers are selected; the posts table is accessed to retrieve the data of the question; then only the asking user and the answering user are projected; the most imaginative step is the grouping one. The intent is that both pairs of users that are rela...
RQ4 = db.posts.aggregate( [ {'$match': { 'PostTypeId' : 2}}, {'$lookup': { 'from': "posts", 'localField': "ParentId", 'foreignField': "Id", 'as': "question"} }, { '$unwind' : '$question' }, { '$proje...
mongo/sesion4.ipynb
dsevilla/bdge
mit
Example query: average time from when a question is asked until it receives its first answer Let's see how to compute the average time from when a question is asked until its first answer is given. In this case the answers can be used to point back at the question they correspond to. It does not conside...
from bson.code import Code # The map function will group all the answers, but it also needs the mapcode = Code(""" function () { if (this.PostTypeId == 2) emit(this.ParentId, {q: null, a: {Id: this.Id, CreationDate: this.CreationDate}, diff: null}) else if (this.PostTypeId == 1) emit(this...
mongo/sesion4.ipynb
dsevilla/bdge
mit
This only computes the minimum time from each question to its answer. Afterwards, what we saw in other examples would have to be applied to compute the mean. With aggregation, shown next, the mean can indeed be computed in a relatively simple way:
min_answer_time = db.posts.aggregate([ {"$match" : {"PostTypeId" : 2}}, { '$group' : {'_id' : '$ParentId', # 'answers' : { '$push' : {'Id' : "$Id", 'CreationDate' : "$CreationDate"}}, 'min' : {'$min' : "$CreationDate"} } }, { "$lookup" :...
mongo/sesion4.ipynb
dsevilla/bdge
mit
Vertex SDK: AutoML training image object detection model for export to edge <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online_export_edge.ipynb"> <img src="https://c...
import os # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Tutorial Now you are ready to start creating your own AutoML image object detection model. Location of Cloud Storage training data. Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
IMPORT_FILE = "gs://cloud-samples-data/vision/salads.csv"
notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Quick peek at your data This tutorial uses a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file. Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
if "IMPORT_FILES" in globals(): FILE = IMPORT_FILES[0] else: FILE = IMPORT_FILE count = ! gsutil cat $FILE | wc -l print("Number of Examples", int(count[0])) print("First 10 rows") ! gsutil cat $FILE | head
notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create the Dataset Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters: display_name: The human readable name for the Dataset resource. gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource. import...
dataset = aip.ImageDataset.create( display_name="Salads" + "_" + TIMESTAMP, gcs_source=[IMPORT_FILE], import_schema_uri=aip.schema.dataset.ioformat.image.bounding_box, ) print(dataset.resource_name)
notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create and run training pipeline To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipeline An AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters: display_name: The human readable name for the T...
dag = aip.AutoMLImageTrainingJob( display_name="salads_" + TIMESTAMP, prediction_type="object_detection", multi_label=False, model_type="MOBILE_TF_LOW_LATENCY_1", base_model=None, ) print(dag)
notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Run the training pipeline Next, you run the DAG to start the training job by invoking the method run, with the following parameters: dataset: The Dataset resource to train the model. model_display_name: The human readable name for the trained model. training_fraction_split: The percentage of the dataset to use for tra...
model = dag.run( dataset=dataset, model_display_name="salads_" + TIMESTAMP, training_fraction_split=0.8, validation_fraction_split=0.1, test_fraction_split=0.1, budget_milli_node_hours=20000, disable_early_stopping=False, )
notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Review model evaluation scores After your model has finished training, you can review the evaluation scores for it. First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in y...
# Get model resource ID models = aip.Model.list(filter="display_name=salads_" + TIMESTAMP) # Get a reference to the Model Service client client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"} model_service_client = aip.gapic.ModelServiceClient(client_options=client_options) model_evaluations = model...
notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0