nopython mode
Numba can compile a Python function in two modes:
1. python mode. In python mode, the compiled code still relies on the CPython interpreter (more flexible, but slow).
2. nopython mode. The code is compiled to standalone, 100% machine code that doesn't rely on CPython, i.e., when we call the function, it doesn't ...
@numba.jit('float64[:] (float64[:,:], float64[:])', nopython=True)
def dot_numba1(A, b):
    m, n = A.shape
    c = np.empty(m)
    for i in range(m):
        c[i] = np.dot(A[i], b)
    return c

A = np.random.random((1000,1000))
b = np.random.random(1000)
%timeit dot_numba1(A,b)

@numba.jit('float64[:] (float64[:,:], fl...
04_jit/04_jit.ipynb
mavillan/SciProg
gpl-3.0
Let's create a silly example where Numba fails in nopython mode. Prepare for a long error...
@numba.jit('float64[:] (float64[:,:], float64[:])', nopython=True)
def dot_numba2(A, b):
    m, n = A.shape
    c = dict()
    for i in range(m):
        c[i] = np.dot(A[i], b)
    return np.array(c.values)
For a full list of the supported native Python features see here.
Numba and NumPy
One objective of Numba is seamless integration with NumPy. Numba excels at generating code that executes on top of NumPy arrays. Numba understands calls to many (almost all) NumPy features, and is able to generate equivalent n...
# row mean example
@numba.jit('float64[:] (float64[:,:])', nopython=True)
def row_mean(A):
    m, n = A.shape
    mean = np.empty(m)
    for i in range(m):
        mean[i] = np.sum(A[i])/n
    return mean

A = np.random.random((10,10))
print( row_mean(A) )
Ahead-of-Time Compilation
Numba also provides a facility for Ahead-of-Time compilation (AOT), which has the following benefits:
1. AOT compilation produces a compiled extension module which does not depend on Numba.
2. There is no compilation overhead at runtime.
But it is much more restrictive than the JIT functionality....
def step():
    if random.random() > .5:
        return 1.
    else:
        return -1.
and create the simulation in pure Python, where the function walk() takes a number of steps as input:
def walk(n):
    x = np.zeros(n)
    dx = 1./n
    for i in range(n-1):
        x_new = x[i] + dx * step()
        if abs(x_new) > 5e-3:
            x[i+1] = 0.
        else:
            x[i+1] = x_new
    return x
Now we create a random walk, plot it and %timeit its execution:
n = 100000
x = walk(n)
plt.figure(figsize=(8,8))
plt.plot(x)
plt.show()
python_t = %timeit -o walk(n)
Now, let's JIT-compile this function with Numba:
@numba.jit(nopython=True)
def step_numba():
    if random.random() > .5:
        return 1.
    else:
        return -1.

@numba.jit(nopython=True)
def walk_numba(n):
    x = np.zeros(n)
    dx = 1./n
    for i in range(n-1):
        x_new = x[i] + dx * step_numba()
        if abs(x_new) > 5e-3:
            x[i+1] = 0.
        else:
            ...
Mandelbrot fractal
Now we will create a Mandelbrot fractal (a task that cannot be vectorized) using native Python and Numba... It consists of iterating the map, with starting point $z_0 = 0$:
$$ z_{i+1} = z_{i}^2 + c $$
and keeping the points $c$ for which the values of the sequence remain bounded.
size = 200
iterations = 100

def mandelbrot_python(m, size, iterations):
    for i in range(size):
        for j in range(size):
            c = -2 + 3./size*j + 1j*(1.5-3./size*i)
            z = 0
            for n in range(iterations):
                if np.abs(z) <= 10:
                    z = z*z + c
                    ...
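The bounded-orbit test that the inner loop above performs can be isolated in a few lines of plain Python (a standalone sketch; the function name is my own, but the update rule and the bound of 10 match the cell above):

```python
def escape_iterations(c, iterations=100, bound=10.0):
    """Iterate z <- z*z + c from z = 0 and report how many steps
    stay inside |z| <= bound (iterations if it never escapes)."""
    z = 0
    for n in range(iterations):
        if abs(z) > bound:
            return n
        z = z*z + c
    return iterations

# c = 0 never escapes; c = 2 blows up after a few steps
print(escape_iterations(0))  # 100
print(escape_iterations(2))  # 3
```

Points inside the Mandelbrot set hit the iteration cap; points outside escape quickly, and the escape count is what gets stored in `m[i, j]`.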
Now we evaluate the time taken by this function:
%%timeit
m = np.zeros((size,size))
mandelbrot_python(m,size,iterations)
Next, we add the numba.jit decorator and let Numba infer the types of all the variables (lazy compilation):
@numba.jit
def mandelbrot_numba(m, size, iterations):
    for i in range(size):
        for j in range(size):
            c = -2 + 3./size*j + 1j*(1.5-3./size*i)
            z = 0
            for n in range(iterations):
                if np.abs(z) <= 10:
                    z = z*z + c
                    m[i, j] = n
                    ...
<div id='numexpr' />
4.- NumExpr
The problem...
As mentioned in previous lectures, NumPy is good (fast and efficient) at doing vector operations. However, it has some problems when trying to evaluate complex expressions:
def test_func(a,b,c):
    """
    Consider that a, b and c are 1D ndarrays
    """
    return np.sin(a**2 + np.exp(b)) + np.cos(b**2 + np.exp(c)) + np.tan(a**2+b**2+c**2)

n = 1000000
a = np.random.random(n)
b = np.random.random(n)
c = np.random.random(n)
%timeit test_func(a,b,c)
Let's create now a Numba function that performs the same operations but iteratively:
@numba.jit('float64[:] (float64[:], float64[:], float64[:])', nopython=True)
def test_func_numba(a,b,c):
    n = len(a)
    res = np.empty(n)
    for i in range(n):
        res[i] = np.sin(a[i]**2 + np.exp(b[i])) + np.cos(b[i]**2 + np.exp(c[i])) + np.tan(a[i]**2+b[i]**2+c[i]**2)
    return res

%timeit test_func_numba(...
Then, what is the problem with NumPy?
1. Implicit copy operations.
2. Many passes over the same arrays.
3. Bad usage of CPU registers...
The solution: NumExpr
NumExpr is a fast numerical expression evaluator that uses less memory than doing the same calculation in NumPy. With its multi-threaded capabilities it can make...
# Change the size of the arrays to see the differences
m = 10000
n = 5000
A = np.random.random((m,n))
B = np.random.random((m,n))
C = np.random.random((m,n))
np_t = %timeit -o test_func(A,B,C)
ne_t = %timeit -o ne.evaluate('sin(a**2 + exp(b)) + cos(b**2 + exp(c)) + tan(a**2+b**2+c**2)')
print("Improvement: {0} times"...
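The "implicit copies / many passes" problem can be seen directly: each intermediate in a vectorized expression materializes a full temporary array, while a scalar loop reads each element once (a small sketch with a made-up array size; this single-pass pattern is what NumExpr recovers internally):

```python
import math
import numpy as np

n = 1000
a = np.random.random(n)
b = np.random.random(n)

# Vectorized: NumPy allocates a temporary for a**2, another for np.exp(b),
# another for their sum, ... -- several full passes over memory.
vectorized = np.sin(a**2 + np.exp(b))

# Single pass: each element is read once and combined in CPU registers.
single_pass = np.empty(n)
for i in range(n):
    single_pass[i] = math.sin(a[i]**2 + math.exp(b[i]))

assert np.allclose(vectorized, single_pass)
```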
Additionally, we can explicitly specify the number of threads NumExpr uses to evaluate the expression with the set_num_threads() function:
n_threads = 4
for i in range(1, n_threads+1):
    ne.set_num_threads(i)
    %timeit ne.evaluate('sin(a**2 + exp(b)) + cos(b**2 + exp(c)) + tan(a**2+b**2+c**2)')
Now for real! The file below contains the following data: the start date of the weekend (Friday), the hotel ID, and the price of accommodation for 2 nights for 2 adults.
london[:10]
analiza_London.ipynb
KorosecN13/Potovanje
mit
The file below contains the hotel data: hotel ID, name, distance to the city center, guest rating, whether free cancellation is available, and a short description.
hoteli[:3]
Below we can inspect the merged table, which shows all accommodations together with the added hotel data.
merged = london.merge(hoteli, on='hotelId')
merged[:3]
When is it cheapest to stay overnight in London, and how much money do you have to pay for it?
urejeni_po_ceni = london.sort_values('price')
urejeni_po_ceni[:3]  # TODO: change to the first 20 rows!
Answer: The cheapest stay is on 3 November 2017 and costs only 62 dollars. Looking a bit more broadly: the 20 cheapest offers range between 62 and 74 dollars. You can check how much that is in euros here. Looking at the dates, we can see that months from the second half of the year dominate. Perhaps it is even rainier then...
london_po_datumih = london.groupby("friday")
london_po_datumih["price"].mean().plot()
Prices jump around quite a bit, but we can detect a rise during the first half of the year and then a decline in the second half. On average, the most expensive stays are in June and the cheapest in March. Are better-rated hotels also closer to the center?
razdalje = merged['proximityDistance']
ocene = merged['guestRating']
razdalje, ocene = zip(*sorted(zip(razdalje, ocene), key=lambda x: x[0]))
plt.plot(razdalje, ocene)
It is hard to read any meaningful relationship between distance from the center and guest rating off the graph. Factors not covered in this analysis probably contribute, e.g. the availability and quality of Wi-Fi, or something similar. Note: the graph is not representative beyond 6 miles. If it were, we could generalize that the rat...
urejeni_po_ceni_skupno = merged.sort_values('price')
najcenejsi = urejeni_po_ceni_skupno[:200]
najdrazji = urejeni_po_ceni_skupno[-200:]
urejeni_po_ceni_skupno[:10]
urejeni_po_ceni_skupno[-10:]
najcenejsi.mean()
najdrazji.mean()
As a sample I took the 200 cheapest and the 200 most expensive offers. The difference in the average price is a full 677.395 dollars. You can check how much that is in euros here. The difference between the cheapest and the most expensive offer is a full 1789 dollars. Is there a correlation between the price and the option of free cancellation? From the above two...
razdalje = merged['proximityDistance']
cene = merged['price']
razdalje, cene = zip(*sorted(zip(razdalje, cene), key=lambda x: x[0]))
plt.plot(razdalje, cene)
In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
numbers = [int(number) for number in numbers_str.split(',')]
max(numbers)
Data_and_Databases_homework/04/homework_4_schuetz_graded.ipynb
raschuetz/foundations-homework
mit
Great! We'll be using the numbers list you created above in the next few problems. In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output: [506, 528, 550, 581, 699, 721, 736, 804, 855, 985] (Hint: use a slice.)
sorted(numbers)[-10:]
In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output: [120, 171, 258, 279, 528, 699, 804, 855]
[number for number in sorted(numbers) if number % 3 == 0]
Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output: [2.6457513110645...
from math import sqrt
[sqrt(number) for number in sorted(numbers) if number < 100]
Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output: ['Jupiter', 'Saturn', 'Uranus']
# I think that the question has a typo. This is for planets that have a diameter greater than four Earth DIAMETERS
[planet['name'] for planet in planets if planet['diameter'] > 4 * planets[2]['diameter']]
In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
sum([planet['mass'] for planet in planets])
Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output: ['Jupiter', 'Saturn', 'Uranus', 'Neptune']
[planet['name'] for planet in planets if planet['type'].find('giant') > -1]
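As an aside, Python's `in` operator expresses the same substring test as `.find(...) > -1` more idiomatically (toy `planets_sample` data below, since the full `planets` list isn't reproduced here):

```python
planets_sample = [
    {'name': 'Earth', 'type': 'terrestrial planet'},
    {'name': 'Jupiter', 'type': 'gas giant'},
    {'name': 'Neptune', 'type': 'ice giant'},
]

# 'giant' in s is equivalent to s.find('giant') > -1, but reads better
giants = [p['name'] for p in planets_sample if 'giant' in p['type']]
print(giants)  # ['Jupiter', 'Neptune']
```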
EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Ex...
# TA-COMMENT: (+0.25) You were almost there! Just had to adjust your list comprehension:
newlist = sorted(planets, key=lambda k: k['moons'])
[planet['name'] for planet in newlist]
# sorted_moons = sorted([planet['moons'] for planet in planets])
# sorted([planet['name'] for planet in pla...
Problem set #3: Regular expressions In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Mak...
import re
poem_lines = ['Two roads diverged in a yellow wood,',
             'And sorry I could not travel both',
             'And be one traveler, long I stood',
             'And looked down one as far as I could',
             'To where it bent in the undergrowth;',
             '',
             'Then took the other, as just as fair,',
             'And having perhaps the better claim,',
             'Because...
In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library. In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (...
[line for line in poem_lines if re.search(r'\b\w{4} \b\w{4}\b', line)]
Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to w...
[line for line in poem_lines if re.search(r'\b\w{5}\W?$', line)]
Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output: ['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
re.findall(r'\bI \b(\w+)\b', all_lines)
You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop. Expected output: [{'name': ...
# Way 1
menu = []
for item in entrees:
    food_entry = {}
    for x in re.findall(r'(^.+) \$', item):
        food_entry['name'] = str(x)
    for x in re.findall(r'\$(\d+.\d{2})', item):
        food_entry['price'] = float(x)
    if re.search(r'- v$', item):
        food_entry['vegetarian'] = True
    else:
        f...
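The same approach, made self-contained with a couple of made-up sample entrees (the notebook's real `entrees` list isn't shown here), using `re.search` with groups instead of the `findall` loops:

```python
import re

# Hypothetical sample lines in the "Name $price - v" format described above
entrees_sample = [
    "Yankee Pot Roast $15.95",
    "Vegetable Lasagna $12.50 - v",
]

menu = []
for item in entrees_sample:
    name = re.search(r'^(.+?) \$', item).group(1)        # text before the price
    price = float(re.search(r'\$(\d+\.\d{2})', item).group(1))
    vegetarian = re.search(r'- v$', item) is not None   # trailing "- v" marker
    menu.append({'name': name, 'price': price, 'vegetarian': vegetarian})

print(menu[0])  # {'name': 'Yankee Pot Roast', 'price': 15.95, 'vegetarian': False}
```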
Setup Resources
!kubectl create namespace cifar10

%%writefile broker.yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: cifar10

!kubectl create -f broker.yaml

%%writefile event-display.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-display
  namespace: cifar10
spec:
  repli...
docs/samples/drift-detection/alibi-detect/cifar10/cifar10_drift.ipynb
kubeflow/kfserving-lts
apache-2.0
Create the KFServing image classification model for CIFAR10. We add in a logger for requests - the default destination is the namespace Knative Broker.
%%writefile cifar10.yaml
apiVersion: "serving.kubeflow.org/v1alpha2"
kind: "InferenceService"
metadata:
  name: "tfserving-cifar10"
  namespace: cifar10
spec:
  default:
    predictor:
      tensorflow:
        storageUri: "gs://kfserving-samples/tfserving/cifar10/resnet32"
      logger:
        mode: all
        url: ...
Create the pretrained Drift Detector. We forward replies to the message-dumper we started. Notice the drift_batch_size. The drift detector will wait until drift_batch_size number of requests are received before making a drift prediction.
%%writefile cifar10cd.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: drift-detector
  namespace: cifar10
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"
    spec:
      containers:
      - image: seldonio/alibi-detect-server:0.0.2
        imagePu...
Create a Knative trigger to forward logging events to our Outlier Detector.
%%writefile trigger.yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: drift-trigger
  namespace: cifar10
spec:
  broker: default
  filter:
    attributes:
      type: org.kubeflow.serving.inference.request
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      n...
Get the IP address of the Istio Ingress Gateway. This assumes you have installed Istio with a LoadBalancer.
CLUSTER_IPS = !(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP = CLUSTER_IPS[0]
print(CLUSTER_IP)
If you are using Kind or Minikube you will need to port-forward to the istio ingressgateway and uncomment the following cell.
INGRESS_GATEWAY_SERVICE=$(kubectl get svc --namespace istio-system --selector="app=istio-ingressgateway" --output jsonpath='{.items[0].metadata.name}')
kubectl port-forward --namespace istio-sys...
#CLUSTER_IP="localhost:8080"
SERVICE_HOSTNAMES = !(kubectl get inferenceservice -n cifar10 tfserving-cifar10 -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_CIFAR10 = SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_CIFAR10)
SERVICE_HOSTNAMES = !(kubectl get ksvc -n cifar10 drift-detector -o jsonpath='{.status.u...
Normal Prediction
idx = 1
X = X_train[idx:idx+1]
show(X)
predict(X)
Test Drift
The detector needs to accumulate a large enough batch of requests first, so no drift has been tested yet.
!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
We will now send 5000 requests to the model in batches. The drift detector will run at the end of this as we set the drift_batch_size to 5000 in our yaml above.
from tqdm.notebook import tqdm

for i in tqdm(range(0, 5000, 100)):
    X = X_train[i:i+100]
    predict(X)
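The batching behaviour described above can be sketched in pure Python (a toy illustration of the drift_batch_size mechanism, not the alibi-detect implementation):

```python
class BatchAccumulator:
    """Toy model of a drift detector that buffers incoming requests and
    only runs its test once drift_batch_size items have arrived."""
    def __init__(self, drift_batch_size):
        self.drift_batch_size = drift_batch_size
        self.buffer = []
        self.tests_run = 0

    def add(self, batch):
        self.buffer.extend(batch)
        if len(self.buffer) >= self.drift_batch_size:
            self.tests_run += 1  # a real detector would score the batch here
            self.buffer = []

acc = BatchAccumulator(drift_batch_size=5000)
for i in range(0, 5000, 100):   # 50 requests of 100 instances each
    acc.add([0] * 100)
print(acc.tests_run)  # 1 -- the drift test fires only once the batch is full
```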
Let's check the message dumper and extract the first drift result.
res = !kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data = []
for i in range(0, len(res)):
    if res[i] == 'Data,':
        data.append(res[i+1])
j = json.loads(json.loads(data[0]))
print("Drift", j["data"]["is_drift"] == 1)
Now, let's create some CIFAR10 examples with motion blur.
from alibi_detect.datasets import fetch_cifar10c, corruption_types_cifar10c

corruption = ['motion_blur']
X_corr, y_corr = fetch_cifar10c(corruption=corruption, severity=5, return_X_y=True)
X_corr = X_corr.astype('float32') / 255
show(X_corr[0])
show(X_corr[1])
show(X_corr[2])
Send these examples to the predictor.
for i in tqdm(range(0, 5000, 100)):
    X = X_corr[i:i+100]
    predict(X)
Now when we check the message dump we should find a new drift response.
res = !kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data = []
for i in range(0, len(res)):
    if res[i] == 'Data,':
        data.append(res[i+1])
j = json.loads(json.loads(data[-1]))
print("Drift", j["data"]["is_drift"] == 1)
Tear Down
!kubectl delete ns cifar10
Data Exploration In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results. Since the main ...
# TODO: Minimum price of the data
minimum_price = None

# TODO: Maximum price of the data
maximum_price = None

# TODO: Mean price of the data
mean_price = None

# TODO: Median price of the data
median_price = None

# TODO: Standard deviation of prices of the data
std_price = None

# Show the calculated statistics
prin...
boston_housing/boston_housing_original.ipynb
myfunprograms/machine-learning
apache-2.0
Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood): - 'RM' is the average number of rooms among homes in the neighborhood. - 'LSTAT' is the percentage of homeowners in the neighborhood considered "...
# TODO: Import 'r2_score'

def performance_metric(y_true, y_predict):
    """ Calculates and returns the performance score between
        true and predicted values based on the metric chosen. """

    # TODO: Calculate the performance score between 'y_true' and 'y_predict'
    score = None

    # Return the s...
Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable:

| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |

Would you consider this model to have succe...
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print("Model has a coefficient of determination, R^2, of {:.3f}.".format(score))
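For reference, the expected R² for these five points can be checked by hand from the definition $R^2 = 1 - SS_{res}/SS_{tot}$ (a standalone computation, not using the notebook's performance_metric):

```python
true = [3.0, -0.5, 2.0, 7.0, 4.2]
pred = [2.5, 0.0, 2.1, 7.8, 5.3]

mean_true = sum(true) / len(true)                       # 3.14
ss_res = sum((t - p)**2 for t, p in zip(true, pred))    # residual sum of squares
ss_tot = sum((t - mean_true)**2 for t in true)          # total sum of squares
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))  # 0.923
```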
Answer: Implementation: Shuffle and Split Data Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of th...
# TODO: Import 'train_test_split'

# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = (None, None, None, None)

# Success
print("Training and testing split was successful.")
Question 3 - Training and Testing What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint: What could go wrong with not having a way to test your model? Answer: Analyzing Model Performance In this third section of the project, you'll take a look at sev...
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint: Are the learning curves converging to parti...
vs.ModelComplexity(X_train, y_train)
Question 5 - Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions? Hint: How do you know when a model is suffering fro...
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'

def fit_model(X, y):
    """ Performs grid search over the 'max_depth' parameter for a
        decision tree regressor trained on the input data [X, y]. """

    # Create cross-validation sets from the training data
    cv_sets = ShuffleSpl...
Making Predictions Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. Y...
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)

# Produce the value for 'max_depth'
print("Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']))
Answer: Question 10 - Predicting Selling Prices Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients: | Feature | Client 1 | Client 2 | Client 3 | ...
# Produce a matrix for client data
client_data = [[5, 17, 15],  # Client 1
               [4, 32, 22],  # Client 2
               [8, 3, 12]]   # Client 3

# Show predictions
for i, price in enumerate(reg.predict(client_data)):
    print("Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price))
Answer: Sensitivity An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too...
vs.PredictTrials(features, prices, fit_model, client_data)
First we'll generate a test field grid. You only need to do this the first time you run the simulator.
ztf_sim.fields.generate_test_field_grid()
notebooks/ztf_sim_introduction.ipynb
ZwickyTransientFacility/ztf_sim
bsd-3-clause
Let's load the Fields object with the default field grid. Fields is a thin wrapper around a pandas DataFrame containing the field information.
f = ztf_sim.fields.Fields()
The raw fieldid and coordinates are stored as a pandas DataFrame in the .fields attribute:
f.fields.head()
Now let's calculate their altitude and azimuth at a specific time using the astropy.time.Time object:
f.alt_az(Time.now()).head()
Demonstrating low-level access to fields by the fieldid index (usually not required):
f.fields.loc[853]
We can select fields with conditionals:
f.fields['dec'] > -30.
It's easier to use the select_fields convenience function, though. It returns a boolean Series indexed by fieldid that we can use to do calculations on subsets of the field grid.
cuts = f.select_fields(dec_range=[0,10], gridid=0, ecliptic_lat_range=[-5,5])
cuts.head()
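The boolean-Series mechanics behind this can be illustrated with a toy DataFrame (hypothetical values and fieldids, not the real ZTF field grid):

```python
import pandas as pd

# Tiny stand-in for the field table, indexed by a fake fieldid
fields = pd.DataFrame({'dec': [-40.0, 5.0, 8.0, 30.0]},
                      index=[101, 102, 103, 104])

# A boolean Series aligned with the index, analogous to select_fields output
cuts = (fields['dec'] > 0.0) & (fields['dec'] < 10.0)

selected = fields[cuts]  # restrict calculations to the selected fields
print(list(selected.index))  # [102, 103]
```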
Calculate the current altitude and azimuth of the selected fields:
f.alt_az(Time.now(),cuts=cuts)
Calculating the overhead time (max of ha, dec, dome slews and readout time):
f.overhead_time(853, Time.now())

f = ztf_sim.fields.Fields()
Exposure_time = 60*u.second
Night_length = 9*u.h
time0 = Time('2015-09-10 20:00:00') + 7*u.h
time = time0
f.fields = f.fields.join(pd.DataFrame(np.zeros(len(f.fields)), columns=['observed']))
f.fields = f.fields.join(pd.DataFrame(np.zeros(len(f.fields)), column...
Let's show the symbols data, to see how good the recommender has to be.
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(
    *value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))

# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
    tic = time()
    env.reset(STARTING_DAYS_AHEAD)
    results_list = si...
notebooks/prod/n10_dyna_q_with_predictor_full_training.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
Let's run the trained agent with the test set. First, a non-learning test: this scenario is worse than what is actually possible (in fact, the Q-learner can learn from past samples in the test set without compromising causality).
TEST_DAYS_AHEAD = 112

env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
                                   SYMBOL,
                                   agents[0],
                                   learn=False,
                                   ...
What are the metrics for "holding the position"?
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))
Define formulae
def peakdens3D(x, k):
    fd1 = 144*stats.norm.pdf(x)/(29*6**(0.5)-36)
    fd211 = k**2.*((1.-k**2.)**3. + 6.*(1.-k**2.)**2. + 12.*(1.-k**2.)+24.)*x**2. / (4.*(3.-k**2.)**2.)
    fd212 = (2.*(1.-k**2.)**3. + 3.*(1.-k**2.)**2.+6.*(1.-k**2.)) / (4.*(3.-k**2.))
    fd213 = 3./2.
    fd21 = (fd211 + fd212 + fd213)
    fd22 ...
peakdistribution/find_peakdistr_2.ipynb
jbpoline/newpower
mit
Apply formulae to a range of x-values
xs = np.arange(-4, 4, 0.01).tolist()
ys_3d_k01 = []
ys_3d_k05 = []
ys_3d_k1 = []
ys_2d_k01 = []
ys_2d_k05 = []
ys_2d_k1 = []
ys_1d_k01 = []
ys_1d_k05 = []
ys_1d_k1 = []
for x in xs:
    ys_1d_k01.append(peakdens1D(x, 0.1))
    ys_1d_k05.append(peakdens1D(x, 0.5))
    ys_1d_k1.append(peakdens1D(x, 1))
    ys_2d_k01.append(...
Figure 1 from paper
plt.figure(figsize=(7,5))
plt.plot(xs, ys_1d_k01, color="black", ls=":", lw=2)
plt.plot(xs, ys_1d_k05, color="black", ls="--", lw=2)
plt.plot(xs, ys_1d_k1, color="black", ls="-", lw=2)
plt.plot(xs, ys_2d_k01, color="blue", ls=":", lw=2)
plt.plot(xs, ys_2d_k05, color="blue", ls="--", lw=2)
plt.plot(xs, ys_2d_k1, color="blue", ls="-", lw=2)
plt...
peakdistribution/find_peakdistr_2.ipynb
jbpoline/newpower
mit
Apply the distribution to simulated data I now simulate data, extract peaks and compare these simulated peaks with the theoretical distribution.
os.chdir("/Users/Joke/Documents/Onderzoek/Studie_7_newpower/WORKDIR/")
sm=1
smooth_FWHM = 3
smooth_sd = smooth_FWHM/(2*math.sqrt(2*math.log(2)))
data = surrogate_3d_dataset(n_subj=1,sk=smooth_sd,shape=(500,500,500),noise_level=1)
minimum = data.min()
newdata = data - minimum  # little trick because fsl.model.Cluster ign...
peakdistribution/find_peakdistr_2.ipynb
jbpoline/newpower
mit
Here's a boring example of rendering a DataFrame, without any (visible) styles:
df.style
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Note: The DataFrame.style attribute is a property that returns a Styler object. Styler has a _repr_html_ method defined on it so they are rendered automatically. If you want the actual HTML back for further processing or for writing to file call the .render() method which returns a string. The above output looks very s...
df.style.highlight_null().render().split('\n')[:10]
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
The row0_col2 is the identifier for that particular cell. We've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page (you can set the uuid if you'd like to tie together the styling of two ...
def color_negative_red(val):
    """
    Takes a scalar and returns a string with
    the css property `'color: red'` for negative
    values, black otherwise.
    """
    color = 'red' if val < 0 else 'black'
    return 'color: %s' % color
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
In this case, the cell's style depends only on its own value. That means we should use the Styler.applymap method which works elementwise.
s = df.style.applymap(color_negative_red)
s
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to be able to reuse your existing knowledge of how to interact with DataFrames. Notice also that our function returned a string containing the CSS attribute and value, separated by a colon just like in a &lt;styl...
def highlight_max(s):
    '''
    highlight the maximum in a Series yellow.
    '''
    is_max = s == s.max()
    return ['background-color: yellow' if v else '' for v in is_max]

df.style.apply(highlight_max)
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
In this case the input is a Series, one column at a time. Notice that the output shape of highlight_max matches the input shape, an array with len(s) items. We encourage you to use method chains to build up a style piecewise, before finally rendering at the end of the chain.
df.style.\
    applymap(color_negative_red).\
    apply(highlight_max)
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Above we used Styler.apply to pass in each column one at a time. <span style="background-color: #DEDEBE">Debugging Tip: If you're having trouble writing your style function, try just passing it into <code style="background-color: #DEDEBE">DataFrame.apply</code>. Internally, <code style="background-color: #DEDEBE">Style...
def highlight_max(data, color='yellow'):
    '''
    highlight the maximum in a Series or DataFrame
    '''
    attr = 'background-color: {}'.format(color)
    if data.ndim == 1:  # Series from .apply(axis=0) or axis=1
        is_max = data == data.max()
        return [attr if v else '' for v in is_max]
    else:  # f...
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index and column labels.
df.style.apply(highlight_max, color='darkorange', axis=None)
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Building Styles Summary

Style functions should return strings with one or more CSS `attribute: value` pairs delimited by semicolons. Use:

- Styler.applymap(func) for elementwise styles
- Styler.apply(func, axis=0) for columnwise styles
- Styler.apply(func, axis=1) for rowwise styles
- Styler.apply(func, axis=None) for tablewise styles...
df.style.apply(highlight_max, subset=['B', 'C', 'D'])
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
For row and column slicing, any valid indexer to .loc will work.
df.style.applymap(color_negative_red, subset=pd.IndexSlice[2:5, ['B', 'D']])
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Only label-based slicing is supported right now, not positional. If your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword: `my_func2 = functools.partial(my_func, subset=42)`. Finer Control: Display Values We distinguish the di...
df.style.format("{:.2%}")
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Use a dictionary to format specific columns.
df.style.format({'B': "{:0<4.0f}", 'D': '{:+.2f}'})
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Or pass in a callable (or dictionary of callables) for more flexible handling.
df.style.format({"B": lambda x: "±{:.2f}".format(abs(x))})
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Builtin Styles Finally, we expect certain styling functions to be common enough that we've included a few "built-in" to the Styler, so you don't have to write them yourself.
df.style.highlight_null(null_color='red')
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
You can create "heatmaps" with the background_gradient method. These require matplotlib, and we'll use Seaborn to get a nice colormap.
import seaborn as sns

cm = sns.light_palette("green", as_cmap=True)
s = df.style.background_gradient(cmap=cm)
s
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Styler.background_gradient takes the keyword arguments low and high. Roughly speaking these extend the range of your data by low and high percent so that when we convert the colors, the colormap's entire range isn't used. This is useful so that you can actually read the text still.
# Uses the full color range
df.loc[:4].style.background_gradient(cmap='viridis')

# Compress the color range
(df.loc[:4]
    .style
    .background_gradient(cmap='viridis', low=.5, high=0)
    .highlight_null('red'))
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
There's also .highlight_min and .highlight_max.
df.style.highlight_max(axis=0)
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Use Styler.set_properties when the style doesn't actually depend on the values.
df.style.set_properties(**{'background-color': 'black', 'color': 'lawngreen', 'border-color': 'white'})
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Bar charts You can include "bar charts" in your DataFrame.
df.style.bar(subset=['A', 'B'], color='#d65f5f')
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
New in version 0.20.0 is the ability to customize further the bar chart: You can now have the df.style.bar be centered on zero or midpoint value (in addition to the already existing way of having the min value at the left side of the cell), and you can pass a list of [color_negative, color_positive]. Here's how you can...
df.style.bar(subset=['A', 'B'], align='mid', color=['#d65f5f', '#5fba7d'])
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
The following example aims to give a highlight of the behavior of the new align options:
import pandas as pd
from IPython.display import HTML

# Test series
test1 = pd.Series([-100,-60,-30,-20], name='All Negative')
test2 = pd.Series([10,20,50,100], name='All Positive')
test3 = pd.Series([-10,-5,0,90], name='Both Pos and Neg')

head = """
<table>
    <thead>
        <th>Align</th>
        <th>All Negative<...
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Sharing Styles Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and use it on the second DataFrame with df2.style.use
df2 = -df
style1 = df.style.applymap(color_negative_red)
style1

style2 = df2.style
style2.use(style1.export())
style2
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Notice that you're able to share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they're applied to. Other Options You've seen a few methods for data-driven styling. Styler also provides a few other options for styles that don't depend on the data. precision captions table-...
with pd.option_context('display.precision', 2):
    html = (df.style
              .applymap(color_negative_red)
              .apply(highlight_max))
html
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Or through a set_precision method.
df.style\
    .applymap(color_negative_red)\
    .apply(highlight_max)\
    .set_precision(2)
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
Setting the precision only affects the printed number; the full-precision values are always passed to your style functions. You can always use df.round(2).style if you'd prefer to round from the start. Captions Regular table captions can be added in a few ways.
df.style.set_caption('Colormaps, with a caption.')\
    .background_gradient(cmap=cm)
pandas/doc/source/style.ipynb
Ziqi-Li/bknqgis
gpl-2.0
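The precision-versus-rounding distinction above is worth a tiny sketch on made-up data: `df.round(2)` changes the stored values, while `set_precision` / `display.precision` only change what is rendered, so style functions still see full precision.

```python
import pandas as pd

df = pd.DataFrame({'A': [1.23456, 2.34567]})

rounded = df.round(2)  # rounding changes the underlying data
print(rounded)

# By contrast, Styler.set_precision(2) would leave df untouched and only
# truncate the displayed digits; style functions receive 1.23456, not 1.23.
```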