markdown | code | path | repo_name | license
|---|---|---|---|---|
So, now let's plot the results: how does the sentiment of Lovecraft's story change over the course of the book? | df.cum_sum.plot(figsize=(12,5), color='r',
title='Sentiment Polarity cumulative summation for HP Lovecraft\'s The Shunned House')
plt.xlabel('Sentence number')
plt.ylabel('Cumulative sum of sentiment polarity') | textblob_lovecraft.ipynb | dagrha/textual-analysis | mit |
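The cum_sum column plotted above is presumably just a running total of per-sentence polarity. A minimal sketch with pandas; the polarity values here are made up, standing in for what TextBlob's sentiment analyzer would return per sentence:

```python
import pandas as pd

# Hypothetical per-sentence polarity scores in [-1, 1], as TextBlob's
# sentiment analyzer would produce for each sentence of the story.
polarities = [0.5, -0.2, 0.0, 0.8, -0.6]
df = pd.DataFrame({'polarity': polarities})

# Running total of sentiment: rising spans are "positive" stretches,
# falling spans are where the story gets dark.
df['cum_sum'] = df['polarity'].cumsum()
```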
The climax of Lovecraft's story appears to be around sentence 255 or so. Things really drop off at that point and get dark, according to the TextBlob sentiment analysis.
What does the dataframe look like? | df.head() | textblob_lovecraft.ipynb | dagrha/textual-analysis | mit |
Let's get some basic statistical information about sentence sentiments: | df.describe() | textblob_lovecraft.ipynb | dagrha/textual-analysis | mit |
For fun, let's just see what TextBlob thinks are the most negatively polar sentences in the short story: | for i in df[df.polarity < -0.5].index:
print i, tb.sentences[i]
words = re.findall(r'\w+', open('lovecraft.txt').read().lower())
collections.Counter(words).most_common(10) | textblob_lovecraft.ipynb | dagrha/textual-analysis | mit |
Let's take a quick peek at word frequencies using the re and collections libraries. Here we'll use Counter() and its most_common() method to return a list of tuples of the most common words in the story: | words = re.findall(r'\w+', ushunned.lower())
common = collections.Counter(words).most_common()
df_freq = pd.DataFrame(common, columns=['word', 'freq'])
df_freq.set_index('word').head() | textblob_lovecraft.ipynb | dagrha/textual-analysis | mit |
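The regex-plus-Counter pattern above is entirely standard library; a self-contained sketch with a toy text standing in for the story:

```python
import re
import collections

# A toy text standing in for the full story text.
text = "The house stood on the hill. The house was old."

# \w+ grabs word characters, lower() folds case so "The" and "the" merge.
words = re.findall(r'\w+', text.lower())
common = collections.Counter(words).most_common(2)
# common == [('the', 3), ('house', 2)]
```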
Run POD
Test shape of matrix needed for POD function
num_vecs = 50; #<-- equivalent to the number of PIV snapshots (Also number of total POD modes)
vecs = np.random.random((100, num_vecs))
vecs.shape | uAll = np.concatenate((Uf.reshape(uSize[0]*uSize[1],uSize[2]), Vf.reshape(uSize[0]*uSize[1],uSize[2]), Wf.reshape(uSize[0]*uSize[1],uSize[2])), axis = 0)
uAll.shape
num_modes = 50;
modes, eig_vals = mr.compute_POD_matrices_snaps_method(uAll, list(range(num_modes)))
menergy = eig_vals/np.sum(eig_vals)
menergy_su... | runPOD-BigNeutral.ipynb | owenjhwilliams/ASIIT | mit |
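The fractional modal energy computed above (eigenvalues normalized by their sum) can be sketched with plain NumPy via the method of snapshots; this is an assumption-level sketch of what modred's compute_POD_matrices_snaps_method does internally, not its actual implementation, and the snapshot matrix here is random stand-in data:

```python
import numpy as np

# Toy stand-in for uAll: rows = spatial points, columns = snapshots.
rng = np.random.default_rng(0)
uAll = rng.random((300, 50))

# Method of snapshots: eigendecompose the (snapshots x snapshots)
# correlation matrix instead of the much larger spatial covariance.
corr = uAll.T @ uAll
eig_vals = np.linalg.eigvalsh(corr)[::-1]    # sort descending

menergy = eig_vals / eig_vals.sum()          # fractional modal energy
menergy_sum = np.cumsum(menergy)             # cumulative energy captured
```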
fig, ax = plt.subplots()
ax.bar(range(1, num_modes), menergy[1:num_modes]*100) | reload(PODutils)
Umodes, Vmodes, Wmodes = PODutils.reconstructPODmodes(modes,uSize,num_modes,3)
Wmodes.shape
#Calculate the mode coefficients
C = modes.transpose()*uAll
C.shape | runPOD-BigNeutral.ipynb | owenjhwilliams/ASIIT | mit |
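The coefficient computation above relies on np.matrix semantics, where `*` is a matrix product. With plain ndarrays the same projection needs `@` (or `.dot`); a sketch with made-up orthonormal modes:

```python
import numpy as np

# Projecting snapshots onto POD modes yields the temporal coefficients.
# Note: with plain ndarrays this must be a matrix product (@), not the
# elementwise * that only acts as matrix multiply for np.matrix objects.
rng = np.random.default_rng(1)
modes, _ = np.linalg.qr(rng.random((30, 5)))   # orthonormal mode columns
snapshots = rng.random((30, 8))

C = modes.T @ snapshots
# C.shape == (5, 8): one coefficient per mode per snapshot
```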
Plot modal energy and cumulative energy contribution | ind = np.arange(num_modes) # the x locations for the groups
width = 1 # the width of the bars
f = plt.figure()
ax = plt.gca()
ax2 = plt.twinx()
rect = ax.bar(ind,menergy[:num_modes], width, color='gray')
line = ax2.plot(ind,menergy_sum[:num_modes],'--r')
ax.set_xlabel("Mode Number",fontsize=14)
ax.set_ylabel("... | runPOD-BigNeutral.ipynb | owenjhwilliams/ASIIT | mit |
Plot some modes | reload(PODutils)
PODutils.plotPODmodes3D(X,Y,Umodes,Vmodes,Wmodes,list(range(5))) | runPOD-BigNeutral.ipynb | owenjhwilliams/ASIIT | mit |
Plot the variation of the coefficients | reload(PODutils)
PODutils.plotPODcoeff(C,list(range(6)),20) | runPOD-BigNeutral.ipynb | owenjhwilliams/ASIIT | mit |
Python Generators | # expression generator
spam = [0, 1, 2, 3, 4]
fooo = (2 ** s for s in spam) # Syntax similar to list comprehension but between parentheses
print fooo
print fooo.next()
print fooo.next()
print fooo.next()
print fooo.next()
print fooo.next()
# Generator is exhausted
print fooo.next() | advanced/0_Iterators_generators_and_coroutines.ipynb | ealogar/curso-python | apache-2.0 |
Generators are a simple and powerful tool for creating iterators.
Each iteration is computed on demand
In general terms they are more efficient than list comprehensions or loops
when the whole sequence is not traversed:
When looking for a certain element
When an exception is raised
So they save computing power and memory... | def countdown(n):
while n > 0:
yield n
n -= 1
gen_5 = countdown(5)
gen_5
# where is the sequence?
print gen_5.next()
print gen_5.next()
print gen_5.next()
print gen_5.next()
print gen_5.next()
gen_5.next()
for i in countdown(5):
print i, | advanced/0_Iterators_generators_and_coroutines.ipynb | ealogar/curso-python | apache-2.0 |
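The notebook above uses Python 2 (`gen_5.next()`, `print` statements); in Python 3 syntax the same countdown generator is driven with the built-in next(), and exhaustion raises StopIteration:

```python
def countdown(n):
    while n > 0:
        yield n       # produce a value and suspend here
        n -= 1

gen = countdown(3)
values = [next(gen), next(gen), next(gen)]   # [3, 2, 1]

try:
    next(gen)         # generator is exhausted
    exhausted = False
except StopIteration:
    exhausted = True
```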
yield makes a function a generator
the function body only executes on next (easier than implementing iteration)
it produces a value and suspends the execution of the function | # Let's see another example with yield tail -f and grep
import time
def follow(thefile):
thefile.seek(0, 2) # Go to the end of the file
while True:
line = thefile.readline()
if not line:
time.sleep(0.1) # Sleep briefly
continue
yield line
logfile = open("fi... | advanced/0_Iterators_generators_and_coroutines.ipynb | ealogar/curso-python | apache-2.0 |
using generators to build a pipeline as in Unix (tail + grep) | def grep(pattern, lines):
for line in lines:
if pattern in line:
yield line
# TODO: use a generator expression
# Set up a processing pipe : tail -f | grep "tefcon"
logfile = open("fichero.txt")
loglines = follow(logfile)
pylines = grep("python", loglines)
# nothing happens until now
# Pull re... | advanced/0_Iterators_generators_and_coroutines.ipynb | ealogar/curso-python | apache-2.0 |
Coroutines
Using yield this way we get a coroutine
the function not only returns values, it can also consume values that we send | g = g_grep("python")
g.next()
g.send("Prueba a ver si encontramos algo")
g.send("Hemos recibido python") | advanced/0_Iterators_generators_and_coroutines.ipynb | ealogar/curso-python | apache-2.0 |
Sent values are returned in (yield)
Execution as a generator function
coroutines respond to next and send | # avoid the first next call -> decorator
import functools
def coroutine(func):
def wrapper(*args, **kwargs):
cr = func(*args, **kwargs)
cr.next()
return cr
return wrapper
@coroutine
def cool_grep(pattern):
print "Looking for %s" % pattern
while True:
line = (yield)
... | advanced/0_Iterators_generators_and_coroutines.ipynb | ealogar/curso-python | apache-2.0 |
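The body of cool_grep is truncated above; a Python 3 sketch of the same priming decorator plus a plausible grep-style coroutine (the exact cool_grep body is an assumption, and it collects matches in a list here so the behavior is observable):

```python
import functools

def coroutine(func):
    """Advance a generator to its first (yield) so it is ready for send()."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        cr = func(*args, **kwargs)
        next(cr)                 # prime: run up to the first (yield)
        return cr
    return wrapper

@coroutine
def cool_grep(pattern, matches):
    while True:
        line = (yield)           # value passed to send() appears here
        if pattern in line:
            matches.append(line)

found = []
g = cool_grep("python", found)
g.send("nothing to see here")
g.send("we love python")
# found == ["we love python"]
```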
generators mostly produce values and coroutines mostly consume them
DO NOT mix the concepts to avoid blowing your mind
Coroutines are not for iterating | def countdown_bug(n):
print "Counting down from", n
while n >= 0:
newvalue = (yield n)
# If a new value got sent in, reset n with it
if newvalue is not None:
n = newvalue
else:
n -= 1
c = countdown_bug(5)
for n in c:
print n
if n == 5:
c.s... | advanced/0_Iterators_generators_and_coroutines.ipynb | ealogar/curso-python | apache-2.0 |
What has happened here?
chain coroutines together and push data through the pipe using send()
you need a source that normally is not a coroutine
you will also need pipeline sinks (end-points) that consume and process the data
don't mix the concepts too much
let's go back to the tail -f and grep
our source is tail -f | import time
def c_follow(thefile, target):
thefile.seek(0,2) # Go to the end of the file
while True:
line = thefile.readline()
if not line:
time.sleep(0.1) # Sleep briefly
else:
target.send(line)
# a sink: just print
@coroutine
def printer(name):
while Tr... | advanced/0_Iterators_generators_and_coroutines.ipynb | ealogar/curso-python | apache-2.0 |
coroutines add routing
complex arrangements of pipes, branches, merging... | if f and not f.closed:
f.close()
f = open("fichero.txt")
p = printer("uno")
c_follow(f,
broadcast([c_grep('python', p),
c_grep('hodor', p),
c_grep('hold', p)])
)
if f and not f.closed:
f.close() | advanced/0_Iterators_generators_and_coroutines.ipynb | ealogar/curso-python | apache-2.0 |
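The broadcast() used above is not defined in this excerpt; a plausible sketch, with list-collecting sinks instead of printer so the fan-out is observable (c_grep is omitted, the idea is the same):

```python
def coroutine(func):
    def wrapper(*args, **kwargs):
        cr = func(*args, **kwargs)
        next(cr)                 # prime the coroutine
        return cr
    return wrapper

@coroutine
def collector(out):
    """A sink that appends every received line to a list."""
    while True:
        out.append((yield))

@coroutine
def broadcast(targets):
    """Forward every received item to several downstream coroutines."""
    while True:
        item = (yield)
        for target in targets:
            target.send(item)

a, b = [], []
pipe = broadcast([collector(a), collector(b)])
pipe.send("hodor")
# a == ["hodor"] and b == ["hodor"]
```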
Annotating continuous data
This tutorial describes adding annotations to a :class:~mne.io.Raw object,
and how annotations are used in later stages of data processing.
:depth: 1
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and (since we won't actually analyz... | import os
from datetime import datetime
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).... | 0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
:class:~mne.Annotations in MNE-Python are a way of storing short strings of
information about temporal spans of a :class:~mne.io.Raw object. Below the
surface, :class:~mne.Annotations are :class:list-like <list> objects,
where each element comprises three pieces of information: an onset time
(in seconds), a durat... | my_annot = mne.Annotations(onset=[3, 5, 7],
duration=[1, 0.5, 0.25],
description=['AAA', 'BBB', 'CCC'])
print(my_annot) | 0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Notice that orig_time is None, because we haven't specified it. In
those cases, when you add the annotations to a :class:~mne.io.Raw object,
it is assumed that the orig_time matches the time of the first sample of
the recording, so orig_time will be set to match the recording
measurement date (raw.info['meas_date']). | raw.set_annotations(my_annot)
print(raw.annotations)
# convert meas_date (a tuple of seconds, microseconds) into a float:
meas_date = raw.info['meas_date'][0] + raw.info['meas_date'][1] / 1e6
orig_time = raw.annotations.orig_time
print(meas_date == orig_time) | 0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
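The (seconds, microseconds) tuple arithmetic above can be checked in isolation; the tuple values here are made up, standing in for raw.info['meas_date'] in MNE 0.19:

```python
# meas_date in MNE 0.19 is a (seconds, microseconds) tuple; combining the
# two fields gives a single float timestamp, as done for orig_time above.
meas_date = (1418351484, 262446)   # hypothetical example values
as_float = meas_date[0] + meas_date[1] / 1e6
# as_float is approximately 1418351484.262446
```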
If you know that your annotation onsets are relative to some other time, you
can set orig_time before you call :meth:~mne.io.Raw.set_annotations,
and the onset times will get adjusted based on the time difference between
your specified orig_time and raw.info['meas_date'], but without the
additional adjustment for raw.f... | time_format = '%Y-%m-%d %H:%M:%S.%f'
new_orig_time = datetime.utcfromtimestamp(meas_date + 50).strftime(time_format)
print(new_orig_time)
later_annot = mne.Annotations(onset=[3, 5, 7],
duration=[1, 0.5, 0.25],
description=['DDD', 'EEE', 'FFF'],
... | 0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
The three annotations appear as differently colored rectangles because they
have different description values (which are printed along the top
edge of the plot area). Notice also that colored spans appear in the small
scroll bar at the bottom of the plot window, making it easy to quickly view
where in a :class:~mne.io.... | fig.canvas.key_press_event('a') | 0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
The colored rings are clickable, and determine which existing label will be
created by the next click-and-drag operation in the main plot window. New
annotation descriptions can be added by typing the new description,
clicking the :guilabel:Add label button; the new description will be added
to the list of descriptions... | new_annot = mne.Annotations(onset=3.75, duration=0.75, description='AAA')
raw.set_annotations(my_annot + new_annot)
raw.plot(start=2, duration=6) | 0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Reading and writing Annotations to/from a file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
:class:~mne.Annotations objects have a :meth:~mne.Annotations.save method
which can write :file:.fif, :file:.csv, and :file:.txt formats (the
format to write is inferred from the file extension in the filename you
provide). Th... | raw.annotations.save('saved-annotations.csv')
annot_from_file = mne.read_annotations('saved-annotations.csv')
print(annot_from_file) | 0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Vertex client library: AutoML image classification model for batch prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb">
<img src="https://cl... | import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG | notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Tutorial
Now you are ready to start creating your own AutoML image classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in... | # client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
... | notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Model deployment for batch prediction
Now deploy the trained Vertex Model resource you created for batch prediction. This differs from deploying a Model resource for on-demand prediction.
For online prediction, you:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the En... | test_items = !gsutil cat $IMPORT_FILE | head -n2
if len(str(test_items[0]).split(",")) == 3:
_, test_item_1, test_label_1 = str(test_items[0]).split(",")
_, test_item_2, test_label_2 = str(test_items[1]).split(",")
else:
test_item_1, test_label_1 = str(test_items[0]).split(",")
test_item_2, test_label_2... | notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Make batch prediction request
Now that your batch of two test items is ready, let's do the batch request. Use this helper function create_batch_prediction_job, with the following parameters:
display_name: The human readable name for the prediction job.
model_name: The Vertex fully qualified identifier for the Model re... | BATCH_MODEL = "flowers_batch-" + TIMESTAMP
def create_batch_prediction_job(
display_name,
model_name,
gcs_source_uri,
gcs_destination_output_uri_prefix,
parameters=None,
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPL... | notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Get the predictions
When the batch prediction is done processing, the job state will be JOB_STATE_SUCCEEDED.
Finally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated at the time you made the batch prediction job, under a subfolde... | def get_latest_predictions(gcs_out_dir):
""" Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subf... | notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
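The subfolder selection logic above (truncated here) relies on the timestamped "prediction-" prefix sorting lexicographically; a pure-Python sketch with hard-coded folder listings instead of gsutil output:

```python
# Pick the latest "prediction-<timestamp>" subfolder by string comparison,
# mirroring the helper above; folder names here are invented examples.
folders = [
    "gs://bucket/out/prediction-2021-01-02T00:00:00/",
    "gs://bucket/out/prediction-2021-03-01T00:00:00/",
    "gs://bucket/out/errors/",
]
latest = ""
for folder in folders:
    subfolder = folder.split("/")[-2]
    if subfolder.startswith("prediction-") and subfolder > latest:
        latest = subfolder
# latest == "prediction-2021-03-01T00:00:00"
```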
Create Topics
We select "english" as the main language for our documents. If you want a multilingual model that supports 50+ languages, please select "multilingual" instead. | model = BERTopic(language="english")
topics, probs = model.fit_transform(docs) | notebooks/BERTopic.ipynb | MaartenGr/BERTopic | mit |
We can then extract most frequent topics: | model.get_topic_freq().head(5) | notebooks/BERTopic.ipynb | MaartenGr/BERTopic | mit |
-1 refers to all outliers and should typically be ignored. Next, let's take a look at the most frequent topic that was generated: | model.get_topic(49)[:10] | notebooks/BERTopic.ipynb | MaartenGr/BERTopic | mit |
Note that the model is stochastic, which means that the topics might differ across runs.
For a full list of support languages, see the values below: | from bertopic import languages
print(languages) | notebooks/BERTopic.ipynb | MaartenGr/BERTopic | mit |
Embedding model
You can select any model from sentence-transformers and use it instead of the preselected models by simply passing the model through
BERTopic with embedding_model: | # st_model = BERTopic(embedding_model="xlm-r-bert-base-nli-stsb-mean-tokens") | notebooks/BERTopic.ipynb | MaartenGr/BERTopic | mit |
Click here for a list of supported sentence transformers models.
Visualize Topics
After having trained our BERTopic model, we can iteratively go through perhaps a hundred topics to get a good
understanding of the topics that were extracted. However, that takes quite some time and lacks a global representation.
Instead... | model.visualize_topics() | notebooks/BERTopic.ipynb | MaartenGr/BERTopic | mit |
Visualize Topic Probabilities
The variable probabilities that is returned from transform() or fit_transform() can
be used to understand how confident BERTopic is that certain topics can be found in a document.
To visualize the distributions, we simply call: | model.visualize_distribution(probs[0]) | notebooks/BERTopic.ipynb | MaartenGr/BERTopic | mit |
Topic Reduction
Finally, we can also reduce the number of topics after having trained a BERTopic model. The advantage of doing so
is that you can decide the number of topics after knowing how many are actually created. It is difficult to
predict before training your model how many topics are in your documents a... | new_topics, new_probs = model.reduce_topics(docs, topics, probs, nr_topics=60) | notebooks/BERTopic.ipynb | MaartenGr/BERTopic | mit |
The reasoning for putting docs, topics, and probs as parameters is that these values are not saved within
BERTopic on purpose. If you were to have a million documents, it seems very inefficient to save those in BERTopic
instead of a dedicated database.
Topic Representation
When you have trained a model and viewed t... | model.update_topics(docs, topics, n_gram_range=(1, 3), stop_words="english") | notebooks/BERTopic.ipynb | MaartenGr/BERTopic | mit |
Search Topics
After having trained our model, we can use find_topics to search for topics that are similar
to an input search_term. Here, we are going to be searching for topics that closely relate to the
search term "vehicle". Then, we extract the most similar topic and check the results: | similar_topics, similarity = model.find_topics("vehicle", top_n=5); similar_topics
model.get_topic(28) | notebooks/BERTopic.ipynb | MaartenGr/BERTopic | mit |
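find_topics presumably ranks topics by embedding similarity between the encoded search term and each topic representation; a toy sketch of that idea with cosine similarity over made-up 2-d topic embeddings (this is an illustration of the ranking principle, not BERTopic's internals):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-d vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

search = np.array([1.0, 0.0])                  # embedded search term
topic_embeddings = {0: np.array([0.9, 0.1]),   # "vehicle"-like topic
                    1: np.array([0.0, 1.0])}   # unrelated topic

ranked = sorted(topic_embeddings,
                key=lambda t: cosine(search, topic_embeddings[t]),
                reverse=True)
# ranked[0] == 0: the most similar topic comes first
```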
Model serialization
The model and its internal settings can easily be saved. Note that the documents and embeddings will not be saved. However, UMAP and HDBSCAN will be saved. | # Save model
model.save("my_model")
# Load model
my_model = BERTopic.load("my_model") | notebooks/BERTopic.ipynb | MaartenGr/BERTopic | mit |
Flight Metadata
There's a lot of information contained in the two flight metadata files (websource and track). | flight_websource.head(1)
flight_track.head(1) | binder/gliding-data-starter.ipynb | ezgliding/goigc | apache-2.0 |
The most useful information comes from the websource file as this information is passed directly by the pilot when submitting the flight to the online competition. Things like Country or Region provide useful statistics on how popular the sport is in different areas. As an example, what are the most popular regions con... | flight_websource.groupby(['Country', 'Region'])['Region'].value_counts().sort_values(ascending=False).head(3)
flight_websource.groupby(['Country', 'Year', 'Takeoff'])['Takeoff'].value_counts().sort_values(ascending=False).head(3) | binder/gliding-data-starter.ipynb | ezgliding/goigc | apache-2.0 |
The three regions above match the Alps area, which is an expected result given this is a gliding Meca. The second result shows Vinon as the most popular takeoff in 2016, a big club also in the Southern Alps. But it's interesting to notice that more recently a club near Montpellier took over as the one with the most act... | flight_websource['DayOfWeek'] = flight_websource.apply(lambda r: datetime.datetime.strptime(r['Date'], "%Y-%m-%dT%H:%M:%SZ").strftime("%A"), axis=1)
flight_websource.groupby(['DayOfWeek'])['DayOfWeek'].count().plot.bar()
flight_websource['Month'] = flight_websource.apply(lambda r: datetime.datetime.strptime(r['Date'],... | binder/gliding-data-starter.ipynb | ezgliding/goigc | apache-2.0 |
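The weekday extraction applied row-wise above boils down to a strptime/strftime round trip on the date format used in the websource file:

```python
import datetime

# Parse the ISO-style date string used in the flight metadata and
# pull out the weekday name, as the DayOfWeek column does above.
d = datetime.datetime.strptime("2016-07-10T00:00:00Z", "%Y-%m-%dT%H:%M:%SZ")
day = d.strftime("%A")
# day == "Sunday"
```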
Merging Data
We can get additional flight metadata information from the flight_track file, but you can expect this data to be less reliable as it's often the case that the metadata in the flight recorder is not updated before a flight. It is very useful though to know more about what type of recorder was used, calibrat... | flight_all = pd.merge(flight_websource, flight_track, how='left', on='ID')
flight_all.head(1) | binder/gliding-data-starter.ipynb | ezgliding/goigc | apache-2.0 |
Flight Phases
In addition to the flight metadata provided in the files above, by analysing the GPS flight tracks we can generate a lot more interesting data.
Here we take a look at flight phases, calculated using the goigc tool. As described earlier to travel further glider pilots use thermals to gain altitude and then... | flight_phases.head(1) | binder/gliding-data-starter.ipynb | ezgliding/goigc | apache-2.0 |
Data Preparation
As a quick example of what is possible with this kind of data, let's try to map all circling phases as a HeatMap.
First we need to do some treatment of the data: convert coordinates from radians to degrees, filter out unreasonable values (climb rates above 15m/s are due to errors in the recording d... | phases = pd.merge(flight_phases, flight_websource[['TrackID', 'Distance', 'Speed']], on='TrackID')
phases['Lat'] = np.rad2deg(phases['CentroidLatitude'])
phases['Lng'] = np.rad2deg(phases['CentroidLongitude'])
phases_copy = phases[(phases.Type == 5) & (phases.AvgVario < 10) & (phases.AvgVario > 2)].copy()
phases_copy.head(2)
#phas... | binder/gliding-data-starter.ipynb | ezgliding/goigc | apache-2.0 |
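The conversion-and-filter step above can be exercised on its own; the phase values below are invented, but the rad2deg conversion and the climb-rate bounds mirror the preparation code:

```python
import numpy as np
import pandas as pd

# Radians-to-degrees conversion plus climb-rate filtering, as in the
# preparation step above; the values here are made up.
phases = pd.DataFrame({
    'CentroidLatitude':  [0.82, 0.80],
    'CentroidLongitude': [0.10, 0.11],
    'AvgVario':          [3.5, 25.0],   # 25 m/s is clearly a recording error
})
phases['Lat'] = np.rad2deg(phases['CentroidLatitude'])
phases['Lng'] = np.rad2deg(phases['CentroidLongitude'])

good = phases[(phases.AvgVario > 2) & (phases.AvgVario < 10)].copy()
# only the first row survives the filter
```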
Visualization
Once we have the data ready we can visualize it over a map. We rely on folium for this. | # This is a workaround for this known issue:
# https://github.com/python-visualization/folium/issues/812#issuecomment-582213307
!pip install git+https://github.com/python-visualization/branca
!pip install git+https://github.com/sknzl/folium@update-css-url-to-https
import folium
from folium import plugins
from folium i... | binder/gliding-data-starter.ipynb | ezgliding/goigc | apache-2.0 |
Single HeatMap | # we use a smaller sample to improve the visualization
# a better alternative is to group entries by CellID, an example of this will be added later
phases_single = phases_copy.sample(frac=0.01, random_state=1)
m_5 = folium.Map(location=[47.06318, 5.41938], tiles='stamen terrain', zoom_start=7)
HeatMap(
phases_singl... | binder/gliding-data-starter.ipynb | ezgliding/goigc | apache-2.0 |
HeatMap over Time
Another cool possibility is to visualize the same data over time.
In this case we're grouping weekly and playing the data over one year.
Both the most popular areas and times of the year are pretty clear from this animation. | m_5 = folium.Map(location=[47.06318, 5.41938], tiles='stamen terrain', zoom_start=7)
groups = phases_copy.Group.sort_values().unique()
data = []
for g in groups:
data.append(phases_copy.loc[phases_copy.Group==g,['Group','Lat','Lng','AvgVario']].groupby(['Lat','Lng']).sum().reset_index().values.tolist())
HeatM... | binder/gliding-data-starter.ipynb | ezgliding/goigc | apache-2.0 |
0.2 View graph in TensorBoard | !python -m model_inspect --model_name={MODEL} --logdir=logs &> /dev/null
%load_ext tensorboard
%tensorboard --logdir logs | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
1. inference
1.1 Benchmark network latency
There are two types of latency:
network latency and end-to-end latency.
network latency: from the first conv op to the network class and box prediction.
end-to-end latency: from image preprocessing, network, to the final postprocessing to generate a annotated new image. | # benchmark network latency
!python -m tf2.inspector --mode=benchmark --model_name={MODEL} --hparams="mixed_precision=true" --only_network
# With colab + Tesla T4 GPU, here are the batch size 1 latency summary:
# D0 (AP=33.5): 14.9ms, FPS = 67.2 (batch size 8 FPS=)
# D1 (AP=39.6): 22.7ms, FPS = 44.1 (batch siz... | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
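The FPS figures quoted in the comments are simply the reciprocal of the per-image latency:

```python
# e.g. the D0 network latency quoted above (batch size 1)
latency_ms = 14.9
fps = 1000.0 / latency_ms
# fps is roughly 67, matching the comment's FPS = 67.2 after rounding
```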
# Benchmark end-to-end latency (preprocess + network + postprocess).
#
# With colab + Tesla T4 GPU, here are the batch size 1 latency summary:
# D0 (AP=33.5): 22.7ms, FPS = 43.1 (batch size 4, FPS=)
# D1 (AP=39.6): 34.3ms, FPS = 29.2 (batch size 4, FPS=)
# D2 (AP=43.0): 42.5ms, FPS = 23.5 (batch size 4, FPS=)... | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
1.3 Inference images. | # first export a saved model.
saved_model_dir = 'savedmodel'
!rm -rf {saved_model_dir}
!python -m tf2.inspector --mode=export --model_name={MODEL} \
--model_dir={ckpt_path} --saved_model_dir={saved_model_dir}
# Then run saved_model_infer to do inference.
# Notably: batch_size, image_size must be the same as when it ... | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
1.4 Inference video | # step 0: download video
video_url = 'https://storage.googleapis.com/cloud-tpu-checkpoints/efficientdet/data/video480p.mov' # @param
!wget {video_url} -O input.mov
# Step 1: export model
saved_model_dir = 'savedmodel'
!rm -rf {saved_model_dir}
!python -m tf2.inspector --mode=export \
--model_name={MODEL} --model_d... | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
2. TFlite
2.1 COCO evaluation on validation set. | if 'val2017' not in os.listdir():
!wget http://images.cocodataset.org/zips/val2017.zip
!wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
!unzip -q val2017.zip
!unzip annotations_trainval2017.zip
!mkdir tfrecord
!PYTHONPATH=".:$PYTHONPATH" python dataset/create_coco_tfrecord.py \... | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
2.2 TFlite export INT8 model | # In case you need to specify different image size or batch size or #boxes, then
# you need to export a new saved model and run the inferernce.
serve_image_out = 'serve_image_out'
!mkdir {serve_image_out}
saved_model_dir = 'savedmodel'
!rm -rf {saved_model_dir}
# # Step 1: export model
!python -m tf2.inspector --mode... | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
2.3 Compile EdgeTPU model (Optional) | # install edgetpu compiler
!curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
!echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
!sudo apt-get update
!sudo apt-get install edgetpu-compiler | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
The EdgeTPU has 8MB of SRAM for caching model parameters (more info). This means that for models that are larger than 8MB, inference time will be increased in order to transfer over model parameters. One way to avoid this is Model Pipelining - splitting the model into segments that can have a dedicated EdgeTPU. This ca... | NUMBER_OF_TPUS = 1
!edgetpu_compiler {saved_model_dir}/int8.tflite --num_segments=$NUMBER_OF_TPUS | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
3. COCO evaluation
3.1 COCO evaluation on validation set. | if 'val2017' not in os.listdir():
!wget http://images.cocodataset.org/zips/val2017.zip
!wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
!unzip -q val2017.zip
!unzip annotations_trainval2017.zip
!mkdir tfrecord
!python -m dataset.create_coco_tfrecord \
--image_dir=val2017 \... | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
4. Training EfficientDets on PASCAL.
4.1 Prepare data | # Get pascal voc 2012 trainval data
import os
if 'VOCdevkit' not in os.listdir():
!wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
!tar xf VOCtrainval_06-Nov-2007.tar
!mkdir tfrecord
!python -m dataset.create_pascal_tfrecord \
--data_dir=VOCdevkit --year=VOC2007 --output_p... | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
4.2 Train Pascal VOC 2007 from ImageNet checkpoint for Backbone. | # Train efficientdet from scratch with backbone checkpoint.
backbone_name = {
'efficientdet-d0': 'efficientnet-b0',
'efficientdet-d1': 'efficientnet-b1',
'efficientdet-d2': 'efficientnet-b2',
'efficientdet-d3': 'efficientnet-b3',
'efficientdet-d4': 'efficientnet-b4',
'efficientdet-d5': 'efficien... | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
4.3 Train Pascal VOC 2007 from COCO checkpoint for the whole net. | # generating train tfrecord is large, so we skip the execution here.
import os
if MODEL not in os.listdir():
download(MODEL)
!mkdir /tmp/model_dir/
# key option: use --ckpt rather than --backbone_ckpt.
!python -m tf2.train --mode=traineval \
--train_file_pattern=tfrecord/{file_pattern} \
--val_file_pattern=t... | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
4.4 View tensorboard for loss and accuracy. | %load_ext tensorboard
%tensorboard --logdir /tmp/model_dir/
# Notably, this is just a demo with almost zero accuracy due to very limited
# training steps, but we can see finetuning has smaller loss than training
# from scratch at the beginning. | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
5. Export to onnx | !pip install tf2onnx
!python -m tf2.inspector --mode=export --model_name={MODEL} --model_dir={MODEL} --saved_model_dir={saved_model_dir} --hparams="nms_configs.method='hard', nms_configs.iou_thresh=0.5, nms_configs.sigma=0.0"
!python -m tf2onnx.convert --saved-model={saved_model_dir} --output={saved_model_dir}/model.... | efficientdet/tf2/tutorial.ipynb | google/automl | apache-2.0 |
Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts. | data = np.loadtxt('yearssn.dat')
years = data[:, 0]
ssc = data[:, 1]
assert len(years)==315
assert years.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float) | assignments/assignment04/MatplotlibEx01.ipynb | bjshaw/phys202-2015-work | mit |
Make a line plot showing the sunspot count as a function of year.
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data. | fig = plt.figure(figsize=(150,10))
plt.plot(years, ssc)
plt.title('Sun Spot Count vs. Time')
plt.xlabel('Time(years)')
plt.ylabel('Sun Spot Count')
plt.tick_params(top = False, right=False)
plt.xlim(1700.5, 2015)
assert True # leave for grading | assignments/assignment04/MatplotlibEx01.ipynb | bjshaw/phys202-2015-work | mit |
Describe the choices you have made in building this visualization and how they make it effective.
With the requirement of the maximum slope, using just one graph is not effective for this particular data set. The aspect ratio makes the graph unreadable without zooming in a lot. However, creating four separate graphs ... |
f, ax = plt.subplots(2,2, sharey=True, figsize=(25,7))
for n in range(1,5):
plt.subplot(2,2,n)
plt.tick_params(top=False, right=False)
plt.tight_layout()
for m in range(1,5):
if m % 2 != 0:
plt.subplot(2,2,m)
plt.ylabel('Sun Spot Count')
if m == 3 or m == 4:
plt.subplot(2,2,m)
... | assignments/assignment04/MatplotlibEx01.ipynb | bjshaw/phys202-2015-work | mit |
Log loss over image size
Looking at the effect of image size on log loss. Want to see how much it affects our ability to classify an image when it has been radically resized. | N = y.shape[0]
logloss = lambda x: -(1./N)*np.log(x[0][x[1]])
# iterate over all the images and make a list of log losses
ilabels = np.where(labels)[1]
losses = []
for r,l in zip(y,ilabels):
losses.append(logloss([r,l]))
losses = np.array(losses) | notebooks/preliminary_data_analysis/Log loss over image size.ipynb | Neuroglycerin/neukrill-net-work | mit |
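The per-sample loop above can also be written without the loop, using NumPy fancy indexing. A minimal sketch on toy data — `y_demo` and `ilabels_demo` are hypothetical stand-ins for the notebook's `y` and `ilabels`:

```python
import numpy as np

# toy stand-ins for the notebook's predicted probabilities and true labels
y_demo = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1]])
ilabels_demo = np.array([0, 1])

N = y_demo.shape[0]
# pick each row's probability at its true label, then apply the same formula
losses_demo = -(1.0 / N) * np.log(y_demo[np.arange(N), ilabels_demo])
```

This gives the same values as building the list element by element, in one vectorized step.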
Getting an array of original image sizes: | sizes = []
for i in dataset.X:
sizes.append(np.sqrt(i.size))
sizes = np.array(sizes) | notebooks/preliminary_data_analysis/Log loss over image size.ipynb | Neuroglycerin/neukrill-net-work | mit |
Then we can just make a scatter plot: | plt.scatter(sizes,losses)
plt.grid() | notebooks/preliminary_data_analysis/Log loss over image size.ipynb | Neuroglycerin/neukrill-net-work | mit |
Doesn't appear to be a clear relationship between image size and log loss. We can look at the average log loss in a rolling average over the above plot and see where most of the log loss is: | np.hanning(50)*np.ones(50)
ravg = np.zeros(450)
for i in range(450):
s = losses[(sizes > i-50) & (sizes < i+50)]
# apply window function
s = s*np.hanning(s.size)
if s.size != 0:
ravg[i] = np.mean(s)
else:
ravg[i] = 0
plt.plot(range(450),ravg) | notebooks/preliminary_data_analysis/Log loss over image size.ipynb | Neuroglycerin/neukrill-net-work | mit |
So it looks like we are losing more log loss per image at higher image sizes. However, most of log loss is going to be at the lower image sizes because there are more images there. If we do a rolling sum we'll see this: | rsum = np.zeros(450)
for i in range(450):
s = losses[(sizes > i-30) & (sizes < i+30)]
# apply window function
s = s*np.hanning(s.size)
if s.size != 0:
rsum[i] = np.sum(s)
else:
rsum[i] = 0
plt.plot(range(450),rsum) | notebooks/preliminary_data_analysis/Log loss over image size.ipynb | Neuroglycerin/neukrill-net-work | mit |
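The windowed loops above can also be expressed as a binned mean using `np.histogram` with weights. A sketch on synthetic data — `sizes_demo` and `losses_demo` are hypothetical stand-ins for the notebook's arrays:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes_demo = rng.uniform(20, 450, 1000)     # hypothetical image sizes
losses_demo = rng.exponential(0.1, 1000)    # hypothetical per-image log losses

bins = np.arange(0, 460, 10)
counts, _ = np.histogram(sizes_demo, bins)
sums, _ = np.histogram(sizes_demo, bins, weights=losses_demo)

# mean loss per 10-unit size bin; empty bins are left at zero
mean_loss = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
```

Unlike the sliding-window loop, the bins here don't overlap, so the curve is coarser but cheaper to compute.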
So, cumulatively we should probably worry more about the lower image sizes anyway.
Linear regression
Before finishing this notebook should probably run linear regression on this data and see if we can fit a line saying that image size is correlated with log loss. Using sklearn due to familiarity. | import sklearn.linear_model
lreg = sklearn.linear_model.LinearRegression()
lreg.fit(sizes.reshape(-1,1),losses)
plt.scatter(sizes,losses)
plt.plot(sizes,lreg.predict(sizes.reshape(-1,1))) | notebooks/preliminary_data_analysis/Log loss over image size.ipynb | Neuroglycerin/neukrill-net-work | mit |
So there is a very small correlation between image size and log loss. | lreg.coef_[0] | notebooks/preliminary_data_analysis/Log loss over image size.ipynb | Neuroglycerin/neukrill-net-work | mit |
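To put a number on how weak the relationship is, one could also look at the Pearson correlation coefficient. A sketch on toy data (a weak linear trend plus noise, standing in for the real sizes and losses; `scipy.stats.pearsonr` would give the same value plus a p-value):

```python
import numpy as np

rng = np.random.default_rng(1)
sizes_demo = rng.uniform(20, 450, 500)                       # hypothetical sizes
losses_demo = 1e-5 * sizes_demo + rng.normal(0, 0.05, 500)   # weak trend + noise

# off-diagonal entry of the 2x2 correlation matrix is Pearson's r
r = np.corrcoef(sizes_demo, losses_demo)[0, 1]
```

An `r` close to zero, as here, matches the small regression coefficient seen above.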
Does the parallel model do better?
We've trained a model with parallel streams which should in theory do better at this kind of task; as it has more information about the size of different classes. | settings = neukrill_net.utils.Settings("settings.json")
run_settings = neukrill_net.utils.load_run_settings(
"run_settings/parallel_conv.json", settings, force=True)
model = pylearn2.utils.serial.load(run_settings['pickle abspath'])
# loading the data
yaml_string = neukrill_net.utils.format_yaml(run_settings, set... | notebooks/preliminary_data_analysis/Log loss over image size.ipynb | Neuroglycerin/neukrill-net-work | mit |
The scatter plot looks slightly better at the higher values: | plt.scatter(sizes,losses)
plt.grid()
ravg = np.zeros(450)
for i in range(450):
s = losses[(sizes > i-50) & (sizes < i+50)]
# apply window function
s = s*np.hanning(s.size)
if s.size != 0:
ravg[i] = np.mean(s)
else:
ravg[i] = 0
plt.plot(range(450),ravg)
rsum = np.zeros(450)
for i in... | notebooks/preliminary_data_analysis/Log loss over image size.ipynb | Neuroglycerin/neukrill-net-work | mit |
Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels. | from sklearn.preprocessing import LabelBinarizer
label_binarizer = LabelBinarizer()
labels_vecs = label_binarizer.fit_transform(labels) | transfer-learning/.ipynb_checkpoints/Transfer_Learning-checkpoint.ipynb | swirlingsand/deep-learning-foundations | mit |
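`LabelBinarizer` maps each class label to a one-hot row over the sorted class names. A NumPy-only sketch of the same transform, on hypothetical labels:

```python
import numpy as np

labels_demo = np.array(['cat', 'dog', 'dog', 'bird'])
classes = np.unique(labels_demo)   # sorted: ['bird', 'cat', 'dog']

# compare every label against every class name; each row gets exactly one 1
one_hot = (labels_demo[:, None] == classes[None, :]).astype(int)
```

Each row is the one-hot encoding of the corresponding label, with columns in sorted class order — the same convention scikit-learn uses.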
Multi-worker training with Estimator
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/multi_worker_with_estimator"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td><a target="_blank" href="ht... | import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
import os, json | site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb | tensorflow/docs-l10n | apache-2.0 |
Note: Starting with TF 2.4, the multi-worker mirrored strategy fails with Estimator if run with eager execution enabled (the default). The error in TF 2.4 is TypeError: cannot pickle '_thread.lock' object. See issue #46556 for details. The workaround is to disable eager execution. | tf.compat.v1.disable_eager_execution() | site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb | tensorflow/docs-l10n | apache-2.0 |
Input function
This tutorial uses the MNIST dataset from TensorFlow Datasets. The code here is similar to the multi-GPU training tutorial, with one key difference: when using Estimator for multi-worker training, the dataset must be sharded by the number of workers for the model to converge. The input data is sharded by worker index, so that each worker processes 1/num_workers distinct portions of the dataset. | BUFFER_SIZE = 10000
BATCH_SIZE = 64
def input_fn(mode, input_context=None):
datasets, info = tfds.load(name='mnist',
with_info=True,
as_supervised=True)
mnist_dataset = (datasets['train'] if mode == tf.estimator.ModeKeys.TRAIN else
... | site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb | tensorflow/docs-l10n | apache-2.0 |
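The sharding described above — in TensorFlow, typically `dataset.shard(num_workers, worker_index)` — simply keeps every `num_workers`-th example, starting at the worker's own index. A plain-Python model of that selection (no TensorFlow required; the function name is ours, not an API):

```python
def shard(examples, num_shards, index):
    """Mimic tf.data.Dataset.shard: keep elements whose position mod num_shards == index."""
    return examples[index::num_shards]

examples = list(range(10))
worker_0 = shard(examples, num_shards=2, index=0)   # even positions
worker_1 = shard(examples, num_shards=2, index=1)   # odd positions
```

Together the workers cover the dataset exactly once, which is what makes the gradient averaging across workers behave like training on the full dataset.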
Another reasonable way to make training converge is to shuffle the dataset with a distinct seed on each worker.
Multi-worker configuration
One of the key differences compared to the multi-GPU training tutorial is the multi-worker setup. The TF_CONFIG environment variable is the standard way to specify the cluster configuration to each worker that is part of the cluster.
TF_CONFIG has two components: cluster and task. cluster provides information about the entire cluster, namely the workers and parameter servers in it. task provides information about the current task... | LEARNING_RATE = 1e-4
def model_fn(features, labels, mode):
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Den... | site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb | tensorflow/docs-l10n | apache-2.0 |
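As a concrete illustration of the TF_CONFIG variable described above, each worker process would set something like the following before starting (the hostnames, ports, and worker count here are made-up example values):

```python
import json
import os

tf_config = {
    "cluster": {
        # every worker in the cluster, by address (hypothetical hosts)
        "worker": ["host1.example.com:12345", "host2.example.com:23456"]
    },
    # which member of the cluster this particular process is
    "task": {"type": "worker", "index": 0}
}
os.environ["TF_CONFIG"] = json.dumps(tf_config)
```

The cluster section is identical on every worker; only task.index differs, and it must be set before TensorFlow is imported and the strategy is created.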
Note: In this example the learning rate is fixed, but in practice you may need to adjust the learning rate based on the global batch size.
MultiWorkerMirroredStrategy
To train the model, use an instance of tf.distribute.experimental.MultiWorkerMirroredStrategy. MultiWorkerMirroredStrategy creates copies of all variables in the model's layers on each device across all workers. It uses CollectiveOps, a TensorFlow op for collective communication, to aggregate gradients and keep the variables... | strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() | site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb | tensorflow/docs-l10n | apache-2.0 |
Train and evaluate the model
Next, specify the distribution strategy in the estimator's RunConfig, and train and evaluate with tf.estimator.train_and_evaluate. This tutorial distributes only the training, by specifying the strategy via train_distribute. You can also distribute the evaluation by setting eval_distribute. | config = tf.estimator.RunConfig(train_distribute=strategy)
classifier = tf.estimator.Estimator(
model_fn=model_fn, model_dir='/tmp/multiworker', config=config)
tf.estimator.train_and_evaluate(
classifier,
train_spec=tf.estimator.TrainSpec(input_fn=input_fn),
eval_spec=tf.estimator.EvalSpec(input_fn=inp... | site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb | tensorflow/docs-l10n | apache-2.0 |
If you examine the source code of sklearn.naive_bayes.GaussianNB,
you'll see that internally it finds a best-fit Gaussian for each
distribution, and uses these as a smooth description of each distribution.
We can use the internals of GaussianNB to visualize the distributions
it uses: | from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB().fit(X, y)
print(gnb.theta_) # centers of the distributions
print(gnb.sigma_) # widths of the distributions
# create a grid on which to evaluate the distributions
grid = np.linspace(-3, 6, 100)
xgrid, ygrid = np.meshgrid(grid, grid)
Xgrid = np.vstack([xgrid... | AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
When a new item is to be classified, its probability is evaluated
for each distribution, and the distribution with the highest
probability wins. We can see now why this is called "naive". What
if the distribution is not well-fit by an uncorrelated Gaussian?
In the following, we'll develop a classifier which solves th... | import os
DATA_HOME = os.path.abspath('../data/sdss_colors')
import numpy as np
train_data = np.load(os.path.join(DATA_HOME,
'sdssdr6_colors_class_train.npy'))
test_data = np.load(os.path.join(DATA_HOME,
'sdssdr6_colors_class.200000.npy')) | AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
Setting Up the Data | # set the number of training points: using all points leads to a very
# long running time. We'll start with 10000 training points. This
# can be increased if desired.
Ntrain = 10000
#Ntrain = len(train_data)
# Split training data into training and cross-validation sets
np.random.seed(0)
np.random.shuffle(train_data)... | AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
Just for good measure, let's plot the first two dimensions of the data
to see a bit of what we're working with: | pl.scatter(X_train[:, 0], X_train[:, 1], c=y_train, s=4, linewidths=0)
pl.xlim(-2, 5)
pl.ylim(-1, 3) | AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
We have training distributions which are fairly well-separated.
Note, though, that these distributions are not well-approximated
by a single Gaussian! Still, Gaussian Naive Bayes can be a
useful classifier.
Exercise 1: Recreating Gaussian Naive Bayes
Gaussian Naive Bayes is a very fast estimator, and predicted labels ... | from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(X_train, y_train)
y_gnb = gnb.predict(X_crossval) | AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
Part 1
Here we will use Gaussian Mixture Models to duplicate our Gaussian
Naive Bayes results from earlier. You'll create two sklearn.gmm.GMM()
classifier instances, named clf_0 and clf_1. Each should be
initialized with a single component, and diagonal covariance.
(hint: look at the doc string for sklearn.gmm.GMM to... | from sklearn.mixture import gmm
# Objects to create:
# - clf_0 : trained on the portion of the training data with y == 0
# - clf_1 : trained on the portion of the training data with y == 1
| AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
If the notebook is within the tutorial directory structure,
the following command will load the solution: | %load soln/01-01.py
# 01-01.py
clf_0 = gmm.GMM(1, 'diag')
i0 = (y_train == 0)
clf_0.fit(X_train[i0])
clf_1 = gmm.GMM(1, 'diag')
i1 = (y_train == 1)
clf_1.fit(X_train[i1])
| AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
Part 2
Next we must construct the prior. The prior is the fraction of training
points of each type. | # variables to compute:
# - prior0 : fraction of training points with y == 0
# - prior1 : fraction of training points with y == 1
| AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
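One way to compute those fractions is a boolean mean over the training labels. A sketch with hypothetical labels standing in for the notebook's y_train:

```python
import numpy as np

y_train_demo = np.array([0, 0, 0, 1, 1])   # stand-in for the real y_train

prior0 = np.mean(y_train_demo == 0)   # fraction of class-0 points
prior1 = np.mean(y_train_demo == 1)   # fraction of class-1 points
```

Because every point belongs to exactly one of the two classes, the two priors sum to 1.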
If the notebook is within the tutorial directory structure,
the following command will load the solution: | %load soln/01-02.py | AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
Part 3
Now we use the prior and the classifiation to compute the log-likelihoods
of the cross-validation points. The log likelihood for a test point x
is given by
logL(x) = clf.score(x) + log(prior)
You can use the function np.log() to compute the logarithm of the prior. | # variables to compute:
# logL : array, shape = (2, Ncrossval)
# logL[0] is the log-likelihood for y == 0
# logL[1] is the log-likelihood for y == 1
| AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
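Putting the formula together on toy numbers: the arrays below are hypothetical per-sample log-likelihoods standing in for what clf_0.score(x) and clf_1.score(x) would return, combined with the log-priors and decided by argmax:

```python
import numpy as np

# hypothetical per-sample log-likelihoods for 3 cross-validation points
ll0 = np.array([-1.0, -3.0, -0.5])   # under the y == 0 model
ll1 = np.array([-2.0, -1.0, -4.0])   # under the y == 1 model
prior0, prior1 = 0.7, 0.3

# logL[0] is the log-likelihood for y == 0, logL[1] for y == 1
logL = np.vstack([ll0 + np.log(prior0),
                  ll1 + np.log(prior1)])
y_pred = np.argmax(logL, 0)   # class with the larger posterior wins
```

Here the middle sample is the only one whose likelihood under the y == 1 model outweighs its prior penalty, so y_pred comes out [0, 1, 0].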
If the notebook is within the tutorial directory structure,
the following command will load the solution: | %load soln/01-03.py | AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
Once logL is computed, the predicted value for each sample
is the index with the largest log-likelihood. | y_pred = np.argmax(logL, 0) | AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
Comparing to GNB
Now we compare our predicted labels to y_gnb, the labels we
printed above. We'll use the built-in classification
report function in sklearn.metrics. This computes the precision,
recall, and f1-score for each class. | from sklearn import metrics
print "-----------------------------------------------------"
print "One-component Gaussian Mixture:"
print metrics.classification_report(y_crossval, y_pred,
target_names=['stars', 'QSOs'])
print "-----------------------------------------------------"
pr... | AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
In theory, the results of these two should be identical. In reality, the
two algorithms approach the fits differently, leading to slightly different
results. The precision, recall, and F1-score should match to within ~0.01.
If this is the case, then we can go on and experiment with a more complicated
model.
Exercise ... | # finish this function. For n_components=1 and
# covariance_type='diag', it should give results
# identical to what we saw above.
# the function should return the predicted
# labels y_pred.
def GMMBayes(X_test, n_components, covariance_type):
pass | AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |
If the notebook is within the tutorial directory structure,
the following command will load the solution: | %load soln/01-04.py | AstroML/notebooks/10_exercise01.ipynb | diego0020/va_course_2015 | mit |