So now let's plot the results: how does the sentiment of Lovecraft's story change over the course of the book?
df.cum_sum.plot(figsize=(12, 5), color='r',
                title='Sentiment Polarity cumulative summation for HP Lovecraft\'s The Shunned House')
plt.xlabel('Sentence number')
plt.ylabel('Cumulative sum of sentiment polarity')
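The cum_sum column itself is computed in a cell not shown in this excerpt; presumably something along these lines (a sketch, assuming the per-sentence polarity column used later in this notebook):
df['cum_sum'] = df.polarity.cumsum()  # running total of per-sentence polarity (assumed helper, not from the original)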
textblob_lovecraft.ipynb
dagrha/textual-analysis
mit
The climax of Lovecraft's story appears to be around sentence 255 or so. Things really drop off at that point and get dark, according to the TextBlob sentiment analysis. What does the dataframe look like?
df.head()
textblob_lovecraft.ipynb
dagrha/textual-analysis
mit
Let's get some basic statistical information about sentence sentiments:
df.describe()
textblob_lovecraft.ipynb
dagrha/textual-analysis
mit
For fun, let's just see what TextBlob thinks are the most negatively polar sentences in the short story:
for i in df[df.polarity < -0.5].index:
    print i, tb.sentences[i]

words = re.findall(r'\w+', open('lovecraft.txt').read().lower())
collections.Counter(words).most_common(10)
textblob_lovecraft.ipynb
dagrha/textual-analysis
mit
Let's take a quick peek at word frequencies using the re and collections libraries. Here we'll use Counter() and its most_common() method to return a list of tuples of the most common words in the story:
words = re.findall(r'\w+', ushunned.lower())
common = collections.Counter(words).most_common()
df_freq = pd.DataFrame(common, columns=['word', 'freq'])
df_freq.set_index('word').head()
textblob_lovecraft.ipynb
dagrha/textual-analysis
mit
Run POD
Test the shape of the matrix needed for the POD function:
num_vecs = 50  # <-- equivalent to the number of PIV snapshots (also the number of total POD modes)
vecs = np.random.random((100, num_vecs))
vecs.shape
uAll = np.concatenate((Uf.reshape(uSize[0]*uSize[1], uSize[2]),
                       Vf.reshape(uSize[0]*uSize[1], uSize[2]),
                       Wf.reshape(uSize[0]*uSize[1], uSize[2])), axis=0)
uAll.shape

num_modes = 50
modes, eig_vals = mr.compute_POD_matrices_snaps_method(uAll, list(range(num_modes)))

menergy = eig_vals / np.sum(eig_vals)
menergy_sum = np.zeros(len(menergy))
for i in range(len(menergy)):
    # include mode i itself; the original sliced menergy[:i], which drops the
    # current mode and leaves menergy_sum[-1] short of 1.0
    menergy_sum[i] = np.sum(menergy[:i + 1])
menergy_sum[-1]
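As an aside, the cumulative-energy loop above can be collapsed into a single vectorized call; a sketch in plain NumPy:
menergy_sum = np.cumsum(menergy)  # cumulative fraction of energy per mode
menergy_sum[-1]  # ~1.0, since the mode energies were normalized above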
runPOD-BigNeutral.ipynb
owenjhwilliams/ASIIT
mit
fig, ax = plt.subplots()
# x and y must have matching lengths, so skip mode 0 in both
ax.bar(range(1, num_modes), menergy[1:num_modes] * 100)
reload(PODutils)
Umodes, Vmodes, Wmodes = PODutils.reconstructPODmodes(modes, uSize, num_modes, 3)
Wmodes.shape

# Calculate the mode coefficients
C = modes.transpose() * uAll
C.shape
runPOD-BigNeutral.ipynb
owenjhwilliams/ASIIT
mit
Plot the modal energy and the cumulative energy contribution
ind = np.arange(num_modes)  # the x locations for the groups
width = 1                   # the width of the bars

f = plt.figure()
ax = plt.gca()
ax2 = plt.twinx()
rect = ax.bar(ind, menergy[:num_modes], width, color='gray')
line = ax2.plot(ind, menergy_sum[:num_modes], '--r')
ax.set_xlabel("Mode Number", fontsize=14)
ax.set_ylabel("Scaled Mode Energy", fontsize=14)
ax2.set_ylabel("Integrated Energy", fontsize=14, color='red')
runPOD-BigNeutral.ipynb
owenjhwilliams/ASIIT
mit
Plot some modes
reload(PODutils)
PODutils.plotPODmodes3D(X, Y, Umodes, Vmodes, Wmodes, list(range(5)))
runPOD-BigNeutral.ipynb
owenjhwilliams/ASIIT
mit
Plot the variation of the coefficients
reload(PODutils)
PODutils.plotPODcoeff(C, list(range(6)), 20)
runPOD-BigNeutral.ipynb
owenjhwilliams/ASIIT
mit
Python Generators
# expression generator
spam = [0, 1, 2, 3, 4]
fooo = (2 ** s for s in spam)  # Syntax similar to list comprehension but between parentheses
print fooo
print fooo.next()
print fooo.next()
print fooo.next()
print fooo.next()
print fooo.next()
# Generator is exhausted; this call raises StopIteration
print fooo.next()
advanced/0_Iterators_generators_and_coroutines.ipynb
ealogar/curso-python
apache-2.0
Generators are a simple and powerful tool for creating iterators. Each value is computed on demand, so in general terms they are more efficient than list comprehensions or loops whenever the whole sequence is not traversed:
- when looking for a certain element
- when an exception is raised
They therefore save computing power and memory, and are commonly used for I/O and for big amounts of data (e.g. DB queries). The key is yield.
def countdown(n):
    while n > 0:
        yield n
        n -= 1

gen_5 = countdown(5)
gen_5  # where is the sequence?
print gen_5.next()
print gen_5.next()
print gen_5.next()
print gen_5.next()
print gen_5.next()
gen_5.next()  # exhausted: raises StopIteration

for i in countdown(5):
    print i,
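A small illustration of the on-demand evaluation described above: when searching for an element, a generator only computes as many values as the search needs (a sketch, not from the original notebook; itertools.count is an infinite counter):
import itertools
# the search stops at the first match, so only 11 powers are ever computed
first_big = next(x for x in (2 ** n for n in itertools.count()) if x > 1000)
print first_big  # 1024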
advanced/0_Iterators_generators_and_coroutines.ipynb
ealogar/curso-python
apache-2.0
yield makes a function a generator: the function only executes on next() (easier than implementing the iterator protocol), and each yield produces a value and suspends the execution of the function.
# Let's see another example with yield: tail -f and grep
import time

def follow(thefile):
    thefile.seek(0, 2)  # Go to the end of the file
    while True:
        line = thefile.readline()
        if not line:
            time.sleep(0.1)  # Sleep briefly
            continue
        yield line

logfile = open("fichero.txt")
for line in follow(logfile):
    print line,

# Ensure the file is closed
if logfile and not logfile.closed:
    logfile.close()
advanced/0_Iterators_generators_and_coroutines.ipynb
ealogar/curso-python
apache-2.0
Using generators to build a pipeline, as in Unix (tail + grep)
def grep(pattern, lines):
    for line in lines:
        if pattern in line:
            yield line  # TODO: use a generator expression

# Set up a processing pipe: tail -f | grep "tefcon"
logfile = open("fichero.txt")
loglines = follow(logfile)
pylines = grep("python", loglines)  # nothing happens until now

# Pull results out of the processing pipeline
for line in pylines:
    print line,

# Ensure the file is closed
if logfile and not logfile.closed:
    logfile.close()

# Yield can be used as an expression too
def g_grep(pattern):
    print "Looking for %s" % pattern
    while True:
        line = (yield)
        if pattern in line:
            print line,
advanced/0_Iterators_generators_and_coroutines.ipynb
ealogar/curso-python
apache-2.0
Coroutines
Using yield this way we get a coroutine: the function doesn't just return values, it can also consume values that we send to it.
g = g_grep("python")
g.next()  # advance to the first (yield)
g.send("Prueba a ver si encontramos algo")
g.send("Hemos recibido python")
advanced/0_Iterators_generators_and_coroutines.ipynb
ealogar/curso-python
apache-2.0
Sent values are returned by (yield). Execution works as in a generator function: coroutines respond to both next and send.
# Avoid the first next call -> decorator
import functools

def coroutine(func):
    def wrapper(*args, **kwargs):
        cr = func(*args, **kwargs)
        cr.next()
        return cr
    return wrapper

@coroutine
def cool_grep(pattern):
    print "Looking for %s" % pattern
    while True:
        line = (yield)
        if pattern in line:
            print line,

g = cool_grep("python")  # no need to call next
g.send("Prueba a ver si encontramos algo")
g.send("Prueba a ver si python es cool")

# Use close to shut down a coroutine (it can run forever)
@coroutine
def last_grep(pattern):
    print "Looking for %s" % pattern
    try:
        while True:
            line = (yield)
            if pattern in line:
                print line,
    except GeneratorExit:
        print "Going away. Goodbye"

# Exceptions can be thrown inside a coroutine
g = last_grep("python")
g.send("Prueba a ver si encontramos algo")
g.send("Prueba a ver si python es cool")
g.close()
g.send("prueba a ver si python es cool")  # fails: the coroutine is closed

# can send exceptions
g.throw(RuntimeError, "Lanza una excepcion")
advanced/0_Iterators_generators_and_coroutines.ipynb
ealogar/curso-python
apache-2.0
Generators produce values and coroutines mostly consume them. DO NOT mix the two concepts, to avoid blowing your mind: coroutines are not for iterating.
def countdown_bug(n):
    print "Counting down from", n
    while n >= 0:
        newvalue = (yield n)
        # If a new value got sent in, reset n with it
        if newvalue is not None:
            n = newvalue
        else:
            n -= 1

c = countdown_bug(5)
for n in c:
    print n
    if n == 5:
        c.send(3)
advanced/0_Iterators_generators_and_coroutines.ipynb
ealogar/curso-python
apache-2.0
What has happened here? You can chain coroutines together and push data through the pipe using send():
- you need a source, which normally is not a coroutine
- you also need pipeline sinks (end-points) that consume and process the data
Again, don't mix the concepts too much. Let's go back to tail -f and grep: our source is tail -f.
import time

def c_follow(thefile, target):
    thefile.seek(0, 2)  # Go to the end of the file
    while True:
        line = thefile.readline()
        if not line:
            time.sleep(0.1)  # Sleep briefly
        else:
            target.send(line)

# A sink: just print
@coroutine
def printer(name):
    while True:
        line = (yield)
        print name + " : " + line,

# example
f = open("fichero.txt")
c_follow(f, printer("uno"))

# Ensure f is closed
if f and not f.closed:
    f.close()

# Pipeline filters: grep
@coroutine
def c_grep(pattern, target):
    while True:
        line = (yield)          # Receive a line
        if pattern in line:
            target.send(line)   # Send to next stage

# Exercise: tail -f "fichero.txt" | grep "python"
# do not forget the last print as sink
# We have the same: with iterators we pull data with iteration,
# with coroutines we push data with send

# BROADCAST
@coroutine
def broadcast(targets):
    while True:
        item = (yield)
        for target in targets:
            target.send(item)

f = open("fichero.txt")
c_follow(f, broadcast([c_grep('python', printer("uno")),
                       c_grep('hodor', printer("dos")),
                       c_grep('hold', printer("tres"))]))
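A possible solution to the exercise embedded in the cell above, reusing the coroutines already defined (a sketch, not part of the original notebook):
# tail -f "fichero.txt" | grep "python", with printer as the final sink
f = open("fichero.txt")
c_follow(f, c_grep("python", printer("grep")))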
advanced/0_Iterators_generators_and_coroutines.ipynb
ealogar/curso-python
apache-2.0
Coroutines add routing: complex arrangements of pipes, branches, merging...
if f and not f.closed:
    f.close()

f = open("fichero.txt")
p = printer("uno")
c_follow(f, broadcast([c_grep('python', p),
                       c_grep('hodor', p),
                       c_grep('hold', p)]))

if f and not f.closed:
    f.close()
advanced/0_Iterators_generators_and_coroutines.ipynb
ealogar/curso-python
apache-2.0
Annotating continuous data
This tutorial describes adding annotations to a :class:~mne.io.Raw object, and how annotations are used in later stages of data processing.
As usual we'll start by importing the modules we need, loading some example data <sample-dataset>, and (since we won't actually analyze the raw data in this tutorial) cropping the :class:~mne.io.Raw object to just 60 seconds before loading it into RAM to save memory:
import os
from datetime import datetime
import mne

sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
:class:~mne.Annotations in MNE-Python are a way of storing short strings of information about temporal spans of a :class:~mne.io.Raw object. Below the surface, :class:~mne.Annotations are :class:list-like <list> objects, where each element comprises three pieces of information: an onset time (in seconds), a duration (also in seconds), and a description (a text string). Additionally, the :class:~mne.Annotations object itself also keeps track of orig_time, which is a POSIX timestamp_ denoting a real-world time relative to which the annotation onsets should be interpreted.

Creating annotations programmatically
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you know in advance what spans of the :class:~mne.io.Raw object you want to annotate, :class:~mne.Annotations can be created programmatically, and you can even pass lists or arrays to the :class:~mne.Annotations constructor to annotate multiple spans at once:
my_annot = mne.Annotations(onset=[3, 5, 7],
                           duration=[1, 0.5, 0.25],
                           description=['AAA', 'BBB', 'CCC'])
print(my_annot)
0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Notice that orig_time is None, because we haven't specified it. In those cases, when you add the annotations to a :class:~mne.io.Raw object, it is assumed that the orig_time matches the time of the first sample of the recording, so orig_time will be set to match the recording measurement date (raw.info['meas_date']).
raw.set_annotations(my_annot)
print(raw.annotations)

# convert meas_date (a tuple of seconds, microseconds) into a float:
meas_date = raw.info['meas_date'][0] + raw.info['meas_date'][1] / 1e6
orig_time = raw.annotations.orig_time
print(meas_date == orig_time)
0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
If you know that your annotation onsets are relative to some other time, you can set orig_time before you call :meth:~mne.io.Raw.set_annotations, and the onset times will get adjusted based on the time difference between your specified orig_time and raw.info['meas_date'], but without the additional adjustment for raw.first_samp. orig_time can be specified in various ways (see the documentation of :class:~mne.Annotations for the options); here we'll use an ISO 8601_ formatted string, and set it to be 50 seconds later than raw.info['meas_date'].
time_format = '%Y-%m-%d %H:%M:%S.%f'
new_orig_time = datetime.utcfromtimestamp(meas_date + 50).strftime(time_format)
print(new_orig_time)

later_annot = mne.Annotations(onset=[3, 5, 7],
                              duration=[1, 0.5, 0.25],
                              description=['DDD', 'EEE', 'FFF'],
                              orig_time=new_orig_time)
raw2 = raw.copy().set_annotations(later_annot)
print(later_annot.onset)
print(raw2.annotations.onset)
0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The three annotations appear as differently colored rectangles because they have different description values (which are printed along the top edge of the plot area). Notice also that colored spans appear in the small scroll bar at the bottom of the plot window, making it easy to quickly see where in a :class:~mne.io.Raw object the annotations are, so you can easily browse through the data to find and examine them.

Annotating Raw objects interactively
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Annotations can also be added to a :class:~mne.io.Raw object interactively by clicking-and-dragging the mouse in the plot window. To do this, you must first enter "annotation mode" by pressing :kbd:a while the plot window is focused; this will bring up the annotation controls window:
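The cell that creates the figure is not shown in this excerpt; a minimal sketch that also defines the fig used in the next cell, assuming the standard :meth:~mne.io.Raw.plot API:
fig = raw.plot(start=2, duration=6)  # interactive browser window (sketch; exact arguments assumed)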
fig.canvas.key_press_event('a')
0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The colored rings are clickable, and determine which existing label will be created by the next click-and-drag operation in the main plot window. New annotation descriptions can be added by typing the new description and clicking the :guilabel:Add label button; the new description will be added to the list of descriptions and automatically selected. During interactive annotation it is also possible to adjust the start and end times of existing annotations, by clicking-and-dragging on the left or right edges of the highlighting rectangle corresponding to that annotation.

<div class="alert alert-danger"><h4>Warning</h4><p>Calling :meth:`~mne.io.Raw.set_annotations` **replaces** any annotations currently stored in the :class:`~mne.io.Raw` object, so be careful when working with annotations that were created interactively (you could lose a lot of work if you accidentally overwrite your interactive annotations). A good safeguard is to run ``interactive_annot = raw.annotations`` after you finish an interactive annotation session, so that the annotations are stored in a separate variable outside the :class:`~mne.io.Raw` object.</p></div>

How annotations affect preprocessing and analysis
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You may have noticed that the description for new labels in the annotation controls window defaults to BAD_. The reason for this is that annotation is often used to mark bad temporal spans of data (such as movement artifacts or environmental interference that cannot be removed in other ways such as projection <tut-projectors-background> or filtering). Several MNE-Python operations are "annotation aware" and will avoid using data that is annotated with a description that begins with "bad" or "BAD"; such operations typically have a boolean reject_by_annotation parameter. Examples of such operations are independent components analysis (:class:mne.preprocessing.ICA), functions for finding heartbeat and blink artifacts (:func:~mne.preprocessing.find_ecg_events, :func:~mne.preprocessing.find_eog_events), and creation of epoched data from continuous data (:class:mne.Epochs). See tut-reject-data-spans for details.

Operations on Annotations objects
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
:class:~mne.Annotations objects can be combined by simply adding them with the + operator, as long as they share the same orig_time:
new_annot = mne.Annotations(onset=3.75, duration=0.75, description='AAA')
raw.set_annotations(my_annot + new_annot)
raw.plot(start=2, duration=6)
0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Reading and writing Annotations to/from a file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
:class:~mne.Annotations objects have a :meth:~mne.Annotations.save method which can write :file:.fif, :file:.csv, and :file:.txt formats (the format to write is inferred from the file extension in the filename you provide). There is a corresponding :func:~mne.read_annotations function to load them from disk:
raw.annotations.save('saved-annotations.csv')
annot_from_file = mne.read_annotations('saved-annotations.csv')
print(annot_from_file)
0.19/_downloads/7b0095430c62d9ef92be2dd3af2614f6/plot_30_annotate_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Vertex client library: AutoML image classification model for batch prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex client library for Python to create image classification models and do batch prediction using Google Cloud's AutoML. Dataset The dataset used for this tutorial is the Flowers dataset from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip. Objective In this tutorial, you create an AutoML image classification model from a Python script, and then do a batch prediction using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console. The steps performed include: Create a Vertex Dataset resource. Train the model. View the model evaluation. Make a batch prediction. There is one key difference between using batch prediction and using online prediction: Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time. Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. Costs This tutorial uses billable components of Google Cloud (GCP): Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Installation Install the latest version of Vertex client library.
import os
import sys

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install -U google-cloud-aiplatform $USER_FLAG
notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Tutorial Now you are ready to start creating your own AutoML image classification model. Set up clients The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server. You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront. Dataset Service for Dataset resources. Model Service for Model resources. Pipeline Service for training. Job Service for batch prediction and custom training.
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}

def create_dataset_client():
    client = aip.DatasetServiceClient(client_options=client_options)
    return client

def create_model_client():
    client = aip.ModelServiceClient(client_options=client_options)
    return client

def create_pipeline_client():
    client = aip.PipelineServiceClient(client_options=client_options)
    return client

def create_job_client():
    client = aip.JobServiceClient(client_options=client_options)
    return client

clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["job"] = create_job_client()

for client in clients.items():
    print(client)
notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Model deployment for batch prediction
Now deploy the trained Vertex Model resource you created for batch prediction. This differs from deploying a Model resource for on-demand prediction.
For online prediction, you:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Make online prediction requests to the Endpoint resource.
For batch prediction, you:
Create a batch prediction job.
The job service will provision resources for the batch prediction request.
The results of the batch prediction request are returned to the caller.
The job service will unprovision the resources for the batch prediction request.
Make a batch prediction request
Now do a batch prediction to your deployed model.
Get test item(s)
You will use arbitrary examples from the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
test_items = !gsutil cat $IMPORT_FILE | head -n2

if len(str(test_items[0]).split(",")) == 3:
    _, test_item_1, test_label_1 = str(test_items[0]).split(",")
    _, test_item_2, test_label_2 = str(test_items[1]).split(",")
else:
    test_item_1, test_label_1 = str(test_items[0]).split(",")
    test_item_2, test_label_2 = str(test_items[1]).split(",")

print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
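Note that gcs_input_uri, used by the batch request below, is created in a cell not shown in this excerpt. A sketch consistent with Vertex's JSONL batch input format for images (the content/mime_type field names follow the Vertex batch prediction docs; BUCKET_NAME comes from earlier setup cells):
import json
import tensorflow as tf

gcs_input_uri = BUCKET_NAME + "/test.jsonl"  # assumed location, not from this excerpt
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
    for item in [test_item_1, test_item_2]:
        # one JSON object per line: the Cloud Storage path of the image and its type
        f.write(json.dumps({"content": item, "mime_type": "image/jpeg"}) + "\n")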
notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make batch prediction request
Now that your batch of two test items is ready, let's do the batch request. Use this helper function create_batch_prediction_job, with the following parameters:
display_name: The human readable name for the prediction job.
model_name: The Vertex fully qualified identifier for the Model resource.
gcs_source_uri: The Cloud Storage path to the input file -- which you created above.
gcs_destination_output_uri_prefix: The Cloud Storage path that the service will write the predictions to.
parameters: Additional filtering parameters for serving prediction results.
The helper function calls the job client service's create_batch_prediction_job method, with the following parameters:
parent: The Vertex location root path for Dataset, Model and Pipeline resources.
batch_prediction_job: The specification for the batch prediction job.
Let's now dive into the specification for the batch_prediction_job:
display_name: The human readable name for the prediction batch job.
model: The Vertex fully qualified identifier for the Model resource.
dedicated_resources: The compute resources to provision for the batch prediction job.
machine_spec: The compute instance to provision. Use the variable you set earlier, DEPLOY_GPU != None, to use a GPU; otherwise only a CPU is allocated.
starting_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
model_parameters: Additional filtering parameters for serving prediction results.
confidence_threshold: The threshold for returning predictions. Must be between 0 and 1.
max_predictions: The maximum number of predictions to return per classification, sorted by confidence.
input_config: The input source and format type for the instances to predict.
instances_format: The format of the batch prediction request file: only jsonl is supported.
gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
output_config: The output destination and format for the predictions.
prediction_format: The format of the batch prediction response file: only jsonl is supported.
gcs_destination: The output destination for the predictions.
You might ask, how does confidence_threshold affect the model accuracy? The threshold won't change the accuracy. What it changes is recall and precision.
- Precision: the higher the precision, the more likely what is predicted is a correct prediction, but the fewer predictions are returned. Increasing the confidence threshold increases precision.
- Recall: the higher the recall, the more likely a correct prediction is included in the results, but the more incorrect predictions are returned along with it. Decreasing the confidence threshold increases recall.
In this example, you will predict for precision. You set the confidence threshold to 0.5 and the maximum number of predictions for a classification to two. Since all the confidence values across the classes must add up to one, there are only two possible outcomes:
1. There is a tie, both 0.5, and two predictions are returned.
2. One value is above 0.5 and the rest are below 0.5, and one prediction is returned.
This call is an asynchronous operation. You will print from the response object a few select fields, including:
name: The Vertex fully qualified identifier assigned to the batch prediction job.
display_name: The human readable name for the prediction batch job.
model: The Vertex fully qualified identifier for the Model resource. generate_explanations: Whether True/False explanations were provided with the predictions (explainability). state: The state of the prediction job (pending, running, etc). Since this call will take a few moments to execute, you will likely get JobState.JOB_STATE_PENDING for state.
BATCH_MODEL = "flowers_batch-" + TIMESTAMP

def create_batch_prediction_job(
    display_name,
    model_name,
    gcs_source_uri,
    gcs_destination_output_uri_prefix,
    parameters=None,
):
    if DEPLOY_GPU:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_type": DEPLOY_GPU,
            "accelerator_count": DEPLOY_NGPU,
        }
    else:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_count": 0,
        }

    batch_prediction_job = {
        "display_name": display_name,
        # Format: 'projects/{project}/locations/{location}/models/{model_id}'
        "model": model_name,
        "model_parameters": json_format.ParseDict(parameters, Value()),
        "input_config": {
            "instances_format": IN_FORMAT,
            "gcs_source": {"uris": [gcs_source_uri]},
        },
        "output_config": {
            "predictions_format": OUT_FORMAT,
            "gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
        },
        "dedicated_resources": {
            "machine_spec": machine_spec,
            "starting_replica_count": MIN_NODES,
            "max_replica_count": MAX_NODES,
        },
    }

    response = clients["job"].create_batch_prediction_job(
        parent=PARENT, batch_prediction_job=batch_prediction_job
    )
    print("response")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" model:", response.model)
    try:
        print(" generate_explanation:", response.generate_explanation)
    except:
        pass
    print(" state:", response.state)
    print(" create_time:", response.create_time)
    print(" start_time:", response.start_time)
    print(" end_time:", response.end_time)
    print(" update_time:", response.update_time)
    print(" labels:", response.labels)
    return response

IN_FORMAT = "jsonl"
OUT_FORMAT = "jsonl"  # [jsonl]

response = create_batch_prediction_job(
    BATCH_MODEL,
    model_to_deploy_id,
    gcs_input_uri,
    BUCKET_NAME,
    {"confidenceThreshold": 0.5, "maxPredictions": 2},
)
notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Get the predictions When the batch prediction is done processing, the job state will be JOB_STATE_SUCCEEDED. Finally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name prediction, and under that folder will be a file called predictions*.jsonl. Now display (cat) the contents. You will see multiple JSON objects, one for each prediction. The first field ID is the image file you did the prediction on, and the second field annotations is the prediction, which is further broken down into: confidences: The percent of confidence between 0 and 1. display_name: The corresponding class name.
def get_latest_predictions(gcs_out_dir):
    """Get the latest prediction subfolder using the timestamp in the subfolder name."""
    folders = !gsutil ls $gcs_out_dir
    latest = ""
    for folder in folders:
        subfolder = folder.split("/")[-2]
        if subfolder.startswith("prediction-"):
            if subfolder > latest:
                latest = folder[:-1]
    return latest

while True:
    predictions, state = get_batch_prediction_job(batch_job_id, True)
    if state != aip.JobState.JOB_STATE_SUCCEEDED:
        print("The job has not completed:", state)
        if state == aip.JobState.JOB_STATE_FAILED:
            raise Exception("Batch Job Failed")
    else:
        folder = get_latest_predictions(predictions)
        ! gsutil ls $folder/prediction*.jsonl
        ! gsutil cat $folder/prediction*.jsonl
        break
    time.sleep(60)
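The helper get_batch_prediction_job used in the loop is defined earlier in the full notebook; a minimal sketch of what it needs to return, assuming the standard JobServiceClient API (the real helper also prints job details):
def get_batch_prediction_job(job_name, silent=False):
    # Fetch the job and return its configured output folder and current state.
    # (Assumption: the caller only needs these two values.)
    response = clients["job"].get_batch_prediction_job(name=job_name)
    return response.output_config.gcs_destination.output_uri_prefix, response.state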
notebooks/community/gapic/automl/showcase_automl_image_classification_batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create Topics
We select "english" as the main language for our documents. If you want a multilingual model that supports 50+ languages, please select "multilingual" instead.
model = BERTopic(language="english")
topics, probs = model.fit_transform(docs)
notebooks/BERTopic.ipynb
MaartenGr/BERTopic
mit
We can then extract the most frequent topics:
model.get_topic_freq().head(5)
notebooks/BERTopic.ipynb
MaartenGr/BERTopic
mit
-1 refers to all outliers and should typically be ignored. Next, let's take a look at the most frequent topic that was generated:
model.get_topic(49)[:10]
notebooks/BERTopic.ipynb
MaartenGr/BERTopic
mit
Note that the model is stochastic, which means that the topics might differ across runs. For a full list of supported languages, see the values below:
from bertopic import languages
print(languages)
notebooks/BERTopic.ipynb
MaartenGr/BERTopic
mit
Embedding model You can select any model from sentence-transformers and use it instead of the preselected models by simply passing the model through BERTopic with embedding_model:
# st_model = BERTopic(embedding_model="xlm-r-bert-base-nli-stsb-mean-tokens")
notebooks/BERTopic.ipynb
MaartenGr/BERTopic
mit
Click here for a list of supported sentence transformer models.
Visualize Topics
After having trained our BERTopic model, we can iteratively go through perhaps a hundred topics to get a good understanding of the topics that were extracted. However, that takes quite some time and lacks a global representation. Instead, we can visualize the topics that were generated in a way very similar to LDAvis:
model.visualize_topics()
notebooks/BERTopic.ipynb
MaartenGr/BERTopic
mit
Visualize Topic Probabilities The variable probabilities that is returned from transform() or fit_transform() can be used to understand how confident BERTopic is that certain topics can be found in a document. To visualize the distributions, we simply call:
model.visualize_distribution(probs[0])
notebooks/BERTopic.ipynb
MaartenGr/BERTopic
mit
Topic Reduction
Finally, we can also reduce the number of topics after having trained a BERTopic model. The advantage of doing so is that you can decide the number of topics after knowing how many are actually created. It is difficult to predict, before training your model, how many topics are in your documents and how many will be extracted. Instead, we can decide afterwards how many topics seem realistic:
new_topics, new_probs = model.reduce_topics(docs, topics, probs, nr_topics=60)
notebooks/BERTopic.ipynb
MaartenGr/BERTopic
mit
The reasoning for putting docs, topics, and probs as parameters is that these values are not saved within BERTopic on purpose. If you were to have a million documents, it seems very inefficient to save those in BERTopic instead of a dedicated database. Topic Representation When you have trained a model and viewed the topics and the words that represent them, you might not be satisfied with the representation. Perhaps you forgot to remove stop_words or you want to try out a different n_gram_range. We can use the function update_topics to update the topic representation with new parameters for c-TF-IDF:
model.update_topics(docs, topics, n_gram_range=(1, 3), stop_words="english")
notebooks/BERTopic.ipynb
MaartenGr/BERTopic
mit
Search Topics
After having trained our model, we can use find_topics to search for topics that are similar to an input search_term. Here, we are going to be searching for topics that closely relate to the search term "vehicle". Then, we extract the most similar topic and check the results:
similar_topics, similarity = model.find_topics("vehicle", top_n=5)
similar_topics
model.get_topic(28)
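Since the model is stochastic, the hard-coded topic id 28 above assumes one particular run; a more robust sketch is to inspect the top hit from the search directly:
model.get_topic(similar_topics[0])  # the most similar topic found for "vehicle"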
notebooks/BERTopic.ipynb
MaartenGr/BERTopic
mit
Model serialization The model and its internal settings can easily be saved. Note that the documents and embeddings will not be saved. However, UMAP and HDBSCAN will be saved.
# Save model
model.save("my_model")

# Load model
my_model = BERTopic.load("my_model")
notebooks/BERTopic.ipynb
MaartenGr/BERTopic
mit
Flight Metadata There's a lot of information contained in the two flight metadata files (websource and track).
flight_websource.head(1)
flight_track.head(1)
binder/gliding-data-starter.ipynb
ezgliding/goigc
apache-2.0
The most useful information comes from the websource file as this information is passed directly by the pilot when submitting the flight to the online competition. Things like Country or Region provide useful statistics on how popular the sport is in different areas. As an example, what are the most popular regions considering the total number of flights? What about the most popular Takeoff location?
flight_websource.groupby(['Country', 'Region'])['Region'].value_counts().sort_values(ascending=False).head(3)
flight_websource.groupby(['Country', 'Year', 'Takeoff'])['Takeoff'].value_counts().sort_values(ascending=False).head(3)
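Grouping by ['Country', 'Region'] and then taking value_counts of Region again repeats the region key in the index; a simpler equivalent count of flights per region, as a sketch:
flight_websource.groupby('Country')['Region'].value_counts().sort_values(ascending=False).head(3)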
binder/gliding-data-starter.ipynb
ezgliding/goigc
apache-2.0
The three regions above match the Alps area, which is an expected result given this is a gliding Mecca. The second result shows Vinon as the most popular takeoff in 2016, a big club also in the Southern Alps. But it's interesting to notice that more recently a club near Montpellier took over as the one with the most activity in terms of number of flights. Gliding is a seasonal activity peaking in summer months.
flight_websource['DayOfWeek'] = flight_websource.apply(
    lambda r: datetime.datetime.strptime(r['Date'], "%Y-%m-%dT%H:%M:%SZ").strftime("%A"), axis=1)
flight_websource.groupby(['DayOfWeek'])['DayOfWeek'].count().plot.bar()

flight_websource['Month'] = flight_websource.apply(
    lambda r: datetime.datetime.strptime(r['Date'], "%Y-%m-%dT%H:%M:%SZ").strftime("%m"), axis=1)
flight_websource.groupby(['Month'])['Month'].count().plot.bar()
binder/gliding-data-starter.ipynb
ezgliding/goigc
apache-2.0
Merging Data We can get additional flight metadata information from the flight_track file, but you can expect this data to be less reliable as it's often the case that the metadata in the flight recorder is not updated before a flight. It is very useful though to know more about what type of recorder was used, calibration settings, etc. It is also the source of the flight tracks used to generate the data in the phases file. In some cases we want to handle columns from both flight metadata files, so it's useful to join the two sets. We can rely on the ID for this purpose.
flight_all = pd.merge(flight_websource, flight_track, how='left', on='ID')
flight_all.head(1)
binder/gliding-data-starter.ipynb
ezgliding/goigc
apache-2.0
Flight Phases
In addition to the flight metadata provided in the files above, by analysing the GPS flight tracks we can generate a lot more interesting data. Here we take a look at flight phases, calculated using the goigc tool. As described earlier, to travel further glider pilots use thermals to gain altitude and then convert that altitude into distance. In the phases file we have a record of each individual phase detected for each of the 100k flights, and we'll focus on:
* Circling (5): a phase where a glider is gaining altitude by circling in an area of rising air
* Cruising (3): a phase where a glider is flying straight, converting altitude into distance
These are indicated by the integer field Type below. Each phase has a set of additional fields with relevant statistics for each phase type: while circling, the average climb rate (vario) and duration are interesting; while cruising, the distance covered and LD (glide ratio) are more interesting.
flight_phases.head(1)
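To work with the two phase types described above, filter on the integer Type field; a short sketch:
circling = flight_phases[flight_phases.Type == 5]  # gaining altitude in rising air
cruising = flight_phases[flight_phases.Type == 3]  # converting altitude into distance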
binder/gliding-data-starter.ipynb
ezgliding/goigc
apache-2.0
Data Preparation
As a quick example of what is possible with this kind of data, let's try to map all circling phases as a HeatMap. First we need to do some treatment of the data: convert coordinates from radians to degrees, filter out unreasonable values (climb rates above 15 m/s are due to errors in the recording device), and convert the date to the expected format and desired grouping. In this case we're grouping all thermal phases by week.
phases = pd.merge(flight_phases, flight_websource[['TrackID', 'Distance', 'Speed']], on='TrackID')
phases['Lat'] = np.rad2deg(phases['CentroidLatitude'])
phases['Lng'] = np.rad2deg(phases['CentroidLongitude'])
phases_copy = phases[phases.Type==5][phases.AvgVario<10][phases.AvgVario>2].copy()
phases_copy.head(2)

#phases_copy['AM'] = phases_copy.apply(lambda r: datetime.datetime.strptime(r['StartTime'], "%Y-%m-%dT%H:%M:%SZ").strftime("%p"), axis=1)
#phases_copy['Day'] = phases_copy.apply(lambda r: datetime.datetime.strptime(r['StartTime'], "%Y-%m-%dT%H:%M:%SZ").strftime("%j"), axis=1)
#phases_copy['Week'] = phases_copy.apply(lambda r: datetime.datetime.strptime(r['StartTime'], "%Y-%m-%dT%H:%M:%SZ").strftime("%W"), axis=1)
#phases_copy['Month'] = phases_copy.apply(lambda r: r['StartTime'][5:7], axis=1)
#phases_copy['Year'] = phases_copy.apply(lambda r: r['StartTime'][0:4], axis=1)
#phases_copy['YearMonth'] = phases_copy.apply(lambda r: r['StartTime'][0:7], axis=1)
#phases_copy['YearMonthDay'] = phases_copy.apply(lambda r: r['StartTime'][0:10], axis=1)

# use the corresponding function above to update the grouping to something other than week
phases_copy['Group'] = phases_copy.apply(
    lambda r: datetime.datetime.strptime(r['StartTime'], "%Y-%m-%dT%H:%M:%SZ").strftime("%W"), axis=1)
phases_copy.head(1)
binder/gliding-data-starter.ipynb
ezgliding/goigc
apache-2.0
Visualization Once we have the data ready we can visualize it over a map. We rely on folium for this.
# This is a workaround for this known issue:
# https://github.com/python-visualization/folium/issues/812#issuecomment-582213307
!pip install git+https://github.com/python-visualization/branca
!pip install git+https://github.com/sknzl/folium@update-css-url-to-https

import folium
from folium import plugins
from folium import Choropleth, Circle, Marker
from folium.plugins import HeatMap, HeatMapWithTime, MarkerCluster

# folium.__version__         # should be '0.10.1+8.g4ea1307'
# folium.branca.__version__  # should be '0.4.0+4.g6ac241a'
binder/gliding-data-starter.ipynb
ezgliding/goigc
apache-2.0
Single HeatMap
# we use a smaller sample to improve the visualization
# a better alternative is to group entries by CellID, an example of this will be added later
phases_single = phases_copy.sample(frac=0.01, random_state=1)

m_5 = folium.Map(location=[47.06318, 5.41938], tiles='stamen terrain', zoom_start=7)
HeatMap(
    phases_single[['Lat', 'Lng', 'AvgVario']],
    gradient={0.5: 'blue', 0.7: 'yellow', 1: 'red'},
    min_opacity=5,
    max_val=phases_single.AvgVario.max(),
    radius=4, max_zoom=7, blur=4,
    use_local_extrema=False).add_to(m_5)
m_5
binder/gliding-data-starter.ipynb
ezgliding/goigc
apache-2.0
HeatMap over Time Another cool possibility is to visualize the same data over time. In this case we're grouping weekly and playing the data over one year. Both the most popular areas and times of the year are pretty clear from this animation.
m_5 = folium.Map(location=[47.06318, 5.41938], tiles='stamen terrain', zoom_start=7)

groups = phases_copy.Group.sort_values().unique()
data = []
for g in groups:
    data.append(phases_copy.loc[phases_copy.Group == g,
                                ['Group', 'Lat', 'Lng', 'AvgVario']]
                .groupby(['Lat', 'Lng']).sum().reset_index().values.tolist())

HeatMapWithTime(
    data,
    index=list(phases_copy.Group.sort_values().unique()),
    gradient={0.1: 'blue', 0.3: 'yellow', 0.8: 'red'},
    auto_play=True,
    scale_radius=False,
    display_index=True,
    radius=4,
    min_speed=1, max_speed=6, speed_step=1,
    min_opacity=1,
    max_opacity=phases_copy.AvgVario.max(),
    use_local_extrema=True).add_to(m_5)
m_5
binder/gliding-data-starter.ipynb
ezgliding/goigc
apache-2.0
0.2 View graph in TensorBoard
!python -m model_inspect --model_name={MODEL} --logdir=logs &> /dev/null
%load_ext tensorboard
%tensorboard --logdir logs
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
1. Inference
1.1 Benchmark network latency
There are two types of latency: network latency and end-to-end latency.
network latency: from the first conv op to the network class and box prediction.
end-to-end latency: from image preprocessing, through the network, to the final postprocessing that generates an annotated new image.
# benchmark network latency
!python -m tf2.inspector --mode=benchmark --model_name={MODEL} \
  --hparams="mixed_precision=true" --only_network

# With colab + Tesla T4 GPU, here is the batch size 1 latency summary:
# D0 (AP=33.5): 14.9ms, FPS = 67.2 (batch size 8 FPS=)
# D1 (AP=39.6): 22.7ms, FPS = 44.1 (batch size 8 FPS=)
# D2 (AP=43.0): 27.9ms, FPS = 35.8 (batch size 8 FPS=)
# D3 (AP=45.8): 48.1ms, FPS = 20.8 (batch size 8 FPS=)
# D4 (AP=49.4): 81.9ms, FPS = 12.2 (batch size 8 FPS=)
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
1.2 Benchmark end-to-end latency
# Benchmark end-to-end latency (preprocess + network + postprocess).
#
# With colab + Tesla T4 GPU, here is the batch size 1 latency summary:
# D0 (AP=33.5): 22.7ms, FPS = 43.1 (batch size 4, FPS=)
# D1 (AP=39.6): 34.3ms, FPS = 29.2 (batch size 4, FPS=)
# D2 (AP=43.0): 42.5ms, FPS = 23.5 (batch size 4, FPS=)
# D3 (AP=45.8): 64.8ms, FPS = 15.4 (batch size 4, FPS=)
# D4 (AP=49.4): 93.7ms, FPS = 10.7 (batch size 4, FPS=)

batch_size = 1  # @param
saved_model_dir = 'savedmodel'
!rm -rf {saved_model_dir}

!python -m tf2.inspector --mode=export --model_name={MODEL} \
  --model_dir={ckpt_path} --saved_model_dir={saved_model_dir} \
  --batch_size={batch_size} --hparams="mixed_precision=true"

!python -m tf2.inspector --mode=benchmark --model_name={MODEL} \
  --saved_model_dir={saved_model_dir} \
  --batch_size=1 --hparams="mixed_precision=true" --input_image=testdata/img1.jpg
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
1.3 Inference images.
# first export a saved model.
saved_model_dir = 'savedmodel'
!rm -rf {saved_model_dir}
!python -m tf2.inspector --mode=export --model_name={MODEL} \
  --model_dir={ckpt_path} --saved_model_dir={saved_model_dir}

# Then run saved_model_infer to do inference.
# Notably: batch_size and image_size must be the same as when the model was exported.
serve_image_out = 'serve_image_out'
!mkdir {serve_image_out}
!python -m tf2.inspector --mode=infer \
  --saved_model_dir={saved_model_dir} \
  --model_name={MODEL} --input_image=testdata/img1.jpg \
  --output_image_dir={serve_image_out}

from IPython import display
display.display(display.Image(os.path.join(serve_image_out, '0.jpg')))

# In case you need to specify a different image size, batch size or number of boxes,
# you need to export a new saved model and rerun the inference.
serve_image_out = 'serve_image_out'
!mkdir {serve_image_out}
saved_model_dir = 'savedmodel'
!rm -rf {saved_model_dir}

# Step 1: export model
!python -m tf2.inspector --mode=export \
  --model_name={MODEL} --model_dir={MODEL} \
  --hparams="image_size=1920x1280" --saved_model_dir={saved_model_dir}

# Step 2: do inference with saved model.
!python -m tf2.inspector --mode=infer \
  --model_name={MODEL} --saved_model_dir={saved_model_dir} \
  --input_image=img.png --output_image_dir={serve_image_out}

from IPython import display
display.display(display.Image(os.path.join(serve_image_out, '0.jpg')))
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
1.4 Inference video
# step 0: download video
video_url = 'https://storage.googleapis.com/cloud-tpu-checkpoints/efficientdet/data/video480p.mov'  # @param
!wget {video_url} -O input.mov

# Step 1: export model
saved_model_dir = 'savedmodel'
!rm -rf {saved_model_dir}
!python -m tf2.inspector --mode=export \
  --model_name={MODEL} --model_dir={MODEL} \
  --saved_model_dir={saved_model_dir} --hparams="mixed_precision=true"

# Step 2: do inference with saved model using saved_model_video
!python -m tf2.inspector --mode=video \
  --model_name={MODEL} \
  --saved_model_dir={saved_model_dir} --hparams="mixed_precision=true" \
  --input_video=input.mov --output_video=output.mov

# Then you can view the output.mov
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
2. TFlite 2.1 COCO evaluation on validation set.
if 'val2017' not in os.listdir():
    !wget http://images.cocodataset.org/zips/val2017.zip
    !wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
    !unzip -q val2017.zip
    !unzip annotations_trainval2017.zip
    !mkdir tfrecord
    !PYTHONPATH=".:$PYTHONPATH" python dataset/create_coco_tfrecord.py \
      --image_dir=val2017 \
      --caption_annotations_file=annotations/captions_val2017.json \
      --output_file_prefix=tfrecord/val \
      --num_shards=32
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
2.2 TFlite export INT8 model
# In case you need to specify a different image size, batch size or number of boxes,
# you need to export a new saved model and rerun the inference.
serve_image_out = 'serve_image_out'
!mkdir {serve_image_out}
saved_model_dir = 'savedmodel'
!rm -rf {saved_model_dir}

# Step 1: export model
!python -m tf2.inspector --mode=export --file_pattern=tfrecord/*.tfrecord \
  --model_name={MODEL} --model_dir={MODEL} --num_calibration_steps=100 \
  --saved_model_dir={saved_model_dir} --use_xla --tflite=INT8

# Step 2: do inference with saved model.
!python -m tf2.inspector --mode=infer --use_xla \
  --model_name={MODEL} --saved_model_dir={saved_model_dir}/int8.tflite \
  --input_image=testdata/img1.jpg --output_image_dir={serve_image_out}

from IPython import display
display.display(display.Image(os.path.join(serve_image_out, '0.jpg')))

# Evaluate on validation set (takes about 10 mins for efficientdet-d0)
!python -m tf2.eval_tflite \
  --model_name={MODEL} --tflite_path={saved_model_dir}/int8.tflite \
  --val_file_pattern=tfrecord/val* \
  --val_json_file=annotations/instances_val2017.json --eval_samples=100
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
2.3 Compile EdgeTPU model (Optional)
# install edgetpu compiler
!curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
!echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
!sudo apt-get update
!sudo apt-get install edgetpu-compiler
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
The EdgeTPU has 8MB of SRAM for caching model parameters (more info). This means that for models that are larger than 8MB, inference time will be increased in order to transfer over model parameters. One way to avoid this is Model Pipelining - splitting the model into segments that can each have a dedicated EdgeTPU. This can significantly improve latency. The table below can be used as a reference for the number of Edge TPUs to use - the larger models will not compile for a single TPU as the intermediate tensors can't fit in on-chip memory.

| Model architecture | Minimum TPUs | Recommended TPUs |
|--------------------|--------------|------------------|
| EfficientDet-Lite0 | 1 | 1 |
| EfficientDet-Lite1 | 1 | 1 |
| EfficientDet-Lite2 | 1 | 2 |
| EfficientDet-Lite3 | 2 | 2 |
| EfficientDet-Lite4 | 2 | 3 |
NUMBER_OF_TPUS = 1
!edgetpu_compiler {saved_model_dir}/int8.tflite --num_segments=$NUMBER_OF_TPUS
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
3. COCO evaluation 3.1 COCO evaluation on validation set.
if 'val2017' not in os.listdir():
    !wget http://images.cocodataset.org/zips/val2017.zip
    !wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
    !unzip -q val2017.zip
    !unzip annotations_trainval2017.zip
    !mkdir tfrecord
    !python -m dataset.create_coco_tfrecord \
      --image_dir=val2017 \
      --caption_annotations_file=annotations/captions_val2017.json \
      --output_file_prefix=tfrecord/val \
      --num_shards=32

# Evaluate on validation set (takes about 10 mins for efficientdet-d0)
!python -m tf2.eval \
  --model_name={MODEL} --model_dir={ckpt_path} \
  --val_file_pattern=tfrecord/val* \
  --val_json_file=annotations/instances_val2017.json
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
4. Training EfficientDets on PASCAL. 4.1 Prepare data
# Get pascal voc 2007 trainval data
import os
if 'VOCdevkit' not in os.listdir():
    !wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
    !tar xf VOCtrainval_06-Nov-2007.tar
    !mkdir tfrecord
    !python -m dataset.create_pascal_tfrecord \
      --data_dir=VOCdevkit --year=VOC2007 --output_path=tfrecord/pascal

# Pascal has 5717 train images split into 100 shards; here we use a single shard
# for demo, but users should use all shards pascal-*-of-00100.tfrecord.
file_pattern = 'pascal-00000-of-00100.tfrecord'  # @param
images_per_epoch = 57 * len(tf.io.gfile.glob('tfrecord/' + file_pattern))
images_per_epoch = images_per_epoch // 8 * 8  # round down to a multiple of 8
print('images_per_epoch = {}'.format(images_per_epoch))
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
4.2 Train Pascal VOC 2007 from ImageNet checkpoint for Backbone.
# Train efficientdet from scratch with backbone checkpoint.
backbone_name = {
    'efficientdet-d0': 'efficientnet-b0',
    'efficientdet-d1': 'efficientnet-b1',
    'efficientdet-d2': 'efficientnet-b2',
    'efficientdet-d3': 'efficientnet-b3',
    'efficientdet-d4': 'efficientnet-b4',
    'efficientdet-d5': 'efficientnet-b5',
    'efficientdet-d6': 'efficientnet-b6',
    'efficientdet-d7': 'efficientnet-b6',
    'efficientdet-lite0': 'efficientnet-lite0',
    'efficientdet-lite1': 'efficientnet-lite1',
    'efficientdet-lite2': 'efficientnet-lite2',
    'efficientdet-lite3': 'efficientnet-lite3',
    'efficientdet-lite3x': 'efficientnet-lite3',
    'efficientdet-lite4': 'efficientnet-lite4',
}[MODEL]

# generating train tfrecord is large, so we skip the execution here.
import os
if backbone_name not in os.listdir():
    # note: the original used backbone_name.find('lite'), which is truthy even
    # when 'lite' is absent (find returns -1); the membership test is what was intended
    if 'lite' in backbone_name:
        !wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/lite/{backbone_name}.tar.gz
    else:
        !wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/ckptsaug/{backbone_name}.tar.gz
    !tar xf {backbone_name}.tar.gz

!mkdir /tmp/model_dir
# key option: use --backbone_ckpt rather than --ckpt.
# Don't use ema since we only train a few steps.
!python -m tf2.train --mode=traineval \
  --train_file_pattern=tfrecord/{file_pattern} \
  --val_file_pattern=tfrecord/{file_pattern} \
  --model_name={MODEL} \
  --model_dir=/tmp/model_dir/{MODEL}-scratch \
  --pretrained_ckpt={backbone_name} \
  --batch_size=16 \
  --eval_samples={images_per_epoch} \
  --num_examples_per_epoch={images_per_epoch} --num_epochs=1 \
  --hparams="num_classes=20,moving_average_decay=0,mixed_precision=true"
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
4.3 Train Pascal VOC 2007 from COCO checkpoint for the whole net.
# generating train tfrecord is large, so we skip the execution here.
import os
if MODEL not in os.listdir():
    download(MODEL)

!mkdir /tmp/model_dir/
# key option: use --ckpt rather than --backbone_ckpt.
!python -m tf2.train --mode=traineval \
  --train_file_pattern=tfrecord/{file_pattern} \
  --val_file_pattern=tfrecord/{file_pattern} \
  --model_name={MODEL} \
  --model_dir=/tmp/model_dir/{MODEL}-finetune \
  --pretrained_ckpt={MODEL} \
  --batch_size=16 \
  --eval_samples={images_per_epoch} \
  --num_examples_per_epoch={images_per_epoch} --num_epochs=1 \
  --hparams="num_classes=20,moving_average_decay=0,mixed_precision=true"
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
4.4 View tensorboard for loss and accuracy.
%load_ext tensorboard
%tensorboard --logdir /tmp/model_dir/

# Notably, this is just a demo with almost zero accuracy due to very limited
# training steps, but we can see finetuning has a smaller loss than training
# from scratch at the beginning.
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
5. Export to ONNX
!pip install tf2onnx
!python -m tf2.inspector --mode=export --model_name={MODEL} --model_dir={MODEL} \
  --saved_model_dir={saved_model_dir} \
  --hparams="nms_configs.method='hard', nms_configs.iou_thresh=0.5, nms_configs.sigma=0.0"
!python -m tf2onnx.convert --saved-model={saved_model_dir} --output={saved_model_dir}/model.onnx --opset=11
efficientdet/tf2/tutorial.ipynb
google/automl
apache-2.0
Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that hold the sequences of years and sunspot counts.
data = np.loadtxt('yearssn.dat')
years = data[:, 0]
ssc = data[:, 1]

assert len(years) == 315
assert years.dtype == np.dtype(float)
assert len(ssc) == 315
assert ssc.dtype == np.dtype(float)
assignments/assignment04/MatplotlibEx01.ipynb
bjshaw/phys202-2015-work
mit
Make a line plot showing the sunspot count as a function of year. Customize your plot to follow Tufte's principles of visualizations. Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1. Customize the box, grid, spines and ticks to match the requirements of this data.
fig = plt.figure(figsize=(150, 10))
plt.plot(years, ssc)
plt.title('Sun Spot Count vs. Time')
plt.xlabel('Time (years)')
plt.ylabel('Sun Spot Count')
plt.tick_params(top=False, right=False)
plt.xlim(1700.5, 2015)

assert True  # leave for grading
assignments/assignment04/MatplotlibEx01.ipynb
bjshaw/phys202-2015-work
mit
Describe the choices you have made in building this visualization and how they make it effective. With the requirement of the maximum slope, using just one graph is not effective for this particular data set. The aspect ratio makes the graph unreadable without zooming in a lot. However, creating four separate graphs below creates a much better visualization. Sharing the y-axis scaling and titles horizontally makes the graphs simpler, as does sharing the x-axis title vertically. Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above: Customize your plot to follow Tufte's principles of visualizations. Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1. Customize the box, grid, spines and ticks to match the requirements of this data.
f, ax = plt.subplots(2, 2, sharey=True, figsize=(25, 7))

for n in range(1, 5):
    plt.subplot(2, 2, n)
    plt.tick_params(top=False, right=False)
    plt.tight_layout()

for m in range(1, 5):
    if m % 2 != 0:
        plt.subplot(2, 2, m)
        plt.ylabel('Sun Spot Count')
    if m == 3 or m == 4:
        plt.subplot(2, 2, m)
        plt.xlabel('Time')

plt.subplot(2, 2, 1)
plt.plot(years[0:101], ssc[0:101])
plt.xlim(1700.5, 1800.5)
plt.ylim(0, 200)
plt.title('Sun Spot Count vs. Time')

plt.subplot(2, 2, 2)
plt.plot(years[100:201], ssc[100:201])
plt.xlim(1800.5, 1900.5)
plt.ylim(0, 200)
plt.tick_params(labelleft=False)

plt.subplot(2, 2, 3)
plt.plot(years[200:301], ssc[200:301])
plt.xlim(1900.5, 2000.5)
plt.ylim(0, 200)

plt.subplot(2, 2, 4)
plt.plot(years[300:315], ssc[300:315])
plt.xlim(2000.5, 2015)
plt.ylim(0, 200)
plt.tick_params(labelleft=False)

plt.tight_layout()
assert True  # leave for grading
assignments/assignment04/MatplotlibEx01.ipynb
bjshaw/phys202-2015-work
mit
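As an aside — an assumption about intent, not part of the assignment — the manual hiding of inner tick labels above can also be left to matplotlib's shared axes, which hide them automatically:

f, axes = plt.subplots(2, 2, sharey=True, figsize=(25, 7))
ranges = [(0, 101), (100, 201), (200, 301), (300, 315)]
for ax, (lo, hi) in zip(axes.flat, ranges):
    ax.plot(years[lo:hi], ssc[lo:hi])
    ax.set_ylim(0, 200)
    ax.tick_params(top=False, right=False)
plt.tight_layout()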
Log loss over image size

Looking at the effect of image size on log loss: we want to see how much radical resizing affects our ability to classify an image.
N = y.shape[0]
logloss = lambda x: -(1./N)*np.log(x[0][x[1]])

# iterate over all the images and make a list of log losses
ilabels = np.where(labels)[1]
losses = []
for r, l in zip(y, ilabels):
    losses.append(logloss([r, l]))
losses = np.array(losses)
notebooks/preliminary_data_analysis/Log loss over image size.ipynb
Neuroglycerin/neukrill-net-work
mit
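A quick sanity check of that per-image term on hypothetical toy values (not data from this notebook): a confident correct prediction contributes almost nothing, while a confident wrong one dominates.

import numpy as np

N = 3
logloss = lambda x: -(1./N)*np.log(x[0][x[1]])
probs = np.array([0.9, 0.05, 0.05])
print(logloss([probs, 0]))  # true class predicted at 0.9  -> ~0.035
print(logloss([probs, 1]))  # true class predicted at 0.05 -> ~1.0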
Getting an array of original image sizes:
sizes = []
for i in dataset.X:
    sizes.append(np.sqrt(i.size))
sizes = np.array(sizes)
notebooks/preliminary_data_analysis/Log loss over image size.ipynb
Neuroglycerin/neukrill-net-work
mit
Then we can just make a scatter plot:
plt.scatter(sizes, losses)
plt.grid()
notebooks/preliminary_data_analysis/Log loss over image size.ipynb
Neuroglycerin/neukrill-net-work
mit
There doesn't appear to be a clear relationship between image size and log loss. We can compute a rolling average of the log loss over the plot above and see where most of it lies:
ravg = np.zeros(450)
for i in range(450):
    s = losses[(sizes > i-50) * (sizes < i+50)]
    # apply window function
    s = s*np.hanning(s.size)
    if s.size != 0:
        ravg[i] = np.mean(s)
    else:
        ravg[i] = 0
plt.plot(range(450), ravg)
notebooks/preliminary_data_analysis/Log loss over image size.ipynb
Neuroglycerin/neukrill-net-work
mit
So it looks like the log loss per image is higher at larger image sizes. However, most of the total log loss will come from the smaller image sizes anyway, simply because there are more images there. A rolling sum shows this:
rsum = np.zeros(450)
for i in range(450):
    s = losses[(sizes > i-30) * (sizes < i+30)]
    # apply window function
    s = s*np.hanning(s.size)
    if s.size != 0:
        rsum[i] = np.sum(s)
    else:
        rsum[i] = 0
plt.plot(range(450), rsum)
notebooks/preliminary_data_analysis/Log loss over image size.ipynb
Neuroglycerin/neukrill-net-work
mit
So, cumulatively we should probably worry more about the lower image sizes anyway.

Linear regression

Before finishing this notebook, we should run a linear regression on this data and see if we can fit a line relating image size to log loss. Using sklearn due to familiarity.
import sklearn.linear_model

lreg = sklearn.linear_model.LinearRegression()
lreg.fit(sizes.reshape(-1, 1), losses)

plt.scatter(sizes, losses)
plt.plot(sizes, lreg.predict(sizes.reshape(-1, 1)))
notebooks/preliminary_data_analysis/Log loss over image size.ipynb
Neuroglycerin/neukrill-net-work
mit
So there is a very small correlation between image size and log loss.
lreg.coef_[0]
notebooks/preliminary_data_analysis/Log loss over image size.ipynb
Neuroglycerin/neukrill-net-work
mit
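To put a number on "very small", one hedged follow-up (not in the original notebook) is to compute the Pearson correlation and its p-value on the same sizes and losses arrays:

from scipy import stats

r, p = stats.pearsonr(sizes, losses)
print('r = {:.3f}, p = {:.3g}'.format(r, p))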
Does the parallel model do better?

We've trained a model with parallel streams, which should in theory do better at this kind of task, as it has more information about the sizes of the different classes.
import numpy as np
import theano
import pylearn2.config.yaml_parse
import pylearn2.utils.serial
import neukrill_net.utils

settings = neukrill_net.utils.Settings("settings.json")
run_settings = neukrill_net.utils.load_run_settings(
    "run_settings/parallel_conv.json", settings, force=True)
model = pylearn2.utils.serial.load(run_settings['pickle abspath'])

# loading the data
yaml_string = neukrill_net.utils.format_yaml(run_settings, settings)
proxd = pylearn2.config.yaml_parse.load(yaml_string,
                                        instantiate=False).keywords['dataset']
proxd.keywords['force'] = True
proxd.keywords['training_set_mode'] = 'validation'
dataset = pylearn2.config.yaml_parse._instantiate(proxd)

# pick a batch size that divides the dataset evenly
batch_size = 500
while len(dataset.X) % batch_size != 0:
    batch_size += 1
n_batches = int(len(dataset.X)/batch_size)

# set this batch size
model.set_batch_size(batch_size)

# compile Theano function
X = model.get_input_space().make_batch_theano()
Y = model.fprop(X)
f = theano.function(X, Y)

%%time
y = np.zeros((len(dataset.X), len(settings.classes)))
i = 0
iterator = dataset.iterator(batch_size=batch_size, num_batches=n_batches,
                            mode='even_sequential')
for batch in iterator:
    print(i)
    y[i*batch_size:(i+1)*batch_size] = f(batch[0], batch[1])
    i += 1

losses = []
for r, l in zip(y, ilabels):
    losses.append(logloss([r, l]))
losses = np.array(losses)
notebooks/preliminary_data_analysis/Log loss over image size.ipynb
Neuroglycerin/neukrill-net-work
mit
The scatter plot looks slightly better at the higher values:
plt.scatter(sizes, losses)
plt.grid()

ravg = np.zeros(450)
for i in range(450):
    s = losses[(sizes > i-50) * (sizes < i+50)]
    # apply window function
    s = s*np.hanning(s.size)
    if s.size != 0:
        ravg[i] = np.mean(s)
    else:
        ravg[i] = 0
plt.plot(range(450), ravg)

rsum = np.zeros(450)
for i in range(450):
    s = losses[(sizes > i-30) * (sizes < i+30)]
    # apply window function
    s = s*np.hanning(s.size)
    if s.size != 0:
        rsum[i] = np.sum(s)
    else:
        rsum[i] = 0
plt.plot(range(450), rsum)
notebooks/preliminary_data_analysis/Log loss over image size.ipynb
Neuroglycerin/neukrill-net-work
mit
Data prep

As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!

Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
from sklearn.preprocessing import LabelBinarizer

labelBinarizer = LabelBinarizer()
labelBinarizer.fit(labels)
labels_vecs = labelBinarizer.transform(labels)
transfer-learning/.ipynb_checkpoints/Transfer_Learning-checkpoint.ipynb
swirlingsand/deep-learning-foundations
mit
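A small usage check — hedged, not part of the exercise — to confirm the encoding round-trips (assumes labels and labelBinarizer from the cell above, with more than two classes so the output is one column per class):

print(labels_vecs.shape)            # (n_samples, n_classes)
print(labelBinarizer.classes_[:5])  # the learned label vocabulary
print(labelBinarizer.inverse_transform(labels_vecs[:3]))  # back to strings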
Multi-worker training with Estimator

<table class="tfo-notebook-buttons" align="left">
  <td><a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/multi_worker_with_estimator"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
  <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
  <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
  <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>

Overview

Note: While Estimators can be used with the tf.distribute API, it is recommended to use Keras with tf.distribute instead; see Multi-worker training with Keras. Estimator support for tf.distribute.Strategy is limited.

This tutorial shows how tf.distribute.Strategy can be used for distributed multi-worker training with tf.estimator. If you write your code using tf.estimator and are interested in scaling beyond a single high-performance machine, this tutorial is for you.

Before getting started, please read the distribution strategy guide. The multi-GPU training tutorial, which uses the same model as this tutorial, is also relevant.

Setup

First, set up TensorFlow and import the necessary packages.
import tensorflow_datasets as tfds
import tensorflow as tf

tfds.disable_progress_bar()

import os, json
site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Note: Starting with TF2.4, multi-worker mirroring fails with an Estimator when eager execution is enabled (the default). The error in TF2.4 is TypeError: cannot pickle '_thread.lock' object; see issue #46556 for details. The workaround is to disable eager execution.
tf.compat.v1.disable_eager_execution()
site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Input function

This tutorial uses the MNIST dataset from TensorFlow Datasets. The code here is similar to the multi-GPU training tutorial, with one key difference: when using Estimator for multi-worker training, the dataset must be sharded by the number of workers for the model to converge. The input data is sharded by worker index, so that each worker processes a distinct 1/num_workers portion of the dataset.
BUFFER_SIZE = 10000
BATCH_SIZE = 64

def input_fn(mode, input_context=None):
    datasets, info = tfds.load(name='mnist',
                               with_info=True,
                               as_supervised=True)
    mnist_dataset = (datasets['train'] if mode == tf.estimator.ModeKeys.TRAIN
                     else datasets['test'])

    def scale(image, label):
        image = tf.cast(image, tf.float32)
        image /= 255
        return image, label

    if input_context:
        mnist_dataset = mnist_dataset.shard(input_context.num_input_pipelines,
                                            input_context.input_pipeline_id)
    return mnist_dataset.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
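As a minimal, self-contained illustration of the Dataset.shard semantics used above (an aside, run in a fresh eager session rather than this tutorial's graph mode): worker i of n sees every n-th element.

import tensorflow as tf

ds = tf.data.Dataset.range(8)
print(list(ds.shard(num_shards=2, index=0).as_numpy_iterator()))  # [0, 2, 4, 6]
print(list(ds.shard(num_shards=2, index=1).as_numpy_iterator()))  # [1, 3, 5, 7]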
ํ›ˆ๋ จ์„ ์ˆ˜๋ ด์‹œํ‚ค๊ธฐ ์œ„ํ•œ ๋˜ ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•์œผ๋กœ ๊ฐ ์ž‘์—…์ž์—์„œ ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ์ œ๊ฐ๊ธฐ ๋‹ค๋ฅธ ์‹œ๋“œ ๊ฐ’์œผ๋กœ ์…”ํ”Œํ•˜๋Š” ๊ฒƒ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์ค‘ ์ž‘์—…์ž ๊ตฌ์„ฑ ๋‹ค์ค‘ GPU ํ›ˆ๋ จ ํŠœํ† ๋ฆฌ์–ผ๊ณผ ๋น„๊ตํ•  ๋•Œ ๊ฐ€์žฅ ํฐ ์ฐจ์ด ์ค‘ ํ•˜๋‚˜๋Š” ๋‹ค์ค‘ ์›Œ์ปค๋ฅผ ์„ค์ •ํ•˜๋Š” ๋ถ€๋ถ„์ž…๋‹ˆ๋‹ค. TF_CONFIG ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋Š” ํด๋Ÿฌ์Šคํ„ฐ๋ฅผ ์ด๋ฃจ๋Š” ๊ฐ ์›Œ์ปค์— ํด๋Ÿฌ์Šคํ„ฐ ์„ค์ •์„ ์ง€์ •ํ•˜๋Š” ํ‘œ์ค€ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. TF_CONFIG์—๋Š” cluster์™€ task๋ผ๋Š” ๋‘ ๊ฐ€์ง€ ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. cluster๋Š” ์ „์ฒด ํด๋Ÿฌ์Šคํ„ฐ, ๋‹ค์‹œ ๋งํ•ด ํด๋Ÿฌ์Šคํ„ฐ์— ์†ํ•œ ์ž‘์—…์ž์™€ ๋งค๊ฐœ๋ณ€์ˆ˜ ์„œ๋ฒ„์— ๋Œ€ํ•œ ์ •๋ณด๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. task๋Š” ํ˜„์žฌ ์ž‘์—…์— ๋Œ€ํ•œ ์ •๋ณด๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ ๊ตฌ์„ฑ ์š”์†Œ cluster๋Š” ๋ชจ๋“  ์ž‘์—…์ž ๋ฐ ๋งค๊ฐœ๋ณ€์ˆ˜ ์„œ๋ฒ„์— ๋Œ€ํ•ด ๋™์ผํ•˜๋ฉฐ ๋‘ ๋ฒˆ์งธ ๊ตฌ์„ฑ ์š”์†Œ task๋Š” ๊ฐ ์ž‘์—…์ž ๋ฐ ๋งค๊ฐœ๋ณ€์ˆ˜ ์„œ๋ฒ„์—์„œ ๋‹ค๋ฅด๋ฉฐ ๊ณ ์œ ํ•œ type ๋ฐ index๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ์—์„œ๋Š” ์ž‘์—…์˜ type์ด worker์ด๊ณ , ์ž‘์—…์˜ index๋Š” 0์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๊ธฐ ์œ„ํ•ด ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‘ ๊ฐœ์˜ ์ž‘์—…์ž๋ฅผ localhost์— ๋„์šธ ๋•Œ์˜ TF_CONFIG๋ฅผ ๋ณด์—ฌ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค. ์‹ค์ œ๋กœ๋Š” ์™ธ๋ถ€ IP ์ฃผ์†Œ ๋ฐ ํฌํŠธ์— ์—ฌ๋Ÿฌ ์ž‘์—…์ž๋ฅผ ๋งŒ๋“ค๊ณ  ๊ฐ ์ž‘์—…์ž์— ๋Œ€ํ•ด TF_CONFIG๋ฅผ ์ ์ ˆํ•˜๊ฒŒ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด index ์ž‘์—…์„ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค. ๊ฒฝ๊ณ : Colab์—์„œ ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ๋งˆ์„ธ์š”. TensorFlow์˜ ๋Ÿฐํƒ€์ž„์€ ์ง€์ •๋œ IP ์ฃผ์†Œ ๋ฐ ํฌํŠธ์—์„œ gRPC ์„œ๋ฒ„๋ฅผ ์ƒ์„ฑํ•˜๋ ค๊ณ  ์‹œ๋„ํ•˜์ง€๋งŒ ์‹คํŒจํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. ๋‹จ์ผ ์‹œ์Šคํ…œ์—์„œ ์—ฌ๋Ÿฌ ์ž‘์—…์ž๋ฅผ ํ…Œ์ŠคํŠธ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์˜ ์˜ˆ๋Š” ์ด ํŠœํ† ๋ฆฌ์–ผ์˜ keras ๋ฒ„์ „์„ ์ฐธ์กฐํ•˜์„ธ์š”. os.environ['TF_CONFIG'] = json.dumps({ 'cluster': { 'worker': ["localhost:12345", "localhost:23456"] }, 'task': {'type': 'worker', 'index': 0} }) ๋ชจ๋ธ ์ •์˜ํ•˜๊ธฐ ํ›ˆ๋ จ์„ ์œ„ํ•˜์—ฌ ๋ ˆ์ด์–ด์™€ ์˜ตํ‹ฐ๋งˆ์ด์ €, ์†์‹ค ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•˜์„ธ์š”. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‹ค์ค‘ GPU ํ›ˆ๋ จ ํŠœํ† ๋ฆฌ์–ผ๊ณผ ๋น„์Šทํ•˜๊ฒŒ ์ผ€๋ผ์Šค ๋ ˆ์ด์–ด๋กœ ๋ชจ๋ธ์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค.
LEARNING_RATE = 1e-4

def model_fn(features, labels, mode):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu',
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10)
    ])
    logits = model(features, training=False)

    if mode == tf.estimator.ModeKeys.PREDICT:
        predictions = {'logits': logits}
        # EstimatorSpec takes the mode, not the labels, in prediction mode
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)

    optimizer = tf.compat.v1.train.GradientDescentOptimizer(
        learning_rate=LEARNING_RATE)
    loss = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True,
        reduction=tf.keras.losses.Reduction.NONE)(labels, logits)
    loss = tf.reduce_sum(loss) * (1. / BATCH_SIZE)
    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss)

    return tf.estimator.EstimatorSpec(
        mode=mode,
        loss=loss,
        train_op=optimizer.minimize(
            loss, tf.compat.v1.train.get_or_create_global_step()))
site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
Note: ์ด ์˜ˆ์ œ์—์„œ๋Š” ํ•™์Šต๋ฅ ์ด ๊ณ ์ •๋˜์–ด์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์‹ค์ œ๋กœ๋Š” ์ „์—ญ ๋ฐฐ์น˜ ํฌ๊ธฐ์— ๋”ฐ๋ผ ํ•™์Šต๋ฅ ์„ ์กฐ์ •ํ•ด์•ผ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. MultiWorkerMirroredStrategy ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์œ„ํ•˜์—ฌ tf.distribute.experimental.MultiWorkerMirroredStrategy์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. MultiWorkerMirroredStrategy๋Š” ๋ชจ๋“  ์›Œ์ปค์˜ ๊ฐ ์žฅ๋น„์—, ๋ชจ๋ธ์˜ ๋ ˆ์ด์–ด์— ์žˆ๋Š” ๋ชจ๋“  ๋ณ€์ˆ˜์˜ ๋ณต์‚ฌ๋ณธ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์ด ์ „๋žต์€ CollectiveOps๋ผ๋Š” ์ˆ˜์ง‘์„ ์œ„ํ•œ ํ†ต์‹ ์šฉ ํ…์„œํ”Œ๋กœ ์—ฐ์‚ฐ์„ ์‚ฌ์šฉํ•˜์—ฌ ๊ทธ๋ž˜๋””์–ธํŠธ๋ฅผ ๋ชจ์œผ๊ณ , ๋ณ€์ˆ˜๋“ค์˜ ๊ฐ’์„ ๋™์ผํ•˜๊ฒŒ ๋งž์ถฅ๋‹ˆ๋‹ค. ํ…์„œํ”Œ๋กœ๋กœ ๋ถ„์‚ฐ ํ›ˆ๋ จํ•˜๊ธฐ์— ์ด ์ „๋žต์— ๋Œ€ํ•œ ๋” ์ž์„ธํ•œ ๋‚ด์šฉ์ด ์žˆ์Šต๋‹ˆ๋‹ค.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
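As a small hedged check (not in the original tutorial), the strategy reports how many replicas take part in synchronous training — with the two-worker TF_CONFIG above this would be 2, one per worker; without TF_CONFIG set, it is 1 (a single local replica):

print(strategy.num_replicas_in_sync)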
๋ชจ๋ธ ํ›ˆ๋ จ ๋ฐ ํ‰๊ฐ€ํ•˜๊ธฐ ๋‹ค์Œ์œผ๋กœ, ์ถ”์ •๊ธฐ์˜ RunConfig์— ๋ถ„์‚ฐ ์ „๋žต์„ ์ง€์ •ํ•˜์‹ญ์‹œ์˜ค. ๊ทธ๋ฆฌ๊ณ  tf.estimator.train_and_evaluate๋กœ ํ›ˆ๋ จ ๋ฐ ํ‰๊ฐ€๋ฅผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” train_distribute๋กœ๋งŒ ์ „๋žต์„ ์ง€์ •ํ•˜์˜€๊ธฐ ๋•Œ๋ฌธ์— ํ›ˆ๋ จ ๊ณผ์ •๋งŒ ๋ถ„์‚ฐ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. eval_distribute๋ฅผ ์ง€์ •ํ•˜์—ฌ ํ‰๊ฐ€๋„ ๋ถ„์‚ฐ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
config = tf.estimator.RunConfig(train_distribute=strategy)

classifier = tf.estimator.Estimator(
    model_fn=model_fn, model_dir='/tmp/multiworker', config=config)
tf.estimator.train_and_evaluate(
    classifier,
    train_spec=tf.estimator.TrainSpec(input_fn=input_fn),
    eval_spec=tf.estimator.EvalSpec(input_fn=input_fn)
)
site/ko/tutorials/distribute/multi_worker_with_estimator.ipynb
tensorflow/docs-l10n
apache-2.0
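A hedged variant of the config above (not executed in this tutorial) that also distributes evaluation, using the eval_distribute argument mentioned earlier:

config = tf.estimator.RunConfig(train_distribute=strategy,
                                eval_distribute=strategy)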
If you examine the source code of sklearn.naive_bayes.GaussianNB, you'll see that internally it finds a best-fit Gaussian for each class, and uses these as a smooth description of each distribution. We can use the internals of GaussianNB to visualize those distributions:
import numpy as np
import pylab as pl
from sklearn.naive_bayes import GaussianNB

gnb = GaussianNB().fit(X, y)
print(gnb.theta_)  # centers of the distributions
print(gnb.sigma_)  # widths of the distributions

# create a grid on which to evaluate the distributions
grid = np.linspace(-3, 6, 100)
xgrid, ygrid = np.meshgrid(grid, grid)
Xgrid = np.vstack([xgrid.ravel(), ygrid.ravel()]).T

# now evaluate and plot the probability grid
prob_grid = np.exp(gnb._joint_log_likelihood(Xgrid))
for i, c in enumerate(['blue', 'red']):
    pl.contour(xgrid, ygrid, prob_grid[:, i].reshape((100, 100)),
               3, colors=c)

# plot the points as above
pl.scatter(X[:, 0], X[:, 1], c=y, linewidth=0)
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
When a new item is to be classified, its probability is evaluated under each distribution, and the distribution with the highest probability wins. We can now see why this is called "naive": what if the distribution is not well fit by an uncorrelated Gaussian? In the following, we'll develop a classifier which addresses this issue by fitting a sum of Gaussians to each distribution. This should lead to improved classifications. For our data, we'll use photometric observations of stars and quasars from the Sloan Digital Sky Survey.

Loading Data

This tutorial assumes the notebook is within the tutorial directory structure, and that the fetch_data.py script has been run to download the data locally. If the data is in a different location, you can change the DATA_HOME variable below.
import os
import numpy as np

DATA_HOME = os.path.abspath('../data/sdss_colors')

train_data = np.load(os.path.join(DATA_HOME,
                                  'sdssdr6_colors_class_train.npy'))
test_data = np.load(os.path.join(DATA_HOME,
                                 'sdssdr6_colors_class.200000.npy'))
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
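A quick, hedged way to inspect what was just loaded — the field names printed below are assumptions based on the columns used later ('u-g', 'g-r', 'r-i', 'i-z', 'redshift'):

print(train_data.dtype.names)  # field names of the structured array
print(train_data.shape, test_data.shape)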
Setting Up the Data
# set the number of training points: using all points leads to a very
# long running time. We'll start with 10000 training points. This
# can be increased if desired.
Ntrain = 10000
#Ntrain = len(train_data)

# Split training data into training and cross-validation sets
np.random.seed(0)
np.random.shuffle(train_data)
train_data = train_data[:Ntrain]

N_crossval = Ntrain // 5
# take the cross-validation points *before* truncating train_data,
# so that the two sets don't overlap
crossval_data = train_data[-N_crossval:]
train_data = train_data[:-N_crossval]

# construct training data
X_train = np.zeros((train_data.size, 4), dtype=float)
X_train[:, 0] = train_data['u-g']
X_train[:, 1] = train_data['g-r']
X_train[:, 2] = train_data['r-i']
X_train[:, 3] = train_data['i-z']
y_train = (train_data['redshift'] > 0).astype(int)
Ntrain = len(y_train)

# construct cross-validation data
X_crossval = np.zeros((crossval_data.size, 4), dtype=float)
X_crossval[:, 0] = crossval_data['u-g']
X_crossval[:, 1] = crossval_data['g-r']
X_crossval[:, 2] = crossval_data['r-i']
X_crossval[:, 3] = crossval_data['i-z']
y_crossval = (crossval_data['redshift'] > 0).astype(int)
Ncrossval = len(y_crossval)
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
Just for good measure, let's plot the first two dimensions of the data to see a bit of what we're working with:
pl.scatter(X_train[:, 0], X_train[:, 1], c=y_train, s=4, linewidths=0)
pl.xlim(-2, 5)
pl.ylim(-1, 3)
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
We have training distributions which are fairly well-separated. Note, though, that these distributions are not well-approximated by a single Gaussian! Still, Gaussian Naive Bayes can be a useful classifier.

Exercise 1: Recreating Gaussian Naive Bayes

Gaussian Naive Bayes is a very fast estimator, and predicted labels can be computed as follows:
from sklearn.naive_bayes import GaussianNB

gnb = GaussianNB()
gnb.fit(X_train, y_train)
y_gnb = gnb.predict(X_crossval)
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
Part 1

Here we will use Gaussian Mixture Models to duplicate our Gaussian Naive Bayes results from earlier. You'll create two sklearn.mixture.GMM classifier instances, named clf_0 and clf_1. Each should be initialized with a single component and diagonal covariance (hint: look at the doc string of sklearn.mixture.GMM to see how to set this up). The results should be compared to Gaussian Naive Bayes to check that they're correct.
from sklearn.mixture import gmm

# Objects to create:
#  - clf_0 : trained on the portion of the training data with y == 0
#  - clf_1 : trained on the portion of the training data with y == 1
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
If the notebook is within the tutorial directory structure, the following command will load the solution:
%load soln/01-01.py

# 01-01.py
clf_0 = gmm.GMM(1, 'diag')
i0 = (y_train == 0)
clf_0.fit(X_train[i0])

clf_1 = gmm.GMM(1, 'diag')
i1 = (y_train == 1)
clf_1.fit(X_train[i1])
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
Part 2

Next we must construct the prior: the fraction of training points of each class.
# variables to compute:
#  - prior0 : fraction of training points with y == 0
#  - prior1 : fraction of training points with y == 1
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
If the notebook is within the tutorial directory structure, the following command will load the solution:
%load soln/01-02.py
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
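Since the solution file isn't reproduced here, this is a hedged sketch consistent with the definition above (soln/01-02.py may differ in detail):

prior0 = np.sum(y_train == 0) / float(Ntrain)
prior1 = np.sum(y_train == 1) / float(Ntrain)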
Part 3

Now we use the prior and the classifiers to compute the log-likelihoods of the cross-validation points. The log-likelihood for a test point x is given by

logL(x) = clf.score(x) + log(prior)

You can use the function np.log() to compute the logarithm of the prior.
# variables to compute:
#  - logL : array, shape = (2, Ncrossval)
#    logL[0] is the log-likelihood for y == 0
#    logL[1] is the log-likelihood for y == 1
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
If the notebook is within the tutorial directory structure, the following command will load the solution:
%load soln/01-03.py
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
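Again, the solution file isn't shown here; a hedged sketch following the formula above, assuming the old scikit-learn GMM.score, which returns per-sample log-likelihoods (soln/01-03.py may differ in detail):

logL = np.zeros((2, Ncrossval))
logL[0] = clf_0.score(X_crossval) + np.log(prior0)
logL[1] = clf_1.score(X_crossval) + np.log(prior1)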
Once logL is computed, the predicted value for each sample is the index with the largest log-likelihood.
y_pred = np.argmax(logL, 0)
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
Comparing to GNB

Now we compare our predicted labels to y_gnb, the Gaussian Naive Bayes labels computed above. We'll use the built-in classification report function in sklearn.metrics, which computes the precision, recall, and F1-score for each class.
from sklearn import metrics

print("-----------------------------------------------------")
print("One-component Gaussian Mixture:")
print(metrics.classification_report(y_crossval, y_pred,
                                    target_names=['stars', 'QSOs']))

print("-----------------------------------------------------")
print("Gaussian Naive Bayes:")
print(metrics.classification_report(y_crossval, y_gnb,
                                    target_names=['stars', 'QSOs']))
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
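As another hedged way to compare the two classifiers directly (not part of the exercise), confusion matrices from the same sklearn.metrics module:

print(metrics.confusion_matrix(y_crossval, y_pred))  # GMM-based classifier
print(metrics.confusion_matrix(y_crossval, y_gnb))   # Gaussian Naive Bayes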
In theory, the results of these two should be identical. In practice, the two algorithms approach the fits differently, leading to slightly different results. The precision, recall, and F1-score should match to within ~0.01. If this is the case, then we can go on and experiment with a more complicated model.

Exercise 2: Parameter Optimization

Now we'll take some time to experiment with the hyperparameters of our GMM Bayesian classifier. These include the number of components for each model and the covariance type for each model (i.e. parameters that are decided prior to fitting the model on the training data). Note that for a large number of components, the fit can take a long time and will depend on the starting position. Use the documentation string of GMM to determine the options for covariance. Note that there are tools within scikit-learn to perform hyperparameter estimation, in the module sklearn.grid_search. Here we will be doing it by hand; a sketch of such a loop follows the stub below.

Part 1

The first part of this exercise is to re-implement the GMM estimator above in a single function which allows the number of clusters and covariance type to be specified. To follow the scikit-learn syntax, this should be a class with methods like fit(), predict(), predict_proba(), etc. That would be an interesting project (and could even be a useful contribution to scikit-learn!). For now, we'll take a shortcut and just define a stand-alone function.
# finish this function. For n_components=1 and
# covariance_type='diag', it should give results
# identical to what we saw above.
# The function should return the predicted labels y_pred.

def GMMBayes(X_test, n_components, covariance_type):
    pass
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit
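A hedged sketch of the by-hand search described above, reusing the (to-be-completed) GMMBayes function with the F1 score as the figure of merit — the component counts and covariance types tried here are illustrative choices, not the official solution:

best = (None, None, 0.0)
for n_components in (1, 2, 4, 8):
    for covariance_type in ('spherical', 'diag', 'tied', 'full'):
        y_pred = GMMBayes(X_crossval, n_components, covariance_type)
        f1 = metrics.f1_score(y_crossval, y_pred)
        if f1 > best[2]:
            best = (n_components, covariance_type, f1)
print(best)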
If the notebook is within the tutorial directory structure, the following command will load the solution:
%load soln/01-04.py
AstroML/notebooks/10_exercise01.ipynb
diego0020/va_course_2015
mit