The fourth way to run Python, which is fairly complex-looking at first but becomes much more useful the more you program, is through an IDE. When you installed Anaconda you got an IDE called "Spyder" along with it. Go ahead and start it up and have a look around. Open the Python file you ran at the command line a while ago, and run it in Spyder. Spyder is also a non-interactive version of Python - when you run a program, it starts over fresh from the beginning and forgets everything that you ran before. However, like all IDEs, Spyder has a debugger, which means that you can stop the program in the middle and have a closer look at what it is doing!

## Getting around your filesystem from Python

You have seen that Spyder asks you questions about your "working directory", and that it starts you with a temporary file in some crazy location you don't want to remember. This is a good moment to talk about how Python works with the filesystem that we learned about in week 1. Python has a bunch of tools for interacting with your operating system, and they are all in the `os` module. We can get access to it with the following line of code:
import os
unit1/Running_Python.ipynb
DiXiT-eu/collatex-tutorial
gpl-3.0
We can list the contents of any directory by using the os.listdir() method. Without any arguments, it gives us the contents of the directory we are standing in.
os.listdir()
We can also list the contents of any other directory. For example, the Downloads folder on my Mac is in the folder with my short name (tla), which is in a folder called Users, which is right on the Macintosh HD. So I say /Users/tla/Downloads to get at it. That is: Starting at the root (/), look in Users; starting at Users (/Users/), look in tla; starting at tla (/Users/tla/), look in Downloads. If you are on Windows you will probably want something like /Users/Tara Andrews/Downloads instead. Let's print out each of the filenames we find in that directory.
for f in os.listdir("/Users/tla/Downloads"):
    print(f)
Python (or any program running on your computer for that matter) always runs from some specific place in this filesystem - that is, some specific directory on your drive. To see where you are running now, you can ask for the current working directory, like so. Try this command in Python at the command line, in IPython, and in Spyder, and see what you get.
os.getcwd()
This will look a little different on Mac vs. Windows, because they use different notations (called the path) for their directory structure:

    C:\Users\Tara L Andrews\Documents\IPython Notebooks

vs.

    /Users/tla/Documents/2014 FS

The two important lessons to take from this are:

- If you are not sure where you are, then `os.getcwd()` will tell you.
- If you value your sanity, don't use `/` or `\` when you name your folders and files.

Once you know where you are on the filesystem, you can change directory. This is accomplished with the `os.chdir()` method. For example:
os.chdir("/Users/tla")       # go to the directory tla in the directory Users at the base of the drive
os.chdir("..")               # go one directory down
print(os.getcwd())           # see where we are
os.chdir("./tla/Downloads")  # go to the directory Downloads in the directory tla that is here where we are
print(os.getcwd())           # see where we are now
You will remember that there are a few special names when you look at files and directories:

- `.` - the directory I'm standing in
- `..` - the directory that holds the directory I'm standing in
- `/` - (all by itself) the base ("root") directory

IF YOU ARE ON WINDOWS: The `/` is a little more complicated than that if you are on Windows. As you will have noticed, you often see `\` instead of `/`, but Python (like Powershell) lets you call it `/` anyway. Moreover, it is the base directory of the drive you are currently looking at (e.g. C:, D:, or whatever.)

IF YOU ARE ON MAC OR LINUX: Unix machines (including Macs) don't deal with drives in the same way, so `/` just means the root of your OS filesystem.

So by saying `os.chdir("..")` we have moved down a directory. We could keep doing this all the way to the bottom, if we felt like it. But generally it is a good idea to stay in your home directory if you have the choice. Here is how Python knows what your home directory is:
print(os.getenv("HOME", "I don't have a HOME variable, so I must be on Windows"))
print(os.getenv("USERPROFILE", "I don't have a USERPROFILE variable, so I must be on Mac or Linux"))
Once you know which variable contains your home directory, you can save it into a variable and use it.
my_home = os.getenv("HOME")
for f in os.listdir(my_home):
    print(f)
Now, if I want to, I can look to see what we have in my Projects folder - this is where I usually keep the code I've worked on. I can do this by joining the directory name ('Projects') to my home directory (which I've saved in my_home.)
doc_path = os.path.join(my_home, "Projects")
print("The join operation has given me the path", doc_path)
os.listdir(doc_path)
Using the os.path.join command is a very good idea when you don't know what kind of computer might be running your script - it means that you are letting Python deal with the question of whether the path separator is / or \ or : or % or whatever. But you don't have to use the os.path.join command if you know that / will work; you can also just type the path directly.
os.listdir( "/Users/tla/Dropbox/book" ) # Look at the 'book' folder in the Dropbox folder in my home directory
So to sum up, here are some equivalences between the command line and Python:

- `cd DIRECTORY` ==> `os.chdir("DIRECTORY")`
- `ls DIRECTORY` ==> `os.listdir("DIRECTORY")`
- `pwd` ==> `os.getcwd()`
- `echo $VAR` ==> `os.getenv("VAR")`

## Reading files

Now that we know how to navigate around our directories, we might also want to look inside some files! Here's how we do that.
fh = open(my_home + "/Dropbox/beef_stew.txt")  # Open the file
contents = fh.read()                           # Read its contents
fh.close()                                     # Close the file
print(contents)                                # Do something with the contents
There are three steps to dealing with files from Python:

1. Open it
2. Read (or write) whatever you need
3. Close it

When you open a file, you make something called a filehandle. The filehandle, well, handles the file - does the reading, writing, closing, etc. that you need to be done. When you read a file, you have two choices: read in the entire thing, or read it line-by-line. If the file is small, or if you don't want to process it line-by-line, then you will read it all in one go. If it makes sense to process one line at a time (and especially if it is a huge file), then by all means do so. So, for example, we could add line numbers to what is in this file.
os.chdir(my_home)
fh = open("Dropbox/beef_stew.txt")
counter = 1
for line in fh:
    print("%d: %s" % (counter, line))
    counter += 1
fh.close()
Huh, great, but why did that double-space all of a sudden? Here's what happened: every line in the file ends with a line break, where I pressed Return. The print function adds a line break after everything it prints, by default. Since the lines in the file already have one, if we are not careful we will also be double-spacing the printout! In order to avoid that, we end the print statement with the parameter `end=""`, which says "Instead of the newline you would normally print at the end, print nothing instead."
fh = open("Dropbox/beef_stew.txt")
counter = 1
for line in fh:
    print("%d: %s" % (counter, line), end="")
    counter += 1
fh.close()
## Writing to files

When we open a filehandle on a particular file, that filehandle can either be reading from the file or writing to it (but not both, at least not until we understand the mechanics and the dangers of doing that.) When we write to a file that already exists, there are two options: either we will replace whatever was there before, or we will add to it. Let's try it, first to a new file and then adding something to that file.

When we use the open() function, it takes two parameters: the filename and a letter that indicates whether we want to read or write or what. If we don't give any letter, it assumes we meant 'r' for 'read'. The options are:

- `r` - open the file for reading
- `w` - empty the file and open it for writing
- `a` - open the file for writing (appending) to the end; do NOT empty it.

We can see this in action, by opening the old recipe for reading and a new recipe for writing. Where we use .read() on the old filehandle, we will use .write() on the new filehandle. The write function is a lot like print, only it does NOT assume that you want a line break at the end. In this case that is pretty convenient, since we don't!
old_recipe = open("Dropbox/beef_stew.txt")
new_recipe = open("Dropbox/numbered_beef_stew.txt", "w")
counter = 1
for line in old_recipe:
    new_recipe.write("%d: %s" % (counter, line))
    counter += 1  # this is the same as counter = counter + 1
old_recipe.close()
new_recipe.close()
So let's look at the new recipe! We open it for reading this time, instead of writing. And instead of reading it line by line, we will read it all in one go, so that we don't have to worry about what print does with line endings.
new_recipe = open("Dropbox/numbered_beef_stew.txt")
contents = new_recipe.read()
new_recipe.close()
print(contents)
Let's say we forgot a step at the end, and want to add it. When we write a new line to the file, since we are using .write() and not print(), we have to make sure to add the carriage return at the end of the line. In Python this can usually be done with the term "\n", no matter which sort of computer you are on. (This fact can confuse nerds used to the old ways of things, but is convenient for you!) Finally, although it is always important to close the files we open, it is especially important if we are writing to the file. If we forget to close a file we're writing to, then it's possible that not everything will get written!
new_recipe = open("Dropbox/numbered_beef_stew.txt", "a")
new_recipe.write("18: give the leftovers to the cat\n")
new_recipe.close()
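Since forgetting to close a file we're writing to can lose data, Python also offers the `with` statement, which closes the filehandle for us automatically. Here is a small sketch of the same append, written that way (it uses a temporary scratch file instead of the Dropbox recipe, so it runs anywhere):

```python
import os
import tempfile

# A scratch file standing in for Dropbox/numbered_beef_stew.txt
scratch = os.path.join(tempfile.mkdtemp(), "numbered_beef_stew.txt")

# The "with" block closes (and flushes) the filehandle when it ends,
# even if an error occurs inside it - no explicit .close() needed
with open(scratch, "a") as new_recipe:
    new_recipe.write("18: give the leftovers to the cat\n")

# By here the file is guaranteed to be closed; read it back to check
with open(scratch) as fh:
    print(fh.read(), end="")
```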
And let's see if that worked. This time, instead of using .read() to put the file into a single variable, we will use .readlines() to put the file line by line into a list. This is useful if you want to read the file all in one go, but are going to want to do something with its contents line by line.
new_recipe = open("Dropbox/numbered_beef_stew.txt")
contents = new_recipe.readlines()
new_recipe.close()
for line in contents:
    print("-", line, end='')
How much time has passed?
d1 - d2  # d1 and d2 are strings here, so this raises a TypeError
BAMM.101x/datetime_objects.ipynb
KECB/learn
mit
Obviously that's not going to work: we can't do date operations on strings. Let's see what happens with datetime.
import datetime

d1 = datetime.date(2016,11,24)
d2 = datetime.date(2017,10,24)
print(max(d1,d2))
print(d2 - d1)
datetime objects understand time.

### The datetime library contains several useful types

- date: stores the date (month, day, year)
- time: stores the time (hours, minutes, seconds)
- datetime: stores the date as well as the time (month, day, year, hours, minutes, seconds)
- timedelta: duration between two datetime or date objects

### datetime.date
import datetime

century_start = datetime.date(2000,1,1)
today = datetime.date.today()
print(century_start, today)
print("We are", today - century_start, "days into this century")
print(type(century_start))
print(type(today))
### For a cleaner output
print("We are",(today-century_start).days,"days into this century")
### datetime.datetime
century_start = datetime.datetime(2000,1,1,0,0,0)
time_now = datetime.datetime.now()
print(century_start, time_now)
print("we are", time_now - century_start, "days, hours, minutes and seconds into this century")
#### datetime objects can check validity

A ValueError exception is raised if the object is invalid.
# February 2015 had only 28 days, so constructing this date raises a ValueError
try:
    some_date = datetime.date(2015,2,29)
except ValueError as e:
    print(e)

#some_date = datetime.date(2016,2,29)              # fine: 2016 was a leap year
#some_time = datetime.datetime(2015,2,28,23,60,0)  # also invalid: minutes must be in 0..59
### datetime.timedelta

Used to store the duration between two points in time.
century_start = datetime.datetime(2000,1,1,0,0,0)
time_now = datetime.datetime.now()
time_since_century_start = time_now - century_start
print("days since century start", time_since_century_start.days)
print("seconds since century start", time_since_century_start.total_seconds())
print("minutes since century start", time_since_century_start.total_seconds()/60)
print("hours since century start", time_since_century_start.total_seconds()/60/60)
### datetime.time
date_and_time_now = datetime.datetime.now()
time_now = date_and_time_now.time()
print(time_now)
#### You can do arithmetic operations on datetime objects

You can use timedelta objects to calculate new dates or times from a given date.
today = datetime.date.today()
five_days_later = today + datetime.timedelta(days=5)
print(five_days_later)

now = datetime.datetime.today()
five_minutes_and_five_seconds_later = now + datetime.timedelta(minutes=5, seconds=5)
print(five_minutes_and_five_seconds_later)

now = datetime.datetime.today()
five_minutes_and_five_seconds_earlier = now + datetime.timedelta(minutes=-5, seconds=-5)
print(five_minutes_and_five_seconds_earlier)
But you can't use timedelta on time objects. If you do, you'll get a TypeError exception.
time_now = datetime.datetime.now().time()  # Returns the time component (drops the day)
print(time_now)
thirty_seconds = datetime.timedelta(seconds=30)
try:
    time_later = time_now + thirty_seconds  # Bug or feature?
except TypeError as e:
    print(e)

# But this is Python, and we can always get around something by writing a new function!
# Let's write a small function to get around this problem
def add_to_time(time_object, time_delta):
    import datetime
    temp_datetime_object = datetime.datetime(500, 1, 1,
                                             time_object.hour,
                                             time_object.minute,
                                             time_object.second)
    print(temp_datetime_object)
    return (temp_datetime_object + time_delta).time()

# And test it
time_now = datetime.datetime.now().time()
thirty_seconds = datetime.timedelta(seconds=30)
print(time_now, add_to_time(time_now, thirty_seconds))
## datetime and strings

#### datetime.strptime

- datetime.strptime(): grabs time from a string and creates a date or datetime or time object
- The programmer needs to tell the function what format the string is using
- See http://pubs.opengroup.org/onlinepubs/009695399/functions/strptime.html for how to specify the format
date = '01-Apr-03'
date_object = datetime.datetime.strptime(date, '%d-%b-%y')
print(date_object)

# Unfortunately, there is no similar thing for time delta
# So we have to be creative!
bus_travel_time = '2:15:30'
hours, minutes, seconds = bus_travel_time.split(':')
x = datetime.timedelta(hours=int(hours), minutes=int(minutes), seconds=int(seconds))
print(x)

# Or write a function that will do this for a particular format
def get_timedelta(time_string):
    hours, minutes, seconds = time_string.split(':')
    import datetime
    return datetime.timedelta(hours=int(hours), minutes=int(minutes), seconds=int(seconds))
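The get_timedelta helper defined above is never actually called in that cell; here is a quick self-contained check (the function is restated so the snippet runs on its own):

```python
import datetime

def get_timedelta(time_string):
    # Parse an "H:MM:SS" string into a timedelta
    hours, minutes, seconds = time_string.split(':')
    return datetime.timedelta(hours=int(hours), minutes=int(minutes), seconds=int(seconds))

print(get_timedelta('2:15:30'))  # → 2:15:30
```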
#### datetime.strftime

The strftime function flips the strptime function: it converts a datetime object to a string with the specified format.
now = datetime.datetime.now()
string_now = datetime.datetime.strftime(now, '%m/%d/%y %H:%M:%S')
print(now, string_now)
print(str(now))  # Or you can use the default conversion
## Showing and Saving

NOTE: in IPython notebooks calling plot will display directly below the call to plot. When not in IPython you have several options for viewing the figure:

- call b.show or b.savefig after calling plot
- use the returned autofig and matplotlib figures however you'd like
- pass show=True to the plot method
- pass save='myfilename' to the plot method (same as calling plt.savefig('myfilename'))

## Default Plots

To see the options for plotting that are dataset-dependent see the tutorials on that dataset method:

- ORB dataset
- MESH dataset
- LC dataset
- RV dataset
- LP dataset

By calling the plot method on the bundle (or any ParameterSet) without any arguments, a plot or series of subplots will be built based on the contents of that ParameterSet.
afig, mplfig = b.plot(show=True)
2.1/tutorials/plotting.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
For more information on each of the available arrays, see the relevant tutorial on that dataset method:

- ORB dataset
- MESH dataset
- LC dataset
- RV dataset
- LP dataset

## Selecting Phase

And to plot in phase we just send x='phases' or x='phases:binary'. Setting x='phases' will use the ephemeris from the top-level of the hierarchy (as if you called b.get_ephemeris()), whereas passing a string after the colon will use the ephemeris of that component.
afig, mplfig = b['lc01@dataset'].plot(x='phases', z=0, show=True)
# Basic training loops

<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/basic_training_loops"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/basic_training_loops.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/basic_training_loops.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/basic_training_loops.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table>

In the previous guides, you have learned about tensors, variables, gradient tape, and modules. In this guide, you will fit these all together to train models.

TensorFlow also includes the tf.keras API, a high-level neural network API that provides useful abstractions to reduce boilerplate. However, in this guide, you will use basic classes.

## Setup
import tensorflow as tf
import matplotlib.pyplot as plt

colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
site/en/guide/basic_training_loops.ipynb
tensorflow/docs
apache-2.0
## Solving machine learning problems

Solving a machine learning problem usually consists of the following steps:

1. Obtain training data.
2. Define the model.
3. Define a loss function.
4. Run through the training data, calculating loss from the ideal value.
5. Calculate gradients for that loss and use an optimizer to adjust the variables to fit the data.
6. Evaluate your results.

For illustration purposes, in this guide you'll develop a simple linear model, $f(x) = x * W + b$, which has two variables: $W$ (weights) and $b$ (bias). This is the most basic of machine learning problems: Given $x$ and $y$, try to find the slope and offset of a line via simple linear regression.

## Data

Supervised learning uses inputs (usually denoted as x) and outputs (denoted y, often called labels). The goal is to learn from paired inputs and outputs so that you can predict the value of an output from an input.

Each input of your data, in TensorFlow, is almost always represented by a tensor, and is often a vector. In supervised training, the output (or value you'd like to predict) is also a tensor.

Here is some data synthesized by adding Gaussian (Normal) noise to points along a line.
# The actual line
TRUE_W = 3.0
TRUE_B = 2.0

NUM_EXAMPLES = 201

# A vector of random x values
x = tf.linspace(-2, 2, NUM_EXAMPLES)
x = tf.cast(x, tf.float32)

def f(x):
  return x * TRUE_W + TRUE_B

# Generate some noise
noise = tf.random.normal(shape=[NUM_EXAMPLES])

# Calculate y
y = f(x) + noise

# Plot all the data
plt.plot(x, y, '.')
plt.show()
Tensors are usually gathered together in batches, or groups of inputs and outputs stacked together. Batching can confer some training benefits and works well with accelerators and vectorized computation. Given how small this dataset is, you can treat the entire dataset as a single batch.

## Define the model

Use tf.Variable to represent all weights in a model. A tf.Variable stores a value and provides this in tensor form as needed. See the variable guide for more details.

Use tf.Module to encapsulate the variables and the computation. You could use any Python object, but this way it can be easily saved.

Here, you define both w and b as variables.
class MyModel(tf.Module):
  def __init__(self, **kwargs):
    super().__init__(**kwargs)
    # Initialize the weights to `5.0` and the bias to `0.0`
    # In practice, these should be randomly initialized
    self.w = tf.Variable(5.0)
    self.b = tf.Variable(0.0)

  def __call__(self, x):
    return self.w * x + self.b

model = MyModel()

# List the variables using tf.Module's built-in variable aggregation
print("Variables:", model.variables)

# Verify the model works
assert model(3.0).numpy() == 15.0
The initial variables are set here in a fixed way, but Keras comes with any of a number of initializers you could use, with or without the rest of Keras.

## Define a loss function

A loss function measures how well the output of a model for a given input matches the target output. The goal is to minimize this difference during training. Define the standard L2 loss, also known as the "mean squared" error:
# This computes a single loss value for an entire batch
def loss(target_y, predicted_y):
  return tf.reduce_mean(tf.square(target_y - predicted_y))
Before training the model, you can visualize the loss value by plotting the model's predictions in red and the training data in blue:
plt.plot(x, y, '.', label="Data")
plt.plot(x, f(x), label="Ground truth")
plt.plot(x, model(x), label="Predictions")
plt.legend()
plt.show()

print("Current loss: %1.6f" % loss(y, model(x)).numpy())
## Define a training loop

The training loop consists of repeatedly doing the following tasks in order:

1. Sending a batch of inputs through the model to generate outputs
2. Calculating the loss by comparing the outputs to the output (or label)
3. Using gradient tape to find the gradients
4. Optimizing the variables with those gradients

For this example, you can train the model using gradient descent. There are many variants of the gradient descent scheme that are captured in tf.keras.optimizers. But in the spirit of building from first principles, here you will implement the basic math yourself with the help of tf.GradientTape for automatic differentiation and tf.assign_sub for decrementing a value (which combines tf.assign and tf.sub):
# Given a callable model, inputs, outputs, and a learning rate...
def train(model, x, y, learning_rate):
  with tf.GradientTape() as t:
    # Trainable variables are automatically tracked by GradientTape
    current_loss = loss(y, model(x))

  # Use GradientTape to calculate the gradients with respect to W and b
  dw, db = t.gradient(current_loss, [model.w, model.b])

  # Subtract the gradient scaled by the learning rate
  model.w.assign_sub(learning_rate * dw)
  model.b.assign_sub(learning_rate * db)
For a look at training, you can send the same batch of x and y through the training loop, and see how W and b evolve.
model = MyModel()

# Collect the history of W-values and b-values to plot later
weights = []
biases = []
epochs = range(10)

# Define a training loop
def report(model, loss):
  return f"W = {model.w.numpy():1.2f}, b = {model.b.numpy():1.2f}, loss={loss:2.5f}"

def training_loop(model, x, y):
  for epoch in epochs:
    # Update the model with the single giant batch
    train(model, x, y, learning_rate=0.1)

    # Record the values after this update
    weights.append(model.w.numpy())
    biases.append(model.b.numpy())
    current_loss = loss(y, model(x))

    print(f"Epoch {epoch:2d}:")
    print("    ", report(model, current_loss))
Do the training
current_loss = loss(y, model(x))

print("Starting:")
print("    ", report(model, current_loss))

training_loop(model, x, y)
Plot the evolution of the weights over time:
plt.plot(epochs, weights, label='Weights', color=colors[0])
plt.plot(epochs, [TRUE_W] * len(epochs), '--', label="True weight", color=colors[0])

plt.plot(epochs, biases, label='bias', color=colors[1])
plt.plot(epochs, [TRUE_B] * len(epochs), "--", label="True bias", color=colors[1])

plt.legend()
plt.show()
Visualize how the trained model performs
plt.plot(x, y, '.', label="Data")
plt.plot(x, f(x), label="Ground truth")
plt.plot(x, model(x), label="Predictions")
plt.legend()
plt.show()

print("Current loss: %1.6f" % loss(model(x), y).numpy())
## The same solution, but with Keras

It's useful to contrast the code above with the equivalent in Keras.

Defining the model looks exactly the same if you subclass tf.keras.Model. Remember that Keras models inherit ultimately from tf.Module.
class MyModelKeras(tf.keras.Model):
  def __init__(self, **kwargs):
    super().__init__(**kwargs)
    # Initialize the weights to `5.0` and the bias to `0.0`
    # In practice, these should be randomly initialized
    self.w = tf.Variable(5.0)
    self.b = tf.Variable(0.0)

  def call(self, x):
    return self.w * x + self.b

keras_model = MyModelKeras()

# Reuse the training loop with a Keras model
training_loop(keras_model, x, y)

# You can also save a checkpoint using Keras's built-in support
keras_model.save_weights("my_checkpoint")
Rather than write new training loops each time you create a model, you can use the built-in features of Keras as a shortcut. This can be useful when you do not want to write or debug Python training loops. If you do, you will need to use model.compile() to set the parameters, and model.fit() to train. It can be less code to use Keras implementations of L2 loss and gradient descent, again as a shortcut. Keras losses and optimizers can be used outside of these convenience functions, too, and the previous example could have used them.
keras_model = MyModelKeras()

# compile sets the training parameters
keras_model.compile(
    # By default, fit() uses tf.function(). You can
    # turn that off for debugging, but it is on now.
    run_eagerly=False,

    # Using a built-in optimizer, configuring as an object
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),

    # Keras comes with built-in MSE error
    # However, you could use the loss function
    # defined above
    loss=tf.keras.losses.mean_squared_error,
)
Keras fit expects batched data or a complete dataset as a NumPy array. NumPy arrays are chopped into batches and default to a batch size of 32. In this case, to match the behavior of the hand-written loop, you should pass x in as a single batch of size 1000; since that is larger than the number of examples, everything lands in one batch.
print(x.shape[0])
keras_model.fit(x, y, epochs=10, batch_size=1000)
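The batch count works out as ceil(n_examples / batch_size); a tiny sketch in plain Python (no TensorFlow needed) of why a batch size of 1000 gives a single batch for this dataset:

```python
import math

def num_batches(n_examples, batch_size):
    # Keras-style batching: the last batch may be smaller than batch_size
    return math.ceil(n_examples / batch_size)

print(num_batches(201, 1000))  # → 1  (the whole dataset fits in one batch)
print(num_batches(201, 32))    # → 7  (what the default batch size of 32 would give)
```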
Let's show the symbol's data, to see how good the recommender has to be.
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(
    *value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))

# Simulate (with new envs, each time)
n_epochs = 25

for i in range(n_epochs):
    tic = time()
    env.reset(STARTING_DAYS_AHEAD)
    results_list = sim.simulate_period(total_data_in_df,
                                       SYMBOL,
                                       agents[0],
                                       starting_days_ahead=STARTING_DAYS_AHEAD,
                                       possible_fractions=POSSIBLE_FRACTIONS,
                                       verbose=False,
                                       other_env=env)
    toc = time()
    print('Epoch: {}'.format(i))
    print('Elapsed time: {} seconds.'.format((toc-tic)))
    print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
    show_results([results_list], data_in_df)

env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
                                   SYMBOL,
                                   agents[0],
                                   learn=False,
                                   starting_days_ahead=STARTING_DAYS_AHEAD,
                                   possible_fractions=POSSIBLE_FRACTIONS,
                                   other_env=env)
show_results([results_list], data_in_df, graph=True)
notebooks/prod/n08_simple_q_learner_1000_states_full_training_25_epochs.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
Ok, let's save that
import pickle

with open('../../data/simple_q_learner_1000_states_full_training_25_epochs.pkl', 'wb') as best_agent:
    pickle.dump(agents[0], best_agent)
Firstly we will calculate the features required to characterise the point cloud. These are calculated on 3 scales, which by default are k=10, 20 & 30 nearest neighbours. If you wish to alter this, go ahead! The features are:

- anisotropy
- curvature
- eigenentropy
- eigen_sum
- linearity
- omnivariance
- planarity
- sphericity

As well as these, the RGB values and normals are used as features. We do not, however, use XYZ, as these always have a negative effect on results.
ln.ply_features(incloud)
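As background, geometric features like those listed above are commonly derived from the eigenvalues of each point's local covariance matrix. The following is a generic sketch of that idea, not geospatial-learn's internal implementation; the neighbourhood array and the exact formulas are the standard textbook ones, stated here as assumptions:

```python
import numpy as np

def geometric_features(neighbours):
    """Compute a few of the listed features from a (k, 3) array of
    neighbouring XYZ points around one query point."""
    cov = np.cov(neighbours.T)                       # 3x3 local covariance
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3 >= 0
    l1, l2, l3 = evals
    return {
        'linearity':    (l1 - l2) / l1,
        'planarity':    (l2 - l3) / l1,
        'sphericity':   l3 / l1,
        'anisotropy':   (l1 - l3) / l1,
        'curvature':    l3 / (l1 + l2 + l3),
        'omnivariance': (l1 * l2 * l3) ** (1.0 / 3.0),
        'eigen_sum':    l1 + l2 + l3,
    }

# Points scattered tightly along a line should score high on linearity
t = np.linspace(0, 1, 30)
line = np.c_[t, t, t] + 1e-6 * np.random.rand(30, 3)
print(geometric_features(line)['linearity'])  # close to 1
```

Running this at the three scales (k=10, 20, 30) just means recomputing the dictionary over differently sized neighbourhoods.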
example_notebooks/PointCloudClassification.ipynb
Ciaran1981/geospatial-learn
gpl-3.0
This snippet is used to create the column names for the 128 attributes (16 sensors and 8 attributes measured by each sensor), the target label, and the batch number for the corresponding row. We add '\n' to the 'Batch_No' label to signify the end of the line. If we used the csv package we wouldn't need to add that; the writer method would handle it.
col_names = ['Label']
for x in map(chr, range(65,81)):      # 'A' through 'P', one letter per sensor
    for y in map(str, range(1,9)):    # attributes 1-8 for each sensor
        col_names.append('Sensor_'+x+y)
col_names.append('Batch_No\n')
print('Number of columns -', len(col_names))
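As noted above, the csv module would take care of line termination for us. A minimal sketch of that alternative (writing to an in-memory buffer here instead of a real file, so the snippet is self-contained):

```python
import csv
import io

# Same column names, but without the manual '\n'
col_names = ['Label'] + ['Sensor_%s%d' % (chr(x), y)
                         for x in range(65, 81) for y in range(1, 9)]
col_names.append('Batch_No')   # csv.writer terminates the row itself

buf = io.StringIO()            # stands in for open('formatted_data.csv', 'w', newline='')
writer = csv.writer(buf)
writer.writerow(col_names)

print(buf.getvalue().count(','))  # → 129 separators for the 130 columns
```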
data_formatting.ipynb
aamirg/athena-hacks-honeywell
apache-2.0
Open a csv file and write the column names to it
out = open('formatted_data.csv','w')
out.write(','.join(col_names))
Get the file names for all the files from the 'raw_data' directory
from os import listdir
from os.path import isfile, join

raw_files = [f for f in listdir('./raw_data') if isfile(join('./raw_data', f))]
Read the data in from each file and format it:

- Strip extra whitespace and split on whitespace
- Add the batch number from the file name, and the target label
- Split the key-value pairs on ':' and retrieve the values
for file_name in raw_files:
    with open('./raw_data/'+file_name,'r') as f:
        for i in f:
            j = i.strip().split(' ')
            out.write(','.join([j[0]] + [k.split(':')[1] for k in j[1:]] + [file_name.strip('batch').split('.')[0], '\n']))
out.close()
data_formatting.ipynb
aamirg/athena-hacks-honeywell
apache-2.0
The above snippet can be rewritten in the long form as
#for file_name in raw_files: # with open('./raw_data/'+file_name,'r') as f: # for i in f: # j=i.strip().split(' ') # target_label = [j[0]] # attributes = [k.split(':')[1] for k in j[1:]] # batch_no = [file_name.strip('batch').split('.')[0],'\n'] # out.write(','.join(target_label+attributes+batch_no)) #out.close()
data_formatting.ipynb
aamirg/athena-hacks-honeywell
apache-2.0
Initialisation
# Do a SYGMA run for each NuGrid metallicity s_02 = s.sygma(iniZ=0.02, imf_type='salpeter') s_01 = s.sygma(iniZ=0.01, imf_type='salpeter') s_006 = s.sygma(iniZ=0.006, imf_type='salpeter') s_001 = s.sygma(iniZ=0.001, imf_type='salpeter') s_0001 = s.sygma(iniZ=0.0001, imf_type='salpeter') # Show the number of neutron star mergers at each timestep; should not be negative, and should be roughly equal! print np.sum(s_02.history.nsm_numbers) print np.sum(s_01.history.nsm_numbers) print np.sum(s_006.history.nsm_numbers) print np.sum(s_001.history.nsm_numbers) print np.sum(s_0001.history.nsm_numbers)
NSM_test_suite.ipynb
NuGrid/NuPyCEE
bsd-3-clause
IMF check The Salpeter IMF allows one to calculate the number of stars $N_{12}$ in the mass interval $[m_1, m_2]$ with (I) $N_{12} = k_N \int _{m_1}^{m_2} m^{-2.35} dm$ where $k_{N}$ is the normalization constant, which can be derived from the total amount of mass in the system, $M_{tot}$. Since the total mass $M_{12}$ in the mass interval above can be estimated with (II) $M_{12} = k_N \int _{m_1}^{m_2} m^{-1.35} dm$ taking the mass interval $[8, 100]$ for neutron star progenitors, $[0.1, 100]$ for the total mass, and $M_{tot}=10^4$ will yield for $k_N$: $10^4 = \frac{k_N}{0.35}(0.1^{-0.35} - 100^{-0.35})$
# Name the relevant quantities mtot = s_001.mgal nsm_l = s_001.transitionmass imf0 = s_001.imf_bdys[0] imf1 = s_001.imf_bdys[1] # Compute the normalization constant from the total-mass integral (II) k_N = (mtot*0.35) / (imf0**-0.35 - imf1**-0.35)
NSM_test_suite.ipynb
NuGrid/NuPyCEE
bsd-3-clause
The total number of NS merger progenitors $N_{12}$ is then: $N_{12} = \frac{k_N}{1.35}(8^{-1.35} - 100^{-1.35})$
# Compute the total number of neutron star merger progenitors as defined above N_nsm = (k_N/1.35) * (nsm_l**-1.35 - imf1**-1.35)
NSM_test_suite.ipynb
NuGrid/NuPyCEE
bsd-3-clause
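The closed-form counts above can be sanity-checked numerically. A small standalone sketch, using the default values quoted in the text (hard-coded here rather than read from a SYGMA object):

```python
import math

# Default quantities from the text: total mass, IMF boundaries, NS-merger transition mass
mtot, imf0, imf1, nsm_l = 1e4, 0.1, 100.0, 8.0

# Normalization constant from the total-mass integral (II):
# M_tot = k_N/0.35 * (imf0^-0.35 - imf1^-0.35)
k_N = mtot * 0.35 / (imf0**-0.35 - imf1**-0.35)

# Number of NS-merger progenitors from (I), integrated over [8, 100]
N_nsm = k_N / 1.35 * (nsm_l**-1.35 - imf1**-1.35)

# Brute-force midpoint integration of k_N * m^-2.35 as a cross-check
steps = 100000
dm = (imf1 - nsm_l) / steps
N_num = sum(k_N * (nsm_l + (i + 0.5) * dm)**-2.35 * dm for i in range(steps))
```

The analytic and numerical values should agree to well within a tenth of a percent.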
Compared to a SYGMA run:
# Compute the number of NS merger progenitors in one of the SYGMA runs (normalize to mgal) A_imf = mtot / s_001._imf(imf0, imf1, 2) N_sim = A_imf * s_001._imf(nsm_l, imf1, 1) print 'Theoretical number of neutron star progenitors: ', N_nsm print 'Number of neutron star progenitors in SYGMA run: ', N_sim print 'Ratio (should be ~1): ', N_sim / N_nsm
NSM_test_suite.ipynb
NuGrid/NuPyCEE
bsd-3-clause
Ensure r-process yields are being read in properly
# Obtain the fractional isotope yields from a SYGMA run l = len(s_01.history.ism_iso_yield_nsm) y = s_01.history.ism_iso_yield_nsm[l-1] n = np.sum(y) yields = y / n # Exclude isotopes for which there are no r-process yields in the yield table nonzero = np.nonzero(yields) yields = yields[nonzero] # Obtain the mass numbers for the isotopes (x-axis ticks) massnums = [] for i in s_01.history.isotopes: massnum = i.split('-')[1] massnums.append(float(massnum)) # Again exclude zero values massnums = np.asarray(massnums) massnums = massnums[nonzero] # Hacky text parser to get all the fractional isotope values from the r-process yield table r_yields_text = open('yield_tables/r_process_rosswog_2014.txt') r_yields = r_yields_text.readlines() lines = [] for line in r_yields: lines.append(line) newlines = [] for line in lines: if '&' in line: new = line.strip() new = new.split('&') newlines.append(new) massfracs = [] rmassnums = [] for ind, el in enumerate(newlines): if ind is not 0: massfracs.append(float(el[2])) rmassnums.append(float(el[1].split('-')[1])) # Array of r-process yields to compare with simulation yields massfracs = np.asarray(massfracs) # Plot r-process yields against neutron star merger simulation yields (should be nearly identical) plt.figure(figsize=(12,8)) plt.scatter(massnums, yields, marker='x', s=32, color='red', label='Final neutron star merger ejecta') plt.scatter(rmassnums, massfracs, s=8, label='r-process yields') plt.xlim(80, 250) plt.ylim(0.000000000001, 1) plt.yscale('log') plt.xlabel('Mass number') plt.ylabel('Mass fraction') plt.legend(loc=4) #plt.savefig('yield_comparison.png', dpi=200)
NSM_test_suite.ipynb
NuGrid/NuPyCEE
bsd-3-clause
Check DTD fits
# Define the three functions which are fit to the DTD (Power law, 5th and 6th degree polynomials) def quintic(t, a, b, c, d, e, f): y = (a*(t**5))+(b*(t**4))+(c*(t**3))+(d*(t**2))+(e*t)+f return y def sextic(t, a, b, c, d, e, f, g): y = a*(t**6) + b*(t**5) + c*(t**4) + d*(t**3) + e*(t**2) + f*t + g return y def powlaw(a, t): y = a / t return y # Solar metallicity fit, parameters from chem_evol (see fitting notebook to derive new parameters) a = -0.0138858377011 b = 1.0712569392 c = -32.1555682584 d = 468.236521089 e = -3300.97955814 f = 9019.62468302 t = np.linspace(10, 22.2987, 100) # Polynomial portion of solar metallicity DTD x-axis # Define the DTD fit and plot it y = quintic(t, a, b, c, d, e, f) plt.plot(t, y) a = -2.88192413434e-5 b = 0.00387383125623 c = -0.20721471544 d = 5.64382310405 e = -82.6061154979 f = 617.464778362 g = -1840.49386605 y = sextic(t, a, b, c, d, e, f, g) plt.plot(t, y)
NSM_test_suite.ipynb
NuGrid/NuPyCEE
bsd-3-clause
Utility functions A number of utility callback functions for image display and for plotting the similarity metric during registration.
import matplotlib.pyplot as plt %matplotlib inline from ipywidgets import interact, fixed from IPython.display import clear_output # Callback invoked by the interact ipython method for scrolling through the image stacks of # the two images (moving and fixed). def display_images(fixed_image_z, moving_image_z, fixed_npa, moving_npa): # Create a figure with two subplots and the specified size. plt.subplots(1,2,figsize=(10,8)) # Draw the fixed image in the first subplot. plt.subplot(1,2,1) plt.imshow(fixed_npa[fixed_image_z,:,:],cmap=plt.cm.Greys_r); plt.title('fixed image') plt.axis('off') # Draw the moving image in the second subplot. plt.subplot(1,2,2) plt.imshow(moving_npa[moving_image_z,:,:],cmap=plt.cm.Greys_r); plt.title('moving image') plt.axis('off') plt.show() # Callback invoked by the IPython interact method for scrolling and modifying the alpha blending # of an image stack of two images that occupy the same physical space. def display_images_with_alpha(image_z, alpha, fixed, moving): img = (1.0 - alpha)*fixed[:,:,image_z] + alpha*moving[:,:,image_z] plt.imshow(sitk.GetArrayFromImage(img),cmap=plt.cm.Greys_r); plt.axis('off') plt.show() # Callback invoked when the StartEvent happens, sets up our new data. def start_plot(): global metric_values, multires_iterations metric_values = [] multires_iterations = [] # Callback invoked when the EndEvent happens, do cleanup of data and figure. def end_plot(): global metric_values, multires_iterations del metric_values del multires_iterations # Close figure, we don't want to get a duplicate of the plot later on. plt.close() # Callback invoked when the IterationEvent happens, update our data and display new figure. 
def plot_values(registration_method): global metric_values, multires_iterations metric_values.append(registration_method.GetMetricValue()) # Clear the output area (wait=True, to reduce flickering), and plot current data clear_output(wait=True) # Plot the similarity metric values plt.plot(metric_values, 'r') plt.plot(multires_iterations, [metric_values[index] for index in multires_iterations], 'b*') plt.xlabel('Iteration Number',fontsize=12) plt.ylabel('Metric Value',fontsize=12) plt.show() # Callback invoked when the sitkMultiResolutionIterationEvent happens, update the index into the # metric_values list. def update_multires_iterations(): global metric_values, multires_iterations multires_iterations.append(len(metric_values))
60_Registration_Introduction.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Read images We first read the images, casting the pixel type to that required for registration (Float32 or Float64) and look at them.
fixed_image = sitk.ReadImage(fdata("training_001_ct.mha"), sitk.sitkFloat32) moving_image = sitk.ReadImage(fdata("training_001_mr_T1.mha"), sitk.sitkFloat32) interact(display_images, fixed_image_z=(0,fixed_image.GetSize()[2]-1), moving_image_z=(0,moving_image.GetSize()[2]-1), fixed_npa = fixed(sitk.GetArrayFromImage(fixed_image)), moving_npa=fixed(sitk.GetArrayFromImage(moving_image)));
60_Registration_Introduction.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Initial Alignment Use the CenteredTransformInitializer to align the centers of the two volumes and set the center of rotation to the center of the fixed image.
initial_transform = sitk.CenteredTransformInitializer(fixed_image, moving_image, sitk.Euler3DTransform(), sitk.CenteredTransformInitializerFilter.GEOMETRY) moving_resampled = sitk.Resample(moving_image, fixed_image, initial_transform, sitk.sitkLinear, 0.0, moving_image.GetPixelIDValue()) interact(display_images_with_alpha, image_z=(0,fixed_image.GetSize()[2]), alpha=(0.0,1.0,0.05), fixed = fixed(fixed_image), moving=fixed(moving_resampled));
60_Registration_Introduction.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Registration The specific registration task at hand estimates a 3D rigid transformation between images of different modalities. There are multiple components from each group (optimizers, similarity metrics, interpolators) that are appropriate for the task. Note that each component selection requires setting some parameter values. We have made the following choices: <ul> <li>Similarity metric, mutual information (Mattes MI): <ul> <li>Number of histogram bins, 50.</li> <li>Sampling strategy, random.</li> <li>Sampling percentage, 1%.</li> </ul> </li> <li>Interpolator, sitkLinear.</li> <li>Optimizer, gradient descent: <ul> <li>Learning rate, step size along traversal direction in parameter space, 1.0 .</li> <li>Number of iterations, maximal number of iterations, 100.</li> <li>Convergence minimum value, value used for convergence checking in conjunction with the energy profile of the similarity metric that is estimated in the given window size, 1e-6.</li> <li>Convergence window size, number of values of the similarity metric which are used to estimate the energy profile of the similarity metric, 10.</li> </ul> </li> </ul> Perform registration using the settings given above, and take advantage of the built-in multi-resolution framework by using a three-tier pyramid. In this example we plot the similarity metric's value during registration. Note that the change of scales in the multi-resolution framework is readily visible.
registration_method = sitk.ImageRegistrationMethod() # Similarity metric settings. registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50) registration_method.SetMetricSamplingStrategy(registration_method.RANDOM) registration_method.SetMetricSamplingPercentage(0.01) registration_method.SetInterpolator(sitk.sitkLinear) # Optimizer settings. registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100, convergenceMinimumValue=1e-6, convergenceWindowSize=10) registration_method.SetOptimizerScalesFromPhysicalShift() # Setup for the multi-resolution framework. registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1]) registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0]) registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn() # Don't optimize in-place, we would possibly like to run this cell multiple times. registration_method.SetInitialTransform(initial_transform, inPlace=False) # Connect all of the observers so that we can perform plotting during registration. registration_method.AddCommand(sitk.sitkStartEvent, start_plot) registration_method.AddCommand(sitk.sitkEndEvent, end_plot) registration_method.AddCommand(sitk.sitkMultiResolutionIterationEvent, update_multires_iterations) registration_method.AddCommand(sitk.sitkIterationEvent, lambda: plot_values(registration_method)) final_transform = registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32), sitk.Cast(moving_image, sitk.sitkFloat32))
60_Registration_Introduction.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Post registration analysis Query the registration method to see the metric value and the reason the optimization terminated. The metric value allows us to compare multiple registration runs, as there is a probabilistic aspect to our registration: we are using random sampling to estimate the similarity metric. Always remember to query why the optimizer terminated. This will help you understand whether the termination criteria were too tight (early termination due to a small number of iterations, numberOfIterations) or too loose (early termination due to a large value for the minimal change in the similarity measure, convergenceMinimumValue).
print('Final metric value: {0}'.format(registration_method.GetMetricValue())) print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
60_Registration_Introduction.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Now visually inspect the results.
moving_resampled = sitk.Resample(moving_image, fixed_image, final_transform, sitk.sitkLinear, 0.0, moving_image.GetPixelIDValue()) interact(display_images_with_alpha, image_z=(0,fixed_image.GetSize()[2]), alpha=(0.0,1.0,0.05), fixed = fixed(fixed_image), moving=fixed(moving_resampled));
60_Registration_Introduction.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
If we are satisfied with the results, save them to file.
sitk.WriteImage(moving_resampled, os.path.join(OUTPUT_DIR, 'RIRE_training_001_mr_T1_resampled.mha')) sitk.WriteTransform(final_transform, os.path.join(OUTPUT_DIR, 'RIRE_training_001_CT_2_mr_T1.tfm'))
60_Registration_Introduction.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Linking Widgets Often, you may want to simply link widget attributes together. Synchronization of attributes can be done in a simpler way than by using bare traitlets events. The first method is to use the link and directional_link functions from the traitlets module. Linking traitlets attributes from the server side
from IPython.utils import traitlets caption = widgets.Latex(value = 'The values of slider1, slider2 and slider3 are synchronized') slider1, slider2, slider3 = widgets.IntSlider(description='Slider 1'),\ widgets.IntSlider(description='Slider 2'),\ widgets.IntSlider(description='Slider 3') l2 = traitlets.link((slider1, 'value'), (slider2, 'value')) l3 = traitlets.link((slider1, 'value'), (slider3, 'value')) display(caption, slider1, slider2, slider3) caption = widgets.Latex(value = 'Changes in source values are reflected in target1 and target2') source, target1, target2 = widgets.IntSlider(description='Source'),\ widgets.IntSlider(description='Target 1'),\ widgets.IntSlider(description='Target 2') traitlets.dlink((source, 'value'), (target1, 'value')) traitlets.dlink((source, 'value'), (target2, 'value')) display(caption, source, target1, target2)
python/demo1/scipy-advanced-tutorial-master/Part2/Widget Events.ipynb
masterfish2015/my_project
mit
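Under the hood, link simply observes value changes on each object and copies them to the other, with a guard against an infinite ping-pong of updates. A stripped-down, widget-free sketch of that mechanism (a hypothetical Observable class for illustration, not the traitlets implementation):

```python
class Observable:
    """Minimal stand-in for a widget with an observable 'value' attribute."""
    def __init__(self, value=0):
        self._value = value
        self._observers = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        if new != self._value:          # guard: only fire on real changes
            self._value = new
            for cb in self._observers:
                cb(new)

    def observe(self, cb):
        self._observers.append(cb)


def link(a, b):
    """Bidirectional sync: changing either object updates the other."""
    a.observe(lambda v: setattr(b, 'value', v))
    b.observe(lambda v: setattr(a, 'value', v))


s1, s2 = Observable(), Observable()
link(s1, s2)
s1.value = 42   # propagates to s2; the equality guard stops the echo
```

The equality check in the setter is what prevents the two callbacks from triggering each other forever, which is the same reason traitlets links do not loop.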
Function traitlets.link returns a Link object. The link can be broken by calling the unlink method.
l2.unlink()
python/demo1/scipy-advanced-tutorial-master/Part2/Widget Events.ipynb
masterfish2015/my_project
mit
Linking widget attributes from the client side When synchronizing traitlets attributes, you may experience a lag because of the latency due to the round-trip to the server side. You can also directly link widget attributes, either in a unidirectional or a bidirectional fashion, using the link widgets.
caption = widgets.Latex(value = 'The values of range1, range2 and range3 are synchronized') range1, range2, range3 = widgets.IntSlider(description='Range 1'),\ widgets.IntSlider(description='Range 2'),\ widgets.IntSlider(description='Range 3') l2 = widgets.jslink((range1, 'value'), (range2, 'value')) l3 = widgets.jslink((range1, 'value'), (range3, 'value')) display(caption, range1, range2, range3) caption = widgets.Latex(value = 'Changes in source_range values are reflected in target_range1 and target_range2') source_range, target_range1, target_range2 = widgets.IntSlider(description='Source range'),\ widgets.IntSlider(description='Target range 1'),\ widgets.IntSlider(description='Target range 2') widgets.jsdlink((source_range, 'value'), (target_range1, 'value')) widgets.jsdlink((source_range, 'value'), (target_range2, 'value')) display(caption, source_range, target_range1, target_range2)
python/demo1/scipy-advanced-tutorial-master/Part2/Widget Events.ipynb
masterfish2015/my_project
mit
Function widgets.jslink returns a Link widget. The link can be broken by calling the unlink method.
l2.unlink()
python/demo1/scipy-advanced-tutorial-master/Part2/Widget Events.ipynb
masterfish2015/my_project
mit
Full model
def optimize(m, lr=5e-3, it=50): opt = torch.optim.LBFGS(m.parameters(), lr=lr, max_iter=40) def eval_model(): obj = m() opt.zero_grad() obj.backward() return obj for i in range(it): obj = m() opt.zero_grad() obj.backward() opt.step(eval_model) if i%5==0: print(i,':',obj.data[0]) return -obj.data[0] k = candlegp.kernels.RBF(1, lengthscales=torch.DoubleTensor([1.0]),variance=torch.DoubleTensor([1.0])) m = candlegp.models.GPR(Variable(X), Variable(Y.unsqueeze(1)), kern=k) #m.likelihood.variance.set(0.01) full_lml = optimize(m, lr=1e-2) xstar = torch.linspace(-3,9,100).double() mu, var = m.predict_y(Variable(xstar.unsqueeze(1))) cred_size = (var**0.5*2).squeeze(1) mu = mu.squeeze(1) pyplot.plot(xstar.numpy(),mu.data.numpy(),'b') pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75') pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2) m
notebooks/upper_bound.ipynb
t-vi/candlegp
apache-2.0
Upper bounds for sparse variational models As a first investigation, we compute the upper bound for models trained using the sparse variational GP approximation.
Ms = numpy.arange(4, 20, 2) vfe_lml = [] vupper_lml = [] vfe_hyps = [] for M in Ms: Zinit = X[:M, :].clone() k = candlegp.kernels.RBF(1, lengthscales=torch.DoubleTensor([1.0]),variance=torch.DoubleTensor([1.0])) m = candlegp.models.SGPR(Variable(X), Variable(Y.unsqueeze(1)), k, Zinit) m.likelihood.variance.set(0.01) optimize(m, lr=1e-3, it=100) vfe_lml.append(-m.objective().data[0]) vupper_lml.append(m.compute_upper_bound().data[0]) vfe_hyps.append(m.state_dict()) print("%i" % M, end=" ") IPython.display.clear_output() pyplot.figure() pyplot.plot(Ms[:len(vfe_lml)], vfe_lml, label="lower") pyplot.plot(Ms[:len(vfe_lml)], vupper_lml, label="upper") pyplot.axhline(full_lml, label="full", alpha=0.3) pyplot.xlabel("Number of inducing points") pyplot.ylabel("LML estimate") pyplot.legend() pyplot.title("LML bounds for models trained with SGPR") pyplot.xlim(4,20) pyplot.show() vfe_hyps = pandas.DataFrame(vfe_hyps)
notebooks/upper_bound.ipynb
t-vi/candlegp
apache-2.0
We see that the lower bound increases as more inducing points are added. Note that the upper bound does not monotonically decrease! This is because as we train the sparse model, we also get better estimates of the hyperparameters. The upper bound will be different for this different setting of the hyperparameters, and is sometimes looser. The upper bound also converges to the true LML more slowly than the lower bound. Upper bounds for fixed hyperparameters Here, we train sparse models with the hyperparameters fixed to the optimal values found previously.
fMs = numpy.arange(3, 20, 1) fvfe_lml = [] # Fixed vfe lml fvupper_lml = [] # Fixed upper lml for M in fMs: Zinit = vfe.Z.value[:M, :].copy() Zinit = np.vstack((Zinit, X[np.random.permutation(len(X))[:(M - len(Zinit))], :].copy())) init_params = vfe.get_parameter_dict() init_params['model.Z'] = Zinit vfe = gpflow.models.SGPR(X, Y, gpflow.kernels.RBF(1), Zinit) vfe.set_parameter_dict(init_params) vfe.kern.fixed = True vfe.likelihood.fixed = True vfe.optimize() fvfe_lml.append(-vfe._objective(vfe.get_free_state())[0]) fvupper_lml.append(vfe.compute_upper_bound()) print("%i" % M, end=" ") plt.plot(fMs, fvfe_lml, label="lower") plt.plot(fMs, fvupper_lml, label="upper") plt.axhline(full_lml, label="full", alpha=0.3) plt.xlabel("Number of inducing points") plt.ylabel("LML estimate") plt.legend()
notebooks/upper_bound.ipynb
t-vi/candlegp
apache-2.0
Now, as the hyperparameters are fixed, the bound does monotonically decrease. We chose the optimal hyperparameters here, but the picture should be the same for any hyperparameter setting. This shows that we get an increasingly better estimate of the marginal likelihood as we add more inducing points. A tight bound does not imply a converged model
vfe = gpflow.models.SGPR(X, Y, gpflow.kernels.RBF(1), X[None, 0, :].copy()) vfe.optimize() print("Lower bound: %f" % -vfe._objective(vfe.get_free_state())[0]) print("Upper bound: %f" % vfe.compute_upper_bound())
notebooks/upper_bound.ipynb
t-vi/candlegp
apache-2.0
In this case we show that for this hyperparameter setting, the bound is very tight. However, this does not imply that we have enough inducing points, but simply that we have correctly identified the marginal likelihood for this particular hyperparameter setting. In this specific case, where we used a single inducing point, the model collapses to not using the GP at all (the lengthscale is very long, so the GP models only the mean). The rest of the variance is explained by noise. Such a GP can be approximated perfectly with a single inducing point.
vfe
notebooks/upper_bound.ipynb
t-vi/candlegp
apache-2.0
3. Semantic Analysis The dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms. The topic model algorithms in gensim assume that input documents are parameterized using the tf-idf model.
tfidf = gensim.models.TfidfModel(corpus_bow)
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
jeroarenas/MLBigData
mit
Or to apply a transformation to a whole corpus
corpus_tfidf = tfidf[corpus_bow]
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
jeroarenas/MLBigData
mit
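What the tf-idf transformation computes can be spelled out by hand. A toy sketch in plain Python, mirroring gensim's default weighting (idf = log2(N/df) followed by L2 normalization); the three-document corpus is invented for illustration:

```python
import math
from collections import Counter

# Toy corpus: each document is a bag of token counts
docs = [Counter({'cat': 2, 'sat': 1}),
        Counter({'cat': 1, 'mat': 1}),
        Counter({'dog': 3, 'sat': 1})]
N = len(docs)

# Document frequency of every token
df = Counter()
for d in docs:
    df.update(d.keys())

def tfidf(doc):
    # Raw term count times idf = log2(N / df), L2-normalized
    w = {t: c * math.log2(N / df[t]) for t, c in doc.items()}
    norm = math.sqrt(sum(v * v for v in w.values())) or 1.0
    return {t: v / norm for t, v in w.items() if v != 0}

vec = tfidf(docs[0])
```

Tokens that appear in every document get idf = 0 and drop out of the vector entirely, which is the behaviour you see with gensim's TfidfModel as well.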
Task: Find the document with the largest positive weight for topic 0. Compare the document and the topic.
# Extract weights from corpus_lsi # scode: weight0 = <FILL IN> # Locate the maximum positive weight nmax = np.argmax(weight0) print nmax print weight0[nmax] print corpus_lsi[nmax] # Get topic 0 # scode: topic_0 = <FILL IN> # Compute a list of tuples (token, wordcount) for all tokens in topic_0, where wordcount is the number of # occurrences of the token in the article. # scode: token_counts = <FILL IN> print "Topic 0 is:" print topic_0 print "Token counts:" print token_counts
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
jeroarenas/MLBigData
mit
3.2.2. LDA using Scikit-learn The input matrix to the sklearn implementation of LDA contains the token counts for all documents in the corpus. sklearn provides a powerful CountVectorizer method that can be used to construct the input matrix from corpus_bow. First, we will define an auxiliary function to print the top tokens of the model, which has been taken from the sklearn documentation.
# Adapted from an example in sklearn site # http://scikit-learn.org/dev/auto_examples/applications/topics_extraction_with_nmf_lda.html # You can try also with the dataset provided by sklearn in # from sklearn.datasets import fetch_20newsgroups # dataset = fetch_20newsgroups(shuffle=True, random_state=1, # remove=('headers', 'footers', 'quotes')) def print_top_words(model, feature_names, n_top_words): for topic_idx, topic in enumerate(model.components_): print("Topic #%d:" % topic_idx) print(" ".join([feature_names[i] for i in topic.argsort()[:-n_top_words - 1:-1]])) print()
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
jeroarenas/MLBigData
mit
Now, we need a dataset to feed the CountVectorizer object, built by joining all tokens in corpus_clean into a single string, using a space ' ' as separator. Task: Join all tokens from each document into a single string, using a white space as separator.
print("Loading dataset...") # scode: data_samples = <FILL IN> # Use join over corpus_clean. data_samples = [" ".join(doc) for doc in corpus_clean] print 'Document 0:' print data_samples[0][0:200], '...'
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
jeroarenas/MLBigData
mit
Now we are ready to compute the token counts.
# Use tf (raw term count) features for LDA. print("Extracting tf features for LDA...") n_features = 1000 tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2, max_features=n_features, stop_words='english') t0 = time() tf = tf_vectorizer.fit_transform(data_samples) print("done in %0.3fs." % (time() - t0)) print tf[0][0][0]
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
jeroarenas/MLBigData
mit
Now we can apply the LDA algorithm. Task: Create an LDA object with the following parameters: n_topics=n_topics, max_iter=5, learning_method='online', learning_offset=50., random_state=0
print("Fitting LDA models with tf features, " "n_samples=%d and n_features=%d..." % (n_samples, n_features)) # scode: lda = <FILL IN> lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5, learning_method='online', learning_offset=50., random_state=0)
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
jeroarenas/MLBigData
mit
Task: Fit model lda with the token frequencies computed by tf_vectorizer.
t0 = time() corpus_lda = lda.fit_transform(tf) print("done in %0.3fs." % (time() - t0)) print("\nTopics in LDA model:") tf_feature_names = tf_vectorizer.get_feature_names() print_top_words(lda, tf_feature_names, n_top_words)
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
jeroarenas/MLBigData
mit
Next, load the data that we are going to analyse using hoggorm. After the data has been loaded into the pandas data frame, we'll display it in the notebook.
# Load fluorescence data X1_df = pd.read_csv('cheese_fluorescence.txt', index_col=0, sep='\t') X1_df # Load sensory data X2_df = pd.read_csv('cheese_sensory.txt', index_col=0, sep='\t') X2_df
examples/SMI/SMI_on_sensory_and_fluorescence.ipynb
olivertomic/hoggorm
bsd-2-clause
Orthogonal Projections The default comparison between two matrices with SMI uses Orthogonal Projections, i.e. ordinary least squares regression is used to relate the dominant subspaces of the two matrices. In contrast to PLSR, SMI does not perform a prediction of sensory properties from fluorescence measurements, but rather treats the two sets of measurements symmetrically, focusing on the major variation in each of them. More details regarding the use of the SMI are found in the documentation.
# Get the values from the data frame X1 = X1_df.values X2 = X2_df.values smiOP = ho.SMI(X1, X2, ncomp1=10, ncomp2=10) print(np.round(smiOP.smi, 2))
examples/SMI/SMI_on_sensory_and_fluorescence.ipynb
olivertomic/hoggorm
bsd-2-clause
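For intuition about what the OP-based SMI measures, here is a small self-contained numpy sketch: the similarity of two dominant subspaces computed as the mean squared projection of one orthonormal score basis onto the other. This is a simplified reading of the idea, not the hoggorm implementation:

```python
import numpy as np

def smi_op(X1, X2, k1, k2):
    """Similarity of the k1-/k2-dimensional dominant subspaces of X1 and X2
    via orthogonal projection of one set of score vectors onto the other."""
    # Orthonormal bases for the dominant subspaces: left singular vectors
    # of the column-centred data matrices
    U1 = np.linalg.svd(X1 - X1.mean(0), full_matrices=False)[0][:, :k1]
    U2 = np.linalg.svd(X2 - X2.mean(0), full_matrices=False)[0][:, :k2]
    # Mean squared projection of the U1 basis onto span(U2); 1 means one
    # subspace is contained in the other, 0 means they are orthogonal
    return np.linalg.norm(U2.T @ U1, 'fro')**2 / min(k1, k2)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
same = smi_op(X, X, 2, 2)                        # identical data: similarity 1
other = smi_op(X, rng.normal(size=(30, 4)), 2, 2)  # unrelated data: below 1
```

Comparing a matrix with itself gives exactly 1, while independent random matrices give a value between 0 and 1, which matches how the diamond plot above should be read.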
Finally we visualize the SMI values and their corresponding P-values.
# Plot similarities hop.plotSMI(smiOP, [10, 10], X1name='fluorescence', X2name='sensory')
examples/SMI/SMI_on_sensory_and_fluorescence.ipynb
olivertomic/hoggorm
bsd-2-clause
The significance symbols in the diamond plot above indicate whether a chosen subspace from one matrix can be found inside the subspace from the other matrix ($\supset$, $\subset$, =), or whether there is a significant difference (P-values <0.001*** <0.01 ** <0.05 * <0.1 . >=0.1). From the P-values and plot we can observe that there is a significant difference between the sensory data and the fluorescence data in the first of the dominant subspaces of the matrices. Looking only at the diagonal, we see that 6 components are needed before we lose the significance completely. Looking at the one-dimensional subspaces, we can observe that four sensory components are needed before there is no significant difference to the first fluorescence component. This can be interpreted as some fundamental difference in the information spanned by fluorescence measurements and sensory perceptions that is only masked if large proportions of the subspaces are included. Procrustes Rotations The similarities satisfy PR <= OP, and in this simple case PR = OP$^2$. Otherwise the pattern stays the same.
smiPR = ho.SMI(X1, X2, ncomp1=10, ncomp2=10, projection="Procrustes") print(np.round(smiPR.smi, 2))
examples/SMI/SMI_on_sensory_and_fluorescence.ipynb
olivertomic/hoggorm
bsd-2-clause
The number of permutations can be controlled for quick (100) or accurate (>10000) computations of significance.
print(np.round(smiPR.significance(B = 100),2)) hop.plotSMI(smiPR, X1name='fluorescence', X2name='sensory')
examples/SMI/SMI_on_sensory_and_fluorescence.ipynb
olivertomic/hoggorm
bsd-2-clause
<table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/federated/tutorials/building_your_own_federated_learning_algorithm"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td> </table> Before you begin Before you begin, run the following to make sure your environment is set up correctly. If you don't see a greeting, please refer to the Installation guide for instructions.
#@test {"skip": true} !pip install --quiet --upgrade tensorflow-federated !pip install --quiet --upgrade nest-asyncio import nest_asyncio nest_asyncio.apply() import tensorflow as tf import tensorflow_federated as tff
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
tensorflow/docs-l10n
apache-2.0
Note: This Colab has been verified to work with the <a>latest released version</a> of the <code>tensorflow_federated</code> pip package, but the TensorFlow Federated project is still in pre-release development and may not work on main. Building your own federated learning algorithm In the image classification and text generation tutorials, we learned how to set up model and data pipelines for federated learning (FL) and performed federated training via TFF's tff.learning API layer. This is only the tip of the iceberg when it comes to FL research. In this tutorial, we discuss how to implement federated learning algorithms without relying on the tff.learning API. We aim to accomplish the following: Goals: Understand the general structure of federated learning algorithms. Explore the Federated Core of TFF. Use the Federated Core to implement Federated Averaging directly. While this tutorial is self-contained, it is a good idea to first read the image classification and text generation tutorials. Preparing the input data We first load and preprocess the EMNIST dataset included in TFF. For more details, see the image classification tutorial.
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
tensorflow/docs-l10n
apache-2.0
λ°μ΄ν„°μ„ΈνŠΈλ₯Ό λͺ¨λΈμ— μ œκ³΅ν•˜κΈ° μœ„ν•΄ 데이터λ₯Ό ν‰λ©΄ν™”ν•˜κ³  각 예제λ₯Ό (flattened_image_vector, label) ν˜•μ‹μ˜ νŠœν”Œλ‘œ λ³€ν™˜ν•©λ‹ˆλ‹€.
NUM_CLIENTS = 10 BATCH_SIZE = 20 def preprocess(dataset): def batch_format_fn(element): """Flatten a batch of EMNIST data and return a (features, label) tuple.""" return (tf.reshape(element['pixels'], [-1, 784]), tf.reshape(element['label'], [-1, 1])) return dataset.batch(BATCH_SIZE).map(batch_format_fn)
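As a plain-Python sanity check (no TensorFlow required), here is what flattening one 28x28 EMNIST image into a 784-element vector amounts to; the nested list below is a made-up stand-in for `element['pixels']`:

```python
# Toy illustration in plain Python of what tf.reshape(..., [-1, 784])
# does to each image in a batch: a 28x28 grid becomes one flat vector.
image = [[0.0] * 28 for _ in range(28)]  # fake 28x28 grayscale image
flattened = [pixel for row in image for pixel in row]
print(len(flattened))  # 784
```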
이제 μ†Œμˆ˜μ˜ ν΄λΌμ΄μ–ΈνŠΈλ₯Ό μ„ νƒν•˜κ³  μœ„μ˜ μ „μ²˜λ¦¬λ₯Ό λ°μ΄ν„°μ„ΈνŠΈμ— μ μš©ν•©λ‹ˆλ‹€.
client_ids = sorted(emnist_train.client_ids)[:NUM_CLIENTS] federated_train_data = [preprocess(emnist_train.create_tf_dataset_for_client(x)) for x in client_ids ]
λͺ¨λΈ μ€€λΉ„ν•˜κΈ° 이미지 λΆ„λ₯˜ νŠœν† λ¦¬μ–Όμ—μ„œμ™€ λ™μΌν•œ λͺ¨λΈμ„ μ‚¬μš©ν•©λ‹ˆλ‹€. 이 λͺ¨λΈ(tf.kerasλ₯Ό 톡해 κ΅¬ν˜„λ¨)μ—λŠ” ν•˜λ‚˜μ˜ μˆ¨κ²¨μ§„ λ ˆμ΄μ–΄, 그리고 μ΄μ–΄μ„œ μ†Œν”„νŠΈλ§₯슀 λ ˆμ΄μ–΄κ°€ μžˆμŠ΅λ‹ˆλ‹€.
def create_keras_model(): initializer = tf.keras.initializers.GlorotNormal(seed=0) return tf.keras.models.Sequential([ tf.keras.layers.Input(shape=(784,)), tf.keras.layers.Dense(10, kernel_initializer=initializer), tf.keras.layers.Softmax(), ])
TFFμ—μ„œ 이 λͺ¨λΈμ„ μ‚¬μš©ν•˜κΈ° μœ„ν•΄ Keras λͺ¨λΈμ„ tff.learning.Model둜 λž˜ν•‘ν•©λ‹ˆλ‹€. 이λ₯Ό 톡해 TFF λ‚΄μ—μ„œ λͺ¨λΈμ˜ μ •λ°©ν–₯ 전달을 μˆ˜ν–‰ν•˜κ³  λͺ¨λΈ 좜λ ₯을 μΆ”μΆœν•  수 μžˆμŠ΅λ‹ˆλ‹€. μžμ„Έν•œ λ‚΄μš©μ€ 이미지 λΆ„λ₯˜ νŠœν† λ¦¬μ–Όμ„ μ°Έμ‘°ν•˜μ„Έμš”.
def model_fn(): keras_model = create_keras_model() return tff.learning.from_keras_model( keras_model, input_spec=federated_train_data[0].element_spec, loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
While we used tf.keras to create a tff.learning.Model, TFF supports much more general models. These models have the following relevant attributes capturing the model weights: trainable_variables: An iterable of the tensors corresponding to trainable layers. non_trainable_variables: An iterable of the tensors corresponding to non-trainable layers. For our purposes, we will only use the trainable_variables (as our model only has those!). Building your own Federated Learning algorithm While the tff.learning API allows one to create many variants of Federated Averaging, there are other federated algorithms that do not fit neatly into this framework. For example, you may want to add regularization, clipping, or more complicated algorithms such as federated GAN training. You may also be interested in federated analytics instead. For these more advanced algorithms, we will have to write our own custom algorithm using TFF. In many cases, federated algorithms have 4 main components: A server-to-client broadcast step. A local client update step. A client-to-server upload step. A server update step. In TFF, we generally represent federated algorithms as a tff.templates.IterativeProcess (which we refer to as just an IterativeProcess throughout). This is a class that contains initialize and next functions. Here, initialize is used to initialize the server, and next will perform one communication round of the federated algorithm. Let's write a skeleton of what our iterative process for FedAvg will look like. First, we have an initialize function that creates a tff.learning.Model, and returns its trainable weights.
def initialize_fn(): model = model_fn() return model.trainable_variables
This function looks good, but as we will see later, we will need to make a small modification to turn it into a "TFF computation". We will also want to sketch the next_fn.
def next_fn(server_weights, federated_dataset): # Broadcast the server weights to the clients. server_weights_at_client = broadcast(server_weights) # Each client computes their updated weights. client_weights = client_update(federated_dataset, server_weights_at_client) # The server averages these updates. mean_client_weights = mean(client_weights) # The server updates its model. server_weights = server_update(mean_client_weights) return server_weights
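To make the four phases of the skeleton concrete, here is a deliberately tiny plain-Python analogue of one next_fn round: each "model" is reduced to a single float, and the local update is a made-up increment. None of this is TFF; it only mirrors the broadcast / local update / aggregate / server update data flow.

```python
def toy_round(server_weight, client_datasets):
    # 1. Broadcast: every client receives a copy of the server weight.
    weights_at_clients = [server_weight for _ in client_datasets]
    # 2. Local client update: each client nudges its copy using its own
    #    data (here, simply adding the mean of its dataset as a stand-in).
    client_weights = [w + sum(d) / len(d)
                      for w, d in zip(weights_at_clients, client_datasets)]
    # 3. Upload + aggregate: the server averages the client weights.
    mean_client_weight = sum(client_weights) / len(client_weights)
    # 4. Server update: "vanilla" FedAvg simply adopts the mean.
    return mean_client_weight

print(toy_round(0.0, [[1.0, 1.0], [3.0, 3.0]]))  # 2.0
```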
We will focus on implementing these four components separately. We first focus on the parts that can be implemented in pure TensorFlow, namely the client and server update steps. TensorFlow Blocks Client update We will use our tff.learning.Model to do client training in essentially the same way you would train a TensorFlow model. In particular, we will use tf.GradientTape to compute the gradient on batches of data, then apply these gradients using a client_optimizer. We focus only on the trainable weights.
@tf.function def client_update(model, dataset, server_weights, client_optimizer): """Performs training (using the server model weights) on the client's dataset.""" # Initialize the client model with the current server weights. client_weights = model.trainable_variables # Assign the server weights to the client model. tf.nest.map_structure(lambda x, y: x.assign(y), client_weights, server_weights) # Use the client_optimizer to update the local model. for batch in dataset: with tf.GradientTape() as tape: # Compute a forward pass on the batch of data outputs = model.forward_pass(batch) # Compute the corresponding gradient grads = tape.gradient(outputs.loss, client_weights) grads_and_vars = zip(grads, client_weights) # Apply the gradient using a client optimizer. client_optimizer.apply_gradients(grads_and_vars) return client_weights
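The loop above is ordinary gradient descent; stripped of TensorFlow it reduces to the sketch below, where a one-parameter "model" minimizes a squared error on each batch. The learning rate and the data are arbitrary choices for illustration only.

```python
def toy_client_update(server_weight, batches, learning_rate=0.1):
    # Initialize the client model from the (broadcast) server weight.
    w = server_weight
    for target in batches:
        # Loss: (w - target)^2, so the gradient w.r.t. w is 2*(w - target).
        grad = 2.0 * (w - target)
        # One optimizer step per batch, as in the TensorFlow loop above.
        w -= learning_rate * grad
    return w

w = toy_client_update(0.0, [1.0, 1.0, 1.0])
print(w)  # moves from 0.0 toward the target 1.0
```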
μ„œλ²„ μ—…λ°μ΄νŠΈ FedAvg에 λŒ€ν•œ μ„œλ²„ μ—…λ°μ΄νŠΈλŠ” ν΄λΌμ΄μ–ΈνŠΈ μ—…λ°μ΄νŠΈλ³΄λ‹€ κ°„λ‹¨ν•©λ‹ˆλ‹€. μš°λ¦¬λŠ” λ‹¨μˆœνžˆ μ„œλ²„ λͺ¨λΈ κ°€μ€‘μΉ˜λ₯Ό ν΄λΌμ΄μ–ΈνŠΈ λͺ¨λΈ κ°€μ€‘μΉ˜μ˜ ν‰κ· μœΌλ‘œ λ°”κΎΈλŠ” "바닐라" νŽ˜λ”λ ˆμ΄μ…˜ 평균화λ₯Ό κ΅¬ν˜„ν•  κ²ƒμž…λ‹ˆλ‹€. λ‹€μ‹œ λ§ν•˜μ§€λ§Œ, μš°λ¦¬λŠ” ν›ˆλ ¨ κ°€λŠ₯ν•œ κ°€μ€‘μΉ˜μ—λ§Œ μ΄ˆμ μ„ 맞μΆ₯λ‹ˆλ‹€.
@tf.function def server_update(model, mean_client_weights): """Updates the server model weights as the average of the client model weights.""" model_weights = model.trainable_variables # Assign the mean client weights to the server model. tf.nest.map_structure(lambda x, y: x.assign(y), model_weights, mean_client_weights) return model_weights
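The mean that feeds server_update is just an elementwise average of the client weight tensors. A plain-Python sketch, with flat lists standing in for weight tensors:

```python
def average_client_weights(client_weights):
    # Elementwise average across clients; each inner list stands in
    # for one client's (flattened) weight tensor.
    num_clients = len(client_weights)
    return [sum(ws) / num_clients for ws in zip(*client_weights)]

print(average_client_weights([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```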
The snippet above could be simplified by simply returning the mean_client_weights. However, more advanced implementations of Federated Averaging use mean_client_weights with more sophisticated techniques, such as momentum or adaptivity. Challenge: Implement a version of server_update that updates the server weights to be the midpoint of model_weights and mean_client_weights. (Note: This kind of "midpoint" approach is analogous to recent work on the Lookahead optimizer!). So far, we've only written pure TensorFlow code. This is by design, as TFF allows you to use much of the TensorFlow code you're already familiar with. However, now we have to specify the <em>orchestration logic</em>, that is, the logic that dictates what the server broadcasts to the client, and what the client uploads to the server. This will require the Federated Core of TFF. Introduction to the Federated Core The Federated Core (FC) is a set of lower-level interfaces that serve as the foundation for the tff.learning API. However, these interfaces are not limited to learning. In fact, they can be used for analytics and many other computations over distributed data. At a high level, the federated core is a development environment that enables compactly expressed program logic to combine TensorFlow code with distributed communication operators (such as distributed sums and broadcasts). The goal is to give researchers and practitioners explicit control over the distributed communication in their systems, without requiring system implementation details (such as specifying point-to-point network message exchanges). One key point is that TFF is designed for privacy-preservation. Therefore, it allows explicit control over where data resides, to prevent unwanted accumulation of data at the centralized server location.
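For reference, here is one way the midpoint rule from the challenge could look, written in plain Python rather than as the TensorFlow server_update (try the TensorFlow version yourself first):

```python
def midpoint_server_update(model_weights, mean_client_weights):
    # Midpoint rule: move the server weights halfway toward the
    # averaged client weights (a Lookahead-style update).
    return [0.5 * w + 0.5 * m
            for w, m in zip(model_weights, mean_client_weights)]

print(midpoint_server_update([0.0, 2.0], [2.0, 4.0]))  # [1.0, 3.0]
```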
νŽ˜λ”λ ˆμ΄μ…˜ 데이터 TFF의 핡심 κ°œλ…μ€ "νŽ˜λ”λ ˆμ΄μ…˜ 데이터"이며, μ΄λŠ” λΆ„μ‚° μ‹œμŠ€ν…œμ˜ μž₯치 κ·Έλ£Ή(예: ν΄λΌμ΄μ–ΈνŠΈ λ°μ΄ν„°μ„ΈνŠΈ λ˜λŠ” μ„œλ²„ λͺ¨λΈ κ°€μ€‘μΉ˜)에 걸쳐 ν˜ΈμŠ€νŒ…λ˜λŠ” 데이터 ν•­λͺ©μ˜ 집합을 λ‚˜νƒ€λƒ…λ‹ˆλ‹€. μš°λ¦¬λŠ” λͺ¨λ“  μž₯μΉ˜μ—μ„œ 데이터 ν•­λͺ©μ˜ 전체 μ»¬λ ‰μ…˜μ„ 단일 νŽ˜λ”λ ˆμ΄μ…˜ κ°’μœΌλ‘œ λͺ¨λΈλ§ν•©λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄, μ„Όμ„œμ˜ μ˜¨λ„λ₯Ό λ‚˜νƒ€λ‚΄λŠ” 뢀동 μ†Œμˆ˜μ μ΄ μžˆλŠ” ν΄λΌμ΄μ–ΈνŠΈ μž₯μΉ˜κ°€ μžˆλ‹€κ³  κ°€μ •ν•©λ‹ˆλ‹€. μš°λ¦¬λŠ” 이것을 νŽ˜λ”λ ˆμ΄μ…˜ 뢀동 μ†Œμˆ˜μ μœΌλ‘œ ν‘œν˜„ν•  수 μžˆμŠ΅λ‹ˆλ‹€.
federated_float_on_clients = tff.FederatedType(tf.float32, tff.CLIENTS)
νŽ˜λ”λ ˆμ΄μ…˜ μœ ν˜•μ€ ν•΄λ‹Ή ꡬ성원 ꡬ성 μš”μ†Œ(예: tf.float32)의 μœ ν˜• T와 μž₯치 κ·Έλ£Ή G둜 μ§€μ •λ©λ‹ˆλ‹€. Gκ°€ tff.CLIENTS λ˜λŠ” tff.SERVER인 κ²½μš°μ— 쀑점을 λ‘˜ κ²ƒμž…λ‹ˆλ‹€. μ΄λŸ¬ν•œ νŽ˜λ”λ ˆμ΄μ…˜ μœ ν˜•μ€ μ•„λž˜μ™€ 같이 {T}@G둜 ν‘œμ‹œλ©λ‹ˆλ‹€.
str(federated_float_on_clients)
Why do we care so much about placements? A key goal of TFF is to enable writing code that could be deployed on a real distributed system. This means that it is vital to reason about which subsets of devices execute which code, and where different pieces of data reside. TFF focuses on three things: the data, where the data is placed, and how the data is being transformed. The first two are encapsulated in federated types, while the last is encapsulated in federated computations. Federated computations TFF is a strongly-typed functional programming environment whose basic units are federated computations. These are pieces of logic that accept federated values as input, and return federated values as output. For example, suppose we wanted to average the temperatures on our client sensors. We could define the following (using our federated float).
@tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS)) def get_average_temperature(client_temperatures): return tff.federated_mean(client_temperatures)
You might ask, how is this different from the tf.function decorator in TensorFlow? The key answer is that the code generated by tff.federated_computation is neither TensorFlow nor Python code; it is a specification of a distributed system in an internal platform-independent glue language. While this may sound complicated, you can think of TFF computations as functions with well-defined type signatures. These type signatures can be directly queried.
str(get_average_temperature.type_signature)