The fourth way to run Python, which is fairly complex-looking at first but becomes much more useful the more you program, is through an IDE. When you installed Anaconda you got an IDE called "Spyder" along with it. Go ahead and start it up and have a look around. Open the Python file you ran at the command line a while ago, and run it in Spyder.
Spyder is also a non-interactive version of Python - when you run a program, it starts fresh from the beginning and forgets everything you ran before. However, like most IDEs, Spyder has a debugger, which means that you can stop the program in the middle and have a closer look at what it is doing!
Getting around your filesystem from Python
You have seen that Spyder asks you questions about your "working directory", and that it starts you with a temporary file in some crazy location you don't want to remember. This is a good moment to talk about how Python works with the filesystem that we learned about in week 1.
Python has a bunch of tools for interacting with your operating system, and they are all in the os module. We can get access to it with the following line of code:
|
import os
|
unit1/Running_Python.ipynb
|
DiXiT-eu/collatex-tutorial
|
gpl-3.0
|
We can list the contents of any directory by using the os.listdir() method. Without any arguments, it gives us the contents of the directory we are standing in.
|
os.listdir()
|
We can also list the contents of any other directory. For example, the Downloads folder on my Mac is in the folder with my short name (tla), which is in a folder called Users, which is right on the Macintosh HD. So I say
/Users/tla/Downloads
to get at it. That is: Starting at the root (/), look in Users; starting at Users (/Users/), look in tla; starting at tla (/Users/tla/), look in Downloads.
If you are on Windows you will probably want something like C:\Users\Tara Andrews\Downloads instead.
Let's print out each of the filenames we find in that directory.
|
for f in os.listdir("/Users/tla/Downloads"):
    print(f)
|
Python (or any program running on your computer for that matter) always runs from some specific place in this filesystem - that is, some specific directory on your drive. To see where you are running now, you can ask for the current working directory, like so. Try this command in Python at the command line, in IPython, and in Spyder, and see what you get.
|
os.getcwd()
|
This will look a little different on Mac vs. Windows, because they use different notations (called the path) for their directory structure.
C:\Users\Tara L Andrews\Documents\IPython Notebooks
vs.
/Users/tla/Documents/2014 FS
The two important lessons to take from this are:
If you are not sure where you are, then os.getcwd() will tell you.
If you value your sanity, don't use / or \ when you name your folders and files.
Once you know where you are on the filesystem, you can change directory. This is accomplished with the os.chdir() method. For example:
|
os.chdir("/Users/tla") # go to the directory tla in the directory Users at the base of the drive
os.chdir("..") # go up one directory, to the parent
print(os.getcwd()) # see where we are
os.chdir("./tla/Downloads") # go to the directory Downloads in the directory tla that is here where we are
print(os.getcwd()) # see where we are now
|
You will remember that there are a few special names when you look at files and directories:
. - the directory I'm standing in
.. - the directory that holds the directory I'm standing in
/ - (all by itself) the base ("root") directory
IF YOU ARE ON WINDOWS: The / is a little more complicated than that if you are on Windows. As you will have noticed, you often see \ instead of /, but Python (like Powershell) lets you call it / anyway. Moreover, it is the base directory of the drive you are currently looking at (e.g. C:, D:, or whatever.)
IF YOU ARE ON MAC OR LINUX: Unix machines (including Macs) don't deal with drives in the same way, so / just means the root of your OS filesystem.
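These special names can be tested directly; here is a quick sketch (the exact paths printed depend on where you run it):

```python
import os

here = os.path.abspath(".")     # the directory we are standing in
parent = os.path.abspath("..")  # the directory that holds it
print(here)
print(parent)
```

Note that `..` of the current directory is always the same as `os.path.dirname()` of it.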
So by saying
os.chdir("..")
we have moved up a directory, to the parent. We could keep doing this all the way to the root, if we felt like it. But generally it is a good idea to stay in your home directory if you have the choice.
Here is how Python knows what your home directory is:
|
print(os.getenv("HOME", "I don't have a HOME variable, so I must be on Windows"))
print(os.getenv("USERPROFILE", "I don't have a USERPROFILE variable, so I must be on Mac or Linux" ))
|
Once you know which variable contains your home directory, you can save it into a variable and use it.
|
my_home = os.getenv("HOME")
for f in os.listdir( my_home ):
    print(f)
|
Now, if I want to, I can look to see what we have in my Projects folder - this is where I usually keep the code I've worked on. I can do this by joining the directory name ('Projects') to my home directory (which I've saved in my_home.)
|
doc_path = os.path.join( my_home, "Projects" )
print( "The join operation has given me the path", doc_path )
os.listdir( doc_path )
|
Using the os.path.join command is a very good idea when you don't know what kind of computer might be running your script - it means that you are letting Python deal with the question of whether the path separator is / or \ or : or % or whatever. But you don't have to use the os.path.join command if you know that / will work; you can also just type the path directly.
|
os.listdir( "/Users/tla/Dropbox/book" ) # Look at the 'book' folder in the Dropbox folder in my home directory
|
So to sum up, here are some equivalences between the command line and Python:
cd DIRECTORY ==> os.chdir("DIRECTORY")
ls DIRECTORY ==> os.listdir("DIRECTORY")
pwd ==> os.getcwd()
echo $VAR ==> os.getenv("VAR")
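To see all four equivalences in action, here is a throwaway sketch that runs them in a temporary directory (the directory name is generated, so your output will differ):

```python
import os, tempfile

start = os.getcwd()
tmp = tempfile.mkdtemp()

os.chdir(tmp)                        # cd tmp
print(os.getcwd())                   # pwd
open("hello.txt", "w").close()       # make a file so ls has something to show
listing = os.listdir(".")            # ls .
print(listing)
print(os.getenv("PATH") is not None) # echo $PATH (True means the variable exists)

os.chdir(start)                      # go back where we started
```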
Reading files
Now that we know how to navigate around our directory, we might also want to look inside some files! Here's how we do that.
|
fh = open(my_home + "/Dropbox/beef_stew.txt") # Open the file
contents = fh.read() # Read its contents
fh.close() # Close the file
print(contents) # Do something with the contents
|
There are three steps to dealing with files from Python:
Open it
Read (or write) whatever you need
Close it
When you open a file, you make something called a filehandle. The filehandle, well, handles the file - does the reading, writing, closing, etc. that you need to be done.
When you read a file, you have two choices: read in the entire thing, or read it line-by-line. If the file is small, or if you don't want to process it line-by-line, then you will read it all in one go. If it makes sense to process one line at a time (and especially if it is a huge file), then by all means do so. So, for example, we could add line numbers to what is in this file.
|
os.chdir(my_home)
fh = open( "Dropbox/beef_stew.txt" )
counter = 1
for line in fh:
    print("%d: %s" % ( counter, line ))
    counter += 1
fh.close()
|
Huh, great, but why did that double-space all of a sudden?
Here's what happened: every line in the file ends with a line break where I pressed Return. The print function adds a line break after everything it prints, by default. Since the lines in the file already have one, if we are not careful we will also be double-spacing the printout! In order to avoid that, we end the print statement with this parameter
end=""
which says "Instead of the newline you would normally print at the end, print nothing instead."
|
fh = open( "Dropbox/beef_stew.txt" )
counter = 1
for line in fh:
    print("%d: %s" % ( counter, line ), end="")
    counter += 1
fh.close()
|
Writing to files
When we open a filehandle on a particular file, that filehandle can either be reading from the file or writing to it (but not both, at least not until we understand the mechanics and the dangers of doing that.)
When we write to a file that already exists, there are two options: either we will replace whatever was there before, or we will add to it. Let's try it, first to a new file and then adding something to that file.
When we use the open() function, it takes two parameters: the filename and a letter that indicates whether we want to read or write or what. If we don't give any letter, it assumes we meant 'r' for 'read'. The options are:
r - open the file for reading
w - empty the file and open it to writing
a - open the file for writing (appending) to the end; do NOT empty it.
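A quick way to convince yourself of the difference between 'w' and 'a' is a throwaway sketch (the filename here is arbitrary):

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "modes_demo.txt")

fh = open(path, "w")   # 'w': start from an empty file
fh.write("first\n")
fh.close()

fh = open(path, "a")   # 'a': keep what is there, add to the end
fh.write("second\n")
fh.close()

fh = open(path, "w")   # 'w' again: the old contents are gone
fh.write("third\n")
fh.close()

contents = open(path).read()
print(contents)   # just "third"
```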
We can see this in action, by opening the old recipe for reading and a new recipe for writing. Where we use .read() on the old filehandle, we will use .write() on the new filehandle.
The write function is a lot like print, only it does NOT assume that you want a line break at the end. In this case that is pretty convenient, since we don't!
|
old_recipe = open( "Dropbox/beef_stew.txt" )
new_recipe = open( "Dropbox/numbered_beef_stew.txt", "w" )
counter = 1
for line in old_recipe:
    new_recipe.write( "%d: %s" % ( counter, line ) )
    counter += 1 # this is the same as counter = counter + 1
old_recipe.close()
new_recipe.close()
|
So let's look at the new recipe! We open it for reading this time, instead of writing. And instead of reading it line by line, we will read it all in one go, so that we don't have to worry about what print does with line endings.
|
new_recipe = open( "Dropbox/numbered_beef_stew.txt" )
contents = new_recipe.read()
new_recipe.close()
print(contents)
|
Let's say we forgot a step at the end, and want to add it.
When we write a new line to the file, since we are using .write() and not print(), we have to make sure to add the line break at the end of the line ourselves. In Python this can usually be done with "\n", no matter which sort of computer you are on. (This fact can confuse nerds used to the old ways of things, but is convenient for you!)
Finally, although it is always important to close the files we open, it is especially important if we are writing to the file. If we forget to close a file we're writing to, then it's possible that not everything will get written!
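One way to make sure a file always gets closed, even if something goes wrong partway through, is Python's with statement; here is a small sketch using a throwaway file rather than the recipe:

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "recipe_demo.txt")

# The with-block closes the filehandle for us when the block ends,
# even if an error happens partway through
with open(path, "a") as fh:
    fh.write("18: give the leftovers to the cat\n")

print(fh.closed)   # True - no explicit fh.close() needed
```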
|
new_recipe = open( "Dropbox/numbered_beef_stew.txt", "a" )
new_recipe.write( "18: give the leftovers to the cat\n" )
new_recipe.close()
|
And let's see if that worked. This time, instead of using .read() to put the file into a single variable, we will use .readlines() to put the file line by line into a list. This is useful if you want to read the file all in one go, but are going to do something with its contents line by line.
|
new_recipe = open( "Dropbox/numbered_beef_stew.txt" )
contents = new_recipe.readlines()
new_recipe.close()
for line in contents:
    print("-", line, end='')
|
<li>How much time has passed?
|
d1 - d2
|
BAMM.101x/datetime_objects.ipynb
|
KECB/learn
|
mit
|
<h4>Obviously that's not going to work. </h4>
<h4>We can't do date operations on strings</h4>
<h4>Let's see what happens with datetime</h4>
|
import datetime
d1 = datetime.date(2016,11,24)
d2 = datetime.date(2017,10,24)
max(d1,d2)
print(d2 - d1)
|
<li>datetime objects understand time
<h3>The datetime library contains several useful types</h3>
<li>date: stores the date (month,day,year)
<li>time: stores the time (hours,minutes,seconds)
<li>datetime: stores the date as well as the time (month,day,year,hours,minutes,seconds)
<li>timedelta: duration between two datetime or date objects
<h3>datetime.date</h3>
|
import datetime
century_start = datetime.date(2000,1,1)
today = datetime.date.today()
print(century_start,today)
print("We are",today-century_start,"days into this century")
print(type(century_start))
print(type(today))
|
<h3>For a cleaner output</h3>
|
print("We are",(today-century_start).days,"days into this century")
|
<h3>datetime.datetime</h3>
|
century_start = datetime.datetime(2000,1,1,0,0,0)
time_now = datetime.datetime.now()
print(century_start,time_now)
print("we are",time_now - century_start,"days, hours, minutes and seconds into this century")
|
<h4>datetime objects can check validity</h4>
<li>A ValueError exception is raised if the object is invalid</li>
|
some_date=datetime.date(2015,2,29)
#some_date =datetime.date(2016,2,29)
#some_time=datetime.datetime(2015,2,28,23,60,0)
|
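To watch the check happen without crashing the session, the invalid date can be wrapped in a try/except (standard Python, nothing datetime-specific):

```python
import datetime

try:
    datetime.date(2015, 2, 29)       # 2015 is not a leap year
except ValueError as err:
    print("Rejected:", err)

print(datetime.date(2016, 2, 29))    # 2016 is a leap year, so this works
```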
<h3>datetime.timedelta</h3>
<h4>Used to store the duration between two points in time</h4>
|
century_start = datetime.datetime(2000,1,1,0,0,0)
time_now = datetime.datetime.now()
time_since_century_start = time_now - century_start
print("days since century start",time_since_century_start.days)
print("seconds since century start",time_since_century_start.total_seconds())
print("minutes since century start",time_since_century_start.total_seconds()/60)
print("hours since century start",time_since_century_start.total_seconds()/60/60)
|
<h3>datetime.time</h3>
|
date_and_time_now = datetime.datetime.now()
time_now = date_and_time_now.time()
print(time_now)
|
<h4>You can do arithmetic operations on datetime objects</h4>
<li>You can use timedelta objects to calculate new dates or times from a given date
|
today=datetime.date.today()
five_days_later=today+datetime.timedelta(days=5)
print(five_days_later)
now=datetime.datetime.today()
five_minutes_and_five_seconds_later = now + datetime.timedelta(minutes=5,seconds=5)
print(five_minutes_and_five_seconds_later)
now=datetime.datetime.today()
five_minutes_and_five_seconds_earlier = now+datetime.timedelta(minutes=-5,seconds=-5)
print(five_minutes_and_five_seconds_earlier)
|
<li>But you can't use timedelta on time objects. If you do, you'll get a TypeError exception
|
time_now=datetime.datetime.now().time() #Returns the time component (drops the day)
print(time_now)
thirty_seconds=datetime.timedelta(seconds=30)
#time_later = time_now + thirty_seconds   # raises TypeError: time objects don't support + timedelta
#Bug or feature?
#But this is Python
#And we can always get around something by writing a new function!
#Let's write a small function to get around this problem
def add_to_time(time_object, time_delta):
    import datetime
    temp_datetime_object = datetime.datetime(500, 1, 1, time_object.hour, time_object.minute, time_object.second)
    print(temp_datetime_object)
    return (temp_datetime_object + time_delta).time()
#And test it
time_now=datetime.datetime.now().time()
thirty_seconds=datetime.timedelta(seconds=30)
print(time_now,add_to_time(time_now,thirty_seconds))
|
<h2>datetime and strings</h2>
<h4>datetime.strptime</h4>
<li>datetime.strptime(): grabs time from a string and creates a date or datetime or time object
<li>The programmer needs to tell the function what format the string is using
<li> See http://pubs.opengroup.org/onlinepubs/009695399/functions/strptime.html for how to specify the format
|
date='01-Apr-03'
date_object=datetime.datetime.strptime(date,'%d-%b-%y')
print(date_object)
#Unfortunately, there is no similar thing for time delta
#So we have to be creative!
bus_travel_time='2:15:30'
hours,minutes,seconds=bus_travel_time.split(':')
x=datetime.timedelta(hours=int(hours),minutes=int(minutes),seconds=int(seconds))
print(x)
#Or write a function that will do this for a particular format
def get_timedelta(time_string):
    import datetime
    hours, minutes, seconds = time_string.split(':')
    return datetime.timedelta(hours=int(hours), minutes=int(minutes), seconds=int(seconds))
|
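A quick check that the helper behaves as intended (repeating the function definition so the sketch is self-contained):

```python
import datetime

def get_timedelta(time_string):
    hours, minutes, seconds = time_string.split(':')
    return datetime.timedelta(hours=int(hours), minutes=int(minutes), seconds=int(seconds))

delta = get_timedelta('2:15:30')
print(delta)                   # 2:15:30
print(delta.total_seconds())   # 8130.0
```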
<h4>datetime.strftime</h4>
<li>The strftime function flips the strptime function: it converts a datetime object to a string with the specified format
|
now = datetime.datetime.now()
string_now = datetime.datetime.strftime(now,'%m/%d/%y %H:%M:%S')
print(now,string_now)
print(str(now)) #Or you can use the default conversion
|
Showing and Saving
NOTE: in IPython notebooks calling plot will display directly below the call to plot. When not in IPython you have several options for viewing the figure:
call b.show or b.savefig after calling plot
use the returned autofig and matplotlib figures however you'd like
pass show=True to the plot method.
pass save='myfilename' to the plot method. (same as calling plt.savefig('myfilename'))
Default Plots
To see the options for plotting that are dataset-dependent see the tutorials on that dataset method:
ORB dataset
MESH dataset
LC dataset
RV dataset
LP dataset
By calling the plot method on the bundle (or any ParameterSet) without any arguments, a plot or series of subplots will be built based on the contents of that ParameterSet.
|
afig, mplfig = b.plot(show=True)
|
2.1/tutorials/plotting.ipynb
|
phoebe-project/phoebe2-docs
|
gpl-3.0
|
For more information on each of the available arrays, see the relevant tutorial on that dataset method:
ORB dataset
MESH dataset
LC dataset
RV dataset
LP dataset
Selecting Phase
And to plot in phase we just send x='phases' or x='phases:binary'.
Setting x='phases' will use the ephemeris from the top-level of the hierarchy
(as if you called b.get_ephemeris()), whereas passing a string after the colon,
will use the ephemeris of that component.
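Under the hood, phase-folding is just mapping each time onto an interval using the period and a zero-point. This generic sketch (not PHOEBE's actual implementation) folds times onto [-0.5, 0.5):

```python
import numpy as np

def to_phase(times, period, t0=0.0):
    # Fold times onto phase in [-0.5, 0.5); t0 is the ephemeris zero-point
    return ((np.asarray(times, dtype=float) - t0) / period + 0.5) % 1.0 - 0.5

phases = to_phase([0.0, 1.0, 2.0, 2.5], period=2.0)
print(phases)   # [ 0.   -0.5   0.    0.25]
```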
|
afig, mplfig = b['lc01@dataset'].plot(x='phases', z=0, show=True)
|
Basic training loops
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/basic_training_loops"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/basic_training_loops.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/basic_training_loops.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/basic_training_loops.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In the previous guides, you have learned about tensors, variables, gradient tape, and modules. In this guide, you will fit these all together to train models.
TensorFlow also includes the tf.keras API, a high-level neural network API that provides useful abstractions to reduce boilerplate. However, in this guide, you will use basic classes.
Setup
|
import tensorflow as tf
import matplotlib.pyplot as plt
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
|
site/en/guide/basic_training_loops.ipynb
|
tensorflow/docs
|
apache-2.0
|
Solving machine learning problems
Solving a machine learning problem usually consists of the following steps:
Obtain training data.
Define the model.
Define a loss function.
Run through the training data, calculating the loss relative to the ideal value.
Calculate gradients for that loss and use an optimizer to adjust the variables to fit the data.
Evaluate your results.
For illustration purposes, in this guide you'll develop a simple linear model, $f(x) = x * W + b$, which has two variables: $W$ (weights) and $b$ (bias).
This is the most basic of machine learning problems: Given $x$ and $y$, try to find the slope and offset of a line via simple linear regression.
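For a problem this small there is also a closed-form answer; this NumPy sketch (independent of TensorFlow) recovers the slope and offset from noiseless points, just to show what training should converge toward:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 3.0 * x + 2.0                       # points exactly on the true line
slope, offset = np.polyfit(x, y, deg=1) # least-squares fit of a degree-1 polynomial
print(slope, offset)
```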
Data
Supervised learning uses inputs (usually denoted as x) and outputs (denoted y, often called labels). The goal is to learn from paired inputs and outputs so that you can predict the value of an output from an input.
Each input of your data, in TensorFlow, is almost always represented by a tensor, and is often a vector. In supervised training, the output (or value you'd like to predict) is also a tensor.
Here is some data synthesized by adding Gaussian (Normal) noise to points along a line.
|
# The actual line
TRUE_W = 3.0
TRUE_B = 2.0
NUM_EXAMPLES = 201
# A vector of random x values
x = tf.linspace(-2,2, NUM_EXAMPLES)
x = tf.cast(x, tf.float32)
def f(x):
    return x * TRUE_W + TRUE_B
# Generate some noise
noise = tf.random.normal(shape=[NUM_EXAMPLES])
# Calculate y
y = f(x) + noise
# Plot all the data
plt.plot(x, y, '.')
plt.show()
|
Tensors are usually gathered together in batches, or groups of inputs and outputs stacked together. Batching can confer some training benefits and works well with accelerators and vectorized computation. Given how small this dataset is, you can treat the entire dataset as a single batch.
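Stripped of any framework, batching is just slicing the dataset into successive chunks; a minimal sketch:

```python
import numpy as np

def batches(x, y, batch_size):
    # Yield successive (inputs, outputs) chunks of at most batch_size rows
    for i in range(0, len(x), batch_size):
        yield x[i:i + batch_size], y[i:i + batch_size]

x = np.arange(10)
y = 2 * x
sizes = [len(bx) for bx, by in batches(x, y, batch_size=4)]
print(sizes)   # [4, 4, 2]
```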
Define the model
Use tf.Variable to represent all weights in a model. A tf.Variable stores a value and provides this in tensor form as needed. See the variable guide for more details.
Use tf.Module to encapsulate the variables and the computation. You could use any Python object, but this way it can be easily saved.
Here, you define both w and b as variables.
|
class MyModel(tf.Module):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Initialize the weights to `5.0` and the bias to `0.0`
        # In practice, these should be randomly initialized
        self.w = tf.Variable(5.0)
        self.b = tf.Variable(0.0)

    def __call__(self, x):
        return self.w * x + self.b
model = MyModel()
# List the variables using tf.Module's built-in variable aggregation.
print("Variables:", model.variables)
# Verify the model works
assert model(3.0).numpy() == 15.0
|
The initial variables are set here in a fixed way, but Keras comes with a number of initializers you could use, with or without the rest of Keras.
Define a loss function
A loss function measures how well the output of a model for a given input matches the target output. The goal is to minimize this difference during training. Define the standard L2 loss, also known as the "mean squared error":
|
# This computes a single loss value for an entire batch
def loss(target_y, predicted_y):
    return tf.reduce_mean(tf.square(target_y - predicted_y))
|
Before training the model, you can visualize the loss value by plotting the model's predictions in red and the training data in blue:
|
plt.plot(x, y, '.', label="Data")
plt.plot(x, f(x), label="Ground truth")
plt.plot(x, model(x), label="Predictions")
plt.legend()
plt.show()
print("Current loss: %1.6f" % loss(y, model(x)).numpy())
|
Define a training loop
The training loop consists of repeatedly doing four tasks in order:
Sending a batch of inputs through the model to generate outputs
Calculating the loss by comparing the outputs to the target outputs (or labels)
Using gradient tape to find the gradients
Optimizing the variables with those gradients
For this example, you can train the model using gradient descent.
There are many variants of the gradient descent scheme that are captured in tf.keras.optimizers. But in the spirit of building from first principles, here you will implement the basic math yourself with the help of tf.GradientTape for automatic differentiation and tf.Variable.assign_sub for decrementing a value (which combines assignment and subtraction):
|
# Given a callable model, inputs, outputs, and a learning rate...
def train(model, x, y, learning_rate):
    with tf.GradientTape() as t:
        # Trainable variables are automatically tracked by GradientTape
        current_loss = loss(y, model(x))

    # Use GradientTape to calculate the gradients with respect to w and b
    dw, db = t.gradient(current_loss, [model.w, model.b])

    # Subtract the gradient scaled by the learning rate
    model.w.assign_sub(learning_rate * dw)
    model.b.assign_sub(learning_rate * db)
|
For a look at training, you can send the same batch of x and y through the training loop, and see how W and b evolve.
|
model = MyModel()
# Collect the history of W-values and b-values to plot later
weights = []
biases = []
epochs = range(10)
# Define a training loop
def report(model, loss):
    return f"W = {model.w.numpy():1.2f}, b = {model.b.numpy():1.2f}, loss={loss:2.5f}"

def training_loop(model, x, y):
    for epoch in epochs:
        # Update the model with the single giant batch
        train(model, x, y, learning_rate=0.1)

        # Track the values after this update
        weights.append(model.w.numpy())
        biases.append(model.b.numpy())
        current_loss = loss(y, model(x))

        print(f"Epoch {epoch:2d}:")
        print("    ", report(model, current_loss))
|
Do the training
|
current_loss = loss(y, model(x))
print(f"Starting:")
print(" ", report(model, current_loss))
training_loop(model, x, y)
|
Plot the evolution of the weights over time:
|
plt.plot(epochs, weights, label='Weights', color=colors[0])
plt.plot(epochs, [TRUE_W] * len(epochs), '--',
label = "True weight", color=colors[0])
plt.plot(epochs, biases, label='bias', color=colors[1])
plt.plot(epochs, [TRUE_B] * len(epochs), "--",
label="True bias", color=colors[1])
plt.legend()
plt.show()
|
Visualize how the trained model performs
|
plt.plot(x, y, '.', label="Data")
plt.plot(x, f(x), label="Ground truth")
plt.plot(x, model(x), label="Predictions")
plt.legend()
plt.show()
print("Current loss: %1.6f" % loss(model(x), y).numpy())
|
The same solution, but with Keras
It's useful to contrast the code above with the equivalent in Keras.
Defining the model looks exactly the same if you subclass tf.keras.Model. Remember that Keras models ultimately inherit from tf.Module.
|
class MyModelKeras(tf.keras.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Initialize the weights to `5.0` and the bias to `0.0`
        # In practice, these should be randomly initialized
        self.w = tf.Variable(5.0)
        self.b = tf.Variable(0.0)

    def call(self, x):
        return self.w * x + self.b
keras_model = MyModelKeras()
# Reuse the training loop with a Keras model
training_loop(keras_model, x, y)
# You can also save a checkpoint using Keras's built-in support
keras_model.save_weights("my_checkpoint")
|
Rather than write new training loops each time you create a model, you can use the built-in features of Keras as a shortcut. This can be useful when you do not want to write or debug Python training loops.
If you do, you will need to use model.compile() to set the parameters, and model.fit() to train. It can be less code to use Keras implementations of L2 loss and gradient descent, again as a shortcut. Keras losses and optimizers can be used outside of these convenience functions, too, and the previous example could have used them.
|
keras_model = MyModelKeras()
# compile sets the training parameters
keras_model.compile(
# By default, fit() uses tf.function(). You can
# turn that off for debugging, but it is on now.
run_eagerly=False,
# Using a built-in optimizer, configuring as an object
optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
# Keras comes with built-in MSE error
# However, you could use the loss function
# defined above
loss=tf.keras.losses.mean_squared_error,
)
|
Keras fit expects batched data or a complete dataset as a NumPy array. NumPy arrays are chopped into batches and default to a batch size of 32.
In this case, to match the behavior of the hand-written loop, you should pass all of x in as a single batch, by choosing a batch_size at least as large as the number of examples.
|
print(x.shape[0])
keras_model.fit(x, y, epochs=10, batch_size=1000)
|
Let's show the symbols data, to see how good the recommender has to be.
|
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 25
for i in range(n_epochs):
    tic = time()
    env.reset(STARTING_DAYS_AHEAD)
    results_list = sim.simulate_period(total_data_in_df,
                                       SYMBOL,
                                       agents[0],
                                       starting_days_ahead=STARTING_DAYS_AHEAD,
                                       possible_fractions=POSSIBLE_FRACTIONS,
                                       verbose=False,
                                       other_env=env)
    toc = time()
    print('Epoch: {}'.format(i))
    print('Elapsed time: {} seconds.'.format(toc - tic))
    print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
    show_results([results_list], data_in_df)

env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
                                   SYMBOL, agents[0],
                                   learn=False,
                                   starting_days_ahead=STARTING_DAYS_AHEAD,
                                   possible_fractions=POSSIBLE_FRACTIONS,
                                   other_env=env)
show_results([results_list], data_in_df, graph=True)
|
notebooks/prod/n08_simple_q_learner_1000_states_full_training_25_epochs.ipynb
|
mtasende/Machine-Learning-Nanodegree-Capstone
|
mit
|
Ok, let's save that
|
import pickle
with open('../../data/simple_q_learner_1000_states_full_training_25_epochs.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
|
Firstly we will calculate the features required to characterise the pointcloud.
These are calculated on 3 scales, which by default are k=10, 20 and 30 nearest neighbours.
If you wish to alter these, go ahead!
The features are:
- anisotropy
- curvature
- eigenentropy
- eigen_sum
- linearity
- omnivariance
- planarity
- sphericity
As well as these, the RGB values and normals are used as features. We do not, however, use XYZ, as these consistently have a negative effect on results.
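For orientation, here is one common formulation of the eigenvalue-based features, computed from the covariance of a point's neighbourhood. This is a generic sketch, not necessarily the exact definitions geospatial-learn uses:

```python
import numpy as np

def eigen_features(neighbours):
    # neighbours: (k, 3) array of XYZ coordinates in one point's neighbourhood
    cov = np.cov(neighbours.T)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    l1, l2, l3 = evals / evals.sum()          # normalised, l1 >= l2 >= l3
    return {
        'anisotropy':   (l1 - l3) / l1,
        'linearity':    (l1 - l2) / l1,
        'planarity':    (l2 - l3) / l1,
        'sphericity':   l3 / l1,
        'omnivariance': (l1 * l2 * l3) ** (1.0 / 3.0),
        'eigenentropy': -sum(l * np.log(l) for l in (l1, l2, l3) if l > 0),
    }

# A nearly flat (planar) neighbourhood should score high on planarity
rng = np.random.default_rng(0)
flat = rng.normal(size=(200, 3)) * np.array([1.0, 1.0, 0.01])
print(eigen_features(flat)['planarity'])
```

Note that linearity, planarity and sphericity as defined here always sum to 1, so together they describe the neighbourhood's shape.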
|
ln.ply_features(incloud)
|
example_notebooks/PointCloudClassification.ipynb
|
Ciaran1981/geospatial-learn
|
gpl-3.0
|
This snippet creates the column names for the 128 attributes (16 sensors, 8 attributes measured by each sensor), plus the target label and the batch number for the corresponding row.
We add '\n' to the 'Batch_No' label to mark the end of the line. If we used the csv package instead, we would not need to add it: the writer method handles line endings itself.
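For comparison, a tiny sketch of how the csv module supplies the line terminator itself (note its writer defaults to '\r\n'):

```python
import csv, io

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['Label', 'Sensor_A1', 'Batch_No'])   # no manual '\n' needed
print(repr(buf.getvalue()))   # 'Label,Sensor_A1,Batch_No\r\n'
```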
|
col_names = ['Label']
for x in map(chr, range(65, 81)):
    for y in map(str, range(1, 9)):
        col_names.append('Sensor_' + x + y)
col_names.append('Batch_No\n')
print('Number of columns -', len(col_names))
|
data_formatting.ipynb
|
aamirg/athena-hacks-honeywell
|
apache-2.0
|
Open a csv file and write the column names to it
|
out=open('formatted_data.csv','w')
out.write(','.join(col_names))
|
Get the file names for all the files from the 'raw_data' directory
|
from os import listdir
from os.path import isfile, join

raw_files = [f for f in listdir('./raw_data') if isfile(join('./raw_data', f))]
|
data_formatting.ipynb
|
aamirg/athena-hacks-honeywell
|
apache-2.0
|
Read the data in from each file and format it:
- strip extra whitespace and split on whitespace,
- add the batch number (taken from the file name) and the target label,
- split the key:value pairs on ':' and keep only the values.
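The per-line transformation described above can be sketched on a single assumed sample line (the values are invented for illustration; the raw files use a libsvm-like "label key:value" layout):

```python
# One assumed raw line: target label followed by key:value attribute pairs.
line = '1 A1:0.52 A2:0.31\n'
fields = line.strip().split(' ')                   # strip whitespace, split on spaces
label = fields[0]                                  # target label
values = [kv.split(':')[1] for kv in fields[1:]]   # keep only the values
row = ','.join([label] + values)
print(row)  # -> 1,0.52,0.31
```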
|
for file_name in raw_files:
with open('./raw_data/'+file_name,'r') as f:
for i in f:
j=i.strip().split(' ')
out.write(','.join([j[0]]+[k.split(':')[1] for k in j[1:]]+[file_name.strip('batch').split('.')[0],'\n']))
out.close()
|
data_formatting.ipynb
|
aamirg/athena-hacks-honeywell
|
apache-2.0
|
The above snippet can be rewritten in the long form as
|
#for file_name in raw_files:
# with open('./raw_data/'+file_name,'r') as f:
# for i in f:
# j=i.strip().split(' ')
# target_label = [j[0]]
# attributes = [k.split(':')[1] for k in j[1:]]
# batch_no = [file_name.strip('batch').split('.')[0],'\n']
# out.write(','.join(target_label+attributes+batch_no))
#out.close()
|
data_formatting.ipynb
|
aamirg/athena-hacks-honeywell
|
apache-2.0
|
Initialisation
|
# Do a SYGMA run for each NuGrid metallicity
s_02 = s.sygma(iniZ=0.02, imf_type='salpeter')
s_01 = s.sygma(iniZ=0.01, imf_type='salpeter')
s_006 = s.sygma(iniZ=0.006, imf_type='salpeter')
s_001 = s.sygma(iniZ=0.001, imf_type='salpeter')
s_0001 = s.sygma(iniZ=0.0001, imf_type='salpeter')
# Show the number of neutron star mergers at each timestep; should not be negative, and should be roughly equal!
print(np.sum(s_02.history.nsm_numbers))
print(np.sum(s_01.history.nsm_numbers))
print(np.sum(s_006.history.nsm_numbers))
print(np.sum(s_001.history.nsm_numbers))
print(np.sum(s_0001.history.nsm_numbers))
|
NSM_test_suite.ipynb
|
NuGrid/NuPyCEE
|
bsd-3-clause
|
IMF check
The Salpeter IMF allows one to calculate the number of stars $N_{12}$ in the mass interval $[m_1, m_2]$ with
(I) $N_{12} = k_N \int _{m_1}^{m_2} m^{-2.35} dm$
where $k_{N}$ is the normalization constant, which can be derived from the total amount of mass in the system, $M_{tot}$.
Since the total mass $M_{12}$ in the mass interval above can be estimated with
(II) $M_{12} = k_N \int _{m_1}^{m_2} m^{-1.35} dm$
using the mass interval [0.1, 100] for the total mass and $M_{tot}=10^4$ (the interval [8, 100] is used below for the neutron star progenitors), solving (II) for $k_N$ yields:
$10^4 = \frac{k_N}{0.35}(0.1^{-0.35} - 100^{-0.35})$
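The normalization above can be checked numerically; a minimal sketch in plain Python, with the same boundary values as in the formulas:

```python
# Solve M_tot = (k_N / 0.35) * (m_low**-0.35 - m_up**-0.35) for k_N.
m_low, m_up, m_tot = 0.1, 100.0, 1.0e4
k_N = m_tot * 0.35 / (m_low**-0.35 - m_up**-0.35)

# Plugging k_N back into equation (II) over [0.1, 100] must recover M_tot.
m_recovered = (k_N / 0.35) * (m_low**-0.35 - m_up**-0.35)
```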
|
# Name the relevant quantities
mtot = s_001.mgal
nsm_l = s_001.transitionmass
imf0 = s_001.imf_bdys[0]
imf1 = s_001.imf_bdys[1]
# Compute the normalization constant as defined above
k_N = (mtot*0.35) / (imf0**-0.35 - imf1**-0.35) #(I)
|
NSM_test_suite.ipynb
|
NuGrid/NuPyCEE
|
bsd-3-clause
|
The total number of NS merger progenitors $N_{12}$ is then:
$N_{12} = \frac{k_N}{1.35}(8^{-1.35} - 100^{-1.35})$
|
# Compute the total number of neutron star merger progenitors as defined above
N_nsm = (k_N/1.35) * (nsm_l**-1.35 - imf1**-1.35)
|
NSM_test_suite.ipynb
|
NuGrid/NuPyCEE
|
bsd-3-clause
|
Compared to a SYGMA run:
|
# Compute the number of NS merger progenitors in one of the SYGMA runs (normalize to mgal)
A_imf = mtot / s_001._imf(imf0, imf1, 2)
N_sim = A_imf * s_001._imf(nsm_l, imf1, 1)
print('Theoretical number of neutron star progenitors: ', N_nsm)
print('Number of neutron star progenitors in SYGMA run: ', N_sim)
print('Ratio (should be ~1): ', N_sim / N_nsm)
|
NSM_test_suite.ipynb
|
NuGrid/NuPyCEE
|
bsd-3-clause
|
Ensure r-process yields are being read in properly
|
# Obtain the fractional isotope yields from a SYGMA run
l = len(s_01.history.ism_iso_yield_nsm)
y = s_01.history.ism_iso_yield_nsm[l-1]
n = np.sum(y)
yields = y / n
# Exclude isotopes for which there are no r-process yields in the yield table
nonzero = np.nonzero(yields)
yields = yields[nonzero]
# Obtain the mass numbers for the isotopes (x-axis ticks)
massnums = []
for i in s_01.history.isotopes:
massnum = i.split('-')[1]
massnums.append(float(massnum))
# Again exclude zero values
massnums = np.asarray(massnums)
massnums = massnums[nonzero]
# Hacky text parser to get all the fractional isotope values from the r-process yield table
with open('yield_tables/r_process_rosswog_2014.txt') as r_yields_text:
    r_yields = r_yields_text.readlines()
newlines = []
for line in r_yields:
    if '&' in line:
        newlines.append(line.strip().split('&'))
massfracs = []
rmassnums = []
for ind, el in enumerate(newlines):
    if ind != 0:  # skip the header row
        massfracs.append(float(el[2]))
        rmassnums.append(float(el[1].split('-')[1]))
# Array of r-process yields to compare with simulation yields
massfracs = np.asarray(massfracs)
# Plot r-process yields against neutron star merger simulation yields (should be nearly identical)
plt.figure(figsize=(12,8))
plt.scatter(massnums, yields, marker='x', s=32, color='red', label='Final neutron star merger ejecta')
plt.scatter(rmassnums, massfracs, s=8, label='r-process yields')
plt.xlim(80, 250)
plt.ylim(0.000000000001, 1)
plt.yscale('log')
plt.xlabel('Mass number')
plt.ylabel('Mass fraction')
plt.legend(loc=4)
#plt.savefig('yield_comparison.png', dpi=200)
|
NSM_test_suite.ipynb
|
NuGrid/NuPyCEE
|
bsd-3-clause
|
Check DTD fits
|
# Define the three functions which are fit to the DTD (Power law, 5th and 6th degree polynomials)
def quintic(t, a, b, c, d, e, f):
y = (a*(t**5))+(b*(t**4))+(c*(t**3))+(d*(t**2))+(e*t)+f
return y
def sextic(t, a, b, c, d, e, f, g):
y = a*(t**6) + b*(t**5) + c*(t**4) + d*(t**3) + e*(t**2) + f*t + g
return y
def powlaw(a, t):
y = a / t
return y
# Solar metallicity fit, parameters from chem_evol (see fitting notebook to derive new parameters)
a = -0.0138858377011
b = 1.0712569392
c = -32.1555682584
d = 468.236521089
e = -3300.97955814
f = 9019.62468302
t = np.linspace(10, 22.2987, 100) # Polynomial portion of solar metallicity DTD x-axis
# Define the DTD fit and plot it
y = quintic(t, a, b, c, d, e, f)
plt.plot(t, y)
a = -2.88192413434e-5
b = 0.00387383125623
c = -0.20721471544
d = 5.64382310405
e = -82.6061154979
f = 617.464778362
g = -1840.49386605
y = sextic(t, a, b, c, d, e, f, g)
plt.plot(t, y)
|
NSM_test_suite.ipynb
|
NuGrid/NuPyCEE
|
bsd-3-clause
|
Utility functions
A number of utility callback functions for image display and for plotting the similarity metric during registration.
|
import matplotlib.pyplot as plt
%matplotlib inline
from ipywidgets import interact, fixed
from IPython.display import clear_output
# Callback invoked by the interact ipython method for scrolling through the image stacks of
# the two images (moving and fixed).
def display_images(fixed_image_z, moving_image_z, fixed_npa, moving_npa):
# Create a figure with two subplots and the specified size.
plt.subplots(1,2,figsize=(10,8))
# Draw the fixed image in the first subplot.
plt.subplot(1,2,1)
plt.imshow(fixed_npa[fixed_image_z,:,:],cmap=plt.cm.Greys_r);
plt.title('fixed image')
plt.axis('off')
# Draw the moving image in the second subplot.
plt.subplot(1,2,2)
plt.imshow(moving_npa[moving_image_z,:,:],cmap=plt.cm.Greys_r);
plt.title('moving image')
plt.axis('off')
plt.show()
# Callback invoked by the IPython interact method for scrolling and modifying the alpha blending
# of an image stack of two images that occupy the same physical space.
def display_images_with_alpha(image_z, alpha, fixed, moving):
img = (1.0 - alpha)*fixed[:,:,image_z] + alpha*moving[:,:,image_z]
plt.imshow(sitk.GetArrayFromImage(img),cmap=plt.cm.Greys_r);
plt.axis('off')
plt.show()
# Callback invoked when the StartEvent happens, sets up our new data.
def start_plot():
global metric_values, multires_iterations
metric_values = []
multires_iterations = []
# Callback invoked when the EndEvent happens, do cleanup of data and figure.
def end_plot():
global metric_values, multires_iterations
del metric_values
del multires_iterations
# Close figure, we don't want to get a duplicate of the plot later on.
plt.close()
# Callback invoked when the IterationEvent happens, update our data and display new figure.
def plot_values(registration_method):
global metric_values, multires_iterations
metric_values.append(registration_method.GetMetricValue())
# Clear the output area (wait=True, to reduce flickering), and plot current data
clear_output(wait=True)
# Plot the similarity metric values
plt.plot(metric_values, 'r')
plt.plot(multires_iterations, [metric_values[index] for index in multires_iterations], 'b*')
plt.xlabel('Iteration Number',fontsize=12)
plt.ylabel('Metric Value',fontsize=12)
plt.show()
# Callback invoked when the sitkMultiResolutionIterationEvent happens, update the index into the
# metric_values list.
def update_multires_iterations():
global metric_values, multires_iterations
multires_iterations.append(len(metric_values))
|
60_Registration_Introduction.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Read images
We first read the images, casting the pixel type to that required for registration (Float32 or Float64) and look at them.
|
fixed_image = sitk.ReadImage(fdata("training_001_ct.mha"), sitk.sitkFloat32)
moving_image = sitk.ReadImage(fdata("training_001_mr_T1.mha"), sitk.sitkFloat32)
interact(display_images, fixed_image_z=(0,fixed_image.GetSize()[2]-1), moving_image_z=(0,moving_image.GetSize()[2]-1), fixed_npa = fixed(sitk.GetArrayFromImage(fixed_image)), moving_npa=fixed(sitk.GetArrayFromImage(moving_image)));
|
60_Registration_Introduction.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Initial Alignment
Use the CenteredTransformInitializer to align the centers of the two volumes and set the center of rotation to the center of the fixed image.
|
initial_transform = sitk.CenteredTransformInitializer(fixed_image,
moving_image,
sitk.Euler3DTransform(),
sitk.CenteredTransformInitializerFilter.GEOMETRY)
moving_resampled = sitk.Resample(moving_image, fixed_image, initial_transform, sitk.sitkLinear, 0.0, moving_image.GetPixelIDValue())
interact(display_images_with_alpha, image_z=(0,fixed_image.GetSize()[2]), alpha=(0.0,1.0,0.05), fixed = fixed(fixed_image), moving=fixed(moving_resampled));
|
60_Registration_Introduction.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Registration
The specific registration task at hand estimates a 3D rigid transformation between images of different modalities. There are multiple components from each group (optimizers, similarity metrics, interpolators) that are appropriate for the task. Note that each component selection requires setting some parameter values. We have made the following choices:
<ul>
<li>Similarity metric, mutual information (Mattes MI):
<ul>
<li>Number of histogram bins, 50.</li>
<li>Sampling strategy, random.</li>
<li>Sampling percentage, 1%.</li>
</ul>
</li>
<li>Interpolator, sitkLinear.</li>
<li>Optimizer, gradient descent:
<ul>
<li>Learning rate, step size along traversal direction in parameter space, 1.0 .</li>
<li>Number of iterations, maximal number of iterations, 100.</li>
<li>Convergence minimum value, value used for convergence checking in conjunction with the energy profile of the similarity metric that is estimated in the given window size, 1e-6.</li>
<li>Convergence window size, number of values of the similarity metric which are used to estimate the energy profile of the similarity metric, 10.</li>
</ul>
</li>
</ul>
Perform registration using the settings given above, taking advantage of the built-in multi-resolution framework with a three-tier pyramid.
In this example we plot the similarity metric's value during registration. Note that the change of scales in the multi-resolution framework is readily visible.
|
registration_method = sitk.ImageRegistrationMethod()
# Similarity metric settings.
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkLinear)
# Optimizer settings.
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100, convergenceMinimumValue=1e-6, convergenceWindowSize=10)
registration_method.SetOptimizerScalesFromPhysicalShift()
# Setup for the multi-resolution framework.
registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
# Don't optimize in-place, we would possibly like to run this cell multiple times.
registration_method.SetInitialTransform(initial_transform, inPlace=False)
# Connect all of the observers so that we can perform plotting during registration.
registration_method.AddCommand(sitk.sitkStartEvent, start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, end_plot)
registration_method.AddCommand(sitk.sitkMultiResolutionIterationEvent, update_multires_iterations)
registration_method.AddCommand(sitk.sitkIterationEvent, lambda: plot_values(registration_method))
final_transform = registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
|
60_Registration_Introduction.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Post registration analysis
Query the registration method to see the metric value and the reason the optimization terminated.
The metric value allows us to compare multiple registration runs, since there is a probabilistic aspect to our registration: we use random sampling to estimate the similarity metric.
Always remember to query why the optimizer terminated. This will help you understand whether termination was too early, either because the settings were too tight (a small numberOfIterations forces early termination) or too loose (a large convergenceMinimumValue, the minimal required change in the similarity measure, allows early termination).
|
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
|
60_Registration_Introduction.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Now visually inspect the results.
|
moving_resampled = sitk.Resample(moving_image, fixed_image, final_transform, sitk.sitkLinear, 0.0, moving_image.GetPixelIDValue())
interact(display_images_with_alpha, image_z=(0,fixed_image.GetSize()[2]), alpha=(0.0,1.0,0.05), fixed = fixed(fixed_image), moving=fixed(moving_resampled));
|
60_Registration_Introduction.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
If we are satisfied with the results, save them to file.
|
sitk.WriteImage(moving_resampled, os.path.join(OUTPUT_DIR, 'RIRE_training_001_mr_T1_resampled.mha'))
sitk.WriteTransform(final_transform, os.path.join(OUTPUT_DIR, 'RIRE_training_001_CT_2_mr_T1.tfm'))
|
60_Registration_Introduction.ipynb
|
thewtex/SimpleITK-Notebooks
|
apache-2.0
|
Linking Widgets
Often, you may want to simply link widget attributes together. Synchronization of attributes can be done in a simpler way than by using bare traitlets events.
The first method is to use the link and directional_link functions from the traitlets module.
Linking traitlets attributes from the server side
|
import traitlets
caption = widgets.Latex(value = 'The values of slider1, slider2 and slider3 are synchronized')
slider1, slider2, slider3 = widgets.IntSlider(description='Slider 1'),\
                            widgets.IntSlider(description='Slider 2'),\
                            widgets.IntSlider(description='Slider 3')
l2 = traitlets.link((slider1, 'value'), (slider2, 'value'))
l3 = traitlets.link((slider1, 'value'), (slider3, 'value'))
display(caption, slider1, slider2, slider3)
caption = widgets.Latex(value = 'Changes in source values are reflected in target1 and target2')
source, target1, target2 = widgets.IntSlider(description='Source'),\
widgets.IntSlider(description='Target 1'),\
widgets.IntSlider(description='Target 2')
traitlets.dlink((source, 'value'), (target1, 'value'))
traitlets.dlink((source, 'value'), (target2, 'value'))
display(caption, source, target1, target2)
|
python/demo1/scipy-advanced-tutorial-master/Part2/Widget Events.ipynb
|
masterfish2015/my_project
|
mit
|
Function traitlets.link returns a Link object. The link can be broken by calling the unlink method.
|
l2.unlink()
|
python/demo1/scipy-advanced-tutorial-master/Part2/Widget Events.ipynb
|
masterfish2015/my_project
|
mit
|
Linking widgets attributes from the client side
When synchronizing traitlets attributes, you may experience a lag because of the latency due to the round-trip to the server side. You can also directly link widget attributes, either in a unidirectional or a bidirectional fashion, using the link widgets.
|
caption = widgets.Latex(value = 'The values of range1, range2 and range3 are synchronized')
range1, range2, range3 = widgets.IntSlider(description='Range 1'),\
widgets.IntSlider(description='Range 2'),\
widgets.IntSlider(description='Range 3')
l2 = widgets.jslink((range1, 'value'), (range2, 'value'))
l3 = widgets.jslink((range1, 'value'), (range3, 'value'))
display(caption, range1, range2, range3)
caption = widgets.Latex(value = 'Changes in source_range values are reflected in target_range1 and target_range2')
source_range, target_range1, target_range2 = widgets.IntSlider(description='Source range'),\
widgets.IntSlider(description='Target range 1'),\
widgets.IntSlider(description='Target range 2')
widgets.jsdlink((source_range, 'value'), (target_range1, 'value'))
widgets.jsdlink((source_range, 'value'), (target_range2, 'value'))
display(caption, source_range, target_range1, target_range2)
|
python/demo1/scipy-advanced-tutorial-master/Part2/Widget Events.ipynb
|
masterfish2015/my_project
|
mit
|
Function widgets.jslink returns a Link widget. The link can be broken by calling the unlink method.
|
l2.unlink()
|
python/demo1/scipy-advanced-tutorial-master/Part2/Widget Events.ipynb
|
masterfish2015/my_project
|
mit
|
Full model
|
def optimize(m, lr=5e-3, it=50):
    opt = torch.optim.LBFGS(m.parameters(), lr=lr, max_iter=40)
    def eval_model():
        # Closure for LBFGS: recompute the objective and its gradients.
        obj = m()
        opt.zero_grad()
        obj.backward()
        return obj
    for i in range(it):
        obj = opt.step(eval_model)  # step() returns the closure's loss
        if i % 5 == 0:
            print(i, ':', obj.data[0])
    return -obj.data[0]
k = candlegp.kernels.RBF(1, lengthscales=torch.DoubleTensor([1.0]),variance=torch.DoubleTensor([1.0]))
m = candlegp.models.GPR(Variable(X), Variable(Y.unsqueeze(1)), kern=k)
#m.likelihood.variance.set(0.01)
full_lml = optimize(m, lr=1e-2)
xstar = torch.linspace(-3,9,100).double()
mu, var = m.predict_y(Variable(xstar.unsqueeze(1)))
cred_size = (var**0.5*2).squeeze(1)
mu = mu.squeeze(1)
pyplot.plot(xstar.numpy(),mu.data.numpy(),'b')
pyplot.fill_between(xstar.numpy(),mu.data.numpy()+cred_size.data.numpy(), mu.data.numpy()-cred_size.data.numpy(),facecolor='0.75')
pyplot.plot(X.numpy(), Y.numpy(), 'kx', mew=2)
m
|
notebooks/upper_bound.ipynb
|
t-vi/candlegp
|
apache-2.0
|
Upper bounds for sparse variational models
As a first investigation, we compute the upper bound for models trained using the sparse variational GP approximation.
|
Ms = numpy.arange(4, 20, 2)
vfe_lml = []
vupper_lml = []
vfe_hyps = []
for M in Ms:
Zinit = X[:M, :].clone()
k = candlegp.kernels.RBF(1, lengthscales=torch.DoubleTensor([1.0]),variance=torch.DoubleTensor([1.0]))
m = candlegp.models.SGPR(Variable(X), Variable(Y.unsqueeze(1)), k, Zinit)
m.likelihood.variance.set(0.01)
optimize(m, lr=1e-3, it=100)
vfe_lml.append(-m.objective().data[0])
vupper_lml.append(m.compute_upper_bound().data[0])
vfe_hyps.append(m.state_dict())
print("%i" % M, end=" ")
IPython.display.clear_output()
pyplot.figure()
pyplot.plot(Ms[:len(vfe_lml)], vfe_lml, label="lower")
pyplot.plot(Ms[:len(vfe_lml)], vupper_lml, label="upper")
pyplot.axhline(full_lml, label="full", alpha=0.3)
pyplot.xlabel("Number of inducing points")
pyplot.ylabel("LML estimate")
pyplot.legend()
pyplot.title("LML bounds for models trained with SGPR")
pyplot.xlim(4,20)
pyplot.show()
vfe_hyps = pandas.DataFrame(vfe_hyps)
|
notebooks/upper_bound.ipynb
|
t-vi/candlegp
|
apache-2.0
|
We see that the lower bound increases as more inducing points are added. Note that the upper bound does not monotonically decrease! This is because as we train the sparse model, we also get better estimates of the hyperparameters. The upper bound will be different for this different setting of the hyperparameters, and is sometimes looser. The upper bound also converges to the true lml slower than the lower bound.
Upper bounds for fixed hyperparameters
Here, we train sparse models with the hyperparameters fixed to the optimal value found previously.
|
fMs = numpy.arange(3, 20, 1)
fvfe_lml = [] # Fixed vfe lml
fvupper_lml = [] # Fixed upper lml
for M in fMs:
Zinit = vfe.Z.value[:M, :].copy()
Zinit = np.vstack((Zinit, X[np.random.permutation(len(X))[:(M - len(Zinit))], :].copy()))
init_params = vfe.get_parameter_dict()
init_params['model.Z'] = Zinit
vfe = gpflow.models.SGPR(X, Y, gpflow.kernels.RBF(1), Zinit)
vfe.set_parameter_dict(init_params)
vfe.kern.fixed = True
vfe.likelihood.fixed = True
vfe.optimize()
fvfe_lml.append(-vfe._objective(vfe.get_free_state())[0])
fvupper_lml.append(vfe.compute_upper_bound())
print("%i" % M, end=" ")
plt.plot(fMs, fvfe_lml, label="lower")
plt.plot(fMs, fvupper_lml, label="upper")
plt.axhline(full_lml, label="full", alpha=0.3)
plt.xlabel("Number of inducing points")
plt.ylabel("LML estimate")
plt.legend()
|
notebooks/upper_bound.ipynb
|
t-vi/candlegp
|
apache-2.0
|
Now, as the hyperparameters are fixed, the bound does monotonically decrease. We chose the optimal hyperparameters here, but the picture should be the same for any hyperparameter setting. This shows that we increasingly get a better estimate of the marginal likelihood as we add more inducing points.
A tight bound does not imply a converged model
|
vfe = gpflow.models.SGPR(X, Y, gpflow.kernels.RBF(1), X[None, 0, :].copy())
vfe.optimize()
print("Lower bound: %f" % -vfe._objective(vfe.get_free_state())[0])
print("Upper bound: %f" % vfe.compute_upper_bound())
|
notebooks/upper_bound.ipynb
|
t-vi/candlegp
|
apache-2.0
|
Here we show that the bound is very tight for this hyperparameter setting. However, this does not imply that we have enough inducing points; it simply means that we have correctly identified the marginal likelihood for this particular hyperparameter setting. In this specific case, where we used a single inducing point, the model collapses to not using the GP at all (the lengthscale is very long, so the model only fits the mean). The rest of the variance is explained as noise. Such a GP can be perfectly approximated with a single inducing point.
|
vfe
|
notebooks/upper_bound.ipynb
|
t-vi/candlegp
|
apache-2.0
|
3. Semantic Analysis
The dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms. The topic model algorithms in gensim assume that input documents are parameterized using the tf-idf model.
|
tfidf = gensim.models.TfidfModel(corpus_bow)
|
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
|
jeroarenas/MLBigData
|
mit
|
Or to apply a transformation to a whole corpus
|
corpus_tfidf = tfidf[corpus_bow]
|
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
|
jeroarenas/MLBigData
|
mit
|
Task: Find the document with the largest positive weight for topic 0. Compare the document and the topic.
|
# Extract weights from corpus_lsi
# scode: weight0 = <FILL IN>
# Locate the maximum positive weight
nmax = np.argmax(weight0)
print(nmax)
print(weight0[nmax])
print(corpus_lsi[nmax])
# Get topic 0
# scode: topic_0 = <FILL IN>
# Compute a list of tuples (token, wordcount) for all tokens in topic_0, where wordcount is the number of
# occurrences of the token in the article.
# scode: token_counts = <FILL IN>
print("Topic 0 is:")
print(topic_0)
print("Token counts:")
print(token_counts)
|
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
|
jeroarenas/MLBigData
|
mit
|
3.2.2. LDA using Sci-kit Learn
The input matrix to the sklearn implementation of LDA contains the token-counts for all documents in the corpus.
sklearn contains a powerful CountVectorizer method that can be used to construct the input matrix from the corpus_bow.
First, we will define an auxiliary function to print the top tokens in the model, that has been taken from the sklearn documentation.
|
# Adapted from an example in sklearn site
# http://scikit-learn.org/dev/auto_examples/applications/topics_extraction_with_nmf_lda.html
# You can try also with the dataset provided by sklearn in
# from sklearn.datasets import fetch_20newsgroups
# dataset = fetch_20newsgroups(shuffle=True, random_state=1,
#                              remove=('headers', 'footers', 'quotes'))
def print_top_words(model, feature_names, n_top_words):
    for topic_idx, topic in enumerate(model.components_):
        print("Topic #%d:" % topic_idx)
        print(" ".join([feature_names[i]
                        for i in topic.argsort()[:-n_top_words - 1:-1]]))
        print()
|
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
|
jeroarenas/MLBigData
|
mit
|
Now, we need a dataset to feed the CountVectorizer object, obtained by joining all tokens in corpus_clean into a single string per document, using a space ' ' as separator.
Task: Join all tokens from each document in a single string, using a white space as separator.
|
print("Loading dataset...")
# scode: data_samples = <FILL IN>  # use join over corpus_clean
data_samples = [" ".join(doc) for doc in corpus_clean]
print('Document 0:')
print(data_samples[0][0:200], '...')
|
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
|
jeroarenas/MLBigData
|
mit
|
Now we are ready to compute the token counts.
|
# Use tf (raw term count) features for LDA.
print("Extracting tf features for LDA...")
n_features = 1000
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
t0 = time()
tf = tf_vectorizer.fit_transform(data_samples)
print("done in %0.3fs." % (time() - t0))
print(tf[0])
|
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
|
jeroarenas/MLBigData
|
mit
|
Now we can apply the LDA algorithm.
Task: Create an LDA object with the following parameters:
n_topics=n_topics, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0
|
print("Fitting LDA models with tf features, "
"n_samples=%d and n_features=%d..."
% (n_samples, n_features))
# scode: lda = <FILL IN>
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0)
|
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
|
jeroarenas/MLBigData
|
mit
|
Task: Fit model lda with the token frequencies computed by tf_vectorizer.
|
t0 = time()
corpus_lda = lda.fit_transform(tf)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, n_top_words)
|
3_TopicModeling/TM2_Topic_Modeling_student.ipynb
|
jeroarenas/MLBigData
|
mit
|
Next, load the data that we are going to analyse using hoggorm. After the data has been loaded into pandas data frames, we'll display them in the notebook.
|
# Load fluorescence data
X1_df = pd.read_csv('cheese_fluorescence.txt', index_col=0, sep='\t')
X1_df
# Load sensory data
X2_df = pd.read_csv('cheese_sensory.txt', index_col=0, sep='\t')
X2_df
|
examples/SMI/SMI_on_sensory_and_fluorescence.ipynb
|
olivertomic/hoggorm
|
bsd-2-clause
|
Orthogonal Projections
The default comparison between two matrices with SMI is using Orthogonal Projections, i.e. ordinary least squares regression is used to relate the dominant subspaces in the two matrices.
In contrast to PLSR, SMI does not perform a prediction of sensory properties from fluorescence measurements, but rather treats the two sets of measurements symmetrically, focusing on the major variation in each of them.
More details regarding the use of the SMI are found in the documentation.
|
# Get the values from the data frame
X1 = X1_df.values
X2 = X2_df.values
smiOP = ho.SMI(X1, X2, ncomp1=10, ncomp2=10)
print(np.round(smiOP.smi, 2))
|
examples/SMI/SMI_on_sensory_and_fluorescence.ipynb
|
olivertomic/hoggorm
|
bsd-2-clause
|
Finally we visualize the SMI values and their corresponding P-values.
|
# Plot similarities
hop.plotSMI(smiOP, [10, 10], X1name='fluorescence', X2name='sensory')
|
examples/SMI/SMI_on_sensory_and_fluorescence.ipynb
|
olivertomic/hoggorm
|
bsd-2-clause
|
The significance symbols in the diamond plot above indicate if a chosen subspace from one matrix can be found inside the subspace from the other matrix ($\supset$, $\subset$, =), or if there is a significant difference (P-values <0.001 ***, <0.01 **, <0.05 *, <0.1 ., >=0.1).
From the P-values and plot we can observe that there is a significant difference between the sensory data and the fluorescence data in the first of the dominant subspaces of the matrices. Looking only at the diagonal, we see that 6 components are needed before we lose the significance completely. Looking at the one-dimensional subspaces, we can observe that four sensory components are needed before there is no significant difference to the first fluorescence component.
This can be interpreted as some fundamental difference in the information spanned by fluorescence measurements and sensory perceptions that is only masked if large proportions of the subspaces are included.
Procrustes Rotations
The similarities obtained with Procrustes Rotations (PR) satisfy PR <= OP, and in this simple case OP$^2$ = PR. Otherwise the pattern stays the same.
|
smiPR = ho.SMI(X1, X2, ncomp1=10, ncomp2=10, projection="Procrustes")
print(np.round(smiPR.smi, 2))
|
examples/SMI/SMI_on_sensory_and_fluorescence.ipynb
|
olivertomic/hoggorm
|
bsd-2-clause
|
The number of permutations can be controlled for quick (100) or accurate (>10000) computations of significance.
|
print(np.round(smiPR.significance(B = 100),2))
hop.plotSMI(smiPR, X1name='fluorescence', X2name='sensory')
|
examples/SMI/SMI_on_sensory_and_fluorescence.ipynb
|
olivertomic/hoggorm
|
bsd-2-clause
|
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/federated/tutorials/building_your_own_federated_learning_algorithm"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Before we begin
Before starting, run the following to make sure your environment is set up correctly. If you do not see a greeting, please refer to the Installation guide for instructions.
|
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import tensorflow as tf
import tensorflow_federated as tff
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Note: This colab has been verified to work with the latest released version of the tensorflow_federated pip package, but the Tensorflow Federated project is still in pre-release development and may not work at main.
Building Your Own Federated Learning Algorithm
In the image classification and text generation tutorials, we learned how to set up model and data pipelines for Federated Learning (FL), and performed federated training via TFF's tff.learning API layer.
This is only the tip of the iceberg when it comes to FL research. In this tutorial, we discuss how to implement federated learning algorithms without deferring to the tff.learning API. We aim to accomplish the following:
Goals:
Understand the general structure of federated learning algorithms.
Explore the Federated Core of TFF.
Use the Federated Core to implement Federated Averaging directly.
While this tutorial is self-contained, we recommend first reading the image classification and text generation tutorials.
Preparing the input data
We first load and preprocess the EMNIST dataset included in TFF. For more details, see the image classification tutorial.
|
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
To feed the dataset into our model, we flatten the data, and convert each example into a tuple of the form (flattened_image_vector, label).
|
NUM_CLIENTS = 10
BATCH_SIZE = 20
def preprocess(dataset):
def batch_format_fn(element):
"""Flatten a batch of EMNIST data and return a (features, label) tuple."""
return (tf.reshape(element['pixels'], [-1, 784]),
tf.reshape(element['label'], [-1, 1]))
return dataset.batch(BATCH_SIZE).map(batch_format_fn)
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
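To sanity-check what `batch_format_fn` does to tensor shapes, we can run the same preprocessing on a small synthetic stand-in for EMNIST (the zero-filled images and labels below are made up purely for illustration; they are not the real dataset):

```python
import tensorflow as tf

BATCH_SIZE = 20

def batch_format_fn(element):
  """Flatten a batch of EMNIST-style data and return a (features, label) tuple."""
  return (tf.reshape(element['pixels'], [-1, 784]),
          tf.reshape(element['label'], [-1, 1]))

# Synthetic stand-in: 40 zero-valued 28x28 "images" with zero labels.
fake_dataset = tf.data.Dataset.from_tensor_slices({
    'pixels': tf.zeros([40, 28, 28]),
    'label': tf.zeros([40], dtype=tf.int32),
}).batch(BATCH_SIZE).map(batch_format_fn)

features, label = next(iter(fake_dataset))
print(features.shape, label.shape)  # (20, 784) (20, 1)
```

Each 28x28 image is flattened into a 784-dimensional row, and labels get an explicit trailing dimension, matching the model's input spec.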
We now select a small number of clients, and apply the preprocessing above to their datasets.
|
client_ids = sorted(emnist_train.client_ids)[:NUM_CLIENTS]
federated_train_data = [preprocess(emnist_train.create_tf_dataset_for_client(x))
for x in client_ids
]
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Preparing the model
We use the same model as in the image classification tutorial. This model (implemented via tf.keras) has a single hidden layer, followed by a softmax layer.
|
def create_keras_model():
initializer = tf.keras.initializers.GlorotNormal(seed=0)
return tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(784,)),
tf.keras.layers.Dense(10, kernel_initializer=initializer),
tf.keras.layers.Softmax(),
])
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
In order to use this model in TFF, we wrap the Keras model as a tff.learning.Model. This allows us to perform the model's forward pass within TFF and extract model outputs. For more details, see the image classification tutorial.
|
def model_fn():
keras_model = create_keras_model()
return tff.learning.from_keras_model(
keras_model,
input_spec=federated_train_data[0].element_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
While we used tf.keras to create our tff.learning.Model here, TFF supports much more general models. These models have the following relevant attributes capturing the model weights:
trainable_variables: an iterable of the tensors corresponding to trainable layers.
non_trainable_variables: an iterable of the tensors corresponding to non-trainable layers.
For our purposes, we will only use the trainable_variables (as our model only has those!).
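As a quick illustration, here is a model built the same way as create_keras_model above; its only trainable tensors are the Dense layer's kernel and bias (this standalone snippet is for inspection only, not part of the tutorial's pipeline):

```python
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(10),
    tf.keras.layers.Softmax(),
])

# Only the Dense layer contributes trainable tensors: a kernel and a bias.
for v in model.trainable_variables:
  print(tuple(v.shape))  # (784, 10) then (10,)
```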
Building your own Federated Learning algorithm
While the tff.learning API allows one to create many variants of Federated Averaging, there are other federated algorithms that do not fit neatly into this framework. For example, you may want to add regularization or clipping, or implement more complicated algorithms such as federated GAN training. You may also be interested instead in federated analytics.
For these more advanced algorithms, we'll have to write our own custom algorithm using TFF. In many cases, federated algorithms have 4 main components:
A server-to-client broadcast step.
A local client update step.
A client-to-server upload step.
A server update step.
In TFF, we generally represent federated algorithms as a tff.templates.IterativeProcess (which we refer to as just an IterativeProcess throughout). This is a class that contains initialize and next functions, where initialize is used to initialize the server, and next performs one communication round of the federated algorithm. Let's write a skeleton of what our iterative process for FedAvg will look like.
First, we have an initialize function, which creates a tff.learning.Model and returns its trainable weights.
|
def initialize_fn():
model = model_fn()
return model.trainable_variables
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
This function looks good, but as we will see later, we will need to make a small modification to make it a "TFF computation".
We also want to sketch the next_fn.
|
def next_fn(server_weights, federated_dataset):
# Broadcast the server weights to the clients.
server_weights_at_client = broadcast(server_weights)
# Each client computes their updated weights.
client_weights = client_update(federated_dataset, server_weights_at_client)
# The server averages these updates.
mean_client_weights = mean(client_weights)
# The server updates its model.
server_weights = server_update(mean_client_weights)
return server_weights
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
We will focus on implementing these four components separately. We first focus on the parts that can be implemented in pure TensorFlow, namely the client and server update steps.
TensorFlow Blocks
Client update
We will use our tff.learning.Model to do client training in essentially the same way you would train a TensorFlow model. In particular, we will use tf.GradientTape to compute the gradient on batches of data, and then apply these gradients using a client_optimizer. We focus only on the trainable weights.
|
@tf.function
def client_update(model, dataset, server_weights, client_optimizer):
"""Performs training (using the server model weights) on the client's dataset."""
# Initialize the client model with the current server weights.
client_weights = model.trainable_variables
# Assign the server weights to the client model.
tf.nest.map_structure(lambda x, y: x.assign(y),
client_weights, server_weights)
# Use the client_optimizer to update the local model.
for batch in dataset:
with tf.GradientTape() as tape:
# Compute a forward pass on the batch of data
outputs = model.forward_pass(batch)
# Compute the corresponding gradient
grads = tape.gradient(outputs.loss, client_weights)
grads_and_vars = zip(grads, client_weights)
# Apply the gradient using a client optimizer.
client_optimizer.apply_gradients(grads_and_vars)
return client_weights
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Server update
The server update for FedAvg is simpler than the client update. We will implement "vanilla" federated averaging, in which we simply replace the server model weights by the average of the client model weights. Again, we only focus on the trainable weights.
|
@tf.function
def server_update(model, mean_client_weights):
"""Updates the server model weights as the average of the client model weights."""
model_weights = model.trainable_variables
# Assign the mean client weights to the server model.
tf.nest.map_structure(lambda x, y: x.assign(y),
model_weights, mean_client_weights)
return model_weights
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
The snippet above could be simplified by simply returning mean_client_weights. However, more advanced implementations of Federated Averaging use mean_client_weights with more sophisticated techniques, such as momentum or adaptivity.
Challenge: Implement a version of server_update that updates the server weights to be the midpoint of model_weights and mean_client_weights. (Note: this kind of "midpoint" approach is analogous to recent work on the Lookahead optimizer!)
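One possible sketch for this challenge, reusing the tf.nest.map_structure pattern from server_update (the name midpoint_server_update is our own, not a TFF API, and this is just one way to write it):

```python
import tensorflow as tf

@tf.function
def midpoint_server_update(model, mean_client_weights):
  """Moves each server weight to the midpoint of its current value
  and the corresponding mean client value."""
  model_weights = model.trainable_variables
  # Assign (current + mean_client) / 2 elementwise across the weight structure.
  tf.nest.map_structure(
      lambda x, y: x.assign((x + y) / 2.0),
      model_weights, mean_client_weights)
  return model_weights
```

Note that the vanilla server_update above is recovered by assigning y instead of (x + y) / 2.0.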
So far, we have written only pure TensorFlow code. This is by design, as TFF allows you to use much of the TensorFlow code you're already familiar with. However, we now have to specify the <em>orchestration logic</em>, that is, the logic that dictates what the server broadcasts to the clients, and what the clients upload to the server.
This will require the Federated Core of TFF.
Introduction to the Federated Core
The Federated Core (FC) is a set of lower-level interfaces that serve as the foundation for the tff.learning API. However, these interfaces are not limited to learning. In fact, they can be used for analytics and many other computations over distributed data.
At a high level, the Federated Core is a development environment that enables compactly expressed program logic to combine TensorFlow code with distributed communication operators (such as distributed sums and broadcasts). The goal is to give researchers and practitioners explicit control over the distributed communication in their systems, without requiring system implementation details (such as specifying point-to-point network message exchanges).
One key point is that TFF is designed with privacy-preservation in mind. Therefore, it allows explicit control over where data resides, to prevent unwanted accumulation of data at the centralized server location.
Federated data
A central concept in TFF is "federated data", which refers to a collection of data items hosted across a group of devices in a distributed system (e.g. client datasets, or the server model weights). We model the entire collection of data items across all devices as a single federated value.
For example, suppose we have client devices that each hold a float representing the temperature of a sensor. We could represent it as a federated float:
|
federated_float_on_clients = tff.FederatedType(tf.float32, tff.CLIENTS)
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Federated types are specified by a type T of their member constituents (e.g. tf.float32) and a group G of devices. We will focus on the cases where G is either tff.CLIENTS or tff.SERVER. Such a federated type is represented as {T}@G, as shown below.
|
str(federated_float_on_clients)
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
Why do we care so much about placements? A key goal of TFF is to enable writing code that could be deployed on a real distributed system. This means that it is vital to reason about which subsets of devices execute which code, and where different pieces of data reside.
TFF focuses on three things: the data, where the data is placed, and how the data is being transformed. The first two are encapsulated in federated types, while the last is encapsulated in federated computations.
Federated computations
TFF is a strongly-typed functional programming environment whose basic units are federated computations. These are pieces of logic that accept federated values as input and return federated values as output.
For example, suppose we wanted to average the temperatures on our client sensors. We could define the following (using our federated float):
|
@tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS))
def get_average_temperature(client_temperatures):
return tff.federated_mean(client_temperatures)
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|
You might ask: how is this different from the tf.function decorator in TensorFlow? The key answer is that the code generated by tff.federated_computation is neither TensorFlow nor Python code; it is a specification of a distributed system in an internal, platform-independent glue language.
While this may sound complicated, you can think of TFF computations as functions with well-defined type signatures. These type signatures can be directly queried.
|
str(get_average_temperature.type_signature)
|
site/ko/federated/tutorials/building_your_own_federated_learning_algorithm.ipynb
|
tensorflow/docs-l10n
|
apache-2.0
|