Function with state

For the solvers which have the option "rhs_with_state", the QobjEvo can take a coefficient function with the signature (for backward compatibility): def coeff(t, psi, args), or use advanced args: args={"psi=vec": psi0}.
def coeff_state(t, args):
    return np.max(args["psi"]) * args["w"]

td_state = QobjEvo([Id, [destroy, coeff_state]],
                   args={"w": 1, "psi=vec": qt.basis(N, 0)})
td_state(2, state=vec)
development/development-qobjevo-adv.ipynb
qutip/qutip-notebooks
lgpl-3.0
Cython object and "Compiling"

There is a Cython version of the QobjEvo for fast calls: qutip.cy.cqobjevo.CQobjEvo. The Cython object is created when the "compile" method is called. It contains fast versions of the call, expect, and rhs (spmv) methods.
td_str.compiled = False
print("Before compilation")
%timeit td_str(2, data=True)
%timeit td_str.mul_vec(2, vec)
%timeit td_str.mul_mat(2, mat_c)
%timeit td_str.mul_mat(2, mat_f)
%timeit td_str.expect(2, vec, 0)

td_str.compile()
print("After compilation")
%timeit td_str(2, data=True)
%timeit td_str.mul_vec(2, vec)
%timeit td_str.mul_mat(2, mat_c)
%timeit td_str.mul_mat(2, mat_f)
%timeit td_str.expect(2, vec, 0)
apply

Pass a function (Qobj, *args, **kwargs) -> Qobj to act on each component of the QobjEvo. This is only mathematically valid if the transformation is linear.
def multiply(qobj, b, factor=3.):
    return qobj * b * factor

print(td_func.apply(multiply, 2)(2) == td_func(2) * 6)
print(td_func.apply(multiply, 2, factor=2)(2) == td_func(2) * 4)
apply_decorator

Transform the functions containing the time dependence using a decorator. The decorator must return a function of (t, **kwargs). It does not modify the constant part (the constant part does not have a function; f(t) = 1).
def rescale_time_and_scale(f_original, time_scale, factor=2.):
    def f(t, *args, **kwargs):
        return f_original(time_scale * t, *args, **kwargs) * factor
    return f

print(td_func.apply_decorator(rescale_time_and_scale, 2)(2) == td_func(4) * 2 - Id)
print(td_func.apply_decorator(rescale_time_and_scale, 3, factor=3)(2) == td_func(6) * 3.0 - 2 * Id)
QobjEvo coefficients based on strings and np.array are first converted to functions, then the decorator is applied. There are options so that the type of the coefficient stays unchanged:

str_mod: change the string to str_mod[0] + str + str_mod[1]
inplace_np: modify the array in place (array[i] = decorator(lambda v: v)(array[i])); any modification that rescales the time will not work properly

Decorators can cause problems when used in parallel (functions cannot be pickled).
td_func_1 = QobjEvo([[create, exp_i]], args={"w": 2})
td_str_1 = QobjEvo([[create, "exp(-1j*t)"]], args={'w': 2.})
td_array_1 = QobjEvo([[create, np.exp(-1j*tlist)]], tlist=tlist)

def square_qobj(qobj):
    return qobj * qobj

def square_f(f_original):
    def f(t, *args, **kwargs):
        return f_original(t, *args, **kwargs)**2
    return f

t1 = td_func_1.apply(square_qobj).apply_decorator(square_f)
print(t1(2) == td_func_1(2) * td_func_1(2))
print(t1.ops[0][2])

t1 = td_str_1.apply(square_qobj).apply_decorator(square_f)
print(t1(2) == td_str_1(2) * td_str_1(2))
print("str not updated:", t1.ops[0][2])

t1 = td_str_1.apply(square_qobj).apply_decorator(square_f, str_mod=["(", ")**2"])
print(t1(2) == td_str_1(2) * td_str_1(2))
print("str updated:", t1.ops[0][2])

t1 = td_array_1.apply(square_qobj).apply_decorator(square_f)
print(t1(2) == td_array_1(2) * td_array_1(2))
print("array not updated:", t1.ops[0][2])

t1 = td_array_1.apply(square_qobj).apply_decorator(square_f, inplace_np=1)
print(t1(2) == td_array_1(2) * td_array_1(2))
print("array updated:", t1.ops[0][2])
Removing redundant Qobj

When multiple components of the QobjEvo are made from the same Qobj, you can merge them with the "compress" method. This is only done for components with the same form of time dependence:
small = qt.destroy(2)

def f1(t, args):
    return np.sin(t)

def f2(t, args):
    return np.cos(args["w"] * t)

def f3(t, args):
    return np.sin(args["w"] * t)

def f4(t, args):
    return np.cos(t)

td_redoundance = QobjEvo([qt.qeye(2), [small, "sin(t)"], [small, "cos(w*t)"],
                          [small, "sin(w*t)"], [small, "cos(t)"]], args={'w': 2.})
td_redoundance1 = QobjEvo([qt.qeye(2), [small, "sin(t)"], [small, "cos(w*t)"],
                           [small, "sin(w*t)"], [small, "cos(t)"]], args={'w': 2.})
td_redoundance2 = QobjEvo([qt.qeye(2), [small, f1], [small, f2], [small, f3], [small, f4]],
                          args={'w': 2.})
td_redoundance3 = QobjEvo([qt.qeye(2), [small, np.sin(tlist)], [small, np.cos(2*tlist)],
                           [small, np.sin(2*tlist)], [small, np.cos(tlist)]], tlist=tlist)
td_redoundance4 = QobjEvo([qt.qeye(2), [small, f1], [small, "cos(w*t)"],
                           [small, np.sin(2*tlist)], [small, "cos(t)"]],
                          args={'w': 2.}, tlist=tlist)

td_redoundance1.compress()
print(td_redoundance1.to_list())
print(len(td_redoundance1.ops))
print(td_redoundance(1.) == td_redoundance1(1.))

td_redoundance2.compress()
print(len(td_redoundance2.ops))
print(td_redoundance(1.) == td_redoundance2(1.))

td_redoundance3.compress()
print(len(td_redoundance3.ops))
print(td_redoundance(1.) == td_redoundance3(1.))

td_redoundance4.compress()
print(len(td_redoundance4.ops))
print(td_redoundance(1.) == td_redoundance4(1.))

td_redoundance_v2 = QobjEvo([qt.qeye(2), [qt.qeye(2), "sin(t)"], [small, "sin(t)"],
                             [qt.create(2), "sin(t)"]])
td_redoundance_v2.compress()
td_redoundance_v2.to_list()
cimport the Cython object

The Cython object can be cimported:

cdef class CQobjEvo:
    cdef void _mul_vec(self, double t, complex* vec, complex* out)
    cdef void _mul_matf(self, double t, complex* mat, complex* out, int nrow, int ncols)
    cdef void _mul_matc(self, double t, complex* mat, complex* out, int nrow, int ncols)
    cdef complex _expect(self, double t, complex* vec, int isherm)
    cdef complex _expect_super(self, double t, complex* rho, int isherm)
%%cython
from qutip.cy.cqobjevo cimport CQobjEvo
cimport numpy as np

def rhs_call_from_cy(CQobjEvo qobj, double t,
                     np.ndarray[complex, ndim=1] vec,
                     np.ndarray[complex, ndim=1] out):
    qobj._mul_vec(t, &vec[0], &out[0])

def expect_call_from_cy(CQobjEvo qobj, double t,
                        np.ndarray[complex, ndim=1] vec, int isherm):
    return qobj._expect(t, &vec[0])

def rhs_cdef_timing(CQobjEvo qobj, double t,
                    np.ndarray[complex, ndim=1] vec,
                    np.ndarray[complex, ndim=1] out):
    cdef int i
    for i in range(10000):
        qobj._mul_vec(t, &vec[0], &out[0])

def expect_cdef_timing(CQobjEvo qobj, double t,
                       np.ndarray[complex, ndim=1] vec, int isherm):
    cdef complex aa = 0.
    cdef int i
    for i in range(10000):
        aa = qobj._expect(t, &vec[0])
    return aa

def rhs_def_timing(qobj, double t,
                   np.ndarray[complex, ndim=1] vec, complex[::1] out):
    cdef int i
    for i in range(10000):
        out = qobj.mul_vec(t, vec)

def expect_def_timing(qobj, double t,
                      np.ndarray[complex, ndim=1] vec, int isherm):
    cdef complex aa = 0.
    cdef int i
    for i in range(10000):
        aa = qobj.expect(t, vec)
    return aa

td_str.compile()
print(expect_call_from_cy(td_str.compiled_qobjevo, 2, vec, 0) - td_str.expect(2, vec, 0))
%timeit expect_def_timing(td_str.compiled_qobjevo, 2, vec, 0)
%timeit expect_cdef_timing(td_str.compiled_qobjevo, 2, vec, 0)

out = np.zeros(N, dtype=np.complex128)
rhs_call_from_cy(td_str.compiled_qobjevo, 2, vec, out)
print([a - b for a, b in zip(out, td_str.mul_vec(2, vec))])
%timeit rhs_def_timing(td_str.compiled_qobjevo, 2, vec, out)
%timeit rhs_cdef_timing(td_str.compiled_qobjevo, 2, vec, out)

# Most of the time gained is from allocating the out vector, not the
td_cte = QobjEvo([Id])
td_cte.compile()
out = np.zeros(N, dtype=np.complex128)
rhs_call_from_cy(td_cte.compiled_qobjevo, 2, vec, out)
print([a - b for a, b in zip(out, td_cte.mul_vec(2, vec))])
%timeit rhs_def_timing(td_cte.compiled_qobjevo, 2, vec, out)
%timeit rhs_cdef_timing(td_cte.compiled_qobjevo, 2, vec, out)
Compiled string code
td_str.compiled = False
print(td_str.compile(code=True))

qt.about()
Determining movie recommendations

We'll now use the users_feats tensor we computed above to determine the movie ratings and recommendations for each user. To compute the projected rating for each movie, we compute the similarity between the user's feature vector and the corresponding movie feature vector, using the dot product as our similarity measure. In essence, this is a weighted movie average for each user.

TODO 2: Implement this as a matrix multiplication. Hint: one of the operands will need to be transposed.
users_ratings = tf.matmul(users_feats, tf.transpose(movies_feats))
users_ratings
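For intuition, the same dot-product similarity can be sketched in plain NumPy. The small feature matrices below are hypothetical, invented for illustration; only the shapes and the transposed matrix product mirror the TensorFlow cell above:

```python
import numpy as np

# Hypothetical data: 2 users x 3 features, 4 movies x 3 features.
users_feats = np.array([[0.6, 0.4, 0.0],
                        [0.0, 0.5, 0.5]])
movies_feats = np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0],
                         [0.5, 0.5, 0.0]])

# Projected rating = dot product of user features with each movie's features.
# The movie-feature operand is transposed so shapes align: (2,3) @ (3,4) -> (2,4).
users_ratings = users_feats @ movies_feats.T
print(users_ratings.shape)  # (2, 4)
```

Each entry (i, j) is the weighted average of movie j's features under user i's feature weights, which is exactly what the matrix multiplication computes in one shot.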
courses/machine_learning/deepdive2/recommendation_systems/solutions/content_based_by_hand.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Now we'll make a list of all the residue contacts that are made, keeping only those that occur more than 20% of the time. We'll put that into a ResidueContactConcurrence object, and plot it!
%%time
contacts = ContactFrequency(traj, query=yyg, haystack=protein)
contact_list = [(contact_pair, freq)
                for contact_pair, freq in contacts.residue_contacts.most_common()
                if freq >= 0.2]

%%time
concurrence = ResidueContactConcurrence(traj, contact_list, select="")

# optionally, create x-values... since we know that we have 1 ns/frame
times = [1.0 * (i + 1) for i in range(len(traj))]
(fig, ax, lgd) = plot_concurrence(concurrence, x_values=times)
plt.xlabel("Time (ns)")
plt.savefig("concurrences.pdf", bbox_extra_artists=(lgd,), bbox_inches='tight')
examples/concurrences.ipynb
dwhswenson/contact_map
lgpl-2.1
This plot shows when each contact occurred. The x-axis is time. Each dot represents that a specific contact pair is present at that time. The contact pairs are separated along the vertical axis, listed in the order from contact_list (here, decreasing order of frequency of the contact). The contacts are also listed in that order in the legend, to the right. This trajectory shows two groups of stable contacts between the protein and the ligand; i.e. there is a change in the stable state. This allows us to visually identify the contacts involved in each state. Both states involve the ligand being in contact with Phe33, but the earlier state includes contacts with Ile28, Gly29, etc., while the later state includes contacts with Ser32 and Gly168. This change occurs around 60ns (which is also frame 60), and is evident when viewing the MD trajectory. If you have NGLView installed, visualize the trajectory with the following:
# for visualization, we need to clean up the trajectory:
traj.topology.create_standard_bonds()  # required for image_molecules
traj = traj.image_molecules().superpose(traj)

import nglview as nv
view = nv.show_mdtraj(traj)
view.remove_cartoon()
view.add_cartoon('protein', color='#0000BB', opacity=0.3)
view.add_ball_and_stick("YYG")

# update to my recommended camera orientation
camera_orientation = [-100, 45, -30, 0,
                      50, 90, -45, 0,
                      0, -45, -100, 0,
                      -50, -45, -45, 1]
view._set_camera_orientation(camera_orientation)

# start trajectory at frame 60: you can scrub forward or backward
view.frame = 60
view
Migrating model checkpoints

<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/migrate/migrating_checkpoints"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/migrating_checkpoints.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/migrating_checkpoints.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/migrating_checkpoints.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table>

Note: Checkpoints saved with tf.compat.v1.Saver are often referred to as TF1 or name-based checkpoints. Checkpoints saved with tf.train.Checkpoint are referred to as TF2 or object-based checkpoints.

Overview

This guide assumes that you have a model that saves and loads checkpoints with tf.compat.v1.Saver, and want to migrate the code to use the TF2 tf.train.Checkpoint API, or use pre-existing checkpoints in your TF2 model.

Below are some common scenarios that you may encounter:

Scenario 1

There are existing TF1 checkpoints from previous training runs that need to be loaded or converted to TF2. To load the TF1 checkpoint in TF2, see the snippet Load a TF1 checkpoint in TF2. To convert the checkpoint to TF2, see Checkpoint conversion.

Scenario 2

You are adjusting your model in a way that risks changing variable names and paths (such as when incrementally migrating away from get_variable to explicit tf.Variable creation), and would like to maintain saving/loading of existing checkpoints along the way. See the section How to maintain checkpoint compatibility during model migration.

Scenario 3

You are migrating your training code and checkpoints to TF2, but your inference pipeline continues to require TF1 checkpoints for now (for production stability). Option 1: save both TF1 and TF2 checkpoints when training; see Save a TF1 checkpoint in TF2. Option 2: convert the TF2 checkpoint to TF1; see Checkpoint conversion.

The examples below show all the combinations of saving and loading checkpoints in TF1/TF2, so you have some flexibility in determining how to migrate your model.

Setup
import tensorflow as tf
import tensorflow.compat.v1 as tf1

def print_checkpoint(save_path):
    reader = tf.train.load_checkpoint(save_path)
    shapes = reader.get_variable_to_shape_map()
    dtypes = reader.get_variable_to_dtype_map()
    print(f"Checkpoint at '{save_path}':")
    for key in shapes:
        print(f"  (key='{key}', shape={shapes[key]}, dtype={dtypes[key].name}, "
              f"value={reader.get_tensor(key)})")
site/en/guide/migrate/migrating_checkpoints.ipynb
tensorflow/docs
apache-2.0
Changes from TF1 to TF2

This section is included if you are curious about what has changed between TF1 and TF2, and what we mean by "name-based" (TF1) vs "object-based" (TF2) checkpoints.

The two types of checkpoints are actually saved in the same format, which is essentially a key-value table. The difference lies in how the keys are generated. The keys in name-based checkpoints are the names of the variables. The keys in object-based checkpoints refer to the path from the root object to the variable (the examples below will help to get a better sense of what this means).

First, save some checkpoints:
with tf.Graph().as_default() as g:
    a = tf1.get_variable('a', shape=[], dtype=tf.float32,
                         initializer=tf1.zeros_initializer())
    b = tf1.get_variable('b', shape=[], dtype=tf.float32,
                         initializer=tf1.zeros_initializer())
    c = tf1.get_variable('scoped/c', shape=[], dtype=tf.float32,
                         initializer=tf1.zeros_initializer())
    with tf1.Session() as sess:
        saver = tf1.train.Saver()
        sess.run(a.assign(1))
        sess.run(b.assign(2))
        sess.run(c.assign(3))
        saver.save(sess, 'tf1-ckpt')

print_checkpoint('tf1-ckpt')

a = tf.Variable(5.0, name='a')
b = tf.Variable(6.0, name='b')
with tf.name_scope('scoped'):
    c = tf.Variable(7.0, name='c')

ckpt = tf.train.Checkpoint(variables=[a, b, c])
save_path_v2 = ckpt.save('tf2-ckpt')
print_checkpoint(save_path_v2)
If you look at the keys in tf2-ckpt, they all refer to the object paths of each variable. For example, variable a is the first element in the variables list, so its key becomes variables/0/... (feel free to ignore the .ATTRIBUTES/VARIABLE_VALUE constant). A closer inspection of the Checkpoint object below:
a = tf.Variable(0.)
b = tf.Variable(0.)
c = tf.Variable(0.)
root = ckpt = tf.train.Checkpoint(variables=[a, b, c])
print("root type =", type(root).__name__)
print("root.variables =", root.variables)
print("root.variables[0] =", root.variables[0])
Try experimenting with the below snippet and see how the checkpoint keys change with the object structure:
module = tf.Module()
module.d = tf.Variable(0.)
test_ckpt = tf.train.Checkpoint(v={'a': a, 'b': b}, c=c, module=module)
test_ckpt_path = test_ckpt.save('root-tf2-ckpt')
print_checkpoint(test_ckpt_path)
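As a framework-free illustration of the path-to-key idea (a toy sketch, not TensorFlow's actual key-generation code), walking an object tree of dicts and lists produces slash-separated paths of exactly the same shape as the checkpoint keys:

```python
# Toy sketch: derive checkpoint-style keys from an object tree of dicts and
# lists, mimicking the "path from the root object to the variable" rule.
def object_paths(obj, prefix=""):
    if isinstance(obj, dict):
        for name, child in obj.items():
            yield from object_paths(child, f"{prefix}{name}/")
    elif isinstance(obj, list):
        for i, child in enumerate(obj):
            yield from object_paths(child, f"{prefix}{i}/")
    else:  # a leaf plays the role of a variable
        yield prefix.rstrip("/"), obj

# Structure analogous to Checkpoint(v={'a': a, 'b': b}, c=c):
root = {"v": {"a": 1.0, "b": 2.0}, "c": 3.0}
print(dict(object_paths(root)))  # keys: 'v/a', 'v/b', 'c'
```

Renaming a key or reordering a list changes the generated paths, which is why object-based checkpoints depend on the object structure rather than on variable names.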
Why does TF2 use this mechanism?

Because there is no more global graph in TF2, variable names are unreliable and can be inconsistent between programs. TF2 encourages the object-oriented modelling approach where variables are owned by layers, and layers are owned by a model:

variable = tf.Variable(...)
layer.variable_name = variable
model.layer_name = layer

How to maintain checkpoint compatibility during model migration <a name="maintain-checkpoint-compat"></a>

One important step in the migration process is ensuring that all variables are initialized to the correct values, which in turn allows you to validate that the ops/functions are doing the correct computations. To accomplish this, you must consider the checkpoint compatibility between models in the various stages of migration. Essentially, this section answers the question: how do I keep using the same checkpoint while changing the model?

Below are three ways of maintaining checkpoint compatibility, in order of increasing flexibility:

1. The model has the same variable names as before.
2. The model has different variable names, and maintains an assignment map that maps variable names in the checkpoint to the new names.
3. The model has different variable names, and maintains a TF2 Checkpoint object that stores all of the variables.

When the variable names match

Long title: How to re-use checkpoints when the variable names match.

Short answer: You can directly load the pre-existing checkpoint with either tf1.train.Saver or tf.train.Checkpoint.

If you are using tf.compat.v1.keras.utils.track_tf1_style_variables, then it will ensure that your model variable names are the same as before. You can also manually ensure that variable names match.

When the variable names match in the migrated models, you may directly use either tf.train.Checkpoint or tf.compat.v1.train.Saver to load the checkpoint. Both APIs are compatible with eager and graph mode, so you can use them at any stage of the migration.
Note: You can use tf.train.Checkpoint to load TF1 checkpoints, but you cannot use tf.compat.v1.Saver to load TF2 checkpoints without complicated name matching. Below are examples of using the same checkpoint with different models. First, save a TF1 checkpoint with tf1.train.Saver:
with tf.Graph().as_default() as g:
    a = tf1.get_variable('a', shape=[], dtype=tf.float32,
                         initializer=tf1.zeros_initializer())
    b = tf1.get_variable('b', shape=[], dtype=tf.float32,
                         initializer=tf1.zeros_initializer())
    c = tf1.get_variable('scoped/c', shape=[], dtype=tf.float32,
                         initializer=tf1.zeros_initializer())
    with tf1.Session() as sess:
        saver = tf1.train.Saver()
        sess.run(a.assign(1))
        sess.run(b.assign(2))
        sess.run(c.assign(3))
        save_path = saver.save(sess, 'tf1-ckpt')

print_checkpoint(save_path)
The example below uses tf.compat.v1.Saver to load the checkpoint while in eager mode:
a = tf.Variable(0.0, name='a')
b = tf.Variable(0.0, name='b')
with tf.name_scope('scoped'):
    c = tf.Variable(0.0, name='c')

# With the removal of collections in TF2, you must pass in the list of variables
# to the Saver object:
saver = tf1.train.Saver(var_list=[a, b, c])
saver.restore(sess=None, save_path=save_path)
print(f"loaded values of [a, b, c]: [{a.numpy()}, {b.numpy()}, {c.numpy()}]")

# Saving also works in eager (sess must be None).
path = saver.save(sess=None, save_path='tf1-ckpt-saved-in-eager')
print_checkpoint(path)
The next snippet loads the checkpoint using the TF2 API tf.train.Checkpoint:
a = tf.Variable(0.0, name='a')
b = tf.Variable(0.0, name='b')
with tf.name_scope('scoped'):
    c = tf.Variable(0.0, name='c')

# Without the name_scope, name="scoped/c" works too:
c_2 = tf.Variable(0.0, name='scoped/c')

print("Variable names: ")
print(f"  a.name = {a.name}")
print(f"  b.name = {b.name}")
print(f"  c.name = {c.name}")
print(f"  c_2.name = {c_2.name}")

# Restore the values with tf.train.Checkpoint
ckpt = tf.train.Checkpoint(variables=[a, b, c, c_2])
ckpt.restore(save_path)
print(f"loaded values of [a, b, c, c_2]: [{a.numpy()}, {b.numpy()}, {c.numpy()}, {c_2.numpy()}]")
Variable names in TF2

Variables still all have a name argument you can set. Keras models also take a name argument, which they set as the prefix for their variables. The v1.name_scope function can be used to set variable name prefixes. This is very different from tf.variable_scope: it only affects names, and doesn't track variables or reuse.

The tf.compat.v1.keras.utils.track_tf1_style_variables decorator is a shim that helps you maintain variable names and TF1 checkpoint compatibility, by keeping the naming and reuse semantics of tf.variable_scope and tf.compat.v1.get_variable unchanged. See the Model mapping guide for more info.

Note 1: If you are using the shim, use TF2 APIs to load your checkpoints (even when using pre-trained TF1 checkpoints). See the section Checkpointing Keras.

Note 2: When migrating to tf.Variable from get_variable: if your shim-decorated layer or module consists of some variables (or Keras layers/models) that use tf.Variable instead of tf.compat.v1.get_variable and get attached as properties/tracked in an object-oriented way, they may have different variable naming semantics in TF1.x graphs/sessions versus during eager execution. In short, the names may not be what you expect them to be when running in TF2.

Warning: Variables may have duplicate names in eager execution, which may cause problems if multiple variables in the name-based checkpoint need to be mapped to the same name. You may be able to explicitly adjust the layer and variable names using tf.name_scope and layer constructor or tf.Variable name arguments to ensure there are no duplicates.

Maintaining assignment maps

Assignment maps are commonly used to transfer weights between TF1 models, and can also be used during your model migration if the variable names change.
You can use these maps with tf.compat.v1.train.init_from_checkpoint, tf.compat.v1.train.Saver, and tf.train.load_checkpoint to load weights into models in which the variable or scope names may have changed. The examples in this section will use a previously saved checkpoint:
print_checkpoint('tf1-ckpt')
Loading with init_from_checkpoint

tf1.train.init_from_checkpoint must be called while in a Graph/Session, because it places the values in the variable initializers instead of creating assign ops.

You can use the assignment_map argument to configure how the variables are loaded. From the documentation:

Assignment map supports following syntax:
* 'checkpoint_scope_name/': 'scope_name/' - will load all variables in current scope_name from checkpoint_scope_name with matching tensor names.
* 'checkpoint_scope_name/some_other_variable': 'scope_name/variable_name' - will initialize scope_name/variable_name variable from checkpoint_scope_name/some_other_variable.
* 'scope_variable_name': variable - will initialize given tf.Variable object with tensor 'scope_variable_name' from the checkpoint.
* 'scope_variable_name': list(variable) - will initialize list of partitioned variables with tensor 'scope_variable_name' from the checkpoint.
* '/': 'scope_name/' - will load all variables in current scope_name from checkpoint's root (e.g. no scope).
# Restoring with tf1.train.init_from_checkpoint:

# A new model with a different scope for the variables.
with tf.Graph().as_default() as g:
    with tf1.variable_scope('new_scope'):
        a = tf1.get_variable('a', shape=[], dtype=tf.float32,
                             initializer=tf1.zeros_initializer())
        b = tf1.get_variable('b', shape=[], dtype=tf.float32,
                             initializer=tf1.zeros_initializer())
        c = tf1.get_variable('scoped/c', shape=[], dtype=tf.float32,
                             initializer=tf1.zeros_initializer())
    with tf1.Session() as sess:
        # The assignment map will remap all variables in the checkpoint to the
        # new scope:
        tf1.train.init_from_checkpoint(
            'tf1-ckpt',
            assignment_map={'/': 'new_scope/'})
        # `init_from_checkpoint` adds the initializers to these variables.
        # Use `sess.run` to run these initializers.
        sess.run(tf1.global_variables_initializer())
        print("Restored [a, b, c]: ", sess.run([a, b, c]))
Loading with tf1.train.Saver

Unlike init_from_checkpoint, tf.compat.v1.train.Saver runs in both graph and eager mode. The var_list argument optionally accepts a dictionary, which must map checkpoint variable names to tf.Variable objects.
# Restoring with tf1.train.Saver (works in both graph and eager):

# A new model with a different scope for the variables.
with tf1.variable_scope('new_scope'):
    a = tf1.get_variable('a', shape=[], dtype=tf.float32,
                         initializer=tf1.zeros_initializer())
    b = tf1.get_variable('b', shape=[], dtype=tf.float32,
                         initializer=tf1.zeros_initializer())
    c = tf1.get_variable('scoped/c', shape=[], dtype=tf.float32,
                         initializer=tf1.zeros_initializer())

# Initialize the saver with a dictionary with the original variable names:
saver = tf1.train.Saver({'a': a, 'b': b, 'scoped/c': c})
saver.restore(sess=None, save_path='tf1-ckpt')
print("Restored [a, b, c]: ", [a.numpy(), b.numpy(), c.numpy()])
Loading with tf.train.load_checkpoint This option is for you if you need precise control over the variable values. Again, this works in both graph and eager modes.
# Restoring with tf.train.load_checkpoint (works in both graph and eager):

# A new model with a different scope for the variables.
with tf.Graph().as_default() as g:
    with tf1.variable_scope('new_scope'):
        a = tf1.get_variable('a', shape=[], dtype=tf.float32,
                             initializer=tf1.zeros_initializer())
        b = tf1.get_variable('b', shape=[], dtype=tf.float32,
                             initializer=tf1.zeros_initializer())
        c = tf1.get_variable('scoped/c', shape=[], dtype=tf.float32,
                             initializer=tf1.zeros_initializer())
    with tf1.Session() as sess:
        # It may be easier writing a loop if your model has a lot of variables.
        reader = tf.train.load_checkpoint('tf1-ckpt')
        sess.run(a.assign(reader.get_tensor('a')))
        sess.run(b.assign(reader.get_tensor('b')))
        sess.run(c.assign(reader.get_tensor('scoped/c')))
        print("Restored [a, b, c]: ", sess.run([a, b, c]))
Maintaining a TF2 Checkpoint object

If the variable and scope names may change a lot during the migration, then use tf.train.Checkpoint and TF2 checkpoints. TF2 uses the object structure instead of variable names (more details in Changes from TF1 to TF2).

In short, when creating a tf.train.Checkpoint to save or restore checkpoints, make sure it uses the same ordering (for lists) and keys (for dictionaries and keyword arguments to the Checkpoint initializer). Some examples of checkpoint compatibility:

```
ckpt = tf.train.Checkpoint(foo=[var_a, var_b])

# compatible with ckpt:
tf.train.Checkpoint(foo=[var_a, var_b])

# not compatible with ckpt:
tf.train.Checkpoint(foo=[var_b, var_a])
tf.train.Checkpoint(bar=[var_a, var_b])
```

The code samples below show how to use the "same" tf.train.Checkpoint to load variables with different names. First, save a TF2 checkpoint:
with tf.Graph().as_default() as g:
    a = tf1.get_variable('a', shape=[], dtype=tf.float32,
                         initializer=tf1.constant_initializer(1))
    b = tf1.get_variable('b', shape=[], dtype=tf.float32,
                         initializer=tf1.constant_initializer(2))
    with tf1.variable_scope('scoped'):
        c = tf1.get_variable('c', shape=[], dtype=tf.float32,
                             initializer=tf1.constant_initializer(3))
    with tf1.Session() as sess:
        sess.run(tf1.global_variables_initializer())
        print("[a, b, c]: ", sess.run([a, b, c]))

        # Save a TF2 checkpoint
        ckpt = tf.train.Checkpoint(unscoped=[a, b], scoped=[c])
        tf2_ckpt_path = ckpt.save('tf2-ckpt')

print_checkpoint(tf2_ckpt_path)
You can keep using tf.train.Checkpoint even if the variable/scope names change:
with tf.Graph().as_default() as g:
    a = tf1.get_variable('a_different_name', shape=[], dtype=tf.float32,
                         initializer=tf1.zeros_initializer())
    b = tf1.get_variable('b_different_name', shape=[], dtype=tf.float32,
                         initializer=tf1.zeros_initializer())
    with tf1.variable_scope('different_scope'):
        c = tf1.get_variable('c', shape=[], dtype=tf.float32,
                             initializer=tf1.zeros_initializer())
    with tf1.Session() as sess:
        sess.run(tf1.global_variables_initializer())
        print("Initialized [a, b, c]: ", sess.run([a, b, c]))

        ckpt = tf.train.Checkpoint(unscoped=[a, b], scoped=[c])
        # `assert_consumed` validates that all checkpoint objects are restored
        # from the checkpoint. `run_restore_ops` is required when running in a
        # TF1 session.
        ckpt.restore(tf2_ckpt_path).assert_consumed().run_restore_ops()
        # Removing `assert_consumed` is fine if you want to skip the validation.
        # ckpt.restore(tf2_ckpt_path).run_restore_ops()
        print("Restored [a, b, c]: ", sess.run([a, b, c]))
And in eager mode:
a = tf.Variable(0.)
b = tf.Variable(0.)
c = tf.Variable(0.)
print("Initialized [a, b, c]: ", [a.numpy(), b.numpy(), c.numpy()])

# The keys "scoped" and "unscoped" are no longer relevant, but are used to
# maintain compatibility with the saved checkpoints.
ckpt = tf.train.Checkpoint(unscoped=[a, b], scoped=[c])
ckpt.restore(tf2_ckpt_path).assert_consumed().run_restore_ops()
print("Restored [a, b, c]: ", [a.numpy(), b.numpy(), c.numpy()])
TF2 checkpoints in Estimator

The sections above describe how to maintain checkpoint compatibility while migrating your model. These concepts also apply to Estimator models, although the way the checkpoint is saved/loaded is slightly different. As you migrate your Estimator model to use TF2 APIs, you may want to switch from TF1 to TF2 checkpoints while the model is still using the Estimator. This section shows how to do so.

tf.estimator.Estimator and MonitoredSession have a saving mechanism called the scaffold, a tf.compat.v1.train.Scaffold object. The Scaffold can contain a tf1.train.Saver or tf.train.Checkpoint, which enables Estimator and MonitoredSession to save TF1- or TF2-style checkpoints.
# A model_fn that saves a TF1 checkpoint
def model_fn_tf1_ckpt(features, labels, mode):
    # This model adds 2 to the variable `v` in every train step.
    train_step = tf1.train.get_or_create_global_step()
    v = tf1.get_variable('var', shape=[], dtype=tf.float32,
                         initializer=tf1.constant_initializer(0))
    return tf.estimator.EstimatorSpec(
        mode,
        predictions=v,
        train_op=tf.group(v.assign_add(2), train_step.assign_add(1)),
        loss=tf.constant(1.),
        scaffold=None
    )

!rm -rf est-tf1
est = tf.estimator.Estimator(model_fn_tf1_ckpt, 'est-tf1')

def train_fn():
    return tf.data.Dataset.from_tensor_slices(([1, 2, 3], [4, 5, 6]))

est.train(train_fn, steps=1)

latest_checkpoint = tf.train.latest_checkpoint('est-tf1')
print_checkpoint(latest_checkpoint)

# A model_fn that saves a TF2 checkpoint
def model_fn_tf2_ckpt(features, labels, mode):
    # This model adds 2 to the variable `v` in every train step.
    train_step = tf1.train.get_or_create_global_step()
    v = tf1.get_variable('var', shape=[], dtype=tf.float32,
                         initializer=tf1.constant_initializer(0))
    ckpt = tf.train.Checkpoint(var_list={'var': v}, step=train_step)
    return tf.estimator.EstimatorSpec(
        mode,
        predictions=v,
        train_op=tf.group(v.assign_add(2), train_step.assign_add(1)),
        loss=tf.constant(1.),
        scaffold=tf1.train.Scaffold(saver=ckpt)
    )

!rm -rf est-tf2
est = tf.estimator.Estimator(model_fn_tf2_ckpt, 'est-tf2',
                             warm_start_from='est-tf1')

def train_fn():
    return tf.data.Dataset.from_tensor_slices(([1, 2, 3], [4, 5, 6]))

est.train(train_fn, steps=1)

latest_checkpoint = tf.train.latest_checkpoint('est-tf2')
print_checkpoint(latest_checkpoint)

assert est.get_variable_value('var_list/var/.ATTRIBUTES/VARIABLE_VALUE') == 4
site/en/guide/migrate/migrating_checkpoints.ipynb
tensorflow/docs
apache-2.0
The final value of v should be 4: it is warm-started from est-tf1 (where one train step brought it to 2), then trained for one additional step. The train step value doesn't carry over from the warm_start checkpoint. Checkpointing Keras Models built with Keras still use tf1.train.Saver and tf.train.Checkpoint to load pre-existing weights. When your model is fully migrated, switch to using model.save_weights and model.load_weights, especially if you are using the ModelCheckpoint callback when training. Some things you should know about checkpoints and Keras: Initialization vs Building Keras models and layers must go through two steps before being fully created. First is the initialization of the Python object: layer = tf.keras.layers.Dense(x). Second is the build step, in which most of the weights are actually created: layer.build(input_shape). You can also build a model by calling it or running a single train, eval, or predict step (the first time only). If you find that model.load_weights(path).assert_consumed() is raising an error, then it is likely that the model/layers have not been built. Keras uses TF2 checkpoints tf.train.Checkpoint(model).write is equivalent to model.save_weights. Same with tf.train.Checkpoint(model).read and model.load_weights. Note that Checkpoint(model) != Checkpoint(model=model). TF2 checkpoints work with Keras's build() step tf.train.Checkpoint.restore has a mechanism called deferred restoration which allows tf.Module and Keras objects to store variable values if the variable has not yet been created. This allows initialized models to load weights and build afterwards. ``` m = YourKerasModel() status = m.load_weights(path) # This call builds the model. The variables are created with the restored values. m.predict(inputs) status.assert_consumed() ``` Because of this mechanism, we highly recommend that you use TF2 checkpoint loading APIs with Keras models (even when restoring pre-existing TF1 checkpoints into the model mapping shims). See more in the checkpoint guide.
Code Snippets The snippets below show the TF1/TF2 version compatibility in the checkpoint saving APIs. Save a TF1 checkpoint in TF2 <a name="save-tf1-in-tf2"></a>
a = tf.Variable(1.0, name='a') b = tf.Variable(2.0, name='b') with tf.name_scope('scoped'): c = tf.Variable(3.0, name='c') saver = tf1.train.Saver(var_list=[a, b, c]) path = saver.save(sess=None, save_path='tf1-ckpt-saved-in-eager') print_checkpoint(path)
site/en/guide/migrate/migrating_checkpoints.ipynb
tensorflow/docs
apache-2.0
Load a TF1 checkpoint in TF2 <a name="load-tf1-in-tf2"></a>
a = tf.Variable(0., name='a') b = tf.Variable(0., name='b') with tf.name_scope('scoped'): c = tf.Variable(0., name='c') print("Initialized [a, b, c]: ", [a.numpy(), b.numpy(), c.numpy()]) saver = tf1.train.Saver(var_list=[a, b, c]) saver.restore(sess=None, save_path='tf1-ckpt-saved-in-eager') print("Restored [a, b, c]: ", [a.numpy(), b.numpy(), c.numpy()])
site/en/guide/migrate/migrating_checkpoints.ipynb
tensorflow/docs
apache-2.0
Save a TF2 checkpoint in TF1
with tf.Graph().as_default() as g: a = tf1.get_variable('a', shape=[], dtype=tf.float32, initializer=tf1.constant_initializer(1)) b = tf1.get_variable('b', shape=[], dtype=tf.float32, initializer=tf1.constant_initializer(2)) with tf1.variable_scope('scoped'): c = tf1.get_variable('c', shape=[], dtype=tf.float32, initializer=tf1.constant_initializer(3)) with tf1.Session() as sess: sess.run(tf1.global_variables_initializer()) ckpt = tf.train.Checkpoint( var_list={v.name.split(':')[0]: v for v in tf1.global_variables()}) tf2_in_tf1_path = ckpt.save('tf2-ckpt-saved-in-session') print_checkpoint(tf2_in_tf1_path)
site/en/guide/migrate/migrating_checkpoints.ipynb
tensorflow/docs
apache-2.0
Load a TF2 checkpoint in TF1
with tf.Graph().as_default() as g: a = tf1.get_variable('a', shape=[], dtype=tf.float32, initializer=tf1.constant_initializer(0)) b = tf1.get_variable('b', shape=[], dtype=tf.float32, initializer=tf1.constant_initializer(0)) with tf1.variable_scope('scoped'): c = tf1.get_variable('c', shape=[], dtype=tf.float32, initializer=tf1.constant_initializer(0)) with tf1.Session() as sess: sess.run(tf1.global_variables_initializer()) print("Initialized [a, b, c]: ", sess.run([a, b, c])) ckpt = tf.train.Checkpoint( var_list={v.name.split(':')[0]: v for v in tf1.global_variables()}) ckpt.restore('tf2-ckpt-saved-in-session-1').run_restore_ops() print("Restored [a, b, c]: ", sess.run([a, b, c]))
site/en/guide/migrate/migrating_checkpoints.ipynb
tensorflow/docs
apache-2.0
Checkpoint conversion <a name="checkpoint-conversion"></a> You can convert checkpoints between TF1 and TF2 by loading and re-saving the checkpoints. An alternative is tf.train.load_checkpoint, shown in the code below. Convert TF1 checkpoint to TF2
def convert_tf1_to_tf2(checkpoint_path, output_prefix): """Converts a TF1 checkpoint to TF2. To load the converted checkpoint, you must build a dictionary that maps variable names to variable objects. ``` ckpt = tf.train.Checkpoint(vars={name: variable}) ckpt.restore(converted_ckpt_path) ``` Args: checkpoint_path: Path to the TF1 checkpoint. output_prefix: Path prefix to the converted checkpoint. Returns: Path to the converted checkpoint. """ vars = {} reader = tf.train.load_checkpoint(checkpoint_path) dtypes = reader.get_variable_to_dtype_map() for key in dtypes.keys(): vars[key] = tf.Variable(reader.get_tensor(key)) return tf.train.Checkpoint(vars=vars).save(output_prefix)
site/en/guide/migrate/migrating_checkpoints.ipynb
tensorflow/docs
apache-2.0
Convert the checkpoint saved in the snippet Save a TF1 checkpoint in TF2:
# Make sure to run the snippet in `Save a TF1 checkpoint in TF2`. print_checkpoint('tf1-ckpt-saved-in-eager') converted_path = convert_tf1_to_tf2('tf1-ckpt-saved-in-eager', 'converted-tf1-to-tf2') print("\n[Converted]") print_checkpoint(converted_path) # Try loading the converted checkpoint. a = tf.Variable(0.) b = tf.Variable(0.) c = tf.Variable(0.) ckpt = tf.train.Checkpoint(vars={'a': a, 'b': b, 'scoped/c': c}) ckpt.restore(converted_path).assert_consumed() print("\nRestored [a, b, c]: ", [a.numpy(), b.numpy(), c.numpy()])
site/en/guide/migrate/migrating_checkpoints.ipynb
tensorflow/docs
apache-2.0
Convert TF2 checkpoint to TF1
def convert_tf2_to_tf1(checkpoint_path, output_prefix): """Converts a TF2 checkpoint to TF1. The checkpoint must be saved using a `tf.train.Checkpoint(var_list={name: variable})` To load the converted checkpoint with `tf.compat.v1.train.Saver`: ``` saver = tf.compat.v1.train.Saver(var_list={name: variable}) # An alternative, if the variable names match the keys: saver = tf.compat.v1.train.Saver(var_list=[variables]) saver.restore(sess, output_path) ``` """ vars = {} reader = tf.train.load_checkpoint(checkpoint_path) dtypes = reader.get_variable_to_dtype_map() for key in dtypes.keys(): # Get the variable name from the checkpoint key. if key.startswith('var_list/'): var_name = key.split('/')[1] # TF2 checkpoint keys use '/', so if they appear in the user-defined name, # they are escaped to '.S'. var_name = var_name.replace('.S', '/') vars[var_name] = tf.Variable(reader.get_tensor(key)) return tf1.train.Saver(var_list=vars).save(sess=None, save_path=output_prefix)
site/en/guide/migrate/migrating_checkpoints.ipynb
tensorflow/docs
apache-2.0
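The key mapping used by `convert_tf2_to_tf1` above can be checked without TensorFlow. TF2 checkpoint keys saved via `tf.train.Checkpoint(var_list=...)` take the form `var_list/<name>/.ATTRIBUTES/VARIABLE_VALUE` (as seen in the `est.get_variable_value` call earlier), with any `/` in the user-supplied name escaped as `.S`. A minimal pure-Python sketch of recovering the TF1-style names (the example keys are illustrative):

```python
def tf2_key_to_tf1_name(key):
    """Recover a TF1-style variable name from a TF2 checkpoint key.

    Mirrors the logic in convert_tf2_to_tf1: keys look like
    'var_list/<name>/.ATTRIBUTES/VARIABLE_VALUE', with '/' in the
    user-defined name escaped as '.S'.
    """
    if not key.startswith('var_list/'):
        return None  # e.g. '_CHECKPOINTABLE_OBJECT_GRAPH' or 'save_counter'
    var_name = key.split('/')[1]
    return var_name.replace('.S', '/')

keys = [
    'var_list/a/.ATTRIBUTES/VARIABLE_VALUE',
    'var_list/scoped.Sc/.ATTRIBUTES/VARIABLE_VALUE',
    '_CHECKPOINTABLE_OBJECT_GRAPH',
]
print([tf2_key_to_tf1_name(k) for k in keys])  # ['a', 'scoped/c', None]
```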
Convert the checkpoint saved in the snippet Save a TF2 checkpoint in TF1:
# Make sure to run the snippet in `Save a TF2 checkpoint in TF1`. print_checkpoint('tf2-ckpt-saved-in-session-1') converted_path = convert_tf2_to_tf1('tf2-ckpt-saved-in-session-1', 'converted-tf2-to-tf1') print("\n[Converted]") print_checkpoint(converted_path) # Try loading the converted checkpoint. with tf.Graph().as_default() as g: a = tf1.get_variable('a', shape=[], dtype=tf.float32, initializer=tf1.constant_initializer(0)) b = tf1.get_variable('b', shape=[], dtype=tf.float32, initializer=tf1.constant_initializer(0)) with tf1.variable_scope('scoped'): c = tf1.get_variable('c', shape=[], dtype=tf.float32, initializer=tf1.constant_initializer(0)) with tf1.Session() as sess: saver = tf1.train.Saver([a, b, c]) saver.restore(sess, converted_path) print("\nRestored [a, b, c]: ", sess.run([a, b, c]))
site/en/guide/migrate/migrating_checkpoints.ipynb
tensorflow/docs
apache-2.0
In the above example plot, one of the clusters is linearly separable and well separated from the other two clusters. Two of the clusters are close by and not linearly separable. Also, the dataset is 4-dimensional, i.e. it has 4 features, but for the sake of visualization using matplotlib, one of the dimensions has been ignored. Therefore, simply visualizing the data points is not always enough to determine the optimal number of clusters $K$. Elbow Method Yellowbrick's KElbowVisualizer implements the “elbow” method of selecting the optimal number of clusters by fitting the K-Means model with a range of values for $K$. If the line chart looks like an arm, then the “elbow” (the point of inflection on the curve) is a good indication that the underlying model fits best at that point. In the following example, the KElbowVisualizer fits the model for a range of $K$ values from 2 to 10, which is set by the parameter k=(2,11). When the model is fit with 3 clusters we can see an "elbow" in the graph, which in this case we know to be the optimal number since our dataset has 3 clusters of points.
# Instantiate the clustering model and visualizer model = KMeans() visualizer = KElbowVisualizer(model, k=(2,11)) visualizer.fit(X) # Fit the data to the visualizer visualizer.poof() # Draw/show/poof the data
examples/gokriznastic/Iris - clustering example.ipynb
pdamodaran/yellowbrick
apache-2.0
By default, the scoring parameter metric is set to distortion, which computes the sum of squared distances from each point to its assigned center. However, two other metrics can also be used with the KElbowVisualizer&mdash;silhouette and calinski_harabaz. The silhouette score is the mean silhouette coefficient for all samples, while the calinski_harabaz score computes the ratio of dispersion between and within clusters. The KElbowVisualizer also displays the amount of time to fit the model per $K$, which can be hidden by setting timings=False. In the following example, we'll use the calinski_harabaz score and hide the time to fit the model.
# Instantiate the clustering model and visualizer model = KMeans() visualizer = KElbowVisualizer(model, k=(2,11), metric='calinski_harabaz', timings=False) visualizer.fit(X) # Fit the data to the visualizer visualizer.poof() # Draw/show/poof the data
examples/gokriznastic/Iris - clustering example.ipynb
pdamodaran/yellowbrick
apache-2.0
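As a check on what the default distortion metric measures, it can be computed by hand with NumPy: for a given set of cluster centers, it is the sum of squared distances from each point to its assigned (nearest) center. A small sketch with made-up points and centers (not the Iris data):

```python
import numpy as np

def distortion(X, centers):
    """Sum of squared distances from each point to its nearest center."""
    # Pairwise squared distances: shape (n_samples, n_centers)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
centers = np.array([[0.0, 0.5], [10.0, 10.5]])
print(distortion(X, centers))  # each point is 0.5 from its center -> 4 * 0.25 = 1.0
```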
It is important to remember that the Elbow method does not work well if the data is not very clustered. In such cases, you might see a smooth curve and the optimal value of $K$ will be unclear. You can learn more about the Elbow method at Robert Grove's Blocks. Silhouette Visualizer Silhouette analysis can be used to evaluate the density and separation between clusters. The score is calculated by averaging the silhouette coefficient for each sample, which is computed as the difference between the average intra-cluster distance and the mean nearest-cluster distance for each sample, normalized by the maximum value. This produces a score between -1 and +1, where scores near +1 indicate high separation and scores near -1 indicate that the samples may have been assigned to the wrong cluster. The SilhouetteVisualizer displays the silhouette coefficient for each sample on a per-cluster basis, allowing users to visualize the density and separation of the clusters. This is particularly useful for determining cluster imbalance or for selecting a value for $K$ by comparing multiple visualizers. Since this is the Iris dataset, we already know that the data points are grouped into 3 clusters. So for the first SilhouetteVisualizer example, we'll set $K$ to 3 in order to show how the plot looks when using the optimal value of $K$. Notice that the graph contains homogeneous and long silhouettes. In addition, the vertical red-dotted line on the plot indicates the average silhouette score for all observations.
# Instantiate the clustering model and visualizer model = KMeans(3) visualizer = SilhouetteVisualizer(model) visualizer.fit(X) # Fit the data to the visualizer visualizer.poof() # Draw/show/poof the data
examples/gokriznastic/Iris - clustering example.ipynb
pdamodaran/yellowbrick
apache-2.0
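The silhouette coefficient described above can also be computed by hand: for each sample, $a$ is the mean distance to the other points in its own cluster, $b$ is the mean distance to the points of the nearest other cluster, and $s = (b - a)/\max(a, b)$. A NumPy sketch on two well-separated made-up clusters (not the Iris data):

```python
import numpy as np

def silhouette_coefficients(X, labels):
    """Per-sample silhouette coefficient s = (b - a) / max(a, b)."""
    n = len(X)
    s = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        same = (labels == labels[i])
        a = d[same & (np.arange(n) != i)].mean()   # mean intra-cluster distance
        b = min(d[labels == k].mean()              # mean distance to nearest other cluster
                for k in set(labels) if k != labels[i])
        s[i] = (b - a) / max(a, b)
    return s

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels = np.array([0, 0, 1, 1])
print(silhouette_coefficients(X, labels).round(3))  # all close to +1
```

Swapping the labels so that each "cluster" mixes far-apart points drives every coefficient negative, which is exactly the "wrong cluster" signal mentioned above.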
For the next example, let's see what happens when using a non-optimal value for $K$, in this case, 6. Now we see that the silhouettes of the clusters have become narrower and of unequal width, and their silhouette coefficient scores have dropped. This occurs because the width of each silhouette is proportional to the number of samples assigned to the cluster. The model is trying to fit our data into a larger than optimal number of clusters, making some of the clusters narrower but much less cohesive, as seen from the drop in average silhouette score.
# Instantiate the clustering model and visualizer model = KMeans(6) visualizer = SilhouetteVisualizer(model) visualizer.fit(X) # Fit the data to the visualizer visualizer.poof() # Draw/show/poof the data
examples/gokriznastic/Iris - clustering example.ipynb
pdamodaran/yellowbrick
apache-2.0
After Gopal's improvements to Silhouette Visualizer
import numpy as np import matplotlib.pyplot as plt from yellowbrick.style import color_palette from yellowbrick.cluster.base import ClusteringScoreVisualizer from sklearn.metrics import silhouette_score, silhouette_samples ## Packages for export __all__ = [ "SilhouetteVisualizer" ] ########################################################################## ## Silhouette Method for K Selection ########################################################################## class SilhouetteVisualizer(ClusteringScoreVisualizer): """ The Silhouette Visualizer displays the silhouette coefficient for each sample on a per-cluster basis, visually evaluating the density and separation between clusters. The score is calculated by averaging the silhouette coefficient for each sample, computed as the difference between the average intra-cluster distance and the mean nearest-cluster distance for each sample, normalized by the maximum value. This produces a score between -1 and +1, where scores near +1 indicate high separation and scores near -1 indicate that the samples may have been assigned to the wrong cluster. In SilhouetteVisualizer plots, clusters with higher scores have wider silhouettes, but clusters that are less cohesive will fall short of the average score across all clusters, which is plotted as a vertical dotted red line. This is particularly useful for determining cluster imbalance, or for selecting a value for K by comparing multiple visualizers. Parameters ---------- model : a Scikit-Learn clusterer Should be an instance of a centroidal clustering algorithm (``KMeans`` or ``MiniBatchKMeans``). ax : matplotlib Axes, default: None The axes to plot the figure on. If None is passed in the current axes will be used (or generated if required). kwargs : dict Keyword arguments that are passed to the base class and may influence the visualization as defined in other Visualizers. Attributes ---------- silhouette_score_ : float Mean Silhouette Coefficient for all samples. 
Computed via scikit-learn `sklearn.metrics.silhouette_score`. silhouette_samples_ : array, shape = [n_samples] Silhouette Coefficient for each sample. Computed via scikit-learn `sklearn.metrics.silhouette_samples`. n_samples_ : integer Number of total samples in the dataset (X.shape[0]) n_clusters_ : integer Number of clusters (e.g. n_clusters or k value) passed to internal scikit-learn model. Examples -------- >>> from yellowbrick.cluster import SilhouetteVisualizer >>> from sklearn.cluster import KMeans >>> model = SilhouetteVisualizer(KMeans(10)) >>> model.fit(X) >>> model.poof() """ def __init__(self, model, ax=None, **kwargs): super(SilhouetteVisualizer, self).__init__(model, ax=ax, **kwargs) # Visual Properties # TODO: Fix the color handling self.colormap = kwargs.get('colormap', 'set1') self.color = kwargs.get('color', None) def fit(self, X, y=None, **kwargs): """ Fits the model and generates the silhouette visualization. """ # TODO: decide to use this method or the score method to draw. # NOTE: Probably this would be better in score, but the standard score # is a little different and I'm not sure how it's used. # Fit the wrapped estimator self.estimator.fit(X, y, **kwargs) # Get the properties of the dataset self.n_samples_ = X.shape[0] self.n_clusters_ = self.estimator.n_clusters # Compute the scores of the cluster labels = self.estimator.predict(X) self.silhouette_score_ = silhouette_score(X, labels) self.silhouette_samples_ = silhouette_samples(X, labels) # Draw the silhouette figure self.draw(labels) # Return the estimator return self def draw(self, labels): """ Draw the silhouettes for each sample and the average score. Parameters ---------- labels : array-like An array with the cluster label for each silhouette sample, usually computed with ``predict()``. Labels are not stored on the visualizer so that the figure can be redrawn with new data. 
""" # Track the positions of the lines being drawn y_lower = 10 # The bottom of the silhouette # Get the colors from the various properties # TODO: Use resolve_colors instead of this colors = color_palette(self.colormap, self.n_clusters_) # For each cluster, plot the silhouette scores for idx in range(self.n_clusters_): # Collect silhouette scores for samples in the current cluster . values = self.silhouette_samples_[labels == idx] values.sort() # Compute the size of the cluster and find upper limit size = values.shape[0] y_upper = y_lower + size color = colors[idx] self.ax.fill_betweenx( np.arange(y_lower, y_upper), 0, values, facecolor=color, edgecolor=color, alpha=0.5 ) # Label the silhouette plots with their cluster numbers self.ax.text(-0.05, y_lower + 0.5 * size, str(idx)) # Compute the new y_lower for next plot y_lower = y_upper + 10 # The vertical line for average silhouette score of all the values self.ax.axvline( x=self.silhouette_score_, color="red", linestyle="--" ) return self.ax def finalize(self): """ Prepare the figure for rendering by setting the title and adjusting the limits on the axes, adding labels and a legend. """ # Set the title self.set_title(( "Silhouette Plot of {} Clustering for {} Samples in {} Centers" ).format( self.name, self.n_samples_, self.n_clusters_ )) # Set the X and Y limits # The silhouette coefficient can range from -1, 1; # but here we scale the plot according to our visualizations # l_xlim and u_xlim are lower and upper limits of the x-axis, # set according to our calculated maximum and minimum silhouette score along with necessary padding l_xlim = max(-1, min(-0.1, round(min(self.silhouette_samples_) - 0.1, 1))) u_xlim = min(1, round(max(self.silhouette_samples_) + 0.1, 1)) self.ax.set_xlim([l_xlim, u_xlim]) # The (n_clusters_+1)*10 is for inserting blank space between # silhouette plots of individual clusters, to demarcate them clearly. 
self.ax.set_ylim([0, self.n_samples_ + (self.n_clusters_ + 1) * 10]) # Set the x and y labels self.ax.set_xlabel("silhouette coefficient values") self.ax.set_ylabel("cluster label") # Set the ticks on the axis object. self.ax.set_yticks([]) # Clear the yaxis labels / ticks self.ax.xaxis.set_major_locator(plt.MultipleLocator(0.1)) # Set the ticks at multiples of 0.1 # Instantiate the clustering model and visualizer model = KMeans(6) visualizer = SilhouetteVisualizer(model) visualizer.fit(X) # Fit the data to the visualizer visualizer.poof() # Draw/show/poof the data
examples/gokriznastic/Iris - clustering example.ipynb
pdamodaran/yellowbrick
apache-2.0
Equation of motion - SDE to be solved $\ddot{q}(t) + \Gamma_0\dot{q}(t) + \Omega_0^2 q(t) - \dfrac{1}{m} F(t) = 0$ where q = x, y or z, and $F(t) = \mathcal{F}_{fluct}(t) + F_{feedback}(t)$ Taken from page 46 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler Using $\mathcal{F}_{fluct}(t) = \sqrt{2m \Gamma_0 k_B T_0}\dfrac{dW(t)}{dt}$ and $F_{feedback}(t) = \Omega_0 \eta q^2 \dot{q}$ Taken from page 49 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler we get the following SDE: $\dfrac{d^2q(t)}{dt^2} + (\Gamma_0 - \Omega_0 \eta q(t)^2)\dfrac{dq(t)}{dt} + \Omega_0^2 q(t) - \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} = 0$ Split into two first-order ODEs/SDEs by letting $v = \dfrac{dq}{dt}$: $\dfrac{dv(t)}{dt} + (\Gamma_0 - \Omega_0 \eta q(t)^2)v + \Omega_0^2 q(t) - \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} = 0$ therefore $\dfrac{dv(t)}{dt} = -(\Gamma_0 - \Omega_0 \eta q(t)^2)v - \Omega_0^2 q(t) + \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt}$ and since $v = \dfrac{dq}{dt}$, we have $dq = v~dt$: \begin{align} dq&=v\,dt\\ dv&=[-(\Gamma_0-\Omega_0 \eta q(t)^2)v(t) - \Omega_0^2 q(t)]\,dt + \sqrt{\frac{2\Gamma_0 k_B T_0}{m}}\,dW \end{align} Apply Milstein Method to solve Consider the autonomous Itō stochastic differential equation ${\mathrm {d}}X_{t}=a(X_{t})\,{\mathrm {d}}t+b(X_{t})\,{\mathrm {d}}W_{t}$ Taking $X_t = q_t$ for the 1st equation above (i.e. $dq = v~dt$) we get: $$ a(q_t) = v $$ $$ b(q_t) = 0 $$ Taking $X_t = v_t$ for the 2nd equation above (i.e. $dv = ...$) we get: $$a(v_t) = -(\Gamma_0-\Omega_0\eta q(t)^2)v - \Omega_0^2 q(t)$$ $$b(v_t) = \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}}$$ Since $b'(v_{t})=0$, the diffusion term does not depend on $v_{t}$, so Milstein's method is in this case equivalent to the Euler–Maruyama method. We then construct these functions in python:
def a_q(t, v, q): return v def a_v(t, v, q): return -(Gamma0 - Omega0*eta*q**2)*v - Omega0**2*q def b_v(t, v, q): return np.sqrt(2*Gamma0*k_b*T_0/m)
SDE_Solution_Derivation.ipynb
AshleySetter/datahandling
mit
Using values obtained from fitting to data from a real particle, we set the following constant values describing the system. Cooling is effectively off here: for the parameter values used below, the feedback term $\Omega_0 \eta q^2$ is negligible compared with $\Gamma_0$.
Gamma0 = 4000 # radians/second Omega0 = 75e3*2*np.pi # radians/second eta = 0.5e7 T_0 = 300 # K k_b = scipy.constants.Boltzmann # J/K m = 3.1e-19 # KG
SDE_Solution_Derivation.ipynb
AshleySetter/datahandling
mit
partition the interval [0, T] into N equal subintervals of width $\Delta t>0$: $ 0=\tau {0}<\tau {1}<\dots <\tau {N}=T{\text{ with }}\tau {n}:=n\Delta t{\text{ and }}\Delta t={\frac {T}{N}}$
dt = 1e-10 tArray = np.arange(0, 100e-6, dt) print("{} Hz".format(1/dt))
SDE_Solution_Derivation.ipynb
AshleySetter/datahandling
mit
set $Y_{0}=x_{0}$
q0 = 0 v0 = 0 q = np.zeros_like(tArray) v = np.zeros_like(tArray) q[0] = q0 v[0] = v0
SDE_Solution_Derivation.ipynb
AshleySetter/datahandling
mit
Generate independent and identically distributed normal random variables with expected value 0 and variance dt
np.random.seed(88) dwArray = np.random.normal(0, np.sqrt(dt), len(tArray)) # independent and identically distributed normal random variables with expected value 0 and variance dt
SDE_Solution_Derivation.ipynb
AshleySetter/datahandling
mit
Apply Milstein's method (equivalent to Euler–Maruyama when $b'(Y_{n}) = 0$, as is the case here): recursively define $Y_{n}$ for $1\leq n\leq N$ by $Y_{n+1}=Y_{n}+a(Y_{n})\Delta t+b(Y_{n})\Delta W_{n}+{\frac {1}{2}}b(Y_{n})b'(Y_{n})\left((\Delta W_{n})^{2}-\Delta t\right)$ Perform this for the two first-order differential equations:
#%%timeit for n, t in enumerate(tArray[:-1]): dw = dwArray[n] v[n+1] = v[n] + a_v(t, v[n], q[n])*dt + b_v(t, v[n], q[n])*dw + 0 q[n+1] = q[n] + a_q(t, v[n], q[n])*dt + 0
SDE_Solution_Derivation.ipynb
AshleySetter/datahandling
mit
We now have an array of positions, $q$, and velocities, $v$, against time $t$.
plt.plot(tArray*1e6, v) plt.xlabel("t (us)") plt.ylabel("v") plt.plot(tArray*1e6, q) plt.xlabel("t (us)") plt.ylabel("q")
SDE_Solution_Derivation.ipynb
AshleySetter/datahandling
mit
Alternatively we can use a derivative-free version of Milstein's method, a two-stage Runge–Kutta-type scheme, documented on Wikipedia (https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_method_%28SDE%29) or in the original paper on arxiv.org https://arxiv.org/pdf/1210.0933.pdf.
q0 = 0 v0 = 0 X = np.zeros([len(tArray), 2]) X[0, 0] = q0 X[0, 1] = v0 def a(t, X): q, v = X return np.array([v, -(Gamma0 - Omega0*eta*q**2)*v - Omega0**2*q]) def b(t, X): q, v = X return np.array([0, np.sqrt(2*Gamma0*k_b*T_0/m)]) %%timeit S = np.array([-1,1]) for n, t in enumerate(tArray[:-1]): dw = dwArray[n] K1 = a(t, X[n])*dt + b(t, X[n])*(dw - S*np.sqrt(dt)) Xh = X[n] + K1 K2 = a(t, Xh)*dt + b(t, Xh)*(dw + S*np.sqrt(dt)) X[n+1] = X[n] + 0.5 * (K1+K2) q = X[:, 0] v = X[:, 1] plt.plot(tArray*1e6, v) plt.xlabel("t (us)") plt.ylabel("v") plt.plot(tArray*1e6, q) plt.xlabel("t (us)") plt.ylabel("q")
SDE_Solution_Derivation.ipynb
AshleySetter/datahandling
mit
The form of $F_{feedback}(t)$ is still questionable. On page 49 of 'Dynamics of optically levitated nanoparticles in high vacuum' - Thesis by Jan Gieseler he uses the form: $F_{feedback}(t) = \Omega_0 \eta q^2 \dot{q}$ On page 2 of 'Parametric feedback cooling of levitated optomechanics in a parabolic mirror trap' - Paper by Jamie and Muddassar they use the form: $F_{feedback}(t) = \dfrac{\Omega_0 \eta q^2 \dot{q}}{q_0^2}$ where $q_0$ is the amplitude of the motion: $q(t) = q_0\sin(\omega_0 t)$ However it always shows up as a term $\delta \Gamma$ like so: $\dfrac{d^2q(t)}{dt^2} + (\Gamma_0 + \delta \Gamma)\dfrac{dq(t)}{dt} + \Omega_0^2 q(t) - \sqrt{\dfrac{2\Gamma_0 k_B T_0}{m}} \dfrac{dW(t)}{dt} = 0$ By fitting to data we extract the following 3 parameters: 1) $A = \gamma^2 \dfrac{k_B T_0}{\pi m}\Gamma_0$ Where: $\gamma$ is the conversion factor between Volts and nanometres. This parameterises the amount of light / number of photons collected from the nanoparticle. With unchanged alignment and the same particle this should remain constant with changes in pressure. $m$ is the mass of the particle, a constant. $T_0$ is the temperature of the environment. $\Gamma_0$ is the damping due to the environment only. 2) $\Omega_0$ - the natural frequency at this trapping power 3) $\Gamma$ - the total damping on the system, including environment and feedback etc. By taking a reference save with no cooling we have $\Gamma = \Gamma_0$ and therefore we can extract $A' = \gamma^2 \dfrac{k_B T_0}{\pi m}$. Since $A'$ should be constant with pressure, we can extract $\Gamma_0$ at any pressure (if we have a reference save and therefore a value of $A'$) and hence $\delta \Gamma$, the damping due to cooling, which we can then plug into our SDE in order to include cooling in the model. 
For any dataset at any pressure we can do: $\Gamma_0 = \dfrac{A}{A'}$ And then $\delta \Gamma = \Gamma - \Gamma_0$ Using this form and the same derivation as above we arrive at the following form of the two first-order differential equations: \begin{align} dq&=v\,dt\\ dv&=[-(\Gamma_0 + \delta \Gamma)v(t) - \Omega_0^2 q(t)]\,dt + \sqrt{\frac{2\Gamma_0 k_B T_0}{m}}\,dW \end{align}
def a_q(t, v, q): return v def a_v(t, v, q): return -(Gamma0 + deltaGamma)*v - Omega0**2*q def b_v(t, v, q): return np.sqrt(2*Gamma0*k_b*T_0/m)
SDE_Solution_Derivation.ipynb
AshleySetter/datahandling
mit
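The parameter-extraction recipe above can be sketched in a few lines. All numbers here are made up for illustration (they are not real fit results), but they are chosen so the result matches the $\Gamma_0 = 15$ and $\delta\Gamma = 2200$ used in the next cell:

```python
# Hypothetical fitted values, for illustration only
A_ref = 1.2e9      # A fitted from the no-cooling reference save
Gamma_ref = 4000   # at the reference pressure, Gamma == Gamma0 (no cooling)
A_prime = A_ref / Gamma_ref   # A' = gamma^2 * k_B * T_0 / (pi * m), pressure-independent

# A lower-pressure, cooled dataset (again, made-up fit values)
A = 4.5e6          # fitted A at this pressure
Gamma = 2215       # fitted total damping at this pressure

Gamma0 = A / A_prime          # environmental damping alone
deltaGamma = Gamma - Gamma0   # damping due to feedback cooling
print(Gamma0, deltaGamma)     # 15.0 2200.0
```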
The values below are taken from a ~1e-2 mbar cooled save.
Gamma0 = 15 # radians/second deltaGamma = 2200 Omega0 = 75e3*2*np.pi # radians/second eta = 0.5e7 T_0 = 300 # K k_b = scipy.constants.Boltzmann # J/K m = 3.1e-19 # KG dt = 1e-10 tArray = np.arange(0, 100e-6, dt) q0 = 0 v0 = 0 q = np.zeros_like(tArray) v = np.zeros_like(tArray) q[0] = q0 v[0] = v0 np.random.seed(88) dwArray = np.random.normal(0, np.sqrt(dt), len(tArray)) # independent and identically distributed normal random variables with expected value 0 and variance dt for n, t in enumerate(tArray[:-1]): dw = dwArray[n] v[n+1] = v[n] + a_v(t, v[n], q[n])*dt + b_v(t, v[n], q[n])*dw + 0 q[n+1] = q[n] + a_q(t, v[n], q[n])*dt + 0 plt.plot(tArray*1e6, v) plt.xlabel("t (us)") plt.ylabel("v") plt.plot(tArray*1e6, q) plt.xlabel("t (us)") plt.ylabel("q")
SDE_Solution_Derivation.ipynb
AshleySetter/datahandling
mit
From this time-series data we build input sequences and target values. Each input sequence is 3 time steps long and the target is the value at the following time step — that is, we feed in a 3-step sequence and train the final output to match the target, a sequence-to-value (many-to-one) problem. To use an RNN in Keras, the input data must be a 3-dimensional tensor (ndim=3) of shape (nb_samples, timesteps, input_dim): nb_samples: the number of samples; timesteps: the length of each sequence; input_dim: the size of the x vector. Here we have a single time series, so input_dim = 1; we use 3-step sequences, so timesteps = 3; and there are 18 samples. The following code converts the original time-series vector into a Toeplitz-matrix form to build the 3-dimensional tensor.
from scipy.linalg import toeplitz S = np.fliplr(toeplitz(np.r_[s[-1], np.zeros(s.shape[0] - 2)], s[::-1])) S[:5, :3] X_train = S[:-1, :3][:, :, np.newaxis] Y_train = S[:-1, 3] X_train.shape, Y_train.shape X_train[:2] Y_train[:2] plt.subplot(211) plt.plot([0, 1, 2], X_train[0].flatten(), 'bo-', label="input sequence") plt.plot([3], Y_train[0], 'ro', label="target") plt.xlim(-0.5, 4.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("First sample sequence") plt.subplot(212) plt.plot([1, 2, 3], X_train[1].flatten(), 'bo-', label="input sequence") plt.plot([4], Y_train[1], 'ro', label="target") plt.xlim(-0.5, 4.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Second sample sequence") plt.tight_layout() plt.show()
30. 딥러닝/07. RNN 기본 구조와 Keras를 사용한 RNN 구현.ipynb
zzsza/Datascience_School
mit
Keras's SimpleRNN class In Keras, a neural network model is built in the following order: Create the model as a Sequential class object. Add the various layers with the add method. Specify the loss function and optimization method with the compile method. Compute the weights with the fit method. The code below shows how to use the SimpleRNN class, the simplest recurrent architecture. Here a SimpleRNN object creates an RNN layer with 10 neurons. The first argument is the number of neurons, the input_dim argument is the size of each vector, and the input_length argument is the length of the sequence. Next, a Dense class object is added to combine the 10 outputs of the SimpleRNN layer into a single real-valued output. Mean squared error is used as the loss function and plain stochastic gradient descent as the optimization method.
from keras.models import Sequential from keras.layers import SimpleRNN, Dense np.random.seed(0) model = Sequential() model.add(SimpleRNN(10, input_dim=1, input_length=3)) model.add(Dense(1)) model.compile(loss='mse', optimizer='sgd')
30. 딥러닝/07. RNN 기본 구조와 Keras를 사용한 RNN 구현.ipynb
zzsza/Datascience_School
mit
First, let's look at the output the model produces before training.
plt.plot(Y_train, 'ro-', label="target") plt.plot(model.predict(X_train[:,:,:]), 'bs-', label="output") plt.xlim(-0.5, 20.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Before training") plt.show()
30. 딥러닝/07. RNN 기본 구조와 Keras를 사용한 RNN 구현.ipynb
zzsza/Datascience_School
mit
Train the model with the fit method.
history = model.fit(X_train, Y_train, nb_epoch=100, verbose=0) plt.plot(history.history["loss"]) plt.title("Loss") plt.show()
30. 딥러닝/07. RNN 기본 구조와 Keras를 사용한 RNN 구현.ipynb
zzsza/Datascience_School
mit
After training, the output is as follows.
plt.plot(Y_train, 'ro-', label="target") plt.plot(model.predict(X_train[:,:,:]), 'bs-', label="output") plt.xlim(-0.5, 20.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("After training") plt.show()
30. 딥러닝/07. RNN 기본 구조와 Keras를 사용한 RNN 구현.ipynb
zzsza/Datascience_School
mit
If the return_sequences argument is set to True when creating the SimpleRNN object, the layer outputs the entire output sequence as a 3-dimensional tensor instead of only the last value, so the task can be posed as a sequence-to-sequence problem. Note, however, that the input and output sequences must then have the same length. In this case the following Dense class object must be wrapped in a TimeDistributed wrapper so that it can accept 3-dimensional tensor input.
from keras.layers import TimeDistributed model2 = Sequential() model2.add(SimpleRNN(20, input_dim=1, input_length=3, return_sequences=True)) model2.add(TimeDistributed(Dense(1))) model2.compile(loss='mse', optimizer='sgd')
30. 딥러닝/07. RNN 기본 구조와 Keras를 사용한 RNN 구현.ipynb
zzsza/Datascience_School
mit
This time the output is also a 3-element sequence.
X_train2 = S[:-3, 0:3][:, :, np.newaxis] Y_train2 = S[:-3, 3:6][:, :, np.newaxis] X_train2.shape, Y_train2.shape X_train2[:2] Y_train2[:2] plt.subplot(211) plt.plot([0, 1, 2], X_train2[0].flatten(), 'bo-', label="input sequence") plt.plot([3, 4, 5], Y_train2[0].flatten(), 'ro-', label="target sequence") plt.xlim(-0.5, 6.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("First sample sequence") plt.subplot(212) plt.plot([1, 2, 3], X_train2[1].flatten(), 'bo-', label="input sequence") plt.plot([4, 5, 6], Y_train2[1].flatten(), 'ro-', label="target sequence") plt.xlim(-0.5, 6.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Second sample sequence") plt.tight_layout() plt.show() history2 = model2.fit(X_train2, Y_train2, nb_epoch=100, verbose=0) plt.plot(history2.history["loss"]) plt.title("Loss") plt.show()
30. 딥러닝/07. RNN 기본 구조와 Keras를 사용한 RNN 구현.ipynb
zzsza/Datascience_School
mit
학습 결과는 다음과 같다.
plt.subplot(211) plt.plot([1, 2, 3], X_train2[1].flatten(), 'bo-', label="input sequence") plt.plot([4, 5, 6], Y_train2[1].flatten(), 'ro-', label="target sequence") plt.plot([4, 5, 6], model2.predict(X_train2[1:2,:,:]).flatten(), 'gs-', label="output sequence") plt.xlim(-0.5, 7.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Second sample sequence") plt.subplot(212) plt.plot([2, 3, 4], X_train2[2].flatten(), 'bo-', label="input sequence") plt.plot([5, 6, 7], Y_train2[2].flatten(), 'ro-', label="target sequence") plt.plot([5, 6, 7], model2.predict(X_train2[2:3,:,:]).flatten(), 'gs-', label="output sequence") plt.xlim(-0.5, 7.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Third sample sequence") plt.tight_layout() plt.show()
30. 딥러닝/07. RNN 기본 구조와 Keras를 사용한 RNN 구현.ipynb
zzsza/Datascience_School
mit
Load, explore, and prepare the dataset The first critical step is preparing the data correctly. Variables/values on different scales make it difficult for the NN to efficiently learn the correct parameters (e.g. weights and biases).
# To check the data in the Terminal/Konsole % ls data/bike_data/ data_path = 'data/bike_data/hour.csv' rides = pd.read_csv(data_path) # # The new historical/time-seri data to visualize # data_path_watch = 'data/watch_multisensor_data/2_Year_Data_Basis_Watch/2-year_data/Basis_Watch_Data.csv' # watch = pd.read_csv(data_path_watch)
impl-dl/etc/misc/nn.ipynb
arasdar/DL
unlicense
Checking out the bike sharing dataset The bike sharing dataset has the number of riders for each hour of each day from January 1, 2011 to December 31, 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days or so in the data set. NOTE: Some days don't have exactly 24 entries in the dataset, so it's not exactly 10 days. You can see the hourly rentals here. This dataset is pretty complicated! The weekends have lower overall ridership, and there are spikes when people are biking to and from work during the week. Looking at the dataset, we also have information about temperature, humidity, and windspeed, all of which are likely affecting the number of riders. You'll be trying to capture all this with your model.
rides.head() rides[:10] rides[:24*10].plot(x='dteday', y='cnt') # watch[:1000].plot()
impl-dl/etc/misc/nn.ipynb
arasdar/DL
unlicense
Dummy variables Here we have some categorical variables such as season, weather, and month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head()
impl-dl/etc/misc/nn.ipynb
arasdar/DL
unlicense
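A minimal illustration of what get_dummies does to one categorical column (toy values, not the actual rides data):

```python
import pandas as pd

# Toy version of one categorical column from the rides frame
df = pd.DataFrame({'season': [1, 2, 2, 4]})
dummies = pd.get_dummies(df['season'], prefix='season')
print(list(dummies.columns))             # ['season_1', 'season_2', 'season_4']
print(dummies.sum(axis=1).tolist())      # [1, 1, 1, 1]: one hot column per row
```

Only the values actually present become columns, and exactly one dummy column is set per row, which is why the original categorical column can be dropped afterwards.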
Standardizing the continuous features (scaling the data) To make training the NN easier and more efficient, we standardize each of the continuous variables: we shift and scale them so that they have zero mean and a standard deviation of 1. The scaling factors are saved so that the NN predictions can eventually be converted back to the original units.
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std
impl-dl/etc/misc/nn.ipynb
arasdar/DL
unlicense
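The standardization above is exactly invertible using the saved mean and standard deviation; a minimal numpy sketch with toy values:

```python
import numpy as np

x = np.array([10., 20., 30., 40.])       # toy feature column
mean, std = x.mean(), x.std(ddof=1)      # pandas' .std() defaults to ddof=1
z = (x - mean) / std                     # standardized: zero mean, unit std
x_back = z * std + mean                  # invert with the saved scaling factors
print(np.allclose(x_back, x))            # True
```

This round trip is what the saved scaled_features dictionary enables when converting predictions back to rider counts.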
Splitting the dataset into training, validation, and testing sets We'll save the data for approximately the last 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
# Save data for approximately the last 21 days * 24 hours-per-day (24 hours/day) test_data = data[-21*24:] # Now remove the test data from the data set data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features_txn, test_targets_txm = test_data.drop(target_fields, axis=1), test_data[target_fields] test_features_txn.shape, test_targets_txm.shape
impl-dl/etc/misc/nn.ipynb
arasdar/DL
unlicense
Dividing the entire training data into training and validation sets to avoid overfitting and underfitting during the training We'll split the entire training data into two sets: one for training and one for validation as the NN is being trained. Since this is a time-series dataset, we'll train on historical data, then try to predict on future data (the validation set).
# Hold out the last 60 days for validation set # txn: t is time/row (num of records) and n is space/col (input feature space dims) # txm: t is time/row (num of records) and m is space/col (output feature space dims) train_features_txn, train_targets_txm = features[:-60*24], targets[:-60*24] valid_features_txn, valid_targets_txm = features[-60*24:], targets[-60*24:] train_features_txn.shape, train_targets_txm.shape, valid_features_txn.shape, valid_targets_txm.shape
impl-dl/etc/misc/nn.ipynb
arasdar/DL
unlicense
Training the NN At first, we'll set the hyperparameters for the NN. The strategy here is to find hyperparameters such that the error on the training set is low enough, but you're not underfitting or overfitting the training set. If you train the NN too long or the NN has too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. We'll also be using a method known as Stochastic Gradient Descent (SGD) to train the NN using backpropagation. The idea is that for each training pass, you grab a random minibatch of the data instead of using the whole data set (the full batch). We could also use batch gradient descent (BGD), but with SGD each pass is much faster. That is why, as the number of samples in the data grows, BGD becomes infeasible for training the NN and we have to use SGD instead, given our hardware limitations, specifically the memory size limits (RAM and cache). Epochs: choose the number of iterations for updating NN parameters This is the number of times the NN parameters are updated using the training dataset as we train the NN. The more iterations we use, the better the model might fit the data. However, if you use too many epochs, the model might not generalize well to the data but instead memorize it, which is known as overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase. Learning rate This scales the size of the NN parameter updates. If it is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the NN has problems fitting the data, try reducing the learning rate.
Note that the lower the learning rate, the smaller the steps in the weight updates and the longer it takes for the NN to converge. Number of hidden nodes The more hidden nodes you have, the more accurate predictions the model can make. Try a few different numbers and see how they affect the performance. You can look at the losses dictionary for a metric of the NN performance. If the number of hidden units is too low, the model won't have enough capacity to learn, and if it is too high, there are too many options for the direction the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
### Set the hyperparameters here ### num_epochs = 100 # updating NN parameters (w, b) learning_rate = 2 * 1/train_features_txn.shape[0] # train_features = x_txn, t: number of recorded samples/records hidden_nodes = 5 output_nodes = 1 # y_tx1 input_nodes = train_features_txn.shape[1] # x_txn # Building the NN by initializing/instantiating the NN class nn = NN(h=hidden_nodes, lr=learning_rate, m=output_nodes, n=input_nodes) # Training-validating the NN - learning process losses_tx2 = {'train':[], 'valid':[]} for each_epoch in range(num_epochs): # # Go through a random minibatch of 128 records from the training data set # random_minibatch = np.random.choice(train_features_txn.index, size=128) # x_txn, y_txm = train_features_txn.ix[random_minibatch].values, train_targets_txm.ix[random_minibatch]['cnt'] # Go through the full batch of records in the training data set x_txn, y_tx1 = train_features_txn.values, train_targets_txm['cnt'] nn.train(X_txn=x_txn, Y_txm=y_tx1) # Printing out the training progress train_loss_1x1_value = MSE(Y_pred_1xt=nn.run(X_txn=train_features_txn).T, Y_1xt=train_targets_txm['cnt'].values) valid_loss_1x1_value = MSE(Y_pred_1xt=nn.run(X_txn=valid_features_txn).T, Y_1xt=valid_targets_txm['cnt'].values) print('each_epoch:', each_epoch, 'num_epochs:', num_epochs, 'train_loss:', train_loss_1x1_value, 'valid_loss:', valid_loss_1x1_value) losses_tx2['train'].append(train_loss_1x1_value) losses_tx2['valid'].append(valid_loss_1x1_value) plt.plot(losses_tx2['train'], label='Train loss') plt.plot(losses_tx2['valid'], label='Valid loss') plt.legend() _ = plt.ylim()
impl-dl/etc/misc/nn.ipynb
arasdar/DL
unlicense
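The commented-out minibatch lines in the training loop above sample records with np.random.choice; a standalone sketch of that SGD-style sampling (array sizes below are hypothetical):

```python
import numpy as np

# One SGD-style pass: update on a random minibatch instead of the full batch.
rng = np.random.default_rng(42)
n_records, batch_size = 1000, 128        # hypothetical sizes
idx = rng.choice(n_records, size=batch_size, replace=False)
# x_batch = train_features_txn.values[idx]       # hypothetical use with the data above
# y_batch = train_targets_txm['cnt'].values[idx]
print(idx.shape)                         # (128,)
```

Sampling without replacement guarantees 128 distinct records per update, so each minibatch gradient is an unbiased but much cheaper estimate of the full-batch gradient.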
Test predictions Here, we test our NN on the test data to view how well the NN is modeling/predicting the test dataset. If something is wrong, the NN is NOT implemented correctly.
fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions_tx1 = nn.run(test_features_txn).T*std + mean ax.plot(predictions_tx1[0], label='Prediction') ax.plot((test_targets_txm['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions_tx1)) ax.legend() dates = pd.to_datetime(rides.ix[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45)
impl-dl/etc/misc/nn.ipynb
arasdar/DL
unlicense
Path and file definitions
os.name if os.name == 'posix': baseDir = r'/home/hase/Documents/ZHAW/InfoEng/Lectures/Information_Retrieval/Exercises/PT_5_MiniRetrieve/' doc_path = r'/home/hase/Documents/ZHAW/InfoEng/Lectures/Information_Retrieval/Exercises/PT_5_MiniRetrieve/documents/' query_path = r'/home/hase/Documents/ZHAW/InfoEng/Lectures/Information_Retrieval/Exercises/PT_5_MiniRetrieve/queries/' elif os.name == 'nt': baseDir = r'C:\ZHAW\IR\PT_5_MiniRetrieve\\' doc_path = r'C:\ZHAW\IR\PT_5_MiniRetrieve\documents\\' STOPWORDS_PATH = 'stopwords.txt'
scripting/exercises/Mini+Retrieve.ipynb
pandastrail/InfoEng
gpl-3.0
Read files
# First, read the entire document as a string def readDoc(dir_path, file): path = dir_path + file with open(path, 'r') as f: string = f.read() return string # Reading document and create a string string = readDoc(doc_path, '1') #string
scripting/exercises/Mini+Retrieve.ipynb
pandastrail/InfoEng
gpl-3.0
Simple Tokenize
# Define regex to parse the string, and perform a simple tokenize # Later will need a proper tokenize function to remove stopwords split_regex = r'\W+' def simpleTokenize(string): """ A simple implementation of input string tokenization Args: string (str): input string Returns: list: a list of tokens """ # Convert string to lowercase string = string.lower() # Tokenize using the split_regex definition raw_tokens = re.split(split_regex, string) # Remove empty tokens tokens = [] for raw_token in raw_tokens: if len(raw_token) != 0: tokens.append(raw_token) return tokens #print(simpleTokenize(string))
scripting/exercises/Mini+Retrieve.ipynb
pandastrail/InfoEng
gpl-3.0
Remove Stopwords
# File with stopwords stopfile = os.path.join(baseDir, STOPWORDS_PATH) print(stopfile) # Create list of stopwords stopwords = [] with open(stopfile, 'r') as s: stopwords_string = s.read() stopwords = re.split(split_regex, stopwords_string) type(stopwords), len(stopwords) def tokenize(string): """ An implementation of input string tokenization that excludes stopwords Args: string (str): input string Returns: list: a list of tokens without stopwords """ tokens = simpleTokenize(string) # Loop the entire list and add words that are not on the stopwords list to a new list filtered = [] for token in tokens: if token in stopwords: continue else: filtered.append(token) return filtered #tokenize(string)
scripting/exercises/Mini+Retrieve.ipynb
pandastrail/InfoEng
gpl-3.0
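The filter above does a list-membership test per token; since set membership is O(1) on average, converting the stopword list to a set makes the same filter faster on long documents. A small sketch with made-up stopwords:

```python
# Converting the stopword list to a set makes each membership test O(1)
# on average instead of O(len(stopwords)). Stopwords below are made up.
stopset = {'the', 'a', 'of'}

def tokenize_fast(tokens):
    return [t for t in tokens if t not in stopset]

print(tokenize_fast(['the', 'cat', 'sat', 'on', 'a', 'mat']))
# ['cat', 'sat', 'on', 'mat']
```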
Inverted and Non-Inverted index

for each document 'doc' in the list of documents D:
    get tokens by tokenizing 'doc'
    for each token in tokens:
        update the indexes

inverted index dict: {token_one: {doc containing token_one: frequency of token_one in that doc}, token_two: {doc containing token_two: frequency of token_two in that doc}, ...}

non-inverted index dict: {doc_one: {token in doc_one: frequency of that token in doc_one}, doc_two: {token in doc_two: frequency of that token in doc_two}, ...}
def noninvIndex(dir_path, num_files): """ A simple implementation of non-inverted index i.e. token frequency found in document Args: dir_path (string): path where all the documents are stored num_files (string): number of files stored in dir_path; assuming the name is a number Returns: docNoniIdx (dict): tokens frequency for each document """ # Create a list of files in the directory; files are named '1'..num_files files = [] total_files = int(num_files) for i in range(1, total_files + 1): files.append(str(i)) # Dictionary to store the non-inverted index for all documents docNoniIdx = {} # Loop the list files to parse all the existing documents for file in files: # Create a string for the file read path = dir_path + file with open(path, 'r') as f: string = f.read() # tokenize the string removing stopwords #tokens = tokenize(string) tokens = simpleTokenize(string) # With the list of tokens create a non-inverted index noniIdx = {} for token in tokens: if token not in noniIdx.keys(): noniIdx[token] = 1 else: noniIdx[token] += 1 docNoniIdx[file] = noniIdx return docNoniIdx non_invIndex = noninvIndex(doc_path,10) len(non_invIndex) non_invIndex def invIndex(dir_path, num_files): """ A simple implementation of inverted index i.e.
frequency of each token in the documents that contain it Args: dir_path (string): path where all the documents are stored num_files (string): number of files stored in dir_path; assuming the name is a number Returns: dociIdx (dict): frequency of token in documents dociIdx = {token:{'doc 1':freq, 'doc 2':freq}, token_two:{'doc 1':freq}, ...} """ # Create a list of files in the directory; files are named '1'..num_files files = [] total_files = int(num_files) for i in range(1, total_files + 1): files.append(str(i)) # Dictionary to store the inverted index for all documents dociIdx = {} # Loop the list files to parse all the existing documents for file in files: # Create a string for the file read path = dir_path + file with open(path, 'r') as f: string = f.read() # tokenize the string; remove stopwords later tokens = simpleTokenize(string) # With the list of tokens create an inverted index for token in tokens: if token not in dociIdx.keys(): #print(token, 'not in') dociIdx[token] = {file:1} #print(dociIdx) elif token in dociIdx.keys(): #print(token, 'in') if file in dociIdx[token].keys(): #print(file, 'in value') dociIdx[token][file] += 1 #print(dociIdx) else: #print(file, 'not in value') dociIdx[token].update({file:1}) #print(dociIdx) return dociIdx docinvIndex = invIndex(doc_path,10) docinvIndex
scripting/exercises/Mini+Retrieve.ipynb
pandastrail/InfoEng
gpl-3.0
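The same inverted index can be built more compactly with collections.Counter and defaultdict; a sketch on two hypothetical tokenized documents:

```python
from collections import Counter, defaultdict

# Two hypothetical tokenized documents
docs = {'1': ['apple', 'banana', 'apple'],
        '2': ['banana', 'cherry']}

inv = defaultdict(dict)                       # token -> {doc_id: frequency}
for doc_id, tokens in docs.items():
    for token, freq in Counter(tokens).items():
        inv[token][doc_id] = freq

print(dict(inv))
# {'apple': {'1': 2}, 'banana': {'1': 1, '2': 1}, 'cherry': {'2': 1}}
```

Counter does the per-document term counting in one pass, and defaultdict removes the explicit key-existence branching used in the longer implementation.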
Annotate You can add new fields to a table with annotate. As an example, let's create a new column called cleaned_occupation that replaces entries in the occupation field labeled 'other' or 'none' with a missing value.
missing_occupations = hl.set(['other', 'none']) t = users.annotate( cleaned_occupation = hl.cond(missing_occupations.contains(users.occupation), hl.null('str'), users.occupation)) t.show()
hail/python/hail/docs/tutorials/05-filter-annotate.ipynb
cseed/hail
mit
transmute replaces any fields mentioned on the right-hand side with the new fields, but leaves unmentioned fields unchanged. transmute is useful for transforming data into a new form. Compare the following two snippets of code. The second is identical to the first, with transmute replacing select.
missing_occupations = hl.set(['other', 'none']) t = users.select( cleaned_occupation = hl.cond(missing_occupations.contains(users.occupation), hl.null('str'), users.occupation)) t.show() missing_occupations = hl.set(['other', 'none']) t = users.transmute( cleaned_occupation = hl.cond(missing_occupations.contains(users.occupation), hl.null('str'), users.occupation)) t.show()
hail/python/hail/docs/tutorials/05-filter-annotate.ipynb
cseed/hail
mit
Collections
## Tuple - immutable my_tuple = (2, 3, 4, 5) print('A tuple:', my_tuple) print('First element of tuple:', my_tuple[0]) ## List my_list = [2, 3, 4, 5] print('A list:', my_list) print('First element of list:', my_list[0]) my_list.append(12) print('Appended list:', my_list) my_list[0] = 45 print('Modified list:', my_list) ## String - immutable sequence of characters text = "ATGTCATTT" print('Here is a string:', text) print('First character:', text[0]) print('Slice text[1:3]:', text[1:3]) print('Number of characters in text', len(text)) ## Set - unique unordered elements my_set = set([1,2,2,2,2,4,5,6,6,6]) print('A set:', my_set) ## Dictionary my_dictionary = {"A": "Adenine", "C": "Cytosine", "G": "Guanine", "T": "Thymine"} print('A dictionary:', my_dictionary) print('Value associated to key C:', my_dictionary['C'])
python_basic_2_intro.ipynb
pycam/python-basic
unlicense
Functions used so far...
my_list = ['A', 'C', 'A', 'T', 'G'] print('There are', len(my_list), 'elements in the list', my_list) print('There are', my_list.count('A'), 'letter A in the list', my_list) print("ATG TCA CCG GGC".split())
python_basic_2_intro.ipynb
pycam/python-basic
unlicense
While the temperature ratio appears fixed in Equations (7) and (8), variation in the derived values can be estimated from the variation in $\gamma = c_p / c_v$. There will also be variation in the specific relation between the gas internal energy and the gas pressure, but this is more difficult to quantify from stellar evolution model output. To address this problem, a small grid of models was run with masses in the range $0.1$ &mdash; $0.9 M_{\odot}$ with a mass resolution of $0.1 M_{\odot}$. Output from the code was modified to yield $\gamma$ directly as a function of radius, temperature, pressure, and density.
%matplotlib inline import numpy as np import matplotlib.pyplot as plt Teffs = np.arange(3000., 6100., 250.) Tspot_fixed_gamma = Teffs*(0.4)**0.4 # gamma where tau ~ 1 (note, no 0.2 Msun point) Gammas = np.array([1.22, 1.29, 1.30, 1.29, 1.36, 1.55, 1.63, 1.65]) ModelT = np.array([3.51, 3.57, 3.59, 3.61, 3.65, 3.71, 3.74, 3.77]) # tau = 1 ModelT = 10**ModelT ModelTeff = 10**np.array([3.47, 3.53, 3.55, 3.57, 3.60, 3.65, 3.69, 3.73]) # T = Teff Tratio = 0.4**((Gammas - 1.0)/Gammas) Tspot_physical_gamma = np.array([ModelT[i]*0.4**((Gammas[i] - 1.0)/Gammas[i]) for i in range(len(ModelT))]) # smoothed curve from scipy.interpolate import interp1d icurve = interp1d(ModelT, Tspot_physical_gamma, kind='cubic') Tphot_smoothed = np.arange(3240., 5880., 20.) Tspot_smoothed = icurve(Tphot_smoothed) # approximate Berdyugina data DeltaT = np.array([ 350., 450., 700., 1000., 1300., 1650., 1850.]) BerdyT = np.array([3300., 3500., 4000., 4500., 5000., 5500., 5800.]) BSpotT = BerdyT - DeltaT print(Tratio) fig, ax = plt.subplots(1, 1, figsize=(8,4)) ax.set_xlabel('Photospheric Temperature (K)', fontsize=20.) ax.set_ylabel('Spot Temperature (K)', fontsize=20.) ax.tick_params(which='major', axis='both', length=10., labelsize=16.) ax.grid(True) ax.plot(Teffs, Tspot_fixed_gamma, '--', lw=2, dashes=(10., 10.), markersize=9.0, c='#1e90ff') ax.plot(BerdyT, BSpotT, '-', lw=2 , dashes=(25., 15.), c='#000080') ax.fill_between(BerdyT, BSpotT - 200., BSpotT + 200., facecolor='#000080', alpha=0.1, edgecolor='#eeeeee') ax.plot(ModelT, Tspot_physical_gamma, 'o', lw=2, markersize=9.0, c='#800000') ax.plot(Tphot_smoothed, Tspot_smoothed, '-', lw=3, c='#800000') fig, ax = plt.subplots(1, 1, figsize=(8,4)) ax.set_xlabel('Photospheric Temperature (K)', fontsize=20.) ax.set_ylabel('T(phot) - T(spot) (K)', fontsize=20.) ax.tick_params(which='major', axis='both', length=10., labelsize=16.)
ax.grid(True) ax.plot(Teffs, Teffs - Tspot_fixed_gamma, '--', lw=2, dashes=(10., 10.), markersize=9.0, c='#1e90ff') ax.plot(BerdyT, DeltaT, '-', lw=2 , dashes=(25., 15.), c='#000080') ax.fill_between(BerdyT, DeltaT - 200., DeltaT + 200., facecolor='#000080', alpha=0.1, edgecolor='#eeeeee') ax.plot(ModelT, ModelT - Tspot_physical_gamma, 'o', lw=2, markersize=9.0, c='#800000') ax.plot(Tphot_smoothed, Tphot_smoothed - Tspot_smoothed, '-', lw=3, c='#800000')
Daily/20150803_pressure_reduc_spots.ipynb
gfeiden/Notebook
mit
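The spot temperature curve follows directly from $T_{\rm spot}/T_{\rm phot} = 0.4^{(\gamma - 1)/\gamma}$; a quick sketch of the ratio for the limiting $\gamma$ values quoted above:

```python
def spot_ratio(gamma, pressure_ratio=0.4):
    """T_spot / T_phot for a polytrope: (P_spot / P_phot) ** ((gamma - 1) / gamma)."""
    return pressure_ratio ** ((gamma - 1.0) / gamma)

print(round(spot_ratio(5.0 / 3.0), 3))   # 0.693 for a fully ideal monatomic gas
print(round(spot_ratio(1.22), 3))        # 0.848 for the coolest model in the grid
```

The less ideal the gas (smaller $\gamma$), the closer the ratio sits to 1, which is why the model curve departs from the fixed-$\gamma$ line at low photospheric temperatures.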
Results of this simple model indicate that equilibration of a polytropic gas within a magnetic structure located near the photosphere ($\tau_{\rm ross} = 1$) provides a reasonable approximation to observed spot temperatures from low-mass M dwarfs up to solar-type stars. Above 5400 K, the gas is sufficiently ideal that the model-predicted relationship (red line) is asymptotic to the case of a purely ideal gas (small-dashed light-blue line). Below that temperature, the simple model traces the relationship provided by Berdyugina (2005). Difficulties below 4000 K may be the result of model inaccuracies, stemming either from the atmospheric structure or from the simple approximation of energy equipartition, or of observational complications that arise from measuring M dwarf photospheric temperatures. We can also estimate umbral magnetic field strengths by extracting the model gas pressure at the same optical depth ($\tau_{\rm ross} = 1$),
# log(Pressure) p_gas = np.array([6.37, 5.90, 5.80, 5.70, 5.60, 5.45, 5.30, 5.15]) B_field_Eeq = np.sqrt(12.*np.pi*0.4*10**p_gas)/1.0e3 # in kG B_field_Peq = np.sqrt( 8.*np.pi*10**p_gas)/1.0e3 # smooth curves icurve = interp1d(ModelT, B_field_Eeq, kind='cubic') B_field_Eeq_smooth = icurve(np.arange(3240., 5880., 20.)) icurve = interp1d(ModelT, B_field_Peq, kind='cubic') B_field_Peq_smooth = icurve(np.arange(3240., 5880., 20.)) fig, ax = plt.subplots(1, 1, figsize=(8,4)) ax.set_xlabel('Photospheric Temperature (K)', fontsize=20.) ax.set_ylabel('Equipartition Magnetic Field (kG)', fontsize=20.) ax.tick_params(which='major', axis='both', length=10., labelsize=16.) ax.grid(True) ax.plot(ModelT, B_field_Eeq, 'o', lw=2, markersize=9.0, c='#800000') ax.plot(np.arange(3240., 5880., 20.), B_field_Eeq_smooth, '-', lw=2, c='#800000') ax.plot(ModelT, B_field_Peq, 'o', lw=2, markersize=9.0, c='#1e90ff') ax.plot(np.arange(3240., 5880., 20.), B_field_Peq_smooth, '-', lw=2, dashes=(20., 5.), c='#1e90ff')
Daily/20150803_pressure_reduc_spots.ipynb
gfeiden/Notebook
mit
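The pressure-equipartition curve above is simply $B = \sqrt{8\pi P_{\rm gas}}$ in CGS units; a small sketch, using a log gas pressure from the model grid listed above:

```python
import numpy as np

def b_equipartition(log10_p_gas):
    """Field (kG) whose magnetic pressure B**2 / 8 pi equals the gas pressure (CGS)."""
    return np.sqrt(8.0 * np.pi * 10 ** log10_p_gas) / 1.0e3

# log10(P_gas) = 6.37 dyn cm^-2 is the coolest model in the grid above
print(round(b_equipartition(6.37), 1))   # ~7.7 kG
```

The energy-equipartition variant in the cell above differs only in the constant under the square root, so the two curves track each other across the temperature range.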
The two curves represent two different approximations: one is energy equipartition (red curve) and the other assumes that the magnetic pressure is precisely equal to the gas pressure (blue curve). These values do not represent surface-averaged magnetic field strengths, but the strengths of local concentrations of magnetic flux. Based on energy equipartition, we do not expect spot magnetic field strengths to be considerably larger than those estimated from the red curve. Finally, we can estimate a "curve of cooling" relating the strength of a magnetic field to the temperature within the magnetic structure. Since the properties of the photospheric layers differ so strongly as a function of effective temperature, it'll be helpful to compute curves at several effective temperatures characteristic of a low-mass M dwarf, a late K dwarf, and an early K/G dwarf.
B_field_strengths = np.arange(0.1, 8.1, 0.1)*1.0e3 log_P_phot = np.array([5.15, 5.60, 5.90]) Gamma_phot = np.array([1.65, 1.36, 1.29]) Exponents = (Gamma_phot - 1.0)/Gamma_phot print(Exponents) fig, ax = plt.subplots(3, 1, figsize=(8, 12), sharex=True) i = 0 ax[2].set_xlabel('Spot Magnetic Field (kG)', fontsize=20.) for axis in ax: B_field_eq = np.sqrt(8.*np.pi*(0.4*10**log_P_phot[i])) axis.grid(True) axis.set_ylabel('$T_{\\rm spot} / T_{\\rm phot}$', fontsize=20.) axis.tick_params(which='major', axis='both', length=10., labelsize=16.) axis.plot(B_field_strengths/1.0e3, (1.0 - B_field_strengths**2/(8.*np.pi*10**log_P_phot[i]))**Exponents[i], '-', lw=3, c='#800000') axis.plot(B_field_eq/1.0e3, (1.0 - B_field_eq**2/(8.0*np.pi*10**log_P_phot[i]))**Exponents[i], 'o', markersize=12.0, c='#555555') i += 1
Daily/20150803_pressure_reduc_spots.ipynb
gfeiden/Notebook
mit
M Dwarfs Shulyak et al. (2010, 2014) have measured the distribution of magnetic fields on the surfaces of M dwarfs by modeling FeH bands in high-resolution, high-S/N Stokes I spectra. They find that M dwarf spectra are typically best fit by a uniform 1 kG magnetic field everywhere on the star, but with the addition of local concentrations across the surface. These local concentrations can reach upward of 7 &ndash; 8 kG. We now ask: do the stars that possess these fields lie close to the locus defined by the models?
Shulyak_max_B = np.array([[3400., 100., 6.5, 0.5], [3400., 100., 6.0, 0.5], [3300., 100., 7.5, 0.5], [3100., 100., 6.5, 0.5], [3100., 50., 1.0, 0.5]]) fig, ax = plt.subplots(1, 1, figsize=(8,4)) ax.set_xlabel('Photospheric Temperature (K)', fontsize=20.) ax.set_ylabel('Equipartition Magnetic Field (kG)', fontsize=20.) ax.tick_params(which='major', axis='both', length=10., labelsize=16.) ax.grid(True) ax.fill_between(np.arange(3240., 5880., 20.), B_field_Peq_smooth, B_field_Eeq_smooth, facecolor='#555555', alpha=0.3, edgecolor='#eeeeee') ax.errorbar(Shulyak_max_B[:,0], Shulyak_max_B[:,2], xerr=Shulyak_max_B[:,1], yerr=Shulyak_max_B[:,3], lw=2, fmt='o', c='#555555')
Daily/20150803_pressure_reduc_spots.ipynb
gfeiden/Notebook
mit
We find that the data from Shulyak et al. (2014) for the maximum magnetic field strengths (in local concentrations) of active stars are of the order expected from either energy or pressure equipartition. The fact that two have weaker magnetic fields only indicates that there were no strong local concentrations, similar to the surface of the quiet Sun. Note, however, that there is some uncertainty in this comparison. Shulyak et al. (2014) quote the effective temperature, which for M dwarfs is not necessarily equal to the photospheric temperature. Model atmospheres predict that the "effective temperature" for an M dwarf occurs in the optically thin layers above the opaque photosphere. Thus, it is more likely that the photospheric temperatures for the Shulyak sample are greater than those quoted above. If we instead convert the quoted effective temperatures to photospheric temperatures using stellar model atmospheres, we find
Shulyak_max_B = np.array([[3685., 100., 6.5, 0.5], [3685., 100., 6.0, 0.5], [3578., 100., 7.5, 0.5], [3379., 100., 6.5, 0.5], [3379., 50., 1.0, 0.5]]) fig, ax = plt.subplots(1, 1, figsize=(8,4)) ax.set_xlabel('Photospheric Temperature (K)', fontsize=20.) ax.set_ylabel('Equipartition Magnetic Field (kG)', fontsize=20.) ax.tick_params(which='major', axis='both', length=10., labelsize=16.) ax.grid(True) ax.fill_between(np.arange(3240., 5880., 20.), B_field_Peq_smooth, B_field_Eeq_smooth, facecolor='#555555', alpha=0.3, edgecolor='#eeeeee') ax.errorbar(Shulyak_max_B[:,0] + 100., Shulyak_max_B[:,2], xerr=Shulyak_max_B[:,1], yerr=Shulyak_max_B[:,3], lw=2, fmt='o', c='#555555')
Daily/20150803_pressure_reduc_spots.ipynb
gfeiden/Notebook
mit
The loss function You can linearly reweight the chaos into doing very useful computation. We call this learner the readout unit. Model an output or readout unit for the network as: $$z = \mathbf{w}^T \tanh[\mathbf{x}]$$ The output $z$ is a scalar formed by the dot product of two N-dimensional vectors ($\mathbf{w}^T$ denotes the transpose of $\mathbf{w}$). We will implement the FORCE learning rule (Sussillo & Abbott, 2009), by adjusting the readout weights, $w_i$, so that $z$ matches a target function: $$f(t) = \cos\left(\frac{2 \pi t}{50} \right)$$ The rule works by implementing recursive least-squares: $$\mathbf{w} \rightarrow \mathbf{w} + c(f-z) \mathbf{q}$$ $$\mathbf{q} = P \tanh [\mathbf{x}]$$ $$c = \frac{1}{1+ \mathbf{q}^T \tanh(\mathbf{x})}$$ $$P_{ij} \rightarrow P_{ij} - c q_i q_j$$ Real FORCE Let's teach the chaos how to be a sine wave
target = lambda t0: cos(2 * pi * t0 / 50) # target pattern def f3(t0, x): return -x + g * dot(J, tanh_x) + dot(w, tanh_x) * u dt = 1 # time step tmax = 800 # simulation length tstop = 300 N = 300 J = normal(0, sqrt(1 / N), (N, N)) x0 = uniform(-0.5, 0.5, N) t = linspace(0, 50, 500) g = 1.5 u = uniform(-1, 1, N) w = uniform(-1 / sqrt(N), 1 / sqrt(N), N) # initial weights P = eye(N) # Running estimate of the inverse correlation matrix lr = 1.0 # learning rate # simulation data: state, output, time, weight updates x, z, t, wu = [x0], [], [0], [0] # Set up ode solver solver = ode(f3) solver.set_initial_value(x0) # Integrate ode, update weights, repeat while t[-1] < tmax: tanh_x = tanh(x[-1]) # cache z.append(dot(w, tanh_x)) error = target(t[-1]) - z[-1] q = dot(P, tanh_x) c = lr / (1 + dot(q, tanh_x)) P = P - c * outer(q, q) w = w + c * error * q # Stop leaning here if t[-1] > tstop: lr = 0 wu.append(np.sum(np.abs(c * error * q))) solver.integrate(solver.t + dt) x.append(solver.y) t.append(solver.t) # last update for readout neuron z.append(dot(w, tanh_x)) x = np.array(x) t = np.array(t) plt.figure(figsize=(10, 5)) plt.subplot(2, 1, 1) plt.plot(t, target(t), '-r', lw=2) plt.plot(t, z, '-b') plt.legend(('target', 'output')) plt.ylim([-1.1, 3]) plt.xticks([]) plt.subplot(2, 1, 2) plt.plot(t, wu, '-k') plt.yscale('log') plt.ylabel('$|\Delta w|$', fontsize=20) plt.xlabel('time', fontweight='bold', fontsize=16) plt.show()
.ipynb_checkpoints/DFORCE-checkpoint.ipynb
avicennax/jedi
mit
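The weight update above is recursive least-squares; stripped of the recurrent network, the same four-line update can be exercised on a toy linear problem (the sizes, seed, and target below are made up):

```python
import numpy as np

# The FORCE weight update, stripped of the network: plain recursive
# least-squares on a toy linear problem.
rng = np.random.default_rng(0)
N = 20
w_true = rng.normal(size=N)        # hypothetical target readout weights
w = np.zeros(N)
P = np.eye(N) * 1e6                # large initial P = weak prior on w

for _ in range(200):
    x = rng.normal(size=N)         # stand-in for tanh(network state)
    z = w @ x                      # readout output
    error = w_true @ x - z         # f - z
    q = P @ x
    c = 1.0 / (1.0 + q @ x)
    P -= c * np.outer(q, q)        # running inverse-correlation update
    w += c * error * q             # w -> w + c (f - z) q

print(np.allclose(w, w_true, atol=1e-4))   # True: RLS recovers the target weights
```

With a noiseless linear target, RLS converges to the exact least-squares solution once enough independent samples have been seen, which is why the weight updates in the FORCE run above shrink rapidly.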
The binary loss function Now let's learn, with binary codes.
target = lambda t0: cos(2 * pi * t0 / 50) # target pattern def f3(t0, x): return -x + g * dot(J, tanh_x) + dot(w, tanh_x) * u dt = 1 # time step tmax = 800 # simulation length tstop = 500 N = 300 J = normal(0, sqrt(1 / N), (N, N)) x0 = uniform(-0.5, 0.5, N) t = linspace(0, 50, 500) rho = 0.1 # Set and rand vec rho = uniform(0, 0.1, N) g = 1.5 u = uniform(-1, 1, N) w = uniform(-1 / sqrt(N), 1 / sqrt(N), N) # initial weights P = eye(N) # Running estimate of the inverse correlation matrix lr = .4 # learning rate rho = repeat(0.05, N) # simulation data: state, # output, time, weight updates x, z, t, wu = [x0], [], [0], [0] # Set up ode solver solver = ode(f3) solver.set_initial_value(x0) # Integrate ode, update weights, repeat while t[-1] < tmax: tanh_x = tanh(x[-1]) tanh_xd = decode(tanh_x, rho) # BINARY CODE INTRODUCED HERE! z.append(dot(w, tanh_xd)) error = target(t[-1]) - z[-1] q = dot(P, tanh_xd) c = lr / (1 + dot(q, tanh_xd)) P = P - c * outer(q, q) w = w + c * error * q # Stop training time if t[-1] > tstop: lr = 0 wu.append(np.sum(np.abs(c * error * q))) solver.integrate(solver.t + dt) x.append(solver.y) t.append(solver.t) # last update for readout neuron z.append(dot(w, tanh_x)) # plot x = np.array(x) t = np.array(t) plt.figure(figsize=(10, 5)) plt.subplot(2, 1, 1) plt.plot(t, target(t), '-r', lw=2) plt.plot(t, z, '-b') plt.legend(('target', 'output')) plt.ylim([-1.1, 3]) plt.xticks([]) plt.subplot(2, 1, 2) plt.plot(t, wu, '-k') plt.yscale('log') plt.ylabel('$|\Delta w|$', fontsize=20) plt.xlabel('time', fontweight='bold', fontsize=16) plt.show()
.ipynb_checkpoints/DFORCE-checkpoint.ipynb
avicennax/jedi
mit
MNIST SGD Get the 'pickled' MNIST dataset from http://deeplearning.net/data/mnist/mnist.pkl.gz. We're going to treat it as a standard flat dataset with fully connected layers, rather than using a CNN.
path = Config().data/'mnist' path.ls() with gzip.open(path/'mnist.pkl.gz', 'rb') as f: ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1') plt.imshow(x_train[0].reshape((28,28)), cmap="gray") x_train.shape x_train,y_train,x_valid,y_valid = map(torch.tensor, (x_train,y_train,x_valid,y_valid)) n,c = x_train.shape x_train.shape, y_train.min(), y_train.max()
dev_nbs/course/lesson5-sgd-mnist.ipynb
fastai/fastai
apache-2.0
In lesson2-sgd we did these things ourselves:

```python
x = torch.ones(n,2)
def mse(y_hat, y): return ((y_hat-y)**2).mean()
y_hat = x@a
```

Now instead we'll use PyTorch's functions to do it for us, and also to handle mini-batches (which we didn't do last time, since our dataset was so small).
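The mini-batch idea itself is independent of PyTorch: a data loader just yields fixed-size chunks of the dataset. A plain-Python sketch (the `batches` helper is hypothetical, not a fastai or PyTorch API):

```python
def batches(xs, ys, bs):
    # Yield successive (inputs, targets) chunks of size bs;
    # the last chunk may be smaller.
    for i in range(0, len(xs), bs):
        yield xs[i:i+bs], ys[i:i+bs]

xs = list(range(10))
ys = [x * 2 for x in xs]
for xb, yb in batches(xs, ys, bs=4):
    print(len(xb))  # 4, 4, 2
```

`TensorDataset` plus a `DataLoader` below do the same chunking (with optional shuffling) over tensors.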
from torch.utils.data import TensorDataset bs=64 train_ds = TensorDataset(x_train, y_train) valid_ds = TensorDataset(x_valid, y_valid) train_dl = TfmdDL(train_ds, bs=bs, shuffle=True) valid_dl = TfmdDL(valid_ds, bs=2*bs) dls = DataLoaders(train_dl, valid_dl) x,y = dls.one_batch() x.shape,y.shape class Mnist_Logistic(Module): def __init__(self): self.lin = nn.Linear(784, 10, bias=True) def forward(self, xb): return self.lin(xb) model = Mnist_Logistic().cuda() model model.lin model(x).shape [p.shape for p in model.parameters()] lr=2e-2 loss_func = nn.CrossEntropyLoss() def update(x,y,lr): wd = 1e-5 y_hat = model(x) # weight decay w2 = 0. for p in model.parameters(): w2 += (p**2).sum() # add to regular loss loss = loss_func(y_hat, y) + w2*wd loss.backward() with torch.no_grad(): for p in model.parameters(): p.sub_(lr * p.grad) p.grad.zero_() return loss.item() losses = [update(x,y,lr) for x,y in dls.train] plt.plot(losses); class Mnist_NN(Module): def __init__(self): self.lin1 = nn.Linear(784, 50, bias=True) self.lin2 = nn.Linear(50, 10, bias=True) def forward(self, xb): x = self.lin1(xb) x = F.relu(x) return self.lin2(x) model = Mnist_NN().cuda() losses = [update(x,y,lr) for x,y in dls.train] plt.plot(losses); model = Mnist_NN().cuda() def update(x,y,lr): opt = torch.optim.Adam(model.parameters(), lr) y_hat = model(x) loss = loss_func(y_hat, y) loss.backward() opt.step() opt.zero_grad() return loss.item() losses = [update(x,y,1e-3) for x,y in dls.train] plt.plot(losses); learn = Learner(dls, Mnist_NN(), loss_func=loss_func, metrics=accuracy) from fastai.callback.all import * learn.lr_find() learn.fit_one_cycle(1, 1e-2) learn.recorder.plot_sched() learn.recorder.plot_loss()
dev_nbs/course/lesson5-sgd-mnist.ipynb
fastai/fastai
apache-2.0
A sequence is decreasing if each term is smaller than the previous one: $a_n>a_{n+1}$. From the graph we see that the sequence is decreasing from roughly the 15th term onward. Let us also find the $n$ at which this happens with a program.
n = 0 while a(n)<a(n+1): n += 1 n
02a_zaporedja.ipynb
mrcinv/matpy
gpl-2.0
Of course, the graph and the program are not a proof that the sequence is decreasing for all $n>14$. We can verify this, however, by solving the inequality $$ a_n>a_{n+1}$$ $$ \frac{n^{10}}{2^n}>\frac{(n+1)^{10}}{2^{n+1}}$$
import sympy as sym
sym.init_printing() # nicer formula output
from sympy.solvers.inequalities import solve_univariate_inequality
from sympy import Symbol
n = Symbol('n', real=True) # n is a symbol representing a real number
neenacba = a(n) > a(n+1)
sym.simplify(neenacba)
solve_univariate_inequality(neenacba,n)
02a_zaporedja.ipynb
mrcinv/matpy
gpl-2.0
The inequality is of too high a degree for SymPy to solve it directly. We can find the solution ourselves by first solving the corresponding equation and then determining on which intervals the inequality holds.
from sympy.solvers import solve
resitve = solve(neenacba.lhs-neenacba.rhs)
resitve
[float(x) for x in resitve]
neenacba.subs(n,15) # substitute n=15 into the inequality
02a_zaporedja.ipynb
mrcinv/matpy
gpl-2.0
Recursive sequences A sequence can also be given by a recursive formula $$a_{n+1}=f(a_n)$$ and an initial term $a_0$. Example The sequence is given by the recursive formula $$a_{n+1}=1+\frac{1}{a_n}$$ and the initial term $a_0=\frac{1}{2}$. Plot a few terms of the sequence. Show that the subsequences $a_{2n}$ and $a_{2n+1}$ are monotone and bounded. Find the limit.
# plot the first few terms
n0 = 20
a = n0*[0]
a[0] = 1/2
for n in range(n0-1):
    a[n+1] = 1+1./a[n]
print("The first %d terms of the sequence\n" % n0)
print((n0*"%0.8f\n")%tuple(a))
from matplotlib import pyplot as plt
%matplotlib inline
plt.plot(range(n0),a,'o')
plt.title("Terms of the sequence $a_n$")
plt.show()
02a_zaporedja.ipynb
mrcinv/matpy
gpl-2.0
The first few terms reveal that the subsequence of odd-indexed terms $a_{2n+1}$ is decreasing, while the subsequence of even-indexed terms $a_{2n}$ is increasing. To verify/prove this, let us write down the recurrence satisfied by the subsequences $a_{2n+1}$ and $a_{2n}$. The recurrence $$a_{n+1}=f(a_n)=1+\frac{1}{a_n}$$ is determined by the function $f(x)=1+\frac{1}{x}$. For the subsequences of even and odd terms $a_{2n}$ and $a_{2n+1}$ we obtain recurrences by applying the function $f$ twice: $$a_{2(n+1)}=f(a_{2n+1})=f(f(a_{2n}))$$ and $$a_{2(n+1)+1} = f(f(a_{2n+1}))$$
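A quick numeric sanity check of the composition: applying $f(x)=1+\frac1x$ twice should give $f(f(x))=\frac{2x+1}{x+1}$, which can be confirmed at a few sample points before doing the symbolic simplification:

```python
f = lambda x: 1 + 1/x               # the recurrence function
ff_closed = lambda x: (2*x + 1)/(x + 1)  # claimed closed form of f(f(x))

for x in [0.5, 1.0, 2.0, 3.7]:
    assert abs(f(f(x)) - ff_closed(x)) < 1e-12
print("f(f(x)) matches (2x+1)/(x+1)")
```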
import sympy as sym
sym.init_printing()
# the sequence is defined by the function f(x) = 1+1/x
f = lambda x: 1+1/x
n = sym.Symbol('n')
a_n = sym.Symbol('a_n')
f(f(a_n))
sym.simplify(f(f(a_n)))
02a_zaporedja.ipynb
mrcinv/matpy
gpl-2.0
The subsequences of odd and even terms both satisfy the recurrence $$a_{n+2}=\frac{2 a_{n} + 1}{a_{n} + 1}.$$ The subsequence $a_{2n}$ is increasing if $$a_{2n+2} > a_{2n},$$ i.e. $$\frac{2 a_{2n} + 1}{a_{2n} + 1} > a_{2n}.$$
ff = lambda x: f(f(x)) a_2n = sym.Symbol('a_2n') sym.simplify(ff(a_2n) > a_2n) sym.solve_univariate_inequality(ff(a_2n)>a_2n,a_2n)
02a_zaporedja.ipynb
mrcinv/matpy
gpl-2.0
The subsequence of even terms $a_{2n}$ is decreasing if its terms satisfy $$a_{2n}>\frac{1}{2}+\frac{\sqrt{5}}{2}$$ and increasing if $$a_{2n}<\frac{1}{2}+\frac{\sqrt{5}}{2}.$$ The same holds for the subsequence of odd terms $a_{2n+1}$.
neenacba = ff(a_2n)<1/sym.Integer(2)+sym.sqrt(sym.Integer(5))/2 neenacba sym.solve_univariate_inequality(neenacba,a_2n)
02a_zaporedja.ipynb
mrcinv/matpy
gpl-2.0
We see that $$a_{2n}<\frac{1}{2}+\frac{\sqrt{5}}{2}\implies a_{2n+2}<\frac{1}{2}+\frac{\sqrt{5}}{2}.$$ By induction we conclude that all terms satisfy $a_{2n}<\frac{1}{2}+\frac{\sqrt{5}}{2}$, since this holds for the first term $$a_0= \frac{1}{2}<\frac{1}{2}+\frac{\sqrt{5}}{2}.$$ Because all terms satisfy $a_{2n}<\frac{1}{2}+\frac{\sqrt{5}}{2}$, the subsequence of even terms is increasing. Similarly one shows that the subsequence of odd terms $a_{2n+1}$ is decreasing. The limit The limit $a = \lim_{n\to\infty} a_n$ of a recursive sequence satisfies the fixed-point equation $$ a = f(a). $$ Indeed: \begin{eqnarray} a_{n+1} & = &f(a_n)\\ \lim_{n\to\infty} a_{n+1} &=& \lim_{n\to\infty} f(a_n)\\ \lim_{n\to\infty} a_{n} &=& f\left(\lim_{n\to\infty} a_n\right)\\ a &=&f(a) \end{eqnarray}
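The fixed-point equation can also be checked numerically: iterating the recurrence $a_{n+1}=1+\frac{1}{a_n}$ from $a_0=\frac{1}{2}$ converges to $\frac{1+\sqrt{5}}{2}$ (the golden ratio), the positive solution of $a=f(a)$. A small sketch:

```python
import math

a = 0.5
for _ in range(60):       # iterate a_{n+1} = 1 + 1/a_n
    a = 1 + 1 / a

golden = (1 + math.sqrt(5)) / 2   # positive root of a = 1 + 1/a
print(abs(a - golden) < 1e-12)    # True
```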
resitve = sym.solve(sym.Eq(a_n,f(a_n)),a_n) resitve
02a_zaporedja.ipynb
mrcinv/matpy
gpl-2.0
Both solutions are already familiar from the preceding analysis. The limit is clearly the first solution, $$\frac{1}{2}+\frac{\sqrt{5}}{2},$$ since $a_n>0$.
resitve[0].evalf() # the numerical approximation of the limit agrees with the previously computed terms of the sequence
02a_zaporedja.ipynb
mrcinv/matpy
gpl-2.0
Graphical representation of a recursive sequence A recursively defined sequence with the recurrence $$a_{n+1}=f(a_n)$$ can also be represented graphically. The limit of the sequence $a_n$ is a solution of the equation $$a=f(a).$$ Graphically, the limit is the intersection of the graph of the function $y=f(x)$ with the line $y=x$.
import numpy as np
t=np.linspace(0.4,4)
l1, = plt.plot(t,t,label="$y=x$")
l2, = plt.plot(t,f(t),label="$y=f(x)$")
x = sym.Symbol('x')
resitve = sym.solve(sym.Eq(x,f(x)),x) # limit = solution of the equation x=f(x)
plt.plot(resitve[0],f(resitve[0]),'o')
# labels and legend
plt.annotate("$x=f(x)$",xy=(resitve[0],resitve[0]),xytext=(resitve[0]+0.3,resitve[0]))
plt.title("Fixed point of the function $f(x)$")
legend1 = plt.legend(handles=[l1],loc=3)
plt.gca().add_artist(legend1)
plt.legend(handles=[l2],loc=2)
plt.axes().set_aspect('equal', 'datalim')
plt.show()
# JSAnimation import available at https://github.com/jakevdp/JSAnimation
from JSAnimation.JSAnimation import IPython_display
import matplotlib.animation as animation
t=np.linspace(0.4,4)
fig = plt.figure()
plt.plot(t,t)
plt.plot(t,f(t))
plt.title("Construction of the sequence $a_{n+1}=f(a_n)$")
plt.ylim(-0.05,4)
plt.axes().set_aspect('equal', 'datalim')
ims = []
pts = []
sled = []
for i in range(4):
    if a[i]>resitve[0]:
        dx1,dx2 = -0.2,1
    else:
        dx1,dx2 = 1,-0.2
    pts = pts + plt.plot(a[i],0,'ob') + [plt.annotate('$a_%d$'%i,xy=(a[i], 0), xytext=(a[i], 0.2))]
    ims.append(sled+pts)
    vl = plt.plot([a[i],a[i]],[0,f(a[i])],'-ro')
    vl = vl + [plt.annotate('$(a_%d,f(a_%d))$'%(i,i),xy=(a[i], f(a[i])), xytext=(a[i]-dx1, f(a[i])+0.2))]
    ims.append(sled+pts+vl)
    hl = plt.plot([a[i],f(a[i])],[f(a[i]),f(a[i])],'-ro')
    hl = hl + [plt.annotate('$(a_%d=f(a_%d))$'%(i+1,i),xy=(f(a[i]), f(a[i])), xytext=(f(a[i])-dx2, f(a[i])+0.2))]
    ims.append(sled+pts+vl+hl)
    vll = plt.plot([f(a[i]),f(a[i])],[f(a[i]),0],'-ro')
    sled = sled + plt.plot([a[i],a[i],f(a[i]),f(a[i])],[a[i],f(a[i]),f(a[i]),f(f(a[i]))],'-',color='0.75')
    ims.append(sled+pts+vl+hl+vll)
    pts = pts + plt.plot(a[i+1],0,'ob') + [plt.annotate('$a_%d$'%(i+1),xy=(a[i+1], 0), xytext=(a[i+1], 0.2))]
    ims.append(pts)
animation.ArtistAnimation(fig,ims,interval=800,blit=True,repeat_delay=3000)
02a_zaporedja.ipynb
mrcinv/matpy
gpl-2.0
We construct the sequence graphically in several steps. For each new term $a_{n+1}$ we perform the following steps: read off $f(a_n)$ on the graph of the function (vertical line) transfer the value $a_{n+1}=f(a_n)$ to the line $y=x$ (horizontal line) from the line $y=x$ transfer the value $a_{n+1}$ back to the $x$-axis (vertical line) Appendix: a closure for a recursive sequence A closure allows us to give a function "memory" without using global variables. We will modify the function that computes the recursive sequence so that it remembers the terms that have already been computed.
def gen_rekurzijo(a0,fun):
    """Define a recursive sequence with the recurrence
    a(n+1)=fun(a(n))
    and the initial term a(0)=a0."""
    zaporedje = [a0]
    def a(n):
        if len(zaporedje)-1<n: # the term has not been computed yet
            vrednost = fun(a(n-1)) # recurrence
            zaporedje.append(vrednost) # store the value for later
            return vrednost
        else:
            print("Terms computed so far:", zaporedje)
            return zaporedje[n] # the term a(n) is already computed
    return a
02a_zaporedja.ipynb
mrcinv/matpy
gpl-2.0
Example The sequence for computing $\sqrt{2}$ is given by the recurrence $$a_{n+1}=\frac{1}{2}\left(a_n+\frac{2}{a_n}\right).$$ Compute $\sqrt{2}$ to 12 decimal places.
koren2 = gen_rekurzijo(2,lambda x:(x+2/x)/2)
koren2(1)
koren2(3)
n = 0
while abs(koren2(n)**2-2)>1e-12:
    n += 1
print("n: %d, root: %1.12f" % (n,koren2(n)))
02a_zaporedja.ipynb
mrcinv/matpy
gpl-2.0
>> next: series
import disqus %reload_ext disqus %disqus matpy
02a_zaporedja.ipynb
mrcinv/matpy
gpl-2.0
Data The data used in this notebook is also available on FigShare: Geers AJ, Larrabide I, Radaelli AG, Bogunovic H, Kim M, Gratama van Andel HAF, Majoie CB, VanBavel E, Frangi AF. Reproducibility of hemodynamic simulations of cerebral aneurysms across imaging modalities 3DRA and CTA: Geometric and hemodynamic data. FigShare, 2015. DOI: 10.6084/m9.figshare.1354056 Variables are defined as follows (TA: time-averaged; PS: peak systole; ED: end diastole): * A_N: Aneurysm neck area * V_A: Aneurysm volume * Q_P: TA flow rate in the parent vessel just proximal to the aneurysm * Q_A: TA flow rate into the aneurysm * NQ_A: Q_A / Q_P * WSS_P: Average TA WSS on the wall of a parent vessel segment just proximal to the aneurysm * WSS_A: Average TA WSS on the aneurysm wall * NWSS_A: WSS_A / WSS_P * LWSS_A: Portion of the aneurysm wall with WSS < 0.4 Pa at ED * MWSS_A: Maximum WSS on the aneurysm wall at PS * 90WSS_A: 90th percentile value of the WSS on the aneurysm wall at PS * N90WSS_A: 90WSS_A normalized by the average WSS on the aneurysm wall at PS
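As a concrete illustration, the normalized quantities are simple ratios of the raw values: NQ_A = Q_A / Q_P and NWSS_A = WSS_A / WSS_P. A sketch with made-up numbers (the values are purely illustrative, not taken from the dataset):

```python
# Hypothetical raw values for one aneurysm (illustrative only)
case = {'Q_P': 2.0, 'Q_A': 0.5, 'WSS_P': 4.0, 'WSS_A': 1.0}

case['NQ_A'] = case['Q_A'] / case['Q_P']        # normalized aneurysm inflow
case['NWSS_A'] = case['WSS_A'] / case['WSS_P']  # normalized aneurysm WSS
print(case['NQ_A'], case['NWSS_A'])  # 0.25 0.25
```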
df_input = pd.read_csv(os.path.join('data', '3dracta.csv'), index_col=[0, 1]) df_input
data_analysis.ipynb
ajgeers/3dracta
bsd-2-clause
Extract separate dataframes for 3DRA and CTA.
df_3dra = df_input.xs('3dra', level='modality') df_cta = df_input.xs('cta', level='modality')
data_analysis.ipynb
ajgeers/3dracta
bsd-2-clause
Statistics Calculate the relative difference between 3DRA and CTA wrt 3DRA. Per variable, get the mean and standard error of this relative difference over all aneurysms.
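The standard error reported below is the sample standard deviation (with ddof = 1, matching the default of `scipy.stats.sem`) divided by $\sqrt{n}$; a dependency-free sketch:

```python
import math

def sem(xs):
    # Standard error of the mean: sample std (ddof=1) / sqrt(n)
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return math.sqrt(var / n)

print(sem([1.0, 2.0, 3.0, 4.0]))  # ~0.6455
```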
df_reldiff = 100 * abs(df_3dra - df_cta)/df_3dra s_mean = df_reldiff.mean() s_standarderror = pd.Series(stats.sem(df_reldiff), index=df_input.columns)
data_analysis.ipynb
ajgeers/3dracta
bsd-2-clause
Test differences between 3DRA and CTA with the Wilcoxon signed rank test. Note: MATLAB was used to perform this test for the paper. Its 'signrank' function defaults to using the 'exact method' if a dataset has 15 or fewer observations and the 'approximate method' otherwise. See the documentation for more details. SciPy's 'wilcoxon' function has currently (version 1.3.0) no equivalent option and always uses the 'approximate method'.
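For intuition, the 'exact method' can be sketched by brute force for small $n$: enumerate all $2^n$ sign assignments of the ranked absolute differences and count how often the statistic is at least as extreme as observed. This is a didactic sketch (assuming no zero differences and no ties in $|d_i|$), not MATLAB's or SciPy's implementation:

```python
from itertools import product

def wilcoxon_exact(x, y):
    # Paired differences; assumed: none zero, |d| has no ties.
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    # Ranks 1..n of |d| (distinct by assumption)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    rank = [0] * n
    for r, i in enumerate(order, start=1):
        rank[i] = r
    w_plus = sum(rank[i] for i in range(n) if d[i] > 0)
    total = n * (n + 1) // 2
    w_lo, w_hi = min(w_plus, total - w_plus), max(w_plus, total - w_plus)
    # Enumerate all sign assignments for an exact two-sided p-value
    count = 0
    for signs in product([0, 1], repeat=n):
        w = sum(r for s, r in zip(signs, rank) if s)
        if w <= w_lo or w >= w_hi:
            count += 1
    return count / 2 ** n

# Five pairs, all differences positive -> smallest possible two-sided p
print(wilcoxon_exact([2, 3, 4, 5, 6], [1, 1, 1, 1, 1]))  # 0.0625
```

With $n = 5$ the smallest attainable two-sided p-value is $2/2^5 = 0.0625$, which is why the exact method matters for small samples.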
pvalue = np.empty(len(df_input.columns)) for i, variable in enumerate(df_input.columns): pvalue[i] = stats.wilcoxon(df_3dra[variable], df_cta[variable])[1] s_pvalue = pd.Series(pvalue, index=df_input.columns)
data_analysis.ipynb
ajgeers/3dracta
bsd-2-clause