```{admonition} Exercise
:class: tip
Implement the composite Simpson's rule with NumPy, Cython, and Cython + OpenMP on an AWS machine with the same characteristics as the one presented in this note, and measure the execution time.
```

Numba

Numba uses JIT compilation at runtime via the llvmlite compiler. It can be used with Python built-in functions or NumPy functions, and it supports parallel computation on CPU/GPU. It uses CFFI and ctypes to call C functions. See numba architecture for a detailed explanation of how it works. A decorator is used to annotate which function should be compiled.

Example of use with Numba
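The running example in this section is the rectangle (midpoint) rule applied to exp(-x^2) on an interval. A minimal pure-Python version, before any compilation, can be sketched as follows:

```python
import math

def rcf(a, b, n):
    """Rectangle (midpoint) rule with nodes x_i = a + (i + 1/2) * h_hat,
    where h_hat = (b - a) / n."""
    h_hat = (b - a) / n
    return h_hat * sum(math.exp(-(a + (i + 0.5) * h_hat) ** 2)
                       for i in range(n))

approximation = rcf(0, 1, 10**5)
```

The exact value of the integral on [0, 1] is sqrt(pi)/2 * erf(1), so the approximation can be checked against math.erf.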
from numba import jit
libro_optimizacion/temas/5.optimizacion_de_codigo/5.3/Compilacion_a_C.ipynb
ITAM-DS/analisis-numerico-computo-cientifico
apache-2.0
```{margin}
In glossary: nopython, the definition of Numba's nopython mode is given. There it is stated that code is generated that does not use the Python C API, and that the types of native Python values must be inferable. The njit decorator, which is an alias for @jit(nopython=True), can also be used.
```

(RCFNUMBA)=
Rcf_numba
import time
import numpy as np
# a, b, n are assumed to be defined in an earlier cell of the notebook
# (the R section below uses a = 0, b = 1, n = 10**7 for the same integral)

@jit(nopython=True)
def Rcf_numba(a, b, n):
    """
    Compute numerical approximation using rectangle or mid-point
    method in an interval.
    Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
    i=0,1,...,n-1 and h_hat=(b-a)/n
    Args:
        a (float): left point of interval.
        b (float): right point of interval.
        n (int): number of subintervals.
    Returns:
        sum_res (float): numerical approximation to integral
            of f in the interval a,b
    """
    h_hat = (b-a)/n
    sum_res = 0
    for i in range(n):
        x = a+(i+1/2)*h_hat
        sum_res += np.exp(-x**2)
    return h_hat*sum_res

start_time = time.time()
res_numba = Rcf_numba(a, b, n)
end_time = time.time()
```{margin}
The execution time is measured twice so that compilation time is not included. See 5minguide.
```
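The warm-up pattern (call once to trigger compilation, then time a second call) can be sketched independently of Numba; in this illustrative example the one-time cost is simulated with a sleep, and the function names are hypothetical:

```python
import time

_compiled = set()

def jit_like(x):
    """Stand-in for a JIT-compiled function: the first call pays a
    one-time 'compilation' cost (simulated here with a sleep)."""
    if "built" not in _compiled:
        time.sleep(0.05)          # simulated compilation cost
        _compiled.add("built")
    return x * x

def timed(fn, *args):
    """Return (result, elapsed seconds) for a single call of fn."""
    start = time.perf_counter()
    res = fn(*args)
    return res, time.perf_counter() - start

_, t_cold = timed(jit_like, 3.0)    # includes the one-time cost
res, t_warm = timed(jit_like, 3.0)  # steady-state execution time
```

The second measurement is the one that reflects the compiled function's actual speed.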
secs = end_time - start_time
print("Rcf_numba with compilation took", secs, "seconds")

start_time = time.time()
res_numba = Rcf_numba(a, b, n)
end_time = time.time()

secs = end_time - start_time
print("Rcf_numba took", secs, "seconds")
We verify that after the code optimization we are still solving the problem correctly:
print(res_numba == approx(obj))
The inspect_types function helps us check whether type information could be inferred from the written code.
print(Rcf_numba.inspect_types())
Example of using Numba with parallel computation

See numba: parallel and numba: threading layer.
from numba import prange
(RCFNUMBAPARALLEL)=
Rcf_numba_parallel
@jit(nopython=True, parallel=True)
def Rcf_numba_parallel(a, b, n):
    """
    Compute numerical approximation using rectangle or mid-point
    method in an interval.
    Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
    i=0,1,...,n-1 and h_hat=(b-a)/n
    Args:
        a (float): left point of interval.
        b (float): right point of interval.
        n (int): number of subintervals.
    Returns:
        sum_res (float): numerical approximation to integral
            of f in the interval a,b
    """
    h_hat = (b-a)/n
    sum_res = 0
    for i in prange(n):
        x = a+(i+1/2)*h_hat
        sum_res += np.exp(-x**2)
    return h_hat*sum_res

start_time = time.time()
res_numba_parallel = Rcf_numba_parallel(a, b, n)
end_time = time.time()

secs = end_time - start_time
print("Rcf_numba_parallel with compilation took", secs, "seconds")

start_time = time.time()
res_numba_parallel = Rcf_numba_parallel(a, b, n)
end_time = time.time()
```{margin}
See parallel-diagnostics for information related to parallel execution. For example, run Rcf_numba_parallel.parallel_diagnostics(level=4).
```
secs = end_time - start_time
print("Rcf_numba_parallel took", secs, "seconds")
We verify that after the code optimization we are still solving the problem correctly:
print(res_numba_parallel == approx(obj))
Example with NumPy and Numba

In the following example, the linspace function is used to help create the nodes. Note that Numba handles for loops without any problem (for the case in which, for example, we could not have vectorized the node-creation operation).
@jit(nopython=True)
def Rcf_numpy_numba(a, b, n):
    """
    Compute numerical approximation using rectangle or mid-point
    method in an interval.
    Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
    i=0,1,...,n-1 and h_hat=(b-a)/n
    Args:
        a (float): left point of interval.
        b (float): right point of interval.
        n (int): number of subintervals.
    Returns:
        sum_res (float): numerical approximation to integral
            of f in the interval a,b
    """
    h_hat = (b-a)/n
    aux_vec = np.linspace(a, b, n+1)
    sum_res = 0
    for i in range(n):  # n midpoints between the n+1 points of aux_vec
        x = (aux_vec[i]+aux_vec[i+1])/2
        sum_res += np.exp(-x**2)
    return h_hat*sum_res

start_time = time.time()
res_numpy_numba = Rcf_numpy_numba(a, b, n)
end_time = time.time()

secs = end_time - start_time
print("Rcf_numpy_numba with compilation took", secs, "seconds")

start_time = time.time()
res_numpy_numba = Rcf_numpy_numba(a, b, n)
end_time = time.time()

secs = end_time - start_time
print("Rcf_numpy_numba took", secs, "seconds")

print(res_numpy_numba == approx(obj))

@jit(nopython=True)
def Rcf_numpy_numba_2(a, b, n):
    """
    Compute numerical approximation using rectangle or mid-point
    method in an interval.
    Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
    i=0,1,...,n-1 and h_hat=(b-a)/n
    Args:
        a (float): left point of interval.
        b (float): right point of interval.
        n (int): number of subintervals.
    Returns:
        sum_res (float): numerical approximation to integral
            of f in the interval a,b
    """
    h_hat = (b-a)/n
    aux_vec = np.linspace(a, b, n+1)
    nodes = (aux_vec[:-1]+aux_vec[1:])/2
    return h_hat*np.sum(np.exp(-nodes**2))

start_time = time.time()
res_numpy_numba_2 = Rcf_numpy_numba_2(a, b, n)
end_time = time.time()

secs = end_time - start_time
print("Rcf_numpy_numba_2 with compilation took", secs, "seconds")

start_time = time.time()
res_numpy_numba_2 = Rcf_numpy_numba_2(a, b, n)
end_time = time.time()

secs = end_time - start_time
print("Rcf_numpy_numba_2 took", secs, "seconds")

print(res_numpy_numba_2 == approx(obj))
```{admonition} Observation
:class: tip
Note that using vectorization and linspace does not improve the execution time over the for loop in the previous implementation Rcf_numpy_numba. In fact, Rcf_numpy_numba_2 has an execution time equal to that of {ref}`Rcf_numpy <RCFNUMPY>`.
```

```{admonition} Exercise
:class: tip
Implement the composite Simpson's rule with Numba, with NumPy and Numba, and with Numba using parallel computation, on an AWS machine with the same characteristics as the one presented in this note, and measure the execution time.
```

Rcpp

Rcpp makes it simple to integrate C++ and R through its API.

Why use Rcpp? Rcpp gives us the possibility of obtaining the execution efficiency of C++ code while keeping the flexibility of working with R. Although C and C++ require more lines of code, they are orders of magnitude faster than R. We sacrifice the advantages of R, such as the ease of writing code, for execution speed.

When might we use Rcpp?

- In loops that cannot be vectorized easily.
- When we have loops in which one iteration depends on the previous one.
- When a function must be called millions of times.

Why don't we use C? It is possible to call C functions from R, but it means more work on our part. For example, according to H. Wickham:

"...R's C API. Unfortunately this API is not well documented. I'd recommend starting with my notes at R's C interface. After that, read "The R API" in "Writing R Extensions". A number of exported functions are not documented, so you'll also need to read the R source code to figure out the details."

As a first approach to compiling code from R, it is preferable to follow H. Wickham's recommendation and use the Rcpp API.

Example with Rcpp

In the following implementation vapply is used; it is faster than sapply because the type of the returned value is specified in advance.
Rcf <- function(f,a,b,n){
    '
    Compute numerical approximation using rectangle or mid-point
    method in an interval.
    Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
    i=0,1,...,n-1 and h_hat=(b-a)/n
    Args:
        f (float): function expression of integrand.
        a (float): left point of interval.
        b (float): right point of interval.
        n (int): number of subintervals.
    Returns:
        sum_res (float): numerical approximation to integral
            of f in the interval a,b
    '
    h_hat <- (b-a)/n
    sum_res <- 0
    x <- vapply(0:(n-1),function(j)a+(j+1/2)*h_hat,numeric(1))
    for(j in 1:n){
        sum_res <- sum_res+f(x[j])
    }
    h_hat*sum_res
}

a <- 0
b <- 1
f <- function(x)exp(-x^2)
n <- 10**7

system.time(res <- Rcf(f,a,b,n))

err_relativo <- function(aprox,obj)abs(aprox-obj)/abs(obj)
```{margin}
The documentation of integrate mentions that Vectorize should be used.
```
obj <- integrate(Vectorize(f),0,1)
print(err_relativo(res,obj$value))

Rcf_2 <- function(f,a,b,n){
    '
    Compute numerical approximation using rectangle or mid-point
    method in an interval.
    Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
    i=0,1,...,n-1 and h_hat=(b-a)/n
    Args:
        f (float): function expression of integrand.
        a (float): left point of interval.
        b (float): right point of interval.
        n (int): number of subintervals.
    Returns:
        sum_res (float): numerical approximation to integral
            of f in the interval a,b
    '
    h_hat <- (b-a)/n
    x <- vapply(0:(n-1),function(j)a+(j+1/2)*h_hat,numeric(1))
    h_hat*sum(f(x))
}

system.time(res_2 <- Rcf_2(f,a,b,n))
print(err_relativo(res_2,obj$value))

library(Rcpp)
Rcpp provides the cppFunction function, which receives code written in C++ and defines a function that can be used from R. First, let us rewrite the implementation without vapply.
Rcf_3 <- function(f,a,b,n){
    '
    Compute numerical approximation using rectangle or mid-point
    method in an interval.
    Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
    i=0,1,...,n-1 and h_hat=(b-a)/n
    Args:
        f (float): function expression of integrand.
        a (float): left point of interval.
        b (float): right point of interval.
        n (int): number of subintervals.
    Returns:
        sum_res (float): numerical approximation to integral
            of f in the interval a,b
    '
    h_hat <- (b-a)/n
    sum_res <- 0
    for(i in 0:(n-1)){
        x <- a+(i+1/2)*h_hat
        sum_res <- sum_res+f(x)
    }
    h_hat*sum_res
}

system.time(res_3 <- Rcf_3(f,a,b,n))
print(err_relativo(res_3,obj$value))
(RCFRCPP)=
Rcf_Rcpp

We write the C++ source code that will be the first parameter passed to cppFunction.
f_str <- 'double Rcf_Rcpp(double a, double b, int n){
              double h_hat;
              double sum_res=0;
              int i;
              double x;
              h_hat=(b-a)/n;
              for(i=0;i<=n-1;i++){
                  x = a+(i+1/2.0)*h_hat;
                  sum_res += exp(-pow(x,2));
              }
              return h_hat*sum_res;
          }'

cppFunction(f_str)
If we want more information about the execution of the previous line, we can use the following.

```{margin}
rebuild=TRUE is used so that cppFunction compiles again, links with the C++ library, and reruns its other operations.
```
cppFunction(f_str, verbose=TRUE, rebuild=TRUE)
```{admonition} Comments
When the cppFunction line is executed, Rcpp compiles the C++ code and builds an R function that connects to the compiled C++ function. If you read the output of the execution with verbose=TRUE, a value type SEXP is used. According to H. Wickham:

...functions that talk to R must use the SEXP type for both inputs and outputs. SEXP, short for S expression, is the C struct used to represent every type of object in R. A C function typically starts by converting SEXPs to atomic C objects, and ends by converting C objects back to a SEXP. (The R API is designed so that these conversions often don't require copying.)

The function Rcpp::wrap converts C++ objects to R objects, and Rcpp::as does the reverse.
```
system.time(res_4 <- Rcf_Rcpp(a,b,n))
print(err_relativo(res_4,obj$value))
Other Rcpp functionality

NumericVector

Rcpp defines classes that relate R value types to C++ value types for handling vectors. Among these are NumericVector, IntegerVector, CharacterVector and LogicalVector, which correspond to vectors of type numeric, integer, character and logical respectively. For example, for NumericVector we have the following.
f_str <- 'NumericVector my_f(NumericVector x){
              return exp(log(x));
          }'

cppFunction(f_str)

print(my_f(seq(0,1,by=.1)))
Example with NumericVector

To show another example for the rectangle integration rule, consider the following implementation.
Rcf_implementation_example <- function(f,a,b,n){
    '
    Compute numerical approximation using rectangle or mid-point
    method in an interval.
    Nodes are generated via formula: x_i = a+(i+1/2)h_hat for
    i=0,1,...,n-1 and h_hat=(b-a)/n
    Args:
        f (float): function expression of integrand.
        a (float): left point of interval.
        b (float): right point of interval.
        n (int): number of subintervals.
    Returns:
        sum_res (float): numerical approximation to integral
            of f in the interval a,b
    '
    h_hat <- (b-a)/n
    fx <- f(vapply(0:(n-1),function(j)a+(j+1/2)*h_hat,numeric(1)))
    h_hat*sum(fx)
}

res_numeric_vector <- Rcf_implementation_example(f,a,b,n)
print(err_relativo(res_numeric_vector,obj$value))
Let us use Rcpp to define a function that receives a NumericVector to perform the sum.

```{margin}
The .size() method returns an integer.
```
f_str <- 'double Rcf_numeric_vector(NumericVector f_x, double h_hat){
              double sum_res=0;
              int i;
              int n = f_x.size();
              for(i=0;i<=n-1;i++){
                  sum_res+=f_x[i];
              }
              return h_hat*sum_res;
          }'

h_hat <- (b-a)/n
fx <- f(vapply(0:(n-1),function(j)a+(j+1/2)*h_hat,numeric(1)))
print(tail(fx))

cppFunction(f_str,rebuild=TRUE)

res_numeric_vector <- Rcf_numeric_vector(fx,h_hat)
print(err_relativo(res_numeric_vector,obj$value))
Another example, in which a vector of type NumericVector is returned to create the nodes.
f_str <- 'NumericVector Rcf_nodes(double a, double b, int n){
              double h_hat=(b-a)/n;
              int i;
              NumericVector x(n);
              for(i=0;i<n;i++)
                  x[i]=a+(i+1/2.0)*h_hat;
              return x;
          }'

cppFunction(f_str,rebuild=TRUE)

print(Rcf_nodes(0,1,2))
Example of calling a function defined in the global environment with Rcpp

It is also possible in Rcpp to call functions defined in the global environment, for example:

```{margin}
RObject is a C++ class for representing an R object.
```
f_str <- 'RObject fun(double x){
              Environment env = Environment::global_env();
              Function f=env["f"];
              return f(x);
          }'

cppFunction(f_str,rebuild=TRUE)

fun(1)
f(1)
print(fun)
```{admonition} Comment
.Call is a base function for calling C functions from R:

There are two ways to call C functions from R: .C() and .Call(). .C() is a quick and dirty way to call an C function that doesn't know anything about R because .C() automatically converts between R vectors and the corresponding C types. .Call() is more flexible, but more work: your C function needs to use the R API to convert its inputs to standard C data types.

H. Wickham.
```
print(f)
File Size
%%bash
ls -l /root/models/optimize_me/linear/cpu/
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/tensorflow/optimize/09_Deploy_Optimized_Model.ipynb
fluxcapacitor/source.ml
apache-2.0
Graph
%%bash
summarize_graph --in_graph=/root/models/optimize_me/linear/cpu/fully_optimized_frozen_cpu.pb

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import re
from google.protobuf import text_format
from tensorflow.core.framework import graph_pb2

def convert_graph_to_dot(input_graph, output_dot, is_input_graph_binary):
    graph = graph_pb2.GraphDef()
    with open(input_graph, "rb") as fh:
        if is_input_graph_binary:
            graph.ParseFromString(fh.read())
        else:
            text_format.Merge(fh.read(), graph)
    with open(output_dot, "wt") as fh:
        print("digraph graphname {", file=fh)
        for node in graph.node:
            output_name = node.name
            print("  \"" + output_name + "\" [label=\"" + node.op + "\"];", file=fh)
            for input_full_name in node.input:
                parts = input_full_name.split(":")
                input_name = re.sub(r"^\^", "", parts[0])
                print("  \"" + input_name + "\" -> \"" + output_name + "\";", file=fh)
        print("}", file=fh)
        print("Created dot file '%s' for graph '%s'." % (output_dot, input_graph))

input_graph = '/root/models/optimize_me/linear/cpu/fully_optimized_frozen_cpu.pb'
output_dot = '/root/notebooks/fully_optimized_frozen_cpu.dot'
convert_graph_to_dot(input_graph=input_graph, output_dot=output_dot, is_input_graph_binary=True)

%%bash
dot -T png /root/notebooks/fully_optimized_frozen_cpu.dot \
    -o /root/notebooks/fully_optimized_frozen_cpu.png > /tmp/a.out

from IPython.display import Image
Image('/root/notebooks/fully_optimized_frozen_cpu.png')
Run Standalone Benchmarks

Note: These benchmarks run against the standalone models on disk. We will benchmark the models running within TensorFlow Serving soon.
%%bash
benchmark_model --graph=/root/models/optimize_me/linear/cpu/fully_optimized_frozen_cpu.pb \
    --input_layer=weights,bias,x_observed \
    --input_layer_type=float,float,float \
    --input_layer_shape=:: \
    --output_layer=add
Save Model for Deployment and Inference

Reset Default Graph
import tensorflow as tf

tf.reset_default_graph()
Create New Session
sess = tf.Session()
Generate Version Number
from datetime import datetime

version = int(datetime.now().strftime("%s"))
Load Optimized, Frozen Graph
%%bash
inspect_checkpoint --file_name=/root/models/optimize_me/linear/cpu/model.ckpt

saver = tf.train.import_meta_graph('/root/models/optimize_me/linear/cpu/model.ckpt.meta')
saver.restore(sess, '/root/models/optimize_me/linear/cpu/model.ckpt')

optimize_me_parent_path = '/root/models/optimize_me/linear/cpu'
fully_optimized_frozen_model_graph_path = '%s/fully_optimized_frozen_cpu.pb' % optimize_me_parent_path
print(fully_optimized_frozen_model_graph_path)

with tf.gfile.GFile(fully_optimized_frozen_model_graph_path, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(
        graph_def,
        input_map=None,
        return_elements=None,
        name="",
        op_dict=None,
        producer_op_list=None
    )

print("weights = ", sess.run("weights:0"))
print("bias = ", sess.run("bias:0"))
Create SignatureDef Asset for TensorFlow Serving
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils

graph = tf.get_default_graph()

x_observed = graph.get_tensor_by_name('x_observed:0')
y_pred = graph.get_tensor_by_name('add:0')

tensor_info_x_observed = utils.build_tensor_info(x_observed)
print(tensor_info_x_observed)

tensor_info_y_pred = utils.build_tensor_info(y_pred)
print(tensor_info_y_pred)

prediction_signature = signature_def_utils.build_signature_def(
    inputs={'x_observed': tensor_info_x_observed},
    outputs={'y_pred': tensor_info_y_pred},
    method_name=signature_constants.PREDICT_METHOD_NAME)
Save Model with Assets
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants

fully_optimized_saved_model_path = '/root/models/linear_fully_optimized/cpu/%s' % version
print(fully_optimized_saved_model_path)

builder = saved_model_builder.SavedModelBuilder(fully_optimized_saved_model_path)
builder.add_meta_graph_and_variables(
    sess,
    [tag_constants.SERVING],
    signature_def_map={'predict': prediction_signature,
                       signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature},
    clear_devices=True,
)
builder.save(as_text=False)

import os

print(fully_optimized_saved_model_path)
os.listdir(fully_optimized_saved_model_path)
os.listdir('%s/variables' % fully_optimized_saved_model_path)

sess.close()
Advanced color schemes

In some cases, matplotlib's built-in color maps may not be sufficient. You may want to create custom color schemes, either as a matter of personal style or to create schemes that place more emphasis on certain values. To do this, the diverging_cmap keyword in Contact Map Explorer's plotting functions can be useful, as can some matplotlib techniques.

Customizing whether a color map is treated as diverging or sequential

As discussed in Changing the color map, Contact Map Explorer tries to be smart about how it treats diverging color maps: if the data includes negative values (as is possible with a contact difference), then the color map spans the values from -1 to 1. On the other hand, if the data only includes positive values and the color map is diverging, then only the upper half of the color map is used.

The diverging color maps that Contact Map Explorer recognizes are the ones listed as "Diverging" in the matplotlib documentation. The sequential color maps recognized by Contact Map Explorer are the ones listed as "Perceptually Uniform Sequential", "Sequential", or "Sequential (2)". Other color maps are not recognized by Contact Map Explorer, and by default will be treated as sequential while raising a warning:
traj_contacts.residue_contacts.plot(cmap="gnuplot");
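The "upper half only" behavior described above can be sketched without matplotlib: for positive-only data under a diverging map, values are remapped into the upper half of the [0, 1] colormap range. The helper below is illustrative only, not Contact Map Explorer's actual implementation:

```python
import numpy as np

def cmap_positions(values, diverging=True):
    """Map data values to colormap positions in [0, 1].

    Illustrative sketch: with a diverging map and non-negative data,
    only the upper half [0.5, 1.0] of the color space is used,
    mirroring the behavior described above.
    """
    v = np.asarray(values, dtype=float)
    if diverging and v.min() >= 0:
        return 0.5 + 0.5 * v / v.max()          # squeeze into [0.5, 1.0]
    return (v - v.min()) / (v.max() - v.min())  # span the full range

positions = cmap_positions([0.0, 0.5, 1.0])
```

With positive-only input, the smallest value lands at position 0.5 (the colormap's midpoint) rather than 0.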
examples/advanced_matplotlib.ipynb
dwhswenson/contact_map
lgpl-2.1
If you want to either force a sequential color map to use only the upper half of the color space, or force a diverging color map to use the full color space, pass the diverging_cmap option along with the color map. Setting diverging_cmap will also silence the warning for an unknown color map. This is particularly useful for user-defined custom color maps, which could be either diverging or sequential.
fig, axs = plt.subplots(1, 2, figsize=(10, 4))

# force a diverging color map to use the full color space, not just the upper
# half (as if it is not diverging)
traj_contacts.residue_contacts.plot_axes(ax=axs[0], cmap='PRGn',
                                         diverging_cmap=False);

# force a sequential color map to use only the upper half (as if it is diverging)
traj_contacts.residue_contacts.plot_axes(ax=axs[1], cmap='Blues',
                                         diverging_cmap=True);
"Clipping" at high or low values

You might be interested in somehow marking which values are very high or very low. This can be done by creating a custom color map. Details on this can be found in the matplotlib documentation on colormap manipulation.

The basic idea we'll implement here is to use the 'PRGn' colormap, but to make values below -0.9 show up as red, and values above 0.9 show up as blue. We do this by making a color map based on 200 colors: the first 10 (-1.0 to -0.9) are red, then we use 180 representing the PRGn map (-0.9 to 0.9), and finally the last 10 (0.9 to 1.0) are blue. Note that colors in this approach are actually discrete, so you need enough colors from the PRGn map to make it a reasonable model for continuous behavior. For truly continuous color maps, see the matplotlib documentation on LinearSegmentedColormap. This is very similar in principle to one of the matplotlib examples on "Creating listed colormaps".
from matplotlib import cm
from matplotlib.colors import ListedColormap
import numpy as np

PRGn = cm.get_cmap('PRGn', 180)
red = np.array([1.0, 0.0, 0.0, 1.0])
blue = np.array([0.0, 0.0, 1.0, 1.0])

# custom color map of 200 colors with bottom 10 red; top 10 blue; the rest is
# the normal PRGn
new_colors = np.array([red] * 10 + list(PRGn(np.linspace(0, 1, 180))) + [blue] * 10)
custom_cmap = ListedColormap(new_colors)

# must give diverging_cmap here; the custom map is unknown to Contact Map Explorer
diff.residue_contacts.plot(cmap=custom_cmap, diverging_cmap=True);
TODO:
- https://matplotlib.org/basemap/users/intro.html
- http://scitools.org.uk/cartopy/docs/latest/gallery.html
- https://waterprogramming.wordpress.com/2016/12/19/plotting-geographic-data-from-geojson-files-using-python/
- http://maxberggren.se/2015/08/04/basemap/

Cartography with basemap

References

Python for Data Analysis by Wes McKinney, ed. O'Reilly, 2013 (p. 261)

Warning

Basemap development has been discontinued, and it is recommended to use Cartopy instead.

"Starting in 2016, Basemap came under new management. The Cartopy project will replace Basemap, but it hasn't yet implemented all of Basemap's features. All new software development should try to use Cartopy whenever possible, and existing software should start the process of switching over to use Cartopy. All maintenance and development efforts should be focused on Cartopy." (http://matplotlib.org/basemap/users/intro.html)

Installation

Basemap is not installed by default with Matplotlib. To install it with conda:

conda install basemap

Example
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

fig, ax = plt.subplots()

lllat = 41.0   # latitude of lower left hand corner of the desired map domain (degrees).
urlat = 52.0   # latitude of upper right hand corner of the desired map domain (degrees).
lllon = -5.0   # longitude of lower left hand corner of the desired map domain (degrees).
urlon = 9.5    # longitude of upper right hand corner of the desired map domain (degrees).

m = Basemap(ax=ax,
            projection='stere',
            lon_0=(urlon + lllon) / 2.,
            lat_0=(urlat + lllat) / 2.,
            llcrnrlat=lllat, urcrnrlat=urlat,
            llcrnrlon=lllon, urcrnrlon=urlon,
            resolution='l')  # ``c`` (crude), ``l`` (low), ``i`` (intermediate), ``h`` (high), ``f`` (full) or None

m.drawcoastlines()
m.drawstates()
m.drawcountries()
#m.drawrivers()
#m.drawcounties()

# Eiffel tower's coordinates
pt_lat = 48.858223
pt_lon = 2.2921653

x, y = m(pt_lon, pt_lat)
print(pt_lat, pt_lon)
print(x, y)

m.plot(x, y, 'ro')

#plt.savefig("map.png")
plt.show()
python_matplotlib_geo_fr.ipynb
jdhp-docs/python-notebooks
mit
Source Data

The PUMS data is a sample, so both household and person records have weights. We use those weights to replicate records. We are not adjusting the values for CPI, since we don't have a CPI for 2015, and because the median income comes out pretty close to that from the 2015 5Y ACS. The HHINCOME and VALUEH variables have the typical distributions for income and home values, both of which look like Poisson distributions.
import sqlite3

# Check the weights for the whole file to see if they sum to the number of
# households and people in the county. They don't, but the sum of the weights
# for households is close: 126,279,060 vs about 116M households.
con = sqlite3.connect("ipums.sqlite")

wt = pd.read_sql_query("SELECT YEAR, DATANUM, SERIAL, HHWT, PERNUM, PERWT FROM ipums "
                       "WHERE PERNUM = 1 AND YEAR = 2015", con)
wt.drop(0, inplace=True)

nd_s = wt.drop_duplicates(['YEAR', 'DATANUM', 'SERIAL'])
country_hhwt_sum = nd_s[nd_s.PERNUM == 1]['HHWT'].sum()

len(wt), len(nd_s), country_hhwt_sum

# PERNUM = 1 ensures only one record for each household
con = sqlite3.connect("ipums.sqlite")
senior_hh = pd.read_sql_query(
    "SELECT DISTINCT SERIAL, HHWT, PERWT, HHINCOME, VALUEH "
    "FROM ipums "
    "WHERE "
    # "AGE >= 65 AND "
    "HHINCOME < 9999999 AND VALUEH < 9999999 AND "
    "STATEFIP = 6 AND COUNTYFIPS=73 ", con)

# Since we're doing a probabilistic simulation, the easiest way to deal with
# the weight is just to repeat rows. However, adding the weights doesn't
# change the statistics much, so they are turned off now, for speed.
def generate_data():
    for index, row in senior_hh.drop_duplicates('SERIAL').iterrows():
        #for i in range(row.HHWT):
        yield (row.HHINCOME, row.VALUEH)

incv = pd.DataFrame(list(generate_data()), columns=['HHINCOME', 'VALUEH'])

sns.jointplot(x="HHINCOME", y="VALUEH", marker='.',
              scatter_kws={'alpha': 0.1}, data=incv, kind='reg');

from matplotlib.ticker import FuncFormatter

fig = plt.figure(figsize=(20, 12))
ax = fig.add_subplot(111)
fig.suptitle("Distribution Plot of Home Values in San Diego County\n"
             "( Truncated at $2.2M )", fontsize=18)
sns.distplot(incv.VALUEH[incv.VALUEH < 2200000], ax=ax)
ax.set_xlabel('Home Value ($)', fontsize=14)
ax.set_ylabel('Density', fontsize=14)
ax.get_xaxis().set_major_formatter(FuncFormatter(lambda x, p: format(int(x), ',')))

fig = plt.figure(figsize=(20, 12))
ax = fig.add_subplot(111)
fig.suptitle("Distribution Plot of Home Values in San Diego County\n"
             "( Truncated at $2.2M )", fontsize=18)
sns.kdeplot(incv.VALUEH[incv.VALUEH < 2200000], ax=ax)
sns.kdeplot(incv.VALUEH[incv.VALUEH < 1900000] + 300000, ax=ax)
ax.set_xlabel('Home Value ($)', fontsize=14)
ax.set_ylabel('Density', fontsize=14)
ax.get_xaxis().set_major_formatter(FuncFormatter(lambda x, p: format(int(x), ',')))
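The commented-out weight handling in generate_data above replicates each record HHWT times, so that unweighted statistics over the expanded rows approximate weighted statistics over the originals. The idea can be sketched in isolation (the sample values below are made up):

```python
def replicate_by_weight(rows):
    """Yield each (HHINCOME, VALUEH) pair HHWT times, mirroring the
    commented-out replication loop in generate_data."""
    for hhwt, hhincome, valueh in rows:
        for _ in range(hhwt):
            yield (hhincome, valueh)

# (HHWT, HHINCOME, VALUEH) -- illustrative values only
sample = [(2, 50_000, 250_000), (3, 30_000, 180_000)]
expanded = list(replicate_by_weight(sample))
```

With weights 2 and 3, the expanded list has five rows, and each record appears in proportion to its weight.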
ipums.org/ipums.org-income_homevalue/notebooks/ipums.org-income_homevalue.ipynb
CivicKnowledge/metatab-packages
mit
Procedure After extracting the data for HHINCOME and VALUEH, we rank both values and then quantize the rankings into 10 groups, 0 through 9, hhincome_group and valueh_group. The HHINCOME variable correlates with VALUEH at .36, and the quantized rankings hhincome_group and valueh_group correlate at .38. Initial attempts were made to fit curves to the income and home value distributions, but it is very difficult to find well defined models that fit real income distributions. Bordley (bordley) analyzes the fit for 15 different distributions, reporting success with variations of the generalized beta distribution, gamma and Weibull. Majumder (majumder) proposes a four parameter model with variations for special cases. None of these models were considered well established enough to fit within the time constraints for the project, so this analysis will use empirical distributions that can be scaled to fit alternate parameters.
incv['valueh_rank'] = incv.rank()['VALUEH'] incv['valueh_group'] = pd.qcut(incv.valueh_rank, 10, labels=False ) incv['hhincome_rank'] = incv.rank()['HHINCOME'] incv['hhincome_group'] = pd.qcut(incv.hhincome_rank, 10, labels=False ) incv[['HHINCOME', 'VALUEH', 'hhincome_group', 'valueh_group']] .corr() from metatab.pands import MetatabDataFrame odf = MetatabDataFrame(incv) odf.name = 'income_homeval' odf.title = 'Income and Home Value Records for San Diego County' odf.HHINCOME.description = 'Household income' odf.VALUEH.description = 'Home value' odf.valueh_rank.description = 'Rank of the VALUEH value' odf.valueh_group.description = 'The valueh_rank value quantized into 10 bins, from 0 to 9' odf.hhincome_rank.description = 'Rank of the HHINCOME value' odf.hhincome_group.description = 'The hhincome_rank value quantized into 10 bins, from 0 to 9' %mt_add_dataframe odf --materialize
ipums.org/ipums.org-income_homevalue/notebooks/ipums.org-income_homevalue.ipynb
CivicKnowledge/metatab-packages
mit
Then, we group the dataset by valueh_group and collect all of the income values for each group. These groups have different distributions, with the lower-numbered groups skewing to the left and the higher-numbered groups skewing to the right. To use these groups in a simulation, the user would select a group for a subject's home value, then randomly select an income in that group. When this is done many times, the original VALUEH correlates to the new distribution (here, as t_income) at .33, reasonably similar to the original correlations.
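The simulation step described here — pick the valueh_group for a subject's home value, then draw a random income from that group's pool — can be sketched as follows. The data is a toy stand-in, but the column names follow the groups defined earlier:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy frame: two home-value groups with distinct income pools.
df = pd.DataFrame({'valueh_group': [0, 0, 0, 1, 1, 1],
                   'HHINCOME':     [30e3, 35e3, 40e3, 90e3, 95e3, 100e3]})

# Pool the incomes by group once, then sample from the pool per subject.
pools = df.groupby('valueh_group')['HHINCOME'].apply(list)

def draw_income(group, size=1):
    # Randomly select incomes from the chosen group's empirical pool.
    return rng.choice(pools[group], size=size)

print(draw_income(1, size=3))  # three incomes drawn from group 1's pool
```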
import matplotlib.pyplot as plt import numpy as np mk = MultiKde(odf, 'valueh_group', 'HHINCOME') fig,AX = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(15,15)) incomes = [30000, 40000, 50000, 60000, 70000, 80000, 90000, 100000, 110000] for mi, ax in zip(incomes, AX.flatten()): s, d, icdf, g = mk.make_kde(mi) syn_d = mk.syn_dist(mi, 10000) syn_d.plot.hist(ax=ax, bins=40, title='Median Income ${:0,.0f}'.format(mi), normed=True, label='Generated') ax.plot(s,d, lw=2, label='KDE') fig.suptitle('Income Distributions By Median Income\nKDE and Generated Distribution') plt.legend(loc='upper left') plt.show() import matplotlib.pyplot as plt import numpy as np mk = MultiKde(odf, 'valueh_group', 'HHINCOME') fig = plt.figure(figsize = (20,12)) ax = fig.add_subplot(111) incomes = [30000, 40000, 50000, 60000, 70000, 80000, 90000, 100000, 110000] for mi in incomes: s, d, icdf, g = mk.make_kde(mi) syn_d = mk.syn_dist(mi, 10000) #syn_d.plot.hist(ax=ax, bins=40, normed=True, label='Generated') ax.plot(s,d, lw=2, label=str(mi)) fig.suptitle('Income Distributions By Median Income\nKDE and Generated Distribution\n< $250,000') plt.legend(loc='upper left') ax.set_xlim([0,250000]) plt.show() df_kde = incv[ (incv.HHINCOME <200000) & (incv.VALUEH < 1000000) ] ax = sns.kdeplot(df_kde.HHINCOME, df_kde.VALUEH, cbar=True)
ipums.org/ipums.org-income_homevalue/notebooks/ipums.org-income_homevalue.ipynb
CivicKnowledge/metatab-packages
mit
A scatter matrix shows similar structure for VALUEH and t_income.
t = incv.copy() t['t_income'] = mk.syn_dist(t.HHINCOME.median(), len(t)) t[['HHINCOME','VALUEH','t_income']].corr() sns.pairplot(t[['VALUEH','HHINCOME','t_income']]);
ipums.org/ipums.org-income_homevalue/notebooks/ipums.org-income_homevalue.ipynb
CivicKnowledge/metatab-packages
mit
The simulated incomes also have similar statistics to the original incomes. However, the median income is high. In San Diego county, the median household income for householders 65 and older in the 2015 5-year ACS is about \$51K, versus \$56K here. For home values, the mean home value for homeowners 65 and older is \$468K in the 5-year ACS, vs \$510K here.
display(HTML("<h3>Descriptive Stats</h3>")) t[['VALUEH','HHINCOME','t_income']].describe() display(HTML("<h3>Correlations</h3>")) t[['VALUEH','HHINCOME','t_income']].corr()
ipums.org/ipums.org-income_homevalue/notebooks/ipums.org-income_homevalue.ipynb
CivicKnowledge/metatab-packages
mit
Bibliography
%mt_bibliography # Tests
ipums.org/ipums.org-income_homevalue/notebooks/ipums.org-income_homevalue.ipynb
CivicKnowledge/metatab-packages
mit
Create a new KDE distribution based on the home values, restricting the KDE support to values between $130,000 and $1.5M.
s,d = make_prototype(incv.VALUEH.astype(float), 130_000, 1_500_000) plt.plot(s,d)
ipums.org/ipums.org-income_homevalue/notebooks/ipums.org-income_homevalue.ipynb
CivicKnowledge/metatab-packages
mit
Overlay the prior plot with the histogram of the original values. We're using np.histogram to make the histogram, so it appears as a line chart.
v = incv.VALUEH.astype(float).sort_values() #v = v[ ( v > 60000 ) & ( v < 1500000 )] hist, bin_edges = np.histogram(v, bins=100, density=True) bin_middles = 0.5*(bin_edges[1:] + bin_edges[:-1]) bin_width = bin_middles[1] - bin_middles[0] assert np.isclose(sum(hist*bin_width),1) # == 1 b/c density==True hist, bin_edges = np.histogram(v, bins=100) # Now, without 'density' # And, get back to the counts, but now on the KDE fig = plt.figure() ax = fig.add_subplot(111) ax.plot(s,d * sum(hist*bin_width)); ax.plot(bin_middles, hist);
ipums.org/ipums.org-income_homevalue/notebooks/ipums.org-income_homevalue.ipynb
CivicKnowledge/metatab-packages
mit
Show a home value curve, interpolated to the same values as the distribution. The two curves should be coincident.
def plot_compare_curves(p25, p50, p75): fig = plt.figure(figsize = (20,12)) ax = fig.add_subplot(111) sp, dp = interpolate_curve(s, d, p25, p50, p75) ax.plot(pd.Series(s), d, color='black'); ax.plot(pd.Series(sp), dp, color='red'); # Re-input the quantiles for the KDE # Curves should be co-incident plot_compare_curves(2.800000e+05,4.060000e+05,5.800000e+05)
ipums.org/ipums.org-income_homevalue/notebooks/ipums.org-income_homevalue.ipynb
CivicKnowledge/metatab-packages
mit
Now, interpolate to the values for the county, which shifts the curve right.
# Values for SD County home values plot_compare_curves(349100.0,485900.0,703200.0)
ipums.org/ipums.org-income_homevalue/notebooks/ipums.org-income_homevalue.ipynb
CivicKnowledge/metatab-packages
mit
Here is an example of creating an interpolated distribution, then generating a synthetic distribution from it.
sp, dp = interpolate_curve(s, d, 349100.0,485900.0,703200.0) v = syn_dist(sp, dp, 10000) plt.hist(v, bins=100); pd.Series(v).describe()
ipums.org/ipums.org-income_homevalue/notebooks/ipums.org-income_homevalue.ipynb
CivicKnowledge/metatab-packages
mit
Sparse 2d interpolation In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain: The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$. The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points. The value of $f$ is known at a single interior point: $f(0,0)=1.0$. The function $f$ is not known at any other points. Create arrays x, y, f: x should be a 1d array of the x coordinates on the boundary and the 1 interior point. y should be a 1d array of the y coordinates on the boundary and the 1 interior point. f should be a 1d array of the values of f at the corresponding x and y coordinates. You might find that np.hstack is helpful.
x = np.hstack((np.linspace(-4,4,9), np.full(11, -5), np.linspace(-4,4,9), np.full(11, 5), [0])) y = np.hstack((np.full(9,-5), np.linspace(-5, 5,11), np.full(9,5), np.linspace(-5,5,11), [0])) f = np.hstack((np.zeros(20), np.zeros(20),[1.0])) print(f)
assignments/assignment08/InterpolationEx02.ipynb
CalPolyPat/phys202-2015-work
mit
The following plot should show the points on the boundary and the single point in the interior:
plt.scatter(x, y); assert x.shape==(41,) assert y.shape==(41,) assert f.shape==(41,) assert np.count_nonzero(f)==1
assignments/assignment08/InterpolationEx02.ipynb
CalPolyPat/phys202-2015-work
mit
Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain: xnew and ynew should be 1d arrays with 100 points between $[-5,5]$. Xnew and Ynew should be 2d versions of xnew and ynew created by meshgrid. Fnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xnew,Ynew). Use cubic spline interpolation.
xnew = np.linspace(-5, 5, 100) ynew = np.linspace(-5, 5, 100) Xnew, Ynew = np.meshgrid(xnew, ynew) Fnew = griddata((x, y), f , (Xnew, Ynew), method='cubic') plt.imshow(Fnew, extent=(-5,5,-5,5)) assert xnew.shape==(100,) assert ynew.shape==(100,) assert Xnew.shape==(100,100) assert Ynew.shape==(100,100) assert Fnew.shape==(100,100)
assignments/assignment08/InterpolationEx02.ipynb
CalPolyPat/phys202-2015-work
mit
Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
plt.contourf(Xnew, Ynew, Fnew, cmap='hot') plt.colorbar(label='Z') plt.box(False) plt.title("The interpolated 2d grid of our data.") plt.xlabel('X') plt.ylabel('Y'); assert True # leave this to grade the plot
assignments/assignment08/InterpolationEx02.ipynb
CalPolyPat/phys202-2015-work
mit
Trying MountainCar
mc_env = gym.make("MountainCar-v0") mc_n_weights, mc_feature_vec = fourier_fa.make_feature_vec( np.array([mc_env.low, mc_env.high]), n_acts=3, order=2) mc_experience = linfa.init(lmbda=0.9, init_alpha=1.0, epsi=0.1, feature_vec=mc_feature_vec, n_weights=mc_n_weights, act_space=mc_env.action_space, theta=None, is_use_alpha_bounds=True) mc_experience, mc_spe, mc_ape = driver.train(mc_env, linfa, mc_experience, n_episodes=400, max_steps=200, is_render=False) fig, ax1 = pyplot.subplots() ax1.plot(mc_spe, color='b') ax2 = ax1.twinx() ax2.plot(mc_ape, color='r') pyplot.show() def mc_Q_at_x(e, x, a): return scipy.integrate.quad( lambda x_dot: e.feature_vec(np.array([x, x_dot]), a).dot(e.theta), mc_env.low[1], mc_env.high[1]) def mc_Q_fun(x): return mc_Q_at_x(mc_experience, x, 0) sample_xs = np.arange(mc_env.low[0], mc_env.high[0], (mc_env.high[0] - mc_env.low[0]) / 8.0) mc_num_Qs = np.array( map(mc_Q_fun, sample_xs) ) mc_num_Qs mc_sym_Q_s0 = fourier_fa_int.make_sym_Q_s0( np.array([mc_env.low, mc_env.high]), 2) mc_sym_Qs = np.array( [mc_sym_Q_s0(mc_experience.theta, 0, s0) for s0 in sample_xs] ) mc_sym_Qs mc_sym_Qs - mc_num_Qs[:,0]
notebooks/IntegrationExperiments.ipynb
rmoehn/cartpole
mit
Let's try some arbitrary thetas and see what the ratio depends on. I've seen above that it's probably not the order of the Fourier FA, but the number of dimensions.
# Credits: http://stackoverflow.com/a/1409496/5091738 def make_integrand(feature_vec, theta, s0, n_dim): argstr = ", ".join(["s{}".format(i) for i in xrange(1, n_dim)]) code = "def integrand({argstr}):\n" \ " return feature_vec(np.array([s0, {argstr}]), 0).dot(theta)\n" \ .format(argstr=argstr) #print code compiled = compile(code, "fakesource", "exec") fakeglobals = {'feature_vec': feature_vec, 'theta': theta, 's0': s0, 'np': np} fakelocals = {} eval(compiled, fakeglobals, fakelocals) return fakelocals['integrand'] print make_integrand(None, None, None, 4) for order in xrange(1,3): for n_dim in xrange(2, 4): print "\norder {} dims {}".format(order, n_dim) min_max = np.array([np.zeros(n_dim), 3 * np.ones(n_dim)]) n_weights, feature_vec = fourier_fa.make_feature_vec( min_max, n_acts=1, order=order) theta = np.cos(np.arange(0, 2*np.pi, 2*np.pi/n_weights)) sample_xs = np.arange(0, 3, 0.3) def num_Q_at_x(s0): integrand = make_integrand(feature_vec, theta, s0, n_dim) return scipy.integrate.nquad(integrand, min_max.T[1:]) num_Qs = np.array( map(num_Q_at_x, sample_xs) ) #print num_Qs sym_Q_at_x = fourier_fa_int.make_sym_Q_s0(min_max, order) sym_Qs = np.array( [sym_Q_at_x(theta, 0, s0) for s0 in sample_xs] ) #print sym_Qs print sym_Qs / num_Qs[:,0]
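If the integrator accepts functions taking *args (scipy's nquad does), a plain closure can replace the generated source. This is a sketch keeping the same feature_vec/theta/s0 call shape; the compile-based version above may still be needed where an integrator inspects argument names:

```python
import numpy as np

def make_integrand_closure(feature_vec, theta, s0):
    # Bind s0 and prepend it to the remaining coordinates at call time.
    def integrand(*rest):
        return feature_vec(np.array((s0,) + rest), 0).dot(theta)
    return integrand

# Toy feature_vec for illustration: the state itself as the feature vector.
toy_feature_vec = lambda s, a: s
f = make_integrand_closure(toy_feature_vec, np.ones(3), 2.0)
print(f(1.0, 1.0))  # 2.0 + 1.0 + 1.0 = 4.0
```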
notebooks/IntegrationExperiments.ipynb
rmoehn/cartpole
mit
If the bounds of the states are [0, n], the ratio between symbolic and numeric results is $1/n^{n_{dim}-1}$. Or this is at least what I think I see. This looks like a problem with normalization. (What also very strongly suggested it was that numeric and symbolic results were equal over [0, 1], but started to differ when I changed to [0, 2].)
np.arange(0, 1, 10) import sympy as sp a, b, x, f = sp.symbols("a b x f") b_int = sp.Integral(1, (x, a, b)) sp.init_printing() u_int = sp.Integral((1-a)/(b-a), (x, 0, 1)) u_int (b_int / u_int).simplify() b_int.subs([(a,0), (b,2)]).doit() u_int.subs([(a,0), (b,2)]).doit() (u_int.doit()*b).simplify()
notebooks/IntegrationExperiments.ipynb
rmoehn/cartpole
mit
Read the data frame containing the eigenmode data and filter out the parameter values relevant for this plot.
df = pd.read_csv('../data/eigenmode_info_data_frame.csv') df = df.query('(has_particle == True) and (x == 0) and (y == 0) and ' '(d_particle == 20) and (Ms_particle == 1e6) and (N == 1)') df = df.sort_values('d')
notebooks/fig_9c_comparison_of_frequency_change_for_various_external_field_strengths.ipynb
maxalbert/paper-supplement-nanoparticle-sensing
mit
Define helper function to plot $\Delta f$ vs. particle separation for a single value of the external field strength.
def plot_freq_change_vs_particle_distance_for_field_strength(ax, Hz, H_ext_descr, color): """ Plot frequency change vs. particle distance for a single field strength `Hz`. """ df_filtered = df.query('Hz == {Hz} and N == 1'.format(Hz=Hz)) d_vals = df_filtered['d'] freq_diffs = df_filtered['freq_diff'] * 1e3 # frequency change in MHz ax.plot(d_vals, freq_diffs, color=color, label='H={}'.format(H_ext_descr)) def add_eigenmode_profile(filename): imagebox = OffsetImage(read_png(filename), zoom=0.75) ab = AnnotationBbox(imagebox, (40, 0.4), xybox=(60, 220), xycoords='data', boxcoords='data', frameon=False) ax.add_artist(ab)
notebooks/fig_9c_comparison_of_frequency_change_for_various_external_field_strengths.ipynb
maxalbert/paper-supplement-nanoparticle-sensing
mit
Produce the plot for Fig. 9(c).
fig, ax = plt.subplots(figsize=(6, 6)) for H_z, H_ext_descr, color in [(0, '0 T', '#4DAF4A'), (8e4, '0.1 T', '#377EB8'), (80e4, '1 T', '#E41A1C') ]: plot_freq_change_vs_particle_distance_for_field_strength(ax, H_z, H_ext_descr, color) add_eigenmode_profile("../images/eigenmode_profile_with_particle_at_x_neg30_y_0_d_5.png") ax.set_xlim(0, 95) ax.set_xticks(range(0, 100, 10)) ax.set_xlabel('Particle separation d (nm)') ax.set_ylabel(r'Frequency change $\Delta f$ (MHz)') ax.legend(numpoints=1, loc='upper right')
notebooks/fig_9c_comparison_of_frequency_change_for_various_external_field_strengths.ipynb
maxalbert/paper-supplement-nanoparticle-sensing
mit
We can save any Marvin object with the save method. This method accepts a string filename+path as the name of the pickled file. If a full file path is not specified, it defaults to the current directory. save also accepts an overwrite boolean keyword in case you want to overwrite an existing file.
haflux.save('my_haflux_map')
docs/sphinx/jupyter/saving_and_restoring.ipynb
bretthandrews/marvin
bsd-3-clause
Now we have a saved map. We can restore it anytime we want using the restore class method. A class method means you call it from the imported class itself, and not on the instance. restore accepts a string filename as input and returns the instantiated object.
# import the individual Map class from marvin.tools.map import Map # restore the Halpha flux map into a new variable filename = '/Users/Brian/Work/github_projects/Marvin/docs/sphinx/jupyter/my_haflux_map' newflux = Map.restore(filename) print(newflux)
docs/sphinx/jupyter/saving_and_restoring.ipynb
bretthandrews/marvin
bsd-3-clause
We can also save and restore Marvin Queries and Results. First let's create and run a simple query...
from marvin.tools.query import Query, Results from marvin import config config.mode = 'remote' config.switchSasUrl('local') # let's make a query f = 'nsa.z < 0.1' q = Query(searchfilter=f) print(q) # and run it r = q.run() print(r)
docs/sphinx/jupyter/saving_and_restoring.ipynb
bretthandrews/marvin
bsd-3-clause
Let's save both the query and results for later use. Without specifying a filename, by default Marvin will name the query or results using your provided search filter.
q.save() r.save()
docs/sphinx/jupyter/saving_and_restoring.ipynb
bretthandrews/marvin
bsd-3-clause
By default, if you don't specify a filename for the pickled file, Marvin will auto assign one for you with extension .mpf (MaNGA Pickle File). Now let's restore...
newquery = Query.restore('/Users/Brian/marvin_query_nsa.z<0.1.mpf') print('query', newquery) print('filter', newquery.searchfilter) myresults = Results.restore('/Users/Brian/marvin_results_nsa.z<0.1.mpf') print(myresults.results)
docs/sphinx/jupyter/saving_and_restoring.ipynb
bretthandrews/marvin
bsd-3-clause
To make this tutorial easy to follow, we use the UCI Airquality dataset, and try to forecast the AH value at the different timesteps. Some basic preprocessing has also been performed on the dataset as it required cleanup. A Simple Example The first step is to prepare your data. Here we use the UCI Airquality dataset as an example.
dataset = tf.keras.utils.get_file( fname="AirQualityUCI.csv", origin="https://archive.ics.uci.edu/ml/machine-learning-databases/00360/" "AirQualityUCI.zip", extract=True, ) dataset = pd.read_csv(dataset, sep=";") dataset = dataset[dataset.columns[:-2]] dataset = dataset.dropna() dataset = dataset.replace(",", ".", regex=True) val_split = int(len(dataset) * 0.7) data_train = dataset[:val_split] validation_data = dataset[val_split:] data_x = data_train[ [ "CO(GT)", "PT08.S1(CO)", "NMHC(GT)", "C6H6(GT)", "PT08.S2(NMHC)", "NOx(GT)", "PT08.S3(NOx)", "NO2(GT)", "PT08.S4(NO2)", "PT08.S5(O3)", "T", "RH", ] ].astype("float64") data_x_val = validation_data[ [ "CO(GT)", "PT08.S1(CO)", "NMHC(GT)", "C6H6(GT)", "PT08.S2(NMHC)", "NOx(GT)", "PT08.S3(NOx)", "NO2(GT)", "PT08.S4(NO2)", "PT08.S5(O3)", "T", "RH", ] ].astype("float64") # Data with train data and the unseen data from subsequent time steps. data_x_test = dataset[ [ "CO(GT)", "PT08.S1(CO)", "NMHC(GT)", "C6H6(GT)", "PT08.S2(NMHC)", "NOx(GT)", "PT08.S3(NOx)", "NO2(GT)", "PT08.S4(NO2)", "PT08.S5(O3)", "T", "RH", ] ].astype("float64") data_y = data_train["AH"].astype("float64") data_y_val = validation_data["AH"].astype("float64") print(data_x.shape) # (6549, 12) print(data_y.shape) # (6549,)
docs/ipynb/timeseries_forecaster.ipynb
keras-team/autokeras
apache-2.0
The second step is to run the TimeSeriesForecaster. As a quick demo, we set epochs to 10. You can also leave the epochs unspecified for an adaptive number of epochs.
predict_from = 1 predict_until = 10 lookback = 3 clf = ak.TimeseriesForecaster( lookback=lookback, predict_from=predict_from, predict_until=predict_until, max_trials=1, objective="val_loss", ) # Train the TimeSeriesForecaster with train data clf.fit( x=data_x, y=data_y, validation_data=(data_x_val, data_y_val), batch_size=32, epochs=10, ) # Predict with the best model(includes original training data). predictions = clf.predict(data_x_test) print(predictions.shape) # Evaluate the best model with testing data. print(clf.evaluate(data_x_val, data_y_val))
docs/ipynb/timeseries_forecaster.ipynb
keras-team/autokeras
apache-2.0
============================================================ Define target events based on time lag, plot evoked response ============================================================ This script shows how to define higher order events based on time lag between reference and target events. For illustration, we will put face stimuli presented into two classes, that is 1) followed by an early button press (within 590 milliseconds) and 2) followed by a late button press (later than 590 milliseconds). Finally, we will visualize the evoked responses to both 'quickly-processed' and 'slowly-processed' face stimuli.
# Authors: Denis Engemann <denis.engemann@gmail.com> # # License: BSD (3-clause) import mne from mne import io from mne.event import define_target_events from mne.datasets import sample import matplotlib.pyplot as plt print(__doc__) data_path = sample.data_path()
0.17/_downloads/4c66a907fef8e4e049497d46de605e3a/plot_define_target_events.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' # Setup for reading the raw data raw = io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) # Set up pick list: EEG + STI 014 - bad channels (modify to your needs) include = [] # or stim channels ['STI 014'] raw.info['bads'] += ['EEG 053'] # bads # pick MEG channels picks = mne.pick_types(raw.info, meg='mag', eeg=False, stim=False, eog=True, include=include, exclude='bads')
0.17/_downloads/4c66a907fef8e4e049497d46de605e3a/plot_define_target_events.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Find stimulus event followed by quick button presses
reference_id = 5 # presentation of a smiley face target_id = 32 # button press sfreq = raw.info['sfreq'] # sampling rate tmin = 0.1 # trials leading to very early responses will be rejected tmax = 0.59 # ignore face stimuli followed by button press later than 590 ms new_id = 42 # the new event id for a hit. If None, reference_id is used. fill_na = 99 # the fill value for misses events_, lag = define_target_events(events, reference_id, target_id, sfreq, tmin, tmax, new_id, fill_na) print(events_) # The 99 indicates missing or too late button presses # besides the events also the lag between target and reference is returned # this could e.g. be used as parametric regressor in subsequent analyses. print(lag[lag != fill_na]) # lag in milliseconds # ############################################################################# # Construct epochs tmin_ = -0.2 tmax_ = 0.4 event_id = dict(early=new_id, late=fill_na) epochs = mne.Epochs(raw, events_, event_id, tmin_, tmax_, picks=picks, baseline=(None, 0), reject=dict(mag=4e-12)) # average epochs and get an Evoked dataset. early, late = [epochs[k].average() for k in event_id]
0.17/_downloads/4c66a907fef8e4e049497d46de605e3a/plot_define_target_events.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
View evoked response
times = 1e3 * epochs.times # time in milliseconds title = 'Evoked response followed by %s button press' fig, axes = plt.subplots(2, 1) early.plot(axes=axes[0], time_unit='s') axes[0].set(title=title % 'early', ylabel='Evoked field (fT)') late.plot(axes=axes[1], time_unit='s') axes[1].set(title=title % 'late', ylabel='Evoked field (fT)') plt.show()
0.17/_downloads/4c66a907fef8e4e049497d46de605e3a/plot_define_target_events.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Explain what the cell below will produce and why. Can you change it so the answer is correct?
2/3
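Under Python 2 the cell above evaluates to 0, because `/` between two ints performs classic (floor) division. A quick check of the Python 3 behavior, where `/` is true division and `//` reproduces the old result:

```python
print(2 / 3)    # 0.6666... -- true division is the default in Python 3
print(2 // 3)   # 0 -- floor division, matching Python 2's 2/3
print(2.0 / 3)  # 0.6666... -- floats divide truly in both versions
```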
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
Answer: Because Python 2 performs classic division for integers. Use floats to perform true division. For example: 2.0/3 Answer these 3 questions without typing code. Then type code to check your answer. What is the value of the expression 4 * (6 + 5) What is the value of the expression 4 * 6 + 5 What is the value of the expression 4 + 6 * 5
4 * (6 + 5) 4 * 6 + 5 4 + 6 * 5
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
What is the type of the result of the expression 3 + 1.5 + 4? Answer: Floating Point Number What would you use to find a number’s square root, as well as its square?
100 ** 0.5 10 ** 2
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
Strings Given the string 'hello' give an index command that returns 'e'. Use the code below:
s = 'hello' # Print out 'e' using indexing s[1]
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
Reverse the string 'hello' using indexing:
s ='hello' # Reverse the string using indexing s[::-1]
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
Given the string hello, give two methods of producing the letter 'o' using indexing.
s ='hello' # Print out the s[-1] s[4]
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
Lists Build this list [0,0,0] two separate ways.
#Method 1 [0]*3 #Method 2 l = [0,0,0] l
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
Reassign 'hello' in this nested list to say 'goodbye':
l = [1,2,[3,4,'hello']] l[2][2] = 'goodbye' l
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
Sort the list below:
l = [5,3,4,6,1] #Method 1 sorted(l) #Method 2 l.sort() l
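A side note on the two methods above: sorted() returns a new list and leaves the original untouched, while list.sort() sorts in place and returns None:

```python
l = [5, 3, 4, 6, 1]
new_l = sorted(l)   # new sorted list; l is unchanged
print(l)            # [5, 3, 4, 6, 1]
print(new_l)        # [1, 3, 4, 5, 6]
print(l.sort())     # None -- sorts in place, returns nothing
print(l)            # [1, 3, 4, 5, 6]
```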
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
Dictionaries Using keys and indexing, grab the 'hello' from the following dictionaries:
d = {'simple_key':'hello'} # Grab 'hello' d['simple_key'] d = {'k1':{'k2':'hello'}} # Grab 'hello' d['k1']['k2'] # Getting a little tricker d = {'k1':[{'nest_key':['this is deep',['hello']]}]} # This was harder than I expected... d['k1'][0]['nest_key'][1][0] # This will be hard and annoying! d = {'k1':[1,2,{'k2':['this is tricky',{'tough':[1,2,['hello']]}]}]} # Phew d['k1'][2]['k2'][1]['tough'][2][0]
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
Can you sort a dictionary? Why or why not? Answer: No! Because normal dictionaries are mappings, not sequences. Tuples What is the major difference between tuples and lists? Tuples are immutable! How do you create a tuple?
t = (1,2,3)
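Both answers can be checked directly: item assignment on a tuple raises TypeError, and sorted() on a dict returns a sorted list of its keys rather than a "sorted dict":

```python
t = (1, 2, 3)
try:
    t[0] = 99               # tuples do not support item assignment
except TypeError as err:
    print('immutable:', err)

d = {'b': 2, 'a': 1}
print(sorted(d))            # ['a', 'b'] -- keys only, not a sorted mapping
```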
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
Sets What is unique about a set? Answer: They don't allow for duplicate items! Use a set to find the unique values of the list below:
l = [1,2,2,33,4,4,11,22,3,3,2] set(l)
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
Booleans For the following quiz questions, we will get a preview of comparison operators: <table class="table table-bordered"> <tr> <th style="width:10%">Operator</th><th style="width:45%">Description</th><th>Example</th> </tr> <tr> <td>==</td> <td>If the values of two operands are equal, then the condition becomes true.</td> <td> (a == b) is not true.</td> </tr> <tr> <td>!=</td> <td>If values of two operands are not equal, then condition becomes true.</td> <td> (a != b) is true.</td> </tr> <tr> <td>&lt;&gt;</td> <td>If values of two operands are not equal, then condition becomes true.</td> <td> (a &lt;&gt; b) is true. This is similar to != operator.</td> </tr> <tr> <td>&gt;</td> <td>If the value of left operand is greater than the value of right operand, then condition becomes true.</td> <td> (a &gt; b) is not true.</td> </tr> <tr> <td>&lt;</td> <td>If the value of left operand is less than the value of right operand, then condition becomes true.</td> <td> (a &lt; b) is true.</td> </tr> <tr> <td>&gt;=</td> <td>If the value of left operand is greater than or equal to the value of right operand, then condition becomes true.</td> <td> (a &gt;= b) is not true. </td> </tr> <tr> <td>&lt;=</td> <td>If the value of left operand is less than or equal to the value of right operand, then condition becomes true.</td> <td> (a &lt;= b) is true. </td> </tr> </table> What will be the resulting Boolean of the following pieces of code (answer first then check by typing it in!)
# Answer before running cell 2 > 3 # Answer before running cell 3 <= 2 # Answer before running cell 3 == 2.0 # Answer before running cell 3.0 == 3 # Answer before running cell 4**0.5 != 2
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
Final Question: What is the boolean output of the cell block below?
# two nested lists l_one = [1,2,[3,4]] l_two = [1,2,{'k1':4}] #True or False? l_one[2][0] >= l_two[2]['k1']
Objects and Data Structures Assessment Test-Solution.ipynb
jserenson/Python_Bootcamp
gpl-3.0
Let us also define a generic print function that will be the callback trigger when the selected_values trait of all the widgets changes. The function must have a single argument, which will be a dict with the following keys: * 'name': The name of the trait that is monitored and triggers the callback. In the case of a MenpoWidget subclass, this is always 'selected_values'. * 'type': The type of event that happens on the trait. In the case of a MenpoWidget subclass, this is always 'change'. * 'new': The currently selected value attached to selected_values. * 'old': The previous value of selected_values. * 'owner': Pointer to the widget object. Consequently, the selected values of a widget object (e.g. wid) can be retrieved in any of the following 3 equivalent ways: 1. wid.selected_values 2. change['new'] 3. change['owner'].selected_values For this notebook, we choose the second way which is independent of the widget object.
from menpo.visualize import print_dynamic def render_function(change): print(change['new'])
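The change-dict protocol can be sketched with a plain-Python stand-in (a hypothetical minimal widget, not the real MenpoWidget class), showing that the three access routes — wid.selected_values, change['new'], and change['owner'].selected_values — all yield the same value:

```python
class FakeMenpoWidget:
    # Minimal stand-in mimicking the 'selected_values' change protocol.
    def __init__(self, value=None):
        self.selected_values = value
        self._callbacks = []

    def observe(self, fn):
        self._callbacks.append(fn)

    def set_value(self, value):
        change = {'name': 'selected_values', 'type': 'change',
                  'old': self.selected_values, 'new': value, 'owner': self}
        self.selected_values = value       # update first, then notify
        for fn in self._callbacks:
            fn(change)

def render_function(change):
    # The two indirect routes agree with the direct attribute access.
    assert change['new'] == change['owner'].selected_values
    print(change['new'])

w = FakeMenpoWidget()
w.observe(render_function)
w.set_value([1, 2, 3])  # prints [1, 2, 3]
```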
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
<a name="sec:logo"></a>2. Menpo Logo This is a simple widget that can be used for embedding an image into an ipywidgets widget using the ipywidgets.Image class.
from menpowidgets.tools import LogoWidget LogoWidget(style='danger')
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
<a name="sec:list"></a>3. List Definition and Slicing MenpoWidgets has a widget for defining a list of numbers. The widget is smart enough to accept any valid Python list command, such as 'range(10)', '[1, 2, 3]' or '10', and to complain about syntactic mistakes. It can be defined to expect either int or float numbers and has an optional example as a guide.
list_cmd = [0, 1, 2] wid = ListWidget(list_cmd, mode='int', description='List:', render_function=render_function, example_visible=True) wid
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
Note that you need to press Enter in order to pass a new value into the textbox. Also, try typing a wrong command, such as '10, 20,,' or '10, a, None', to see the corresponding error messages. The styling of the widget can be changed using the style() method.
wid.style(box_style='danger', font_size=15)
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
The state of the widget can be updated with the set_widget_state() method. Note that since allow_callback=False, nothing gets printed after running the command, even though selected_values is updated.
wid.set_widget_state([20, 16], allow_callback=False)
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
Similar to the list widget, MenpoWidgets has a widget for defining a command for slicing a list (or numpy.array). Commands can have any valid Python slicing syntax, such as ':3:', '::2', '1:2:10', '-1::', '0, 3, 7' or 'range(5)'. The widget gets as argument a dict with the initial slicing command as well as the length of the list.
# Initial options
slice_cmd = {'command': ':3', 'length': 10}

# Create widget
wid = SlicingCommandWidget(slice_cmd, description='Command:',
                           render_function=render_function,
                           example_visible=True, orientation='horizontal')

# Display widget
wid
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
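To see what such commands actually select, here is a small plain-Python sketch (a hypothetical helper, not part of menpowidgets; the `'range(5)'` form is left out for brevity) that resolves a slicing command against a list length:

```python
def slice_indices(cmd, length):
    """Hypothetical sketch: resolve a slicing command string
    (':3', '::2', '-1::', '0, 3, 7', ...) to concrete indices."""
    cmd = cmd.strip()
    if ':' in cmd:
        # build a slice object from the (possibly empty) parts
        parts = [int(p) if p else None for p in cmd.split(':')]
        return list(range(length))[slice(*parts)]
    # otherwise treat the command as comma-separated indices
    return [int(p) for p in cmd.split(',')]
```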
Note that if you define a single int number, an ipywidgets.IntSlider appears that allows you to select the index. Similarly, if you insert a slicing command with a constant step, an ipywidgets.IntRangeSlider appears. The sliders are disabled when inserting a slicing command with a non-constant step. The placement of the sliders with respect to the textbox is controlled by the orientation argument. Additionally, similar to the ListWidget, the widget is smart enough to detect any syntactic errors and print a relevant message. The styling of the widget can be changed as
wid.style(border_visible=True, border_style='dashed', font_weight='bold')
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
To update the widget's state, you need to pass in a new dict of options, as
wid.set_widget_state({'command': ':40', 'length': 40}, allow_callback=True)
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
<a name="sec:colour"></a>4. Colour Selection MenpoWidgets uses the standard colour picker defined in ipywidgets.ColorPicker. However, ColourSelectionWidget has the additional functionality of selecting colours for a set of objects. Thus the widget constructor gets a list of colours (either colour name strings or RGB values), as well as a labels list with the names of the objects.
wid = ColourSelectionWidget([[255, 38, 31], 'blue', 'green'],
                            labels=['a', 'b', 'c'],
                            render_function=render_function)

# Set styling
wid.style(box_style='warning', apply_to_all_style='info',
          label_colour='black',
          label_background_colour=map_styles_to_hex_colours('info', background=True),
          font_weight='bold')

# Display widget
wid
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
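Since each colour may be given either as a name string or as an RGB triple, normalising the two forms is straightforward; a minimal sketch (a hypothetical helper, not part of the menpowidgets API):

```python
def colour_to_hex(colour):
    """Hypothetical helper: pass colour names through unchanged and
    convert an RGB triple with values in [0, 255] to a hex string."""
    if isinstance(colour, str):
        return colour
    return '#{:02x}{:02x}{:02x}'.format(*colour)
```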
The Apply to all button sets the currently selected colour to all the labels. The colours can also be updated with the set_colours() function as
wid.set_colours(['red', 'orange', 'pink'], allow_callback=True)
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
In case there is only one label, defined either with a list of length 1 or by setting labels=None, the drop-down menu for selecting an object does not appear. For example, let's update the state of the widget:
wid.set_widget_state(['red'], None)
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
<a name="sec:index"></a>5. Index Selection The following two widgets give the ability to select a single integer number from a specified range; thus, they can be seen as index selectors. The user must pass in a dict that defines the minimum, maximum and step of the allowed range, as well as the initially selected index. The selected_values trait then always keeps track of the selected index, so it is of int type. An index selection widget where the selector is an ipywidgets.IntSlider can be created as
# Initial options
index = {'min': 0, 'max': 100, 'step': 1, 'index': 10}

# Create widget
wid = IndexSliderWidget(index, description='Index: ',
                        render_function=render_function,
                        continuous_update=False)

# Set styling
wid.style(box_style='danger',
          slider_handle_colour=map_styles_to_hex_colours('danger'),
          slider_bar_colour=map_styles_to_hex_colours('danger'))

# Display widget
wid
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
As with all widgets, the state can be updated as:
wid.set_widget_state({'min': 10, 'max': 500, 'step': 2, 'index': 50}, allow_callback=True)
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
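With a step of 2 the slider only offers indices on the grid defined by min, max and step; a rough sketch (a hypothetical helper, not part of menpowidgets) of how a requested index could be snapped onto that grid:

```python
def snap_index(index, min_index, max_index, step):
    """Hypothetical sketch: clamp an index to [min, max] and
    snap it to the nearest multiple of step above min."""
    index = max(min_index, min(max_index, index))
    n_steps = int(round((index - min_index) / step))
    return min_index + n_steps * step
```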
An index selection widget where the selection can be performed with -/+ (previous/next) buttons can be created as:
index = {'min': 0, 'max': 100, 'step': 1, 'index': 10}
wid = IndexButtonsWidget(index, render_function=render_function,
                         loop_enabled=False, text_editable=True)
wid
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
Note that since text_editable is True, you can edit the index directly in the textbox. Additionally, setting loop_enabled=True means that pressing '+' when the textbox is at the last index takes you back to the minimum index. Let's update the styling of the widget:
wid.style(box_style='danger', plus_style='success', minus_style='danger',
          text_colour='blue',
          text_background_colour=map_styles_to_hex_colours('info', background=True))
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
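The behaviour of the '+' button with loop_enabled can be sketched in a few lines of plain Python (a hypothetical helper, not part of the menpowidgets API):

```python
def next_index(index, step, min_index, max_index, loop_enabled):
    """Hypothetical sketch of the '+' button: advance by step;
    past the maximum, either wrap to the minimum or stay put."""
    new = index + step
    if new > max_index:
        return min_index if loop_enabled else index
    return new
```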
Let's also update its state with a new set of options:
wid.set_widget_state({'min': 20, 'max': 500, 'step': 2, 'index': 50}, loop_enabled=True, text_editable=True, allow_callback=True)
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
<a name="sec:zoom"></a>6. Zoom There are two widgets for zooming into a figure. Both use ipywidgets.FloatSlider and get as input a dict with the minimum and maximum values, the step of the slider(s) and the initial zoom value. The first one defines a single zoom float, as
# Initial options
zoom_options = {'min': 0.1, 'max': 4., 'step': 0.05, 'zoom': 1.}

# Create widget
wid = ZoomOneScaleWidget(zoom_options, render_function=render_function)

# Set styling
wid.style(box_style='danger')
wid.zoom_slider.background_color = map_styles_to_hex_colours('info')
wid.zoom_slider.slider_color = map_styles_to_hex_colours('danger')

# Display widget
wid
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
and its state can be updated as:
wid.set_widget_state({'zoom': 0.5, 'min': 0., 'max': 4., 'step': 0.2}, allow_callback=True)
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
The second one defines two zoom values that are intended to control the height and width of a figure.
# Initial options
zoom_options = {'min': 0.1, 'max': 4., 'step': 0.1, 'zoom': [1., 1.],
                'lock_aspect_ratio': False}

# Create widget
wid = ZoomTwoScalesWidget(zoom_options, render_function=render_function,
                          continuous_update=True)

# Set styling
wid.style(box_style='danger')

# Display widget
wid
menpowidgets/Custom Widgets/Widgets Tools.ipynb
grigorisg9gr/menpo-notebooks
bsd-3-clause
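The effect of lock_aspect_ratio can be illustrated without the widget; a minimal sketch (a hypothetical helper, not part of the menpowidgets API) of updating one of the two scales:

```python
def update_zoom(zoom, axis, new_value, lock_aspect_ratio):
    """Hypothetical sketch: change one of the two zoom scales;
    with a locked aspect ratio both scales move proportionally."""
    zoom = list(zoom)
    if lock_aspect_ratio:
        # preserve the height/width ratio by scaling both values
        ratio = new_value / zoom[axis]
        return [z * ratio for z in zoom]
    zoom[axis] = new_value
    return zoom
```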