Compare results to reference implementation.
f, (ax1, ax2) = pyplot.subplots(1, 2, sharey=False, figsize=(8, 8))
ax1.imshow(imsd[0], aspect='equal', cmap="gray")
ax2.imshow(out_c, aspect='equal', cmap="gray")
pyplot.show()
clib/jupyter_notebooks/ncs_tensorflow_simulation.ipynb
HuanglabPurdue/NCS
gpl-3.0
Tensorflow
py_otf_mask = numpy.fft.fftshift(rcfilter.astype(numpy.float32))
FITMIN = tf.constant(1.0e-6)
tf_alpha = tf.constant(numpy.float32(alpha))
tf_data = tf.Variable(imsd[0].astype(numpy.float32), shape=(128, 128), trainable=False)
tf_gamma = tf.constant(gamma.astype(numpy.float32))
tf_rc = tf.constant(py_otf_mask*py_ot...
clib/jupyter_notebooks/ncs_tensorflow_simulation.ipynb
HuanglabPurdue/NCS
gpl-3.0
A dictionary stores a value for each key. In this example, we will store the value "world" at the key "hello".
responses["hello"] = "world"
responses
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
One nice property of dicts is that they can store multiple key/value pairs.
responses["hola"] = "mundo"
responses

def greet(salutation):
    try:
        print(salutation, responses[salutation])
    except KeyError:
        print("Sorry, don't know how to respond to", salutation)

greet("hello")
greet("hola")
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
What happens if you ask for an unknown key?
greet("你好")
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
We can update the dict, and ask again:
responses["你好"] = "世界" greet("你好")
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
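An alternative to catching KeyError, not used in the notebook itself, is the dict.get method, which returns a default instead of raising when the key is missing:

```python
responses = {"hello": "world", "hola": "mundo"}

# .get returns the second argument instead of raising KeyError
assert responses.get("你好", "unknown") == "unknown"

# after updating the dict, the real value is returned
responses["你好"] = "世界"
assert responses.get("你好", "unknown") == "世界"
```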
More tricks with dicts: keys and values don't have to be strings; many other Python data types can be used. For example, we can rewrite some of our counting loops to use a dict instead of separate variables to count nucleotides.
counts = {}
sequence = "acggtattcggt"
for base in sequence:
    if base not in counts:
        counts[base] = 1
    else:
        counts[base] += 1
counts
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
Since keys can be any string, we can use a similar loop to count triplet (or codon) frequencies.
k = 3
kmercounts = {}
for i in range(len(sequence) - k + 1):
    kmer = sequence[i:i+k]
    if kmer not in kmercounts:
        kmercounts[kmer] = 1
    else:
        kmercounts[kmer] += 1
kmercounts
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
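The same create-or-increment pattern is built into the standard library as collections.Counter; the k-mer loop above can be sketched with it as:

```python
from collections import Counter

sequence = "acggtattcggt"
k = 3

# Counter does the "create key or increment" bookkeeping for us
kmercounts = Counter(sequence[i:i+k] for i in range(len(sequence) - k + 1))

assert kmercounts["ggt"] == 2  # "ggt" occurs twice in the sequence
```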
Using dicts to build graphs We can use dicts to build data structures to represent graphs. I want to build a graph with 4 nodes, labeled 1, 2, 3, and 4, with edges between 1-2, 1-3, 1-4, and 2-3 as shown in Figure 1. We can use a dict, with the nodes as keys, and the list of nodes connected to the key as the value.
from IPython.core.display import SVG, display
display(SVG(filename='fig-graph.svg'))

graph = {}
graph
graph[1] = [2, 3, 4]
graph
graph[2] = [1, 3]
graph[3] = [1, 2]
graph[4] = [1]
graph
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
Now we can loop over the nodes, and extract the edges.
for node in graph:
    print(node, graph[node])
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
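Once the graph lives in a dict, standard traversals follow naturally; a breadth-first search sketch over the same adjacency dict (not part of the notebook):

```python
from collections import deque

graph = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2], 4: [1]}

def bfs_order(graph, start):
    """Visit nodes in breadth-first order starting from `start`."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

assert bfs_order(graph, 1) == [1, 2, 3, 4]
```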
The following demonstrates how to find matches between source scan 25133-3-11 and target scan 25133-4-13. We also have a list of unit_ids from the source scan for which we want to find matches.
source_scan = dict(animal_id=25133, session=3, scan_idx=11)
target_scan = dict(animal_id=25133, session=4, scan_idx=13)
python/example/how_to_fill_proxmity_cell_match.ipynb
cajal/pipeline
lgpl-3.0
Designate the pairing as what needs to be matched:
pairing = ((meso.ScanInfo & source_scan).proj(src_session='session', src_scan_idx='scan_idx')
           * (meso.ScanInfo & target_scan).proj())
meso.ScansToMatch.insert(pairing)
python/example/how_to_fill_proxmity_cell_match.ipynb
cajal/pipeline
lgpl-3.0
Now also specify which units from the source scan should be matched:
# 150 units from the source scan unit_ids = [ 46, 75, 117, 272, 342, 381, 395, 408, 414, 463, 537, 568, 581, 633, 670, 800, 801, 842, 873, 1042, 1078, 1085, 1175, 1193, 1246, 1420, 1440, 1443, 1451, 1464, 1719, 1755, 1823, 1863, 2107, 2128, 2161, 2199, 2231, 2371, 2438, 2522, 2572, 2585, 2644, 2764, ...
python/example/how_to_fill_proxmity_cell_match.ipynb
cajal/pipeline
lgpl-3.0
Now that we have specified the scans to match and the source scan units, we can populate ProximityCellMatch:
meso.ProximityCellMatch.populate(display_progress=True)
python/example/how_to_fill_proxmity_cell_match.ipynb
cajal/pipeline
lgpl-3.0
Now find the best proximity match
meso.BestProximityCellMatch().populate()
meso.BestProximityCellMatch()
python/example/how_to_fill_proxmity_cell_match.ipynb
cajal/pipeline
lgpl-3.0
You can use each cell to write whatever code you want, and if you suddenly forget a function or are unsure whether its name is correct, IPython is very helpful in that regard. To learn about a function, that is, what its output is or which parameters it needs, you can use the question mark ...
variable = 50
saludo = 'Hola'
UsoJupyter/.ipynb_checkpoints/CuadernoJupyter-checkpoint.ipynb
PyladiesMx/Empezando-con-Python
mit
Exercise 3: start typing the first three letters of each element from the previous cell and press Tab to see whether it autocompletes. There are also magic functions that let us perform various tasks, such as displaying the plots produced by the code within a cell or measuring execution time...
# Import matplotlib (plotting package) and numpy (array package).
# Note the magic function that makes our plot appear in the cell.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

# Create an array of 30 x values going from 0 to 5.
x = np.linspace(0, 5, 30)
y = x**...
UsoJupyter/.ipynb_checkpoints/CuadernoJupyter-checkpoint.ipynb
PyladiesMx/Empezando-con-Python
mit
The plot you are seeing follows the equation $$y=x^2$$ Exercise 4: edit the code above and run it again, but this time try replacing the expression y = x**2 with y = np.sin(x). Interactive plots
# Import matplotlib and numpy
# with the same "magic".
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

# Import IPython's interact function, used
# to build the interactive widgets
from IPython.html.widgets import interact

def plot_sine(frequency=4.0, grid_points=12, plot_original=T...
UsoJupyter/.ipynb_checkpoints/CuadernoJupyter-checkpoint.ipynb
PyladiesMx/Empezando-con-Python
mit
Let's define the parameters of a 50-period undulator with a small deflection parameter $K_0=0.1$, and an electron with $\gamma_e=100$.
K0 = 0.1
Periods = 50
g0 = 100.0
StepsPerPeriod = 24

gg = g0/(1.+K0**2/2)**.5
vb = (1.-gg**-2)**0.5
k_res = 2*gg**2
dt = 1./StepsPerPeriod
Steps2Do = int((Periods+2)/dt)+1
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
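The parameter relations above can be checked with plain Python, independent of chimera (a restatement of the same formulas, not new physics):

```python
K0, Periods, g0, StepsPerPeriod = 0.1, 50, 100.0, 24

gg = g0 / (1. + K0**2 / 2)**0.5   # effective longitudinal gamma in the undulator
vb = (1. - gg**-2)**0.5           # mean longitudinal velocity (units of c)
k_res = 2 * gg**2                 # resonant wavenumber (undulator units)
dt = 1. / StepsPerPeriod

# for K0 << 1 the longitudinal gamma stays close to gamma_e
assert abs(gg - g0) / g0 < 0.005
assert 19900 < k_res < 20000
```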
The electron is added by creating a dummy species equipped with the undulator device; a single particle with $p_x = \sqrt{\gamma_e^2-1}$ is added.
specie_in = {'TimeStep': dt,
             'Devices': ([chimera.undul_analytic, np.array([K0, 1., 0, Periods])],)}
beam = Specie(specie_in)
NumParts = 1
beam.Data['coords'] = np.zeros((3,NumParts))
beam.Data['momenta'] = np.zeros((3,NumParts))
beam.Data['momenta'][0] = np.sqrt(g0**2-1)
beam.Data['coords_ha...
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
The SR calculator constructor takes the mode specifier Mode, which can be 'far', 'near', or 'near-circ'; these select the far-field angular mapping, the near-field coordinate mapping in Cartesian geometry, and the near-field coordinate mapping in cylindrical geometry, respectively. The mapping domain is defined by the Grid: for 'far' it is [...
sr_in_far = {'Grid': [(0.02*k_res,1.1*k_res),(0,2./g0),(0.,2*np.pi),(160,80,24)],
             'TimeStep': dt, 'Features': (),}
sr_in_nearcirc = {'Grid': [(0.02*k_res,1.1*k_res),(0,15),(0.,2*np.pi),1e3,(160,80,24)],
                  'TimeStep': dt, 'Features': (), 'Mode': 'near-circ',}
sr_in_near = {'Grid': [(0.02*k_res,1...
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
The simulation is run as usual, but after each (or selected) step the track point is added to the track container using the add_track method. After the orbits are recorded, the spectrum is calculated with the calculate_spectrum method, which can take the component as an argument (e.g. comp='z'). In contrast to the axes conve...
t0 = time.time()
for i in range(Steps2Do):
    Chimera.make_step(i)
    sr_calc_far.add_track(beam)
    sr_calc_nearcirc.add_track(beam)
    sr_calc_near.add_track(beam)
print('Done orbits in {:g} sec'.format(time.time()-t0))

t0 = time.time()
sr_calc_far.calculate_spectrum(comp='z')
print('Done farfield spectrum ...
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
A few useful functions are available:
- get_full_spectrum returns the full energy spectral-angular distribution, $\mathrm{d}\mathcal{W}/(\mathrm{d}\varepsilon\, \mathrm{d}\Theta)$ (dimensionless)
- get_energy_spectrum returns the $\Theta$-integrated get_full_spectrum
- get_spot returns the $\varepsilon$-integrated get_full_s...
args_calc = {'chim_units': False, 'lambda0_um': 1}
FullSpect_far = sr_calc_far.get_full_spectrum(**args_calc)
FullSpect_nearcirc = sr_calc_nearcirc.get_full_spectrum(**args_calc)
FullSpect_near = sr_calc_near.get_full_spectrum(**args_calc)
spotXY_far,ext_far = sr_calc_far.get_spot_cartesian(bins=(120,120),**args_calc)
...
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
Let us compare the obtained spectrum with analytical estimates derived under the Periods > 1 and $K_0 \ll 1$ approximations. Here we use expressions from [I. A. Andriyash et al., Phys. Rev. ST Accel. Beams 16, 100703 (2013)], which can be easily derived or found in textbooks on undulator radiation. - the resonant frequency depend...
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,5))
extent = np.array(sr_calc_far.Args["Grid"][0] + sr_calc_far.Args["Grid"][1])
extent[:2] /= k_res
extent[2:] *= g0
th, ph = sr_calc_far.Args['theta'], sr_calc_far.Args['phi']
ax1.plot(1./(1.+(th*g0)**2), th*g0, ':', c='k', lw=1.5)
ax1.imshow(FullSpect_far.mean(-1).T, ...
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
As mentioned before, the radiation profile for a given (e.g. fundamental) wavenumber can be specified via the k0 argument to get_spot.
spotXY_k0, ext = sr_calc_far.get_spot_cartesian(k0=k_res, th_part=0.2, **args_calc)
Spect1D = sr_calc_far.get_energy_spectrum(**args_calc)
k_ax = sr_calc_far.get_spectral_axis()/k_res

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14,5))
ax1.plot(k_ax, Spect1D/Spect1D.max())
ax1.plot(k_ax, FullSpect_far[1:,0,0]/FullSpect_far...
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
There are some linking issues with Boost's program options in the (commented) cells below.
#standalone_prog = nativesys.as_standalone('chem_kinet', compile_kwargs=dict(options=['warn', 'pic', 'openmp', 'debug']))
#standalone_prog
#p = subprocess.Popen([standalone_prog, '--return-on-root', '1', '--special-settings', '1000'],
#                     stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subproce...
examples/_native_override_chemical_kinetics.ipynb
bjodah/pyodesys
bsd-2-clause
Time to reach steady state: if we define steady state to occur when the change in chemical concentrations falls below a certain threshold, then the obtained time will depend on that threshold. Here we investigate how that choice affects the answer (with respect to numerical precision, etc.).
native = native_sys['cvode'].from_other(kineticsys, namespace_override=native_override)

def plot_tss_conv(factor, tols, ax):
    tol_kw = dict(plot=False, return_on_root=True, nsteps=2000, special_settings=[factor])
    tss = [integrate_and_plot(native, atol=tol, rtol=tol, **tol_kw).xout[-1] for tol in tols]
    ax.se...
examples/_native_override_chemical_kinetics.ipynb
bjodah/pyodesys
bsd-2-clause
The purpose of the stupid_generator function is to list the integers smaller than end. However, it does not return the list directly, but rather a generator over this list. Compare with the following function:
def stupid_list(end):
    i = 0
    result = []
    while i < end:
        result.append(i)
        i += 1
    return result

stupid_list(3)
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
To retrieve the objects from stupid_generator, you must either convert it explicitly into a list or iterate over the objects with a loop.
it = stupid_generator(3)
next(it)

list(stupid_generator(3))

for v in stupid_generator(3):
    print(v)
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Note: the statements in stupid_generator are not executed when the function is initially called, but only when you start iterating over the generator to retrieve the first object. The yield statement then suspends execution and returns the first object. If a second object is requested, exe...
def test_generator():
    print("This statement is executed when the first object is requested")
    yield 1
    print("This statement is executed when the second object is requested")
    yield 2
    print("This statement is executed when the third object is requested")
    yield 3

it = test_generator()
next...
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Exercise: implement the following function, whose purpose is to generate the first n Fibonacci numbers. The Fibonacci sequence is defined by: $f_0 = 1$, $f_1 = 1$, $f_n = f_{n-1} + f_{n-2}$ for $n \geq 2$. Notes: the function must return all of the Fibonacci numbers, not only the last one. For this...
def first_fibonacci_generator(n):
    """
    Return a generator for the first ``n`` Fibonacci numbers
    """
    # write your code here

it = first_fibonacci_generator(5)
it
next(it)
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Your function must pass the following tests:
import types
assert type(first_fibonacci_generator(3)) == types.GeneratorType
assert list(first_fibonacci_generator(0)) == []
assert list(first_fibonacci_generator(1)) == [1]
assert list(first_fibonacci_generator(2)) == [1,1]
assert list(first_fibonacci_generator(8)) == [1,1,2,3,5,8,13,21]
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
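For reference, one possible implementation that passes these tests (a sketch, not the notebook's official solution):

```python
def first_fibonacci_generator(n):
    """Yield the first n Fibonacci numbers, with f_0 = f_1 = 1."""
    a, b = 1, 1
    for _ in range(n):
        yield a
        a, b = b, a + b

assert list(first_fibonacci_generator(8)) == [1, 1, 2, 3, 5, 8, 13, 21]
```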
In the previous cases, the generator stops by itself after a while. However, it is also possible to write infinite generators. In that case, the responsibility for stopping lies with the caller.
def powers2():
    v = 1
    while True:
        yield v
        v *= 2

for v in powers2():
    print(v)
    if v > 1000000:
        break
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Exercise: implement the following functions
def fibonacci_generator():
    """
    Return an infinite generator for Fibonacci numbers
    """
    # write your code here

it = fibonacci_generator()
next(it)

def fibonacci_finder(n):
    """
    Return the first Fibonacci number greater than or equal to n
    """
    # write your code here

assert fibonacci_finder(...
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Taking inspiration from the previous functions (but without using them), or reusing the function from the lecture, implement recursively the following function, which generates the set of binary words of a given length.
def binary_word_generator(n):
    """
    Return a generator on binary words of size n in lexicographic order

    Input :
        - n, the length of the words
    """
    # write your code here

list(binary_word_generator(3))

# tests
import types
assert type(binary_word_generator(0)) == types.GeneratorType
assert ...
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Following the same model, implement the function below. (a bit harder) Hints: ask yourself the question this way: if I have a word of size $n$ that ends with 0 and contains $k$ occurrences of 1, how many 1s did the word of size $n-1$ from which it was built contain? Likewise if it ends with 1. Although the ...
def binary_kword_generator(n, k):
    """
    Return a generator on binary words of size n such that each word
    contains exactly k occurrences of 1

    Input :
        - n, the size of the words
        - k, the number of 1
    """
    # write your code here

list(binary_kword_generator(4, 2))

# tests
import types
assert ...
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
And to finish: a Dyck prefix is a binary word of size $n$ with $k$ occurrences of 1 such that in every prefix the number of 1s is greater than or equal to the number of 0s. For example, $1101$ is a Dyck prefix for $n=4$ and $k=3$. But $1001$ is not one, because in the prefix $100$ the number of 0s is...
def dyck_prefix_generator(n, k):
    """
    Return a generator on binary words of size n such that each word
    contains exactly k occurrences of 1, and in any prefix, the number
    of 1 is greater than or equal to the number of 0.

    Input :
        - n, the size of the words
        - k, the number of 1
    """
    #...
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Run the following line and paste the list of numbers you obtain into Google.
[len(set(dyck_prefix_generator(2*n, n))) for n in range(8)]
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Vertex client library: AutoML image classification model for export to edge

[Run in Colab](https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_export_edge.ipynb)
import os
import sys

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install -U google-cloud-aiplatform $USER_FLAG
notebooks/community/gapic/automl/showcase_automl_image_classification_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Tutorial Now you are ready to start creating your own AutoML image classification model. Set up clients The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server. You will use different clients in...
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}

def create_dataset_client():
    client = aip.DatasetServiceClient(client_options=client_options)
    return client

def create_model_client():
    client = aip.ModelServiceClient(client_options=client_options)
    return client
...
notebooks/community/gapic/automl/showcase_automl_image_classification_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Construct the task requirements Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion. The minimal fields we need to s...
PIPE_NAME = "flowers_pipe-" + TIMESTAMP
MODEL_NAME = "flowers_model-" + TIMESTAMP

task = json_format.ParseDict(
    {
        "multi_label": False,
        "budget_milli_node_hours": 8000,
        "model_type": "MOBILE_TF_LOW_LATENCY_1",
        "disable_early_stopping": False,
    },
    Value(),
)

response = create...
notebooks/community/gapic/automl/showcase_automl_image_classification_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Export as Edge model You can export an AutoML image classification model as an Edge model which you can then custom deploy to an edge device, such as a mobile phone or IoT device, or download locally. Use this helper function export_model to export the model to Google Cloud, which takes the following parameters: name:...
MODEL_DIR = BUCKET_NAME + "/" + "flowers"

def export_model(name, format, gcs_dest):
    output_config = {
        "artifact_destination": {"output_uri_prefix": gcs_dest},
        "export_format_id": format,
    }
    response = clients["model"].export_model(name=name, output_config=output_config)
    print("Long runn...
notebooks/community/gapic/automl/showcase_automl_image_classification_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Import functions and modules specific to OpenFisca
from openfisca_france_indirect_taxation.examples.utils_example import graph_builder_bar_list
from openfisca_france_indirect_taxation.examples.dataframes_from_legislation.get_accises import get_accises_carburants
from openfisca_france_indirect_taxation.examples.dataframes_from_legislation.get_tva import get_tva_taux_ple...
openfisca_france_indirect_taxation/examples/notebooks/taux_implicite_ticpe.ipynb
openfisca/openfisca-france-indirect-taxation
agpl-3.0
Retrieve the legislation parameters and the prices
ticpe = ['ticpe_gazole', 'ticpe_super9598']
accise_diesel = get_accises_carburants(ticpe)
prix_ttc = ['diesel_ttc', 'super_95_ttc']
prix_carburants = get_prix_carburants(prix_ttc)
tva_taux_plein = get_tva_taux_plein()
openfisca_france_indirect_taxation/examples/notebooks/taux_implicite_ticpe.ipynb
openfisca/openfisca-france-indirect-taxation
agpl-3.0
Create a dataframe containing these parameters
df_taux_implicite = concat([accise_diesel, prix_carburants, tva_taux_plein], axis=1)
df_taux_implicite.rename(columns={'value': 'taux plein tva'}, inplace=True)
openfisca_france_indirect_taxation/examples/notebooks/taux_implicite_ticpe.ipynb
openfisca/openfisca-france-indirect-taxation
agpl-3.0
From these parameters, compute the implicit tax rates
df_taux_implicite['taux_implicite_diesel'] = (
    df_taux_implicite['accise ticpe gazole'] * (1 + df_taux_implicite['taux plein tva']) /
    (df_taux_implicite['prix diesel ttc'] -
     (df_taux_implicite['accise ticpe gazole'] * (1 + df_taux_implicite['taux plein tva'])))
)
df_taux_implicite['taux_implicite_sp95'...
openfisca_france_indirect_taxation/examples/notebooks/taux_implicite_ticpe.ipynb
openfisca/openfisca-france-indirect-taxation
agpl-3.0
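The implicit-rate formula used above, the excise grossed up by VAT divided by the pre-tax remainder of the price, can be checked on made-up numbers (figures purely illustrative, not real legislation values):

```python
def implicit_rate(excise, price_ttc, vat):
    """Implicit tax rate: excise*(1+vat) / (price_ttc - excise*(1+vat))."""
    tax = excise * (1 + vat)
    return tax / (price_ttc - tax)

# hypothetical figures: 0.60 excise, 1.50 price incl. tax, 20% VAT
r = implicit_rate(excise=0.60, price_ttc=1.50, vat=0.20)
assert abs(r - 0.72 / 0.78) < 1e-12
```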
Produce the plots
graph_builder_bar_list(df_taux_implicite['taux_implicite_diesel'], 1, 1)
graph_builder_bar_list(df_taux_implicite['taux_implicite_sp95'], 1, 1)
openfisca_france_indirect_taxation/examples/notebooks/taux_implicite_ticpe.ipynb
openfisca/openfisca-france-indirect-taxation
agpl-3.0
Closures: a closure closes over free variables from its enclosing environment.
def html_tag(tag):
    def wrap_text(msg):
        print('<{0}>{1}</{0}>'.format(tag, msg))
    return wrap_text

print_h1 = html_tag('h1')
print_h1('Test Headline')
print_h1('Another Headline')

print_p = html_tag('p')
print_p('Test Paragraph!')
ipynb/Closures and Decorators.ipynb
sripaladugu/sripaladugu.github.io
mit
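To make the captured variable visible, one can inspect the function's closure cells; this variant of html_tag returns the string instead of printing it so the result is easy to check:

```python
def html_tag(tag):
    def wrap_text(msg):
        return '<{0}>{1}</{0}>'.format(tag, msg)
    return wrap_text

print_h1 = html_tag('h1')

# the free variable `tag` lives on in the closure cell
assert print_h1.__closure__[0].cell_contents == 'h1'
assert print_h1('Test') == '<h1>Test</h1>'
```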
Decorators Decorators are a way to dynamically alter the functionality of your functions. So for example, if you wanted to log information when a function is run, you could use a decorator to add this functionality without modifying the source code of your original function.
def decorator_function(original_function):
    def wrapper_function():
        print("wrapper executed this before {}".format(original_function.__name__))
        return original_function()
    return wrapper_function

def display():
    print("display function ran!")

decorated_display = decorator_function(display)
de...
ipynb/Closures and Decorators.ipynb
sripaladugu/sripaladugu.github.io
mit
Some practical applications of decorators
# Let's say we want to keep track of how many times a specific function was run
# and what arguments were passed to that function
def my_logger(orig_func):
    import logging
    logging.basicConfig(filename='{}.log'.format(orig_func.__name__), level=logging.INFO)
    # Generates a log file in current directory with the name of...
ipynb/Closures and Decorators.ipynb
sripaladugu/sripaladugu.github.io
mit
Chaining of Decorators
@my_timer
@my_logger
def display_info(name, age):
    print('display_info ran with arguments ({}, {})'.format(name, age))

display_info('John', 25)
# This is equivalent to display_info = my_timer(my_logger(display_info))
# The above code will give us some unexpected results.
# Instead of printing "display_info ran in: ---...
ipynb/Closures and Decorators.ipynb
sripaladugu/sripaladugu.github.io
mit
Let's see if switching the order of decorators helps
@my_logger
@my_timer
def display_info(name, age):
    print('display_info ran with arguments ({}, {})'.format(name, age))

display_info('John', 25)
# This is equivalent to display_info = my_logger(my_timer(display_info))
# Now this creates wrapper.log instead of display_info.log like we expected.
# To understand why...
ipynb/Closures and Decorators.ipynb
sripaladugu/sripaladugu.github.io
mit
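The name clobbering described above is the standard motivation for functools.wraps. A minimal sketch with a simplified my_timer (a stand-in for the notebook's version, not identical to it):

```python
import functools
import time

def my_timer(orig_func):
    @functools.wraps(orig_func)  # copies __name__, __doc__, etc. onto the wrapper
    def wrapper(*args, **kwargs):
        t1 = time.time()
        result = orig_func(*args, **kwargs)
        print('{} ran in: {} sec'.format(orig_func.__name__, time.time() - t1))
        return result
    return wrapper

@my_timer
def display_info(name, age):
    print('display_info ran with arguments ({}, {})'.format(name, age))

# thanks to functools.wraps the wrapper keeps the original name,
# so stacked decorators (and their log files) see "display_info"
print(display_info.__name__)  # → display_info
```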
Decorators with Arguments
def prefix_decorator(prefix):
    def decorator_function(original_function):
        def wrapper_function(*args, **kwargs):
            print(prefix, "Executed before {}".format(original_function.__name__))
            result = original_function(*args, **kwargs)
            print(prefix, "Executed after {}".format(orig...
ipynb/Closures and Decorators.ipynb
sripaladugu/sripaladugu.github.io
mit
Verify this by running the following code cell.
print(env.observation_space) print(env.action_space)
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Run the following code cell to play blackjack with a random policy. (The code currently plays three games of blackjack; feel free to change that number, or to run the cell multiple times. The cell is meant to give you a feel for the output returned as the agent interacts with the environment.)
for i_episode in range(3):
    state = env.reset()
    while True:
        print(state)
        action = env.action_space.sample()
        state, reward, done, info = env.step(action)
        if done:
            print('End game! Reward: ', reward)
            print('You won :)\n') if reward > 0 else print('You lost :(...
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Run the following code cell to play blackjack with this policy. (The code currently plays three games of blackjack; feel free to change that number, or to run the cell multiple times. The cell is meant to familiarize you with the output of the generate_episode_from_limit function.)
for i in range(3):
    print(generate_episode_from_limit(env))
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Now you are ready to write your own implementation of MC prediction. You may choose to implement either first-visit or every-visit MC prediction; for the Blackjack environment the two techniques are equivalent. Your algorithm takes four arguments: - env: an instance of an OpenAI Gym environment. - num_episodes: the number of episodes generated through agent-environment interaction. - generate_episode: a function that returns an episode of interaction. - gamma: the discount rate, a value between 0 and 1 inclusive (default: 1). The algorithm returns the following output: - V: a dictionary where V[s] is the estimated value of state s. For example, if the code returns the following output:
{(4, 7, False): -0.38775510204081631,
 (18, 6, False): -0.58434296365330851,
 (13, 2, False): -0.43409090909090908,
 (6, 7, False): -0.3783783783783784, ...
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
then the estimated value of state (4, 7, False) is -0.38775510204081631. If you do not know how to use a defaultdict in Python, it is recommended to look at this source code.
from collections import defaultdict
import numpy as np
import sys

def mc_prediction_v(env, num_episodes, generate_episode, gamma=1.0):
    # initialize empty dictionary of lists
    returns = defaultdict(list)
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if...
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
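Independent of the notebook's (truncated) implementation, the core bookkeeping, turning one episode's rewards into discounted returns $G_t$, can be sketched as (the helper name is hypothetical):

```python
def discounted_returns(rewards, gamma=1.0):
    """G_t = r_{t+1} + gamma * G_{t+1}, computed backwards over one episode."""
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return out[::-1]

# with gamma = 0.5, a single final reward of 1 is halved at each earlier step
assert discounted_returns([0, 0, 1], gamma=0.5) == [0.25, 0.5, 1.0]
```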
Use the following cell to compute and plot the state-value function estimate. (The code used to plot the value function comes from this source, slightly modified.) To check whether your implementation is correct, compare the plot below with the corresponding plot in the solution notebook Monte_Carlo_Solution.ipynb.
from plot_utils import plot_blackjack_values

# obtain the value function
V = mc_prediction_v(env, 500000, generate_episode_from_limit)

# plot the value function
plot_blackjack_values(V)
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Now you are ready to write your own implementation of MC prediction. You may choose to implement either first-visit or every-visit MC prediction; for the Blackjack environment the two techniques are equivalent. Your algorithm takes four arguments: - env: an instance of an OpenAI Gym environment. - num_episodes: the number of episodes generated through agent-environment interaction. - generate_episode: a function that returns an episode of interaction. - gamma: the discount rate, a value between 0 and 1 inclusive (default: 1). The algorithm returns the following output: Q: a dictionary (of one-dimensional arrays) where Q[s][a] is the estimated action value for state s and action a.
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
    # initialize empty dictionaries of arrays
    returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
    N = defaultdict(lambda: np.zeros(env.action_space.n))
    Q = defaultdict(lambda: np.zeros(env.action_space.n))
    # loop over episo...
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Use the following cell to obtain the action-value function estimate $Q$. We also plot the corresponding state-value function. To check whether your implementation is correct, compare the plot below with the corresponding plot in the solution notebook Monte_Carlo_Solution.ipynb.
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)

# obtain the corresponding state-value function
V_to_plot = dict((k, (k[0] > 18) * (np.dot([0.8, 0.2], v)) + (k[0] <= 18) * (np.dot([0.2, 0.8], v))) \
                 for k, v in Q.items())

# plot the state-value function
plot_blackjack_value...
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Part 3: MC Control - GLIE. In this part, you will write your own implementation of GLIE MC control. Your algorithm takes four parameters: env: an instance of an OpenAI Gym environment. num_episodes: the number of episodes generated through agent-environment interaction. generate_episode: a function that returns an episode of interaction. gamma: the discount rate, a value between 0 and 1 inclusive (default: 1). The algorithm returns the following output: Q: a dictionary (of one-dimensional arrays) where Q[s][a] is the estimated action value for state s and action a. policy: a dictionary where policy[s] returns the action the agent chooses after observing state s. (Feel free to...
def mc_control_GLIE(env, num_episodes, gamma=1.0):
    nA = env.action_space.n
    # initialize empty dictionaries of arrays
    Q = defaultdict(lambda: np.zeros(nA))
    N = defaultdict(lambda: np.zeros(nA))
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i...
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
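An implementation of mc_control_GLIE needs an epsilon-greedy step over Q[s]; a minimal sketch of that piece (the helper name is hypothetical, not from the notebook):

```python
import numpy as np

def epsilon_greedy_probs(Q_s, epsilon, nA):
    """Action probabilities for an epsilon-greedy policy over one state's values."""
    probs = np.ones(nA) * epsilon / nA   # explore: epsilon/nA each
    probs[np.argmax(Q_s)] += 1 - epsilon  # exploit: extra mass on the greedy action
    return probs

p = epsilon_greedy_probs(np.array([0.1, 0.5]), epsilon=0.2, nA=2)
assert abs(p.sum() - 1.0) < 1e-12
assert abs(p[1] - 0.9) < 1e-12  # greedy action gets 1 - epsilon + epsilon/nA
```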
Use the following cell to obtain the estimated optimal policy and action-value function.
# obtain the estimated optimal policy and action-value function
policy_glie, Q_glie = mc_control_GLIE(env, 500000)
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
The true optimal policy $\pi_*$ can be found on page 82 of the textbook (it is also provided below). Compare your final estimate with the optimal policy: how close can they get? If you are not satisfied with the algorithm's performance, take the time to tweak the decay rate of $\epsilon$ and/or run the algorithm for more episodes to obtain better results. Part 4: MC Control - Constant-$\alpha$. In this part, you will write your own implementation of constant-$\alpha$ MC control. Your algorithm takes three parameters: env: an instance of an OpenAI Gym environment. num_episodes: the number of episodes generated through agent-environment interaction. generate_episode: a function that returns an episode of interaction. alpha: the step-size parameter for the update step. gamma: the...
def mc_control_alpha(env, num_episodes, alpha, gamma=1.0):
    nA = env.action_space.n
    # initialize empty dictionary of arrays
    Q = defaultdict(lambda: np.zeros(nA))
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 1000 == 0:
            pr...
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Use the following cell to obtain the estimated optimal policy and action-value function.
# obtain the estimated optimal policy and action-value function
policy_alpha, Q_alpha = mc_control_alpha(env, 500000, 0.008)
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Minimal example with random data
def aquire_audio_data():
    D, T = 4, 10000
    y = np.random.normal(size=(D, T))
    return y

y = aquire_audio_data()
Y = stft(y, **stft_options).transpose(2, 0, 1)

with tf.Session() as session:
    Y_tf = tf.placeholder(
        tf.complex128, shape=(None, None, None))
    Z_tf = wpe(Y_tf)
    Z = session.run(Z_tf,...
examples/WPE_Tensorflow_offline.ipynb
fgnt/nara_wpe
mit
STFT: for simplicity we calculate the STFT in NumPy and provide the result in the form of the TensorFlow feed dict.
Y = stft(y, **stft_options).transpose(2, 0, 1)
examples/WPE_Tensorflow_offline.ipynb
fgnt/nara_wpe
mit
Iterative WPE: a placeholder for Y is declared. The wpe function is fed with Y via the TensorFlow feed dict. Finally, an inverse STFT in NumPy is performed to obtain the dereverberated result in the time domain.
from nara_wpe.tf_wpe import get_power

with tf.Session() as session:
    Y_tf = tf.placeholder(tf.complex128, shape=(None, None, None))
    Z_tf = wpe(Y_tf, taps=taps, iterations=iterations)
    Z = session.run(Z_tf, {Y_tf: Y})

z = istft(Z.transpose(1, 2, 0), size=stft_options['size'], shift=stft_options['shift'])
IPytho...
examples/WPE_Tensorflow_offline.ipynb
fgnt/nara_wpe
mit
Power spectrum: before and after applying WPE
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=(20, 8))
im1 = ax1.imshow(20 * np.log10(np.abs(Y[:, 0, 200:400])), origin='lower')
ax1.set_xlabel('')
_ = ax1.set_title('reverberated')
im2 = ax2.imshow(20 * np.log10(np.abs(Z[:, 0, 200:400])), origin='lower')
_ = ax2.set_title('dereverberated')
cb = fig.colorbar(im1)
examples/WPE_Tensorflow_offline.ipynb
fgnt/nara_wpe
mit
The histogram A KDE may be thought of as an extension to the familiar histogram. The purpose of the KDE is to estimate an unknown probability density function $f(x)$ given data sampled from it. A natural first thought is to use a histogram – it’s well known, simple to understand and works reasonably well. To see how ...
x = np.linspace(np.min(data) - 1, np.max(data) + 1, num=2**10)
plt.hist(data, density=True, label='Histogram', edgecolor='#1f77b4', color='w')
plt.scatter(data, np.zeros_like(data), marker='|', c='r', label='Data', zorder=9)
plt.plot(x, distribution.pdf(x), label='True pdf', c='r', ls='--')
plt.legend(loc...
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Centering the histogram In an effort to reduce the arbitrary placement of the histogram bins, we center a box function $K$ on each data point $x_i$ and average those functions to obtain a probability density function. This is a (very simple) kernel density estimator. $$\hat{f}(x) = \frac{1}{N} \sum_{i=1}^N K \left( x-x...
from KDEpy import TreeKDE np.random.seed(123) data = [1, 2, 4, 8, 16] # Plot the points plt.figure(figsize=(14, 3)); plt.subplot(1, 3, 1) plt.title('Data points') plt.scatter(data, np.zeros_like(data), marker='|', c='r', label='Data') # Plot a kernel on each data point plt.subplot(1, 3, 2); plt.title('Data points wit...
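The averaging of box kernels described above can be sketched directly in NumPy: place a box of half-width 1 (height 1/2, so it integrates to one) on each data point and average. The half-width of 1 is fixed here only for illustration:

```python
import numpy as np

def box_kde(x, data, h=1.0):
    """Average a box kernel of half-width h centred on each data point."""
    x = np.asarray(x, dtype=float)[:, None]      # (grid, 1)
    data = np.asarray(data, dtype=float)[None, :]  # (1, N)
    kernel = (np.abs(x - data) <= h) / (2 * h)   # each box integrates to 1
    return kernel.mean(axis=1)

data = [1, 2, 4, 8, 16]
grid = np.linspace(0, 20, 201)
f_hat = box_kde(grid, data)
area = f_hat.sum() * (grid[1] - grid[0])  # ≈ 1: a valid density
```

This is the same estimate `TreeKDE(kernel='box')` produces, computed the slow, obvious way.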
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Now let us graphically review our progress so far: We have a collection of data points ${x_1, x_2, \dots, x_N}$. These data points are drawn from a true probability density function $f(x)$, unknown to us. A very simple way to construct an estimate $\hat{f}(x)$ is to use a histogram. A KDE $\hat{f}(x)$ with a box kerne...
np.random.seed(123) data = distribution.rvs(32) # Use a box function with the FFTKDE to obtain a density estimate x, y = FFTKDE(kernel='box', bw=0.7).fit(data).evaluate() plt.plot(x, y, zorder=10, color='#ff7f0e', label='KDE with box kernel') plt.scatter(data, np.zeros_like(data), marker='|', c='r', labe...
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Choosing a smooth kernel The true function $f(x)$ is continuous, while our estimate $\hat{f}(x)$ is not. To alleviate the problem of discontinuity, we substitute the box function used above for a gaussian function (a normal distribution). $$K = \text{box function} \quad \to \quad K = \text{gaussian function}$$ The gau...
# Use the FFTKDE with a smooth Gaussian x, y = FFTKDE(kernel='gaussian', bw=0.7).fit(data).evaluate() plt.plot(x, y, zorder=10, label='KDE with box kernel') plt.scatter(data, np.zeros_like(data), marker='|', c='r', label='Data') plt.plot(x, distribution.pdf(x), label='True pdf', c='r', ls='--') plt.legend(loc='best');
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Selecting a suitable bandwidth To control the bandwidth $h$ of the kernel, we'll add a factor $h >0$ to the equation above. $$\hat{f}(x) = \frac{1}{N} \sum_{i=1}^N K \left( x-x_i \right) \quad \to \quad \hat{f}(x) = \frac{1}{N h} \sum_{i=1}^N K \left( \frac{x-x_i}{h} \right)$$ When $h \to 0$, the estimate becomes jagg...
for bw in [0.25, 'silverman']: x, y = FFTKDE(kernel='gaussian', bw=bw).fit(data).evaluate() plt.plot(x, y, label='bw="{}"'.format(bw)) plt.scatter(data, np.zeros_like(data), marker='|', c='r', label='Data') plt.plot(x, distribution.pdf(x), label='True pdf', c='r', ls='--') plt.legend(loc='best');
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Weighted data As a final modification to the problem, let's weight each individual data point $x_i$ with a weight $w_i \geq 0$. The contribution to the sum from the data point $x_i$ is controlled by $w_i$. We divide by $\sum_{i=1}^N w_i$ to ensure that $\int \hat{f}(x) \, dx = 1$. Here is how the equation is transformed...
plt.figure(figsize=(14, 3)); plt.subplot(1, 2, 1) plt.scatter(data, np.zeros_like(data), marker='o', c='None', edgecolor='r', alpha=0.75) # Unweighted KDE x, y = FFTKDE().fit(data)() plt.plot(x, y) plt.subplot(1, 2, 2); np.random.seed(123) weights = np.exp(data) * 25 plt.scatter(data, np.zeros_like(data)...
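The weighted estimator differs from the unweighted one only in replacing the uniform average by a weighted average. A minimal NumPy sketch with a Gaussian kernel (toy data and weights chosen for illustration, not the FFTKDE internals):

```python
import numpy as np

def weighted_kde(x, data, weights, h=1.0):
    """Weighted KDE: sum of w_i * K((x - x_i) / h), normalized by h * sum(w_i)."""
    x = np.asarray(x, dtype=float)[:, None]
    data = np.asarray(data, dtype=float)[None, :]
    weights = np.asarray(weights, dtype=float)
    K = np.exp(-0.5 * ((x - data) / h) ** 2) / np.sqrt(2 * np.pi)
    return (K * weights).sum(axis=1) / (h * weights.sum())

grid = np.linspace(-10, 30, 401)
f_hat = weighted_kde(grid, data=[1, 2, 4, 8, 16], weights=[1, 1, 1, 1, 5])
area = f_hat.sum() * (grid[1] - grid[0])  # ≈ 1.0 by construction
```

Setting all weights equal recovers the unweighted estimator exactly.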
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Speed There is much more to say about kernel density estimation, but let us conclude with a speed test. Note (Speed of FFTKDE). Millions of data points pose no trouble for the FFTKDE implementation. Computational time scales linearily with the number of points in practical settings. The theoretical runtime is $\mathca...
import time import statistics import itertools import operator def time_function(function, n=10, t=25): times = [] for _ in range(t): data = np.random.randn(n) * 10 weights = np.random.randn(n) ** 2 start_time = time.perf_counter() function(data, weights) times.append(ti...
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Let's pick a descriptor. Allowed types are: cnt: atom counts bob: bag of bonds soap: smooth overlap of atomic positions; choose from: soap.sum - all atoms summed together soap.mean - mean of all atom SOAP soap.centre - computed at the central point mbtr: many-body tensor representation cm: Coulomb matrix
# TYPE is the descriptor type TYPE = "cm" #show descriptor details print("\nDescriptor details") desc = open("./data/descriptor."+TYPE.split('.')[0]+".txt","r").readlines() for l in desc: print(l.strip()) print(" ")
NeuralNetwork - TotalEnergy.ipynb
fullmetalfelix/ML-CSC-tutorial
gpl-3.0
and load the databases with the descriptors (input) and the reference total energies (output). Databases are quite big, so we can decide how many samples to use for training.
# load input/output data trainIn = load_npz("./data/energy.input."+TYPE+".npz").toarray() trainOut = numpy.load("./data/energy.output.npy") trainIn = trainIn.astype(dtype=numpy.float64, casting='safe') # decide how many samples to take from the database samples = min(trainIn.shape[0], 9000) vsamples = min(trainIn.sha...
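The idea of holding back some samples for validation can be sketched like this (hypothetical array sizes, not the actual descriptor data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))   # hypothetical descriptor matrix
y = rng.normal(size=1000)         # hypothetical target energies

n_train = 800
idx = rng.permutation(len(X))     # shuffle before splitting
train_idx, valid_idx = idx[:n_train], idx[n_train:]
X_train, y_train = X[train_idx], y[train_idx]
X_valid, y_valid = X[valid_idx], y[valid_idx]
print(X_train.shape, X_valid.shape)  # → (800, 16) (200, 16)
```

The validation set is never shown to the optimizer, so the error on it is an honest estimate of generalization.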
NeuralNetwork - TotalEnergy.ipynb
fullmetalfelix/ML-CSC-tutorial
gpl-3.0
Next we set up a multilayer perceptron of suitable size. Our package of choice is scikit-learn, but more efficient ones are available.<br> Check the scikit-learn <a href="http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html">documentation</a> for a list of parameters.
# setup the neural network nn = MLPRegressor(hidden_layer_sizes=(1000,200,50,50), activation='tanh', solver='lbfgs', alpha=0.01, learning_rate='adaptive')
NeuralNetwork - TotalEnergy.ipynb
fullmetalfelix/ML-CSC-tutorial
gpl-3.0
Training Now comes the tough part! The idea of training is to evaluate the ANN with the training inputs and measure its error (since we know the correct outputs). It is then possible to compute the derivative (gradient) of the error w.r.t. each parameter (connections and biases). By shifting the parameters in the oppos...
# use this to change some parameters during training if the NN gets stuck in a bad spot nn.set_params(solver='lbfgs') nn.fit(trainIn, trainOut);
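The loop the text describes (evaluate, measure the error, step the parameters opposite the gradient) can be sketched for a single linear neuron in NumPy. This is a toy illustration of gradient descent, not the MLPRegressor internals:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                       # noiseless toy targets

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    err = X @ w - y                  # evaluate and measure the error
    grad = 2 * X.T @ err / len(y)    # gradient of the mean squared error w.r.t. w
    w -= lr * grad                   # shift parameters against the gradient

print(np.round(w, 3))  # → [ 1.  -2.   0.5]
```

A real ANN applies the same idea, with the gradients for every layer obtained by backpropagation.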
NeuralNetwork - TotalEnergy.ipynb
fullmetalfelix/ML-CSC-tutorial
gpl-3.0
Check the ANN quality with a regression plot, showing the mismatch between the exact and NN predicted outputs for the validation set.
# evaluate the training and validation error trainMLOut = nn.predict(trainIn) validMLOut = nn.predict(validIn) print ("Mean Abs Error (training) : ", (numpy.abs(trainMLOut-trainOut)).mean()) print ("Mean Abs Error (validation): ", (numpy.abs(validMLOut-validOut)).mean()) plt.plot(validOut,validMLOut,'o') plt.plot([m...
NeuralNetwork - TotalEnergy.ipynb
fullmetalfelix/ML-CSC-tutorial
gpl-3.0
Exercises 1. Compare descriptors Keeping the size of the NN constant, test the accuracy of different descriptors with the same NN size, and find the best one for these systems.
# DIY code here...
NeuralNetwork - TotalEnergy.ipynb
fullmetalfelix/ML-CSC-tutorial
gpl-3.0
These are the imports from the Keras API. Note the long format which can hopefully be shortened in the future to e.g. from tf.keras.models import Model.
from tensorflow.python.keras.models import Model, Sequential from tensorflow.python.keras.layers import Dense, Flatten, Dropout from tensorflow.python.keras.applications import VGG16 from tensorflow.python.keras.applications.vgg16 import preprocess_input, decode_predictions from tensorflow.python.keras.preprocessing.im...
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Function for calculating the predicted classes of the entire test-set and calling the above function to plot a few examples of mis-classified images.
def example_errors(): # The Keras data-generator for the test-set must be reset # before processing. This is because the generator will loop # infinitely and keep an internal index into the dataset. # So it might start in the middle of the test-set if we do # not reset it first. This makes it imposs...
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Input Pipeline The Keras API has its own way of creating the input pipeline for training a model using files. First we need to know the shape of the tensors expected as input by the pre-trained VGG16 model. In this case it is images of shape 224 x 224 x 3.
input_shape = model.layers[0].output_shape[1:3] input_shape
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Get the number of classes for the dataset.
num_classes = generator_train.num_classes num_classes
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Training the new model is just a single function call in the Keras API. This takes about 6-7 minutes on a GTX 1070 GPU.
history = new_model.fit_generator(generator=generator_train, epochs=epochs, steps_per_epoch=steps_per_epoch, class_weight=class_weight, validation_data=generator_test, ...
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
After training we can also evaluate the new model's performance on the test-set using a single function call in the Keras API.
result = new_model.evaluate_generator(generator_test, steps=steps_test) print("Test-set classification accuracy: {0:.2%}".format(result[1]))
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
The training can then be continued so as to fine-tune the VGG16 model along with the new classifier.
history = new_model.fit_generator(generator=generator_train, epochs=epochs, steps_per_epoch=steps_per_epoch, class_weight=class_weight, validation_data=generator_test, ...
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
We can then plot the loss-values and classification accuracy from the training. Depending on the dataset, the original model, the new classifier, and hyper-parameters such as the learning-rate, this may improve the classification accuracies on both training- and test-set, or it may improve on the training-set but worse...
plot_training_history(history) result = new_model.evaluate_generator(generator_test, steps=steps_test) print("Test-set classification accuracy: {0:.2%}".format(result[1]))
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
The first distinction to make is between Figure, which is the outer frame of a canvas, and the rectangular XY grids or coordinate systems we place within the figure. XY grid objects are known as "axes" (plural) and most of the attributes we associate with plots are actually connected to axes. What may be confusing to ...
fig = plt.figure("main", figsize=(5,5)) # name or int id optional, as is figsize ax1 = fig.add_axes([0.1, 0.5, 0.8, 0.4], xticklabels=[], ylim=(-1.2, 1.2)) # no x axis tick marks ax2 = fig.add_axes([0.1, 0.1, 0.8, 0.4], ylim=(-1.2, 1.2)) x = np.linspace(0, 10) ax1.plot(np.sin(x)) ...
SUBPLOTS_PYT_DS_SAISOFT.ipynb
4dsolutions/Python5
mit
Here's subplot in action, creating axes automatically based on how many rows and columns we specify, followed by a sequence number i, 1 through however many (in this case six). Notice how plt is keeping track of which subplot axes are current, and we talk to said axes through plt.
for i in range(1, 7): plt.subplot(2, 3, i) plt.text(0.5, 0.5, str((2, 3, i)), fontsize=18, ha='center') plt.xticks([]) # get rid of tickmarks on x axis plt.yticks([]) # get rid of tickmarks on y axis
SUBPLOTS_PYT_DS_SAISOFT.ipynb
4dsolutions/Python5
mit
Here we're talking to the axes objects more directly by calling "get current axes". Somewhat confusingly, the instances returned have an "axes" attribute which points to the same instance, a wrinkle I explore below. Note the slight difference between the last two lines.
for i in range(1, 7): plt.subplot(2, 3, i) plt.text(0.5, 0.5, str((2, 3, i)), fontsize=18, ha='center') # synonymous. gca means 'get current axes' plt.gca().get_yaxis().set_visible(False) plt.gca().axes.get_xaxis().set_visible(False) # axes optional, self referential plt.gcf().subplots...
SUBPLOTS_PYT_DS_SAISOFT.ipynb
4dsolutions/Python5
mit
LAB: You might need to install pillow to get the code cells to work. Pillow is a Python 3 fork of PIL, the Python Imaging Library, still imported using that name. conda install pillow from the most compatible repo for whatever Anaconda environment you're using would be one way to get it. Using pip would be another. ...
from PIL import Image # Image is a module! plt.subplot(1, 2, 1) plt.xticks([]) # get rid of tickmarks on x axis plt.yticks([]) # get rid of tickmarks on y axis im = Image.open("face.png") plt.imshow(im) plt.subplot(1, 2, 2) plt.xticks([]) # get rid of tickmarks on x axis plt.yticks([]) # get rid of tickmarks on y axi...
SUBPLOTS_PYT_DS_SAISOFT.ipynb
4dsolutions/Python5
mit
The script below, borrowed from the matplotlib gallery, shows another common idiom for getting a figure and axes pair. Call plt.subplots with no arguments. Then talk to ax directly, for the most part. We're also rotating the x tickmark labels by 45 degrees. Fancy! Uncommenting the use('classic') command up top makes ...
# plt.style.use('classic') vegetables = ["cucumber", "tomato", "lettuce", "asparagus", "potato", "wheat", "barley"] farmers = ["Farmer Joe", "Upland Bros.", "Smith Gardening", "Agrifun", "Organiculture", "BioGoods Ltd.", "Cornylee Corp."] harvest = np.array([[0.8, 2.4, 2.5, 3.9, 0.0, 4.0, 0.0]...
SUBPLOTS_PYT_DS_SAISOFT.ipynb
4dsolutions/Python5
mit
Task I: Build the Model Now, using Keras we will build a multivariate regression model. Remember, these kinds of models can be represented as artificial neural networks, which is why we can implement them using Keras. <img src="resources/linear-regression-net.png" alt="Linear regression as an artificial neural network" wid...
# Import what we need from keras.layers import (Input, Dense) from keras.models import Model def simple_model(nb_inputs, nb_outputs): """Return a Keras Model. """ model = None return model ### Do *not* modify the following line ### # Test and see that the model has been created correctly tests.test_...
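Whatever framework is used, the model in the figure computes ŷ = Xw + b. As a point of reference (not a solution to the Keras task above), the same multivariate linear fit can be sketched in closed form with NumPy on toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))           # 100 samples, 4 input features
w_true, b_true = np.array([2.0, -1.0, 0.5, 3.0]), 1.5
y = X @ w_true + b_true                 # noiseless toy targets

Xb = np.hstack([X, np.ones((100, 1))])  # absorb the bias into the weights
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(np.round(coef, 3))  # → [ 2.  -1.   0.5  3.   1.5]
```

A single Dense layer trained with gradient descent converges to the same weights; the closed form just gets there in one step.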
1-regression/2-multivariate-linear-regression.ipynb
torgebo/deep_learning_workshop
mit
Selecting Hyperparameters As opposed to standard model parameters, such as the weights in a linear model, hyperparameters are user-specified parameters not learned by the training process, i.e. they are specified a priori. In the following section we will look at how we can define and evaluate a few different hyperparam...
"""Do not modify the following code. It is to be used as a refence for future tasks. """ # Create a simple model model = simple_model(nb_features, nb_outputs) # # Define hyperparameters # lr = 0.2 nb_epochs = 10 batch_size = 10 # Fraction of the training data held as a validation set validation_split = 0.1 # Define...
1-regression/2-multivariate-linear-regression.ipynb
torgebo/deep_learning_workshop
mit
Now, with this model, let's try to optimize the regularization factor $\lambda$. This adjusts the strength of the regularizer. <div class="alert alert-success"> **Task**: Alter the regularization factor and assess the performance over 100 epochs using a batch size of 128. At a minimum, test out the following regulariza...
# Regularization factor (lambda) reg_factor = 0.005 # Create a simple model model = None # # Define hyperparameters # lr = 0.0005 nb_epochs = 100 batch_size = 128 # Fraction of the training data held as a validation set validation_split = 0.1 # Define optimiser # Compile model, use mean squar...
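The effect of $\lambda$ is easiest to see in the linear case, where L2 regularization turns least squares into ridge regression: larger $\lambda$ shrinks the weights toward zero. A minimal NumPy sketch on toy data (not the Keras regularizer itself):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([4.0, -3.0, 2.0]) + rng.normal(scale=0.1, size=50)

def ridge(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam * I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

for lam in [0.0, 1.0, 100.0]:
    w = ridge(X, y, lam)
    print(lam, round(float(np.linalg.norm(w)), 3))  # the norm shrinks as lam grows
```

Too small a $\lambda$ leaves overfitting unchecked; too large a $\lambda$ shrinks the weights so much that the model underfits.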
1-regression/2-multivariate-linear-regression.ipynb
torgebo/deep_learning_workshop
mit
Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog *...
%matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 4 sample_id = 420 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
image-classification/dlnd_image_classification.ipynb
mikelseverson/Udacity-Deep_Learning-Nanodegree
mit
Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data """ # TODO: Implement Function mins = np.min(x, axis=0) maxs = np.max(x, axis=0) rng = maxs - mins ...
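For 8-bit image data the normalization can be as simple as dividing by 255; the general min-max form can be sketched as follows (toy array for illustration):

```python
import numpy as np

def min_max_normalize(x):
    """Scale values into [0, 1] using the global minimum and maximum."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

images = np.array([[0, 128, 255], [64, 192, 32]])
out = min_max_normalize(images)
print(out.min(), out.max())  # → 0.0 1.0
```

Note the output has the same shape as the input, as the task requires, and only the scale changes.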
image-classification/dlnd_image_classification.ipynb
mikelseverson/Udacity-Deep_Learning-Nanodegree
mit