# ADMM Optimizer

## Introduction

The ADMM Optimizer can solve classes of mixed-binary constrained optimization problems, hereafter (MBCO), which often appear in logistics, finance, and operations research. In particular, the ADMM Optimizer here designed can tackle the following optimization problem $(P)$:

$$\min_{x \in \mathcal{X}, u \in \mathcal{U} \subseteq \mathbb{R}^l} \quad q(x) + \varphi(u),$$

subject to the constraints:

$$\mathrm{s.t.:~} \quad Gx = b, \quad g(x) \leq 0, \quad \ell(x, u) \leq 0,$$

with the corresponding functional assumptions:

1. Function $q: \mathbb{R}^n \to \mathbb{R}$ is quadratic, i.e., $q(x) = x^{\intercal} Q x + a^{\intercal} x$ for a given symmetric square matrix $Q \in \mathbb{R}^{n \times n}, Q = Q^{\intercal}$, and vector $a \in \mathbb{R}^n$;
2. The set $\mathcal{X} = \{0,1\}^n = \{x \in \mathbb{R}^n : x_{(i)}(1-x_{(i)}) = 0,\ \forall i\}$ enforces the binary constraints;
3. Matrix $G \in \mathbb{R}^{n' \times n}$, vector $b \in \mathbb{R}^{n'}$, and function $g: \mathbb{R}^n \to \mathbb{R}$ is convex;
4. Function $\varphi: \mathbb{R}^l \to \mathbb{R}$ is convex and $\mathcal{U}$ is a convex set;
5. Function $\ell: \mathbb{R}^n \times \mathbb{R}^l \to \mathbb{R}$ is *jointly* convex in $x, u$.

In order to solve MBCO problems, [1] proposed heuristics for $(P)$ based on the Alternating Direction Method of Multipliers (ADMM) [2]. ADMM is an operator-splitting algorithm with a long history in convex optimization, and it is known to have residual, objective, and dual-variable convergence properties, provided that the convexity assumptions hold.

The method of [1] (referred to as 3-ADMM-H) leverages the ADMM operator-splitting procedure to devise a decomposition for certain classes of MBCOs into:

- a QUBO subproblem, to be solved on the quantum device via variational algorithms, such as VQE or QAOA;
- a continuous convex constrained subproblem, which can be efficiently solved with classical optimization solvers.

The algorithm 3-ADMM-H works as follows:

0. Initialization phase (set the parameters and the QUBO and convex solvers);
1. For each ADMM iteration ($k = 1, 2, \ldots$) until termination:
   - solve a properly defined QUBO subproblem (with a classical or quantum solver);
   - solve properly defined convex problems (with a classical solver);
   - update the dual variables.
2. Return optimizers and cost.

A comprehensive discussion of the conditions for convergence, feasibility, and optimality of the algorithm can be found in [1]. A variant with 2 ADMM blocks, namely a QUBO subproblem and a continuous convex constrained subproblem, is also introduced in [1].

## References

[1] [C. Gambella and A. Simonetto, *Multi-block ADMM heuristics for mixed-binary optimization, on classical and quantum computers,* arXiv preprint arXiv:2001.02069 (2020).](https://arxiv.org/abs/2001.02069)

[2] [S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, *Distributed optimization and statistical learning via the alternating direction method of multipliers,* Foundations and Trends in Machine Learning, 3, 1–122 (2011).](https://web.stanford.edu/~boyd/papers/pdf/admm_distr_stats.pdf)

## Initialization

First of all we load all the packages that we need.
import time
from typing import List, Optional, Any

import numpy as np
import matplotlib.pyplot as plt

from docplex.mp.model import Model

from qiskit import BasicAer
from qiskit.aqua.algorithms import QAOA, NumPyMinimumEigensolver
from qiskit.optimization.algorithms import CobylaOptimizer, MinimumEigenOptimizer
from qiskit.optimization.problems import QuadraticProgram
from qiskit.optimization.algorithms.admm_optimizer import ADMMParameters, ADMMOptimizer

# If CPLEX is installed, you can uncomment this line to import the CplexOptimizer.
# CPLEX can be used in this tutorial to solve the convex continuous problem,
# but also as a reference to solve the QUBO, or even the full problem.
#
# from qiskit.optimization.algorithms import CplexOptimizer
Apache-2.0
tutorials/optimization/5_admm_optimizer.ipynb
prasad-kumkar/qiskit-tutorials
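Before wiring up actual solvers, the alternating loop described in the introduction can be illustrated schematically. The sketch below is a self-contained toy instance, not the Qiskit implementation: it minimizes $(u-2)^2$ over a binary $x$ and a continuous $u$ coupled by $x = u$, with hypothetical stand-in functions `solve_qubo` (brute force over $\{0,1\}$) and `solve_convex` (closed form) replacing the real subproblem solvers.

```python
import numpy as np

# Toy problem: min over x in {0,1}, u in R of (u - 2)^2  s.t.  x = u.
# Optimal solution: x = 1, u = 1.

def solve_qubo(u, lam, rho):
    # Binary subproblem: brute-force argmin over x in {0, 1} of the
    # augmented-Lagrangian term (x - u + lam/rho)^2.
    candidates = np.array([0.0, 1.0])
    return candidates[np.argmin((candidates - u + lam / rho) ** 2)]

def solve_convex(x, lam, rho):
    # Continuous subproblem: closed-form minimizer of
    # (u - 2)^2 + (rho/2) * (x - u + lam/rho)^2.
    return (4.0 + rho * x + lam) / (2.0 + rho)

rho, lam, u = 1.0, 0.0, 0.0
for k in range(50):
    x = solve_qubo(u, lam, rho)    # step 1: QUBO subproblem
    u = solve_convex(x, lam, rho)  # step 2: convex subproblem
    lam = lam + rho * (x - u)      # step 3: dual update

print(x, u)  # converges to x = 1, u close to 1
```

The three lines inside the loop mirror the 3-ADMM-H iteration: a binary subproblem, a convex subproblem, and a dual update. In the rest of the tutorial the first step is delegated to a QUBO optimizer and the second to a convex optimizer.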
We first initialize all the algorithms we plan to use later in this tutorial.

To solve the QUBO problems we can choose between

- `MinimumEigenOptimizer` using different `MinimumEigensolver`s, such as `VQE`, `QAOA` or `NumPyMinimumEigensolver` (classical)
- `GroverOptimizer`
- `CplexOptimizer` (classical, if CPLEX is installed)

and to solve the convex continuous problems we can choose between the following classical solvers:

- `CplexOptimizer` (if CPLEX is installed)
- `CobylaOptimizer`

In case CPLEX is not available, the `CobylaOptimizer` (for convex continuous problems) and the `MinimumEigenOptimizer` using the `NumPyMinimumEigensolver` (for QUBOs) can be used as classical alternatives to CPLEX for testing, validation, and benchmarking.
# define COBYLA optimizer to handle convex continuous problems.
cobyla = CobylaOptimizer()

# define QAOA via the minimum eigen optimizer
qaoa = MinimumEigenOptimizer(QAOA(quantum_instance=BasicAer.get_backend('statevector_simulator')))

# exact QUBO solver as classical benchmark
exact = MinimumEigenOptimizer(NumPyMinimumEigensolver())  # to solve QUBOs

# in case CPLEX is installed it can also be used for the convex problems, the QUBO,
# or as a benchmark for the full problem.
#
# cplex = CplexOptimizer()
## Example

We test the 3-ADMM-H algorithm on a simple Mixed-Binary Quadratic Problem with equality and inequality constraints (Example 6 reported in [1]). We first construct a docplex problem and then load it into a `QuadraticProgram`.
# construct model using docplex
mdl = Model('ex6')

v = mdl.binary_var(name='v')
w = mdl.binary_var(name='w')
t = mdl.binary_var(name='t')
u = mdl.continuous_var(name='u')

mdl.minimize(v + w + t + 5 * (u - 2)**2)
mdl.add_constraint(v + 2 * w + t + u <= 3, "cons1")
mdl.add_constraint(v + w + t >= 1, "cons2")
mdl.add_constraint(v + w == 1, "cons3")

# load quadratic program from docplex model
qp = QuadraticProgram()
qp.from_docplex(mdl)
print(qp.export_as_lp_string())
\ This file has been generated by DOcplex
\ ENCODING=ISO-8859-1
\Problem name: ex6

Minimize
 obj: v + w + t - 20 u + [ 10 u^2 ]/2 + 20
Subject To
 cons1: v + 2 w + t + u <= 3
 cons2: v + w + t >= 1
 cons3: v + w = 1

Bounds
 0 <= v <= 1
 0 <= w <= 1
 0 <= t <= 1

Binaries
 v w t
End
## Classical Solution

3-ADMM-H needs a QUBO optimizer to solve the QUBO subproblem, and a continuous optimizer to solve the continuous convex constrained subproblem. We first solve the problem classically: we use the `MinimumEigenOptimizer` with the `NumPyMinimumEigensolver` as a classical and exact QUBO solver, and we use the `CobylaOptimizer` as a continuous convex solver. 3-ADMM-H supports any other suitable solver available in Qiskit. For instance, VQE, QAOA, and `GroverOptimizer` can be invoked as quantum solvers, as demonstrated later. If CPLEX is installed, the `CplexOptimizer` can also be used both as a QUBO and as a convex solver.

### Parameters

The 3-ADMM-H parameters are wrapped in the class `ADMMParameters`. Customized parameter values can be set as arguments of the class. In this example, the parameters $\rho$ and $\beta$ are initialized to $1001$ and $1000$, respectively. The penalization `factor_c` of the equality constraints $Gx = b$ is set to $900$. The tolerance `tol` for primal residual convergence is set to `1.e-6`. In this case, the 3-block implementation is guaranteed to converge by Theorem 4 of [1], because the inequality constraint with the continuous variable is always active. The 2-block implementation can be run by setting `three_block=False`, and in practice converges to a feasible but not optimal solution.
admm_params = ADMMParameters(
    rho_initial=1001,
    beta=1000,
    factor_c=900,
    max_iter=100,
    three_block=True,
    tol=1.e-6
)
## Calling the 3-ADMM-H algorithm

To invoke the 3-ADMM-H algorithm, an instance of the `ADMMOptimizer` class needs to be created. Its constructor takes the ADMM-specific parameters and the subproblem optimizers separately. The solution returned is an instance of the `OptimizationResult` class.
# define QUBO optimizer
qubo_optimizer = exact
# qubo_optimizer = cplex  # uncomment to use CPLEX instead

# define classical optimizer
convex_optimizer = cobyla
# convex_optimizer = cplex  # uncomment to use CPLEX instead

# initialize ADMM with classical QUBO and convex optimizer
admm = ADMMOptimizer(params=admm_params,
                     qubo_optimizer=qubo_optimizer,
                     continuous_optimizer=convex_optimizer)

# run ADMM to solve problem
result = admm.solve(qp)
## Classical Solver Result

The 3-ADMM-H solution can then be printed and visualized. The `x` attribute of the solution contains, respectively, the values of the binary decision variables and the values of the continuous decision variables. The `fval` is the objective value of the solution.
print("x={}".format(result.x))
print("fval={:.2f}".format(result.fval))
x=[0.0, 1.0, 0.0, 1.0000000000000002]
fval=6.00
Solution statistics can be accessed in the `state` field and visualized. Here we display the convergence of 3-ADMM-H in terms of primal residuals.
plt.plot(result.state.residuals)
plt.xlabel("Iterations")
plt.ylabel("Residuals")
plt.show()
## Quantum Solution

We now solve the same optimization problem with QAOA as the QUBO optimizer, running on a simulated quantum device. First, one needs to select the classical optimizer of the QAOA eigensolver. Then, the simulation backend is set. Finally, the eigensolver is wrapped into the `MinimumEigenOptimizer` class. A new instance of `ADMMOptimizer` is populated with QAOA as the QUBO optimizer.
# define QUBO optimizer
qubo_optimizer = qaoa

# define classical optimizer
convex_optimizer = cobyla
# convex_optimizer = cplex  # uncomment to use CPLEX instead

# initialize ADMM with quantum QUBO optimizer and classical convex optimizer
admm_q = ADMMOptimizer(params=admm_params,
                       qubo_optimizer=qubo_optimizer,
                       continuous_optimizer=convex_optimizer)

# run ADMM to solve problem
result_q = admm_q.solve(qp)
## Quantum Solver Results

Here we present the results obtained from the quantum solver. As in the example above, `x` stands for the solution and `fval` for the objective value.
print("x={}".format(result_q.x))
print("fval={:.2f}".format(result_q.fval))

plt.clf()
plt.plot(result_q.state.residuals)
plt.xlabel("Iterations")
plt.ylabel("Residuals")
plt.show()

import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
CMAES parallel run for 50 trials (hard barrier)
import glob
import pickle

folder = 'result_cmaes_50tr_30n'
config = pickle.load(open(folder + "/config.p", "rb"))
exps = pickle.load(open(folder + "/exps.p", "rb"))
res = pickle.load(open(list(glob.glob(folder + '/final_res*.p'))[0], 'rb'))
plot_results(res, config['opt'])
MIT
notebooks/Plot_results-parallel_run-cmaes.ipynb
hannanabdul55/seldonian-fairness
CMAES run for 50 trials with stratification
folder = 'result_cmaes_50tr_30n_stratify'
config = pickle.load(open(folder + "/config.p", "rb"))
exps = pickle.load(open(folder + "/exps.p", "rb"))
res = pickle.load(open(list(glob.glob(folder + '/final_res*.p'))[0], 'rb'))
plot_results(res, config['opt'])
# 4 Setting the initial SoC

Setting the initial SoC for your pack is performed with an argument passed to the solve algorithm. Currently the same value is applied to each battery, but in future it will be possible to vary the SoC across the pack.
import liionpack as lp
import pybamm
import numpy as np
import matplotlib.pyplot as plt
c:\users\tom\code\pybamm\pybamm\expression_tree\functions.py:204: RuntimeWarning: invalid value encountered in sign
  return self.function(*evaluated_children)
MIT
examples/notebooks/04 Initial SoC.ipynb
brosaplanella/liionpack
Let's set up the simplest pack possible, with 1 battery and very low busbar resistance, to compare to a pure PyBaMM simulation.
Rsmall = 1e-6
netlist = lp.setup_circuit(Np=1, Ns=1, Rb=Rsmall, Rc=Rsmall, Ri=5e-2, V=4.0, I=1.0)

# Heat transfer coefficients
htc = np.ones(1) * 10

# PyBaMM parameters
chemistry = pybamm.parameter_sets.Chen2020
parameter_values = pybamm.ParameterValues(chemistry=chemistry)

# Cycling experiment
experiment = pybamm.Experiment(
    [
        (
            "Discharge at 1 A for 1000 s or until 3.3 V",
            "Rest for 1000 s",
            "Charge at 1 A for 1000 s or until 4.0 V",
            "Rest for 1000 s",
        )
    ] * 3,
    period="10 s"
)

SoC = 0.5

# Solve pack
output = lp.solve(netlist=netlist,
                  parameter_values=parameter_values,
                  experiment=experiment,
                  htc=htc,
                  initial_soc=SoC)
c:\users\tom\code\pybamm\pybamm\expression_tree\functions.py:204: RuntimeWarning: invalid value encountered in sign
  return self.function(*evaluated_children)
Solving Pack: 100%|███████████████████████████████████████████████████████████████| 1200/1200 [00:06<00:00, 191.77it/s]
Let's compare to the PyBaMM simulation
parameter_values = pybamm.ParameterValues(chemistry=chemistry)
parameter_values.update({"Total heat transfer coefficient [W.m-2.K-1]": 10.0})
sim = lp.create_simulation(parameter_values, experiment, make_inputs=False)
sol = sim.solve(initial_soc=SoC)

def compare(sol, output):
    # Get pack level results
    time = sol["Time [s]"].entries
    v_pack = output["Pack terminal voltage [V]"]
    i_pack = output["Pack current [A]"]
    v_batt = sol["Terminal voltage [V]"].entries
    i_batt = sol["Current [A]"].entries

    # Plot pack voltage and current
    _, (axl, axr) = plt.subplots(1, 2, tight_layout=True, figsize=(15, 10),
                                 sharex=True, sharey=True)
    axl.plot(time[1:], v_pack, color="green", label="simulation")
    axl.set_xlabel("Time [s]")
    axl.set_ylabel("Pack terminal voltage [V]", color="green")
    axl2 = axl.twinx()
    axl2.plot(time[1:], i_pack, color="black", label="simulation")
    axl2.set_ylabel("Pack current [A]", color="black")
    axl2.set_title("Liionpack Simulation")

    axr.plot(time, v_batt, color="red", label="simulation")
    axr.set_xlabel("Time [s]")
    axr.set_ylabel("Battery terminal voltage [V]", color="red")
    axr2 = axr.twinx()
    axr2.plot(time, i_batt, color="blue", label="simulation")
    axr2.set_ylabel("Battery current [A]", color="blue")
    axr2.set_title("Single PyBaMM Simulation")

compare(sol, output)
Now let's start the simulation from a different state of charge.
SoC = 0.25

# Solve pack
output = lp.solve(netlist=netlist,
                  parameter_values=parameter_values,
                  experiment=experiment,
                  htc=htc,
                  initial_soc=SoC)
compare(sol, output)
Here we are still comparing to the PyBaMM simulation at 0.5 SoC, and we can see that liionpack started at a lower voltage, corresponding to a lower SoC.
parameter_values = pybamm.ParameterValues(chemistry=chemistry)
parameter_values.update({"Total heat transfer coefficient [W.m-2.K-1]": 10.0})
sim = lp.create_simulation(parameter_values, experiment, make_inputs=False)
sol = sim.solve(initial_soc=SoC)
Now we can re-run the PyBaMM simulation and compare again
compare(sol, output)
lp.draw_circuit(netlist)
# Discovering the CSV format - *Comma-Separated Values*

**Outline**

- The **CSV** format
- Representing CSV data with Python
  - First solution: a list of tuples
  - **Second solution**: a list of *named tuples* (dictionaries)
    - *unpacking*,
    - the *zip* operation
    - *dictionary comprehension* syntax
  - **summary**: CSV -> list of named tuples

## The CSV format

*Comma-Separated Values*: CSV is a **textual** format (as opposed to a *binary* one) used to represent **tabular data**; here is what it looks like:

```
nom,prenom,date_naissance
Durand,Jean-Pierre,23/05/1985
Dupont,Christophe,15/12/1967
Terta,Henry,12/06/1978
```

We can guess that this is information about individuals: Jean-Pierre Durand, born 23 May 1985, and so on. In computing we speak of a **data collection**.

The first line gives the meaning of the values found on the following lines; its values `nom`, `prenom`, `date_naissance` are called **descriptors** or **attributes**.

Each following line corresponds to a different individual; in computing we often speak of **objects** or **entities**. Each "object" (here, an individual) corresponds to one line: the **values** found there are associated with the *descriptors* in the same position.

We can present the same information more pleasantly as a rendered table:

| nom | prenom | date_naissance |
| ------------- | :-------------: | -----: |
| Durand | Jean-Pierre | 23/05/1985 |
| Dupont | Christophe | 15/12/1967 |
| Terta | Henry | 12/06/1978 |

## Representing CSV data with Python

### First solution: a list of tuples

That would give `[('Durand', 'Jean-Pierre', '23/05/1985'), ('Dupont', ...), (...)]`. We get there quite simply with `str.split(..)`:
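As an aside (the exercises below deliberately build the parsing by hand), Python's standard library already ships a `csv` module that performs this split, including corner cases such as quoted fields containing commas:

```python
import csv
import io

donnees_CSV = """nom,prenom,date_naissance
Durand,Jean-Pierre,23/05/1985
Dupont,Christophe,15/12/1967
Terta,Henry,12/06/1978"""

# csv.reader accepts any iterable of lines; io.StringIO wraps the string as a file.
lignes = list(csv.reader(io.StringIO(donnees_CSV)))
descripteurs = lignes[0]   # ['nom', 'prenom', 'date_naissance']
objets = lignes[1:]        # one list of values per individual
print(descripteurs)
print(objets[0])
```

There is even `csv.DictReader`, which directly yields dictionaries of the kind ("named tuples") built by hand later in this notebook.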
donnees_CSV = """nom,prenom,date_naissance
Durand,Jean-Pierre,23/05/1985
Dupont,Christophe,15/12/1967
Terta,Henry,12/06/1978"""

etape1 = donnees_CSV.split('\n')
etape1

etape2 = [obj.split(',') for obj in etape1]
etape2  # a list of lists

etape3 = [tuple(obj) for obj in etape2]
etape3  # a list of tuples

fin = etape3[1:]  # a small slice
fin  # without the header
CC0-1.0
01_donnees_en_tables/correction/03_1_format_csv_correction.ipynb
efloti/cours-nsi-premiere
### Try it yourself

We can go from `donnees_CSV` to `fin` in **a single pass** by *composition* ... try it!
# deux_en_un = [obj.split(',') for obj in donnees_CSV.split('\n')]
# trois_en_un = [tuple(obj.split(',')) for obj in donnees_CSV.split('\n')]
# you can try deux_en_un first, then trois_en_un.
quatre_en_un = [tuple(obj.split(',')) for obj in donnees_CSV.split('\n')][1:]

# to test
assert quatre_en_un == fin
___

The drawback of this representation is that it "forgets" the descriptors. Why not keep them, as in `etape3`? To avoid having a *heterogeneous* list: its first element would not be an "object". Such lists are harder to manipulate.

### Second solution: a list of *named tuples*

**Named n-tuple (or named tuple)**: a tuple in which each value is associated with a descriptor. Unfortunately Python has no such type by default (it does exist in the standard library, though). To represent this type, we will use a dictionary whose keys are the descriptors; here is an example:

```python
{'nom': 'Durand', 'prenom': 'Jean-Pierre', 'date_naissance': '23/05/1985'}
```

To get there, we start from:
donnees_CSV = """nom,prenom,date_naissance
Durand,Jean-Pierre,23/05/1985
Dupont,Christophe,15/12/1967
Terta,Henry,12/06/1978"""
The following steps separate the descriptors from the objects:
tmp = donnees_CSV.split('\n')
tmp

descripteurs_str = tmp[0]
descripteurs = tuple(descripteurs_str.split(','))
print(f"the tuple of descriptors: {descripteurs}")

donnees_str = tmp[1:]
donnees_str

objets = [tuple(obj.split(',')) for obj in donnees_str]
print(f"the list of objects (people, here):\n {objets}")
the list of objects (people, here):
 [('Durand', 'Jean-Pierre', '23/05/1985'), ('Dupont', 'Christophe', '15/12/1967'), ('Terta', 'Henry', '12/06/1978')]
### Try it yourself

Can you fill in the missing parts to get the same result more quickly?
descripteurs = tuple(donnees_CSV.split('\n')[0].split(','))
objets = [tuple(ligne.split(',')) for ligne in donnees_CSV.split('\n')[1:]]
print(f"- the descriptors:\n\t {descripteurs}\n- the objects:\n\t {objets}")
- the descriptors:
	 ('nom', 'prenom', 'date_naissance')
- the objects:
	 [('Durand', 'Jean-Pierre', '23/05/1985'), ('Dupont', 'Christophe', '15/12/1967'), ('Terta', 'Henry', '12/06/1978')]
______

### Try it yourself - *discovering **unpacking***

Can you perform the previous processing in **really** one single line? To do so, look at the three examples that follow:
# unpacking, example 1
tete, *queue = [1, 2, 3, 4]
print(f"The head: {tete} and the tail: {queue}")

# unpacking, example 2
un, deux, *reste = [1, 2, 3, 4]
print(f"un: {un}\ndeux: {deux}\nreste: {reste}")

# unpacking, example 3
tete, *corps, pied = [1, 2, 3, 4]
print(f"tete: {tete}\ncorps: {corps}\npied: {pied}")

# Your turn!
descripteurs, *objets = [tuple(d.split(',')) for d in donnees_CSV.split('\n')]
print(f"the descriptors:\n\t {descripteurs}\nthe objects:\n\t {objets}")
the descriptors:
	 ('nom', 'prenom', 'date_naissance')
the objects:
	 [('Durand', 'Jean-Pierre', '23/05/1985'), ('Dupont', 'Christophe', '15/12/1967'), ('Terta', 'Henry', '12/06/1978')]
____

At this point we would like to combine:

- `('descr1', 'descr2', ...)` and `('v1', 'v2', ...)` into ...
- `{'descr1': 'v1', 'descr2': 'v2', ...}` (a named tuple)

## Pairing two sequences - `zip`

We often need to pair up, element by element, two sequences of the same length `len`.

*Ex*: I **have** `['a', 'b', 'c']` and `[3, 2, 1]`; I **need** `[('a', 3), ('b', 2), ('c', 1)]`.

### Try it yourself

The function `appareiller(t1, t2)` takes two lists of the same size as arguments and returns a list obtained by pairing the elements of `t1` and `t2` with the same index. Complete the following code to solve this problem.
def appareiller(t1, t2):
    assert len(t1) == len(t2)
    t = []
    for i in range(len(t1)):
        couple = (t1[i], t2[i])
        t.append(couple)
    return t

# another solution, using comprehension syntax
def appareiller2(t1, t2):
    assert len(t1) == len(t2)
    return [(t1[i], t2[i]) for i in range(len(t1))]

# check your solution
tab1 = ['a', 'b', 'c']
tab2 = [3, 2, 1]
assert appareiller(tab1, tab2) == [('a', 3), ('b', 2), ('c', 1)]
assert appareiller2(tab1, tab2) == [('a', 3), ('b', 2), ('c', 1)]
___

A frequent use case of pairing is reading the pairs in a loop:
# test me
tab1 = ['a', 'b', 'c']
tab2 = [3, 2, 1]
for a, b in appareiller(tab1, tab2):
    print(f'a is "{a}" and b is "{b}"')
a is "a" and b is "3"
a is "b" and b is "2"
a is "c" and b is "1"
In fact, Python has a built-in function `zip(seq1, seq2, ...)` that does the same thing with "sequences" (`list` is a particular case of sequence).

*note*: `zip`?? think of the zipper on a jacket ...
z = zip(tab1, tab2)
print(z)
print(list(z))
*note*: it returns a special object of type `zip` because it is usually consumed directly in a loop, that is, without storing the zip (a bit like `range`).
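One consequence worth knowing (a small aside, not part of the original exercise): like other iterators, a `zip` object is consumed as it is traversed, so it can only be read once:

```python
z = zip(['a', 'b', 'c'], [3, 2, 1])
premiere = list(z)  # consumes the iterator
seconde = list(z)   # already exhausted: empty list
print(premiere)     # [('a', 3), ('b', 2), ('c', 1)]
print(seconde)      # []
```

If you need the pairs several times, store `list(zip(...))` rather than the `zip` object itself.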
# test me
tab1 = ['a', 'b', 'c']
tab2 = [3, 2, 1]
for a, b in zip(tab1, tab2):
    print(f'a is "{a}" and b is "{b}"')
a is "a" and b is "3"
a is "b" and b is "2"
a is "c" and b is "1"
## Discovery: comprehension syntax also works for `dict`s

Here is a simple example:
modele_tuple_nomme = {desc: None for desc in descripteurs}
modele_tuple_nomme
Note carefully that the part before `for` has the form `key: value`.

This is generally used together with `zip`:
cles = ("cle1", "cle2", "cle3")
valeurs = ("ah", "oh", "hein")
{c: v for c, v in zip(cles, valeurs)}  # zip also works with tuples of the same length!
Here is one more example, very useful for building a table from CSV data.
cles = ("cle1", "cle2", "cle3")
objets = [("ah", "oh", "hein"), ('riri', 'fifi', 'loulou')]

# we want a list of named tuples
[{desc: val for desc, val in zip(cles, objet)} for objet in objets]
## Summary: back to the problem of CSV-format data

By combining everything you have learned with the previous examples, you should be able to obtain our list of named tuples in a few lines ... right?

*reminder*: at the start, we **have**

```python
donnees_CSV = """nom,prenom,date_naissance
Durand,Jean-Pierre,23/05/1985
Dupont,Christophe,15/12/1967
Terta,Henry,12/06/1978"""
```

in the end, we want to **produce** a list of *named tuples*:

```python
[
    {'nom': 'Durand', 'prenom': 'Jean-Pierre', 'date_naissance': '23/05/1985'},
    {'nom': 'Dupont', 'prenom': 'Christophe', 'date_naissance': '15/12/1967'},
    {'nom': 'Terta', 'prenom': 'Henry', 'date_naissance': '12/06/1978'}
]
```

Here is how to get there with two "comprehensions":
descripteurs, *objets = [tuple(ligne.split(',')) for ligne in donnees_CSV.split('\n')]
objets = [  # over several lines, for clarity.
    {desc: val for desc, val in zip(descripteurs, obj)}
    for obj in objets
]
objets
### Try it yourself

Comprehension syntax for lists and dictionaries is useful and powerful, but it takes quite some practice to master. For this reason, revisit the problem by writing a function `csv_vers_objets(csv_str)` that takes the CSV-format string as argument and returns the corresponding list of named tuples. We will reuse it in 05_applications...
def csv_vers_objets(csv_str):
    descripteurs, *objets = [tuple(ligne.split(',')) for ligne in csv_str.split('\n')]
    objets = [  # over several lines, for clarity.
        {desc: val for desc, val in zip(descripteurs, obj)}
        for obj in objets
    ]
    return objets

csv_vers_objets(donnees_CSV)
assert csv_vers_objets(donnees_CSV) == objets
TF neural net with normalized ISO spectra
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras

# Helper libraries
import glob
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from concurrent.futures import ProcessPoolExecutor
from IPython.core.debugger import set_trace as st
from sklearn.model_selection import train_test_split
from time import time

# My modules
from swsnet import helpers

print(tf.__version__)
1.10.0
BSD-3-Clause
ipy_notebooks/old/current_neural_net-normalized-culled.ipynb
mattjshannon/swsnet
Dataset: ISO-SWS (normalized, culled)
# Needed directories
base_dir = '../data/isosws_atlas/'

# Pickles containing our spectra in the form of pandas dataframes:
spec_dir = base_dir + 'spectra_normalized/'
spec_files = np.sort(glob.glob(spec_dir + '*.pkl'))

# Metadata pickle (pd.DataFrame). Note each entry contains a pointer to the corresponding spectrum pickle.
metadata = base_dir + 'metadata_sorted_normalized_culled.pkl'
Labels ('group'):

1. Naked stars
2. Stars with dust
3. Warm, dusty objects
4. Cool, dusty objects
5. Very red objects
6. Continuum-free objects but having emission lines
7. Flux-free and/or fatally flawed spectra

Subset 1: all data included
features, labels = helpers.load_data(base_dir=base_dir,
                                     metadata=metadata,
                                     only_ok_data=False,
                                     clean=False,
                                     verbose=False)
print(features.shape)
print(labels.shape)
(1235, 359)
(1235,)
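As an aside, the integer labels loaded above can be decoded back into the group names listed earlier with a small lookup table. `GROUP_NAMES` and `describe` are illustrative local helpers, not part of `swsnet.helpers`:

```python
# Illustrative lookup table for the seven ISO-SWS group labels listed above
# (GROUP_NAMES/describe are local helpers, not part of swsnet.helpers).
GROUP_NAMES = {
    1: "Naked stars",
    2: "Stars with dust",
    3: "Warm, dusty objects",
    4: "Cool, dusty objects",
    5: "Very red objects",
    6: "Continuum-free objects but having emission lines",
    7: "Flux-free and/or fatally flawed spectra",
}

def describe(label):
    """Translate an integer (or float) group label into its readable name."""
    return GROUP_NAMES.get(int(label), "unknown")

print(describe(3))  # → Warm, dusty objects
```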
Subset 2: exclude group 7
features_clean, labels_clean = \
    helpers.load_data(base_dir=base_dir,
                      metadata=metadata,
                      only_ok_data=False,
                      clean=True,
                      verbose=False)
print(features_clean.shape)
print(labels_clean.shape)
(1058, 359)
(1058,)
Subset 3: exclude group 7, uncertain data
features_certain, labels_certain = \
    helpers.load_data(base_dir=base_dir,
                      metadata=metadata,
                      only_ok_data=True,
                      clean=False,
                      verbose=False)
print(features_certain.shape)
print(labels_certain.shape)
(851, 359)
(851,)
Testing l2norms
def neural(features, labels, test_size=0.3, l2norm=0.01):
    X_train, X_test, y_train, y_test = \
        train_test_split(features, labels, test_size=test_size, random_state=42)

    # Sequential model, 7 classes of output.
    model = keras.Sequential()
    model.add(keras.layers.Dense(64, activation='relu',
                                 kernel_regularizer=keras.regularizers.l2(l2norm),
                                 input_dim=359))
    model.add(keras.layers.Dense(64, activation='relu',
                                 kernel_regularizer=keras.regularizers.l2(l2norm)))
    model.add(keras.layers.Dense(64, activation='relu',
                                 kernel_regularizer=keras.regularizers.l2(l2norm)))
    model.add(keras.layers.Dense(64, activation='relu',
                                 kernel_regularizer=keras.regularizers.l2(l2norm)))
    model.add(keras.layers.Dense(7, activation='softmax'))

    # Early stopping condition.
    callback = [tf.keras.callbacks.EarlyStopping(monitor='acc', patience=5, verbose=0)]

    # Recompile model and fit.
    model.compile(optimizer=keras.optimizers.Adam(0.0005),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    # model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=False)
    model.fit(X_train, y_train, epochs=100, batch_size=32,
              callbacks=callback, verbose=False)

    # Check accuracy.
    score = model.evaluate(X_test, y_test, verbose=0)
    accuracy = score[1]
    print("L2 norm, accuracy: ", l2norm, accuracy)

    return model, test_size, accuracy

# for l2norm in (0.1, 0.01, 0.001, 0.0001, 0.00001):
#     model, test_size, accuracy = neural(features, labels, l2norm=l2norm)

# for l2norm in (0.1, 0.01, 0.001, 0.0001, 0.00001):
#     model, test_size, accuracy = neural(features_clean, labels_clean, l2norm=l2norm)

# for l2norm in (0.1, 0.01, 0.001, 0.0001, 0.00001):
#     model, test_size, accuracy = neural(features_certain, labels_certain, l2norm=l2norm)

# for l2norm in (0.001, 0.0001, 0.00001, 0.000001):
#     model, test_size, accuracy = neural(features_certain, labels_certain, l2norm=l2norm)
L2 norm, accuracy:  0.001 0.88671875
L2 norm, accuracy:  0.0001 0.859375
L2 norm, accuracy:  1e-05 0.86328125
L2 norm, accuracy:  1e-06 0.859375
***

Testing training size vs. accuracy

Model:
def run_NN(input_tuple):
    """Run a Keras NN for the purpose of examining the effect of training set size.

    Args:
        input_tuple (tuple): Contains (features, labels, test_size):
            features (ndarray): Array containing the spectra (fluxes).
            labels (ndarray): Array containing the group labels for the spectra.
            test_size (float): Fraction of test size relative to (test + training).

    Returns:
        test_size (float): Input test_size, just a sanity check!
        accuracy (float): Accuracy of this neural net when applied to the test set.
    """
    features, labels, test_size = input_tuple
    l2norm = 0.001

    X_train, X_test, y_train, y_test = \
        train_test_split(features, labels, test_size=test_size, random_state=42)

    # Sequential model, 7 classes of output.
    model = keras.Sequential()
    model.add(keras.layers.Dense(64, activation='relu',
                                 kernel_regularizer=keras.regularizers.l2(l2norm),
                                 input_dim=359))
    model.add(keras.layers.Dense(64, activation='relu',
                                 kernel_regularizer=keras.regularizers.l2(l2norm)))
    model.add(keras.layers.Dense(64, activation='relu',
                                 kernel_regularizer=keras.regularizers.l2(l2norm)))
    model.add(keras.layers.Dense(64, activation='relu',
                                 kernel_regularizer=keras.regularizers.l2(l2norm)))
    model.add(keras.layers.Dense(7, activation='softmax'))

    # Early stopping condition.
    callback = [tf.keras.callbacks.EarlyStopping(monitor='acc', patience=5, verbose=0)]

    # Compile model and fit.
    model.compile(optimizer=keras.optimizers.Adam(0.0005),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    # model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=False)
    model.fit(X_train, y_train, epochs=100, batch_size=32,
              callbacks=callback, verbose=False)

    # Check accuracy.
    score = model.evaluate(X_test, y_test, verbose=0)
    accuracy = score[1]
    # print("Test size, accuracy: ", test_size, accuracy)

    return test_size, accuracy


def run_networks(search_map):
    # Run the networks in parallel.
    start = time()
    pool = ProcessPoolExecutor(max_workers=14)
    results = list(pool.map(run_NN, search_map))
    end = time()
    print('Took %.3f seconds' % (end - start))

    run_matrix = np.array(results)
    return run_matrix


def plot_results(run_matrix):
    # Examine results.
    plt.plot(run_matrix.T[0], run_matrix.T[1], 's', mfc='w', ms=5, mew=2, mec='r')
    plt.xlabel('Test size (fraction)')
    plt.ylabel('Test accuracy')
    plt.minorticks_on()
    # plt.xlim(left=0)
    return
_____no_output_____
BSD-3-Clause
ipy_notebooks/old/current_neural_net-normalized-culled.ipynb
mattjshannon/swsnet
Search space (training size):
# Values of test_size to probe.
search_space = np.arange(0.14, 0.60, 0.02)
print('Size of test set considered: ', search_space)

# Number of iterations for each test_size value.
n_iterations = 20

# Create a vector to iterate over.
rx = np.array([search_space] * n_iterations).T
search_space_full = rx.flatten()

print('Number of iterations per test_size: ', n_iterations)
print('Total number of NN iterations required: ', n_iterations * len(search_space))

# Wrap up tuple inputs for running in parallel.
search_map = [(features, labels, x) for x in search_space_full]
search_map_clean = [(features_clean, labels_clean, x) for x in search_space_full]
search_map_certain = [(features_certain, labels_certain, x) for x in search_space_full]

run_matrix = run_networks(search_map)
run_matrix_clean = run_networks(search_map_clean)
run_matrix_certain = run_networks(search_map_certain)
Took 344.395 seconds Took 307.249 seconds Took 264.682 seconds
BSD-3-Clause
ipy_notebooks/old/current_neural_net-normalized-culled.ipynb
mattjshannon/swsnet
Full set:
plot_results(run_matrix)
_____no_output_____
BSD-3-Clause
ipy_notebooks/old/current_neural_net-normalized-culled.ipynb
mattjshannon/swsnet
Clean set:
plot_results(run_matrix_clean)
_____no_output_____
BSD-3-Clause
ipy_notebooks/old/current_neural_net-normalized-culled.ipynb
mattjshannon/swsnet
Certain set:
plot_results(run_matrix_certain)
_____no_output_____
BSD-3-Clause
ipy_notebooks/old/current_neural_net-normalized-culled.ipynb
mattjshannon/swsnet
***

Based on the above, we probably need to do more data preprocessing:

- e.g., remove untrustworthy data
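One hedged sketch of such a preprocessing step (the `quality_flag` array is hypothetical, not something computed in this notebook): keep only the rows whose flag marks them as trustworthy before training.

```python
import numpy as np

# Hypothetical quality flags for six spectra: 0 = trustworthy, 1 = suspect.
features = np.random.rand(6, 359)
labels = np.array([0, 1, 2, 3, 4, 5])
quality_flag = np.array([0, 0, 1, 0, 1, 0])

# A boolean mask keeps only the trustworthy rows of both arrays.
mask = quality_flag == 0
features_trusted = features[mask]
labels_trusted = labels[mask]
print(features_trusted.shape, labels_trusted.shape)  # (4, 359) (4,)
```

The same mask must be applied to features and labels together so the rows stay aligned.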
# save_path = '../models/nn_sorted_normalized_culled.h5'
# model.save(save_path)
_____no_output_____
BSD-3-Clause
ipy_notebooks/old/current_neural_net-normalized-culled.ipynb
mattjshannon/swsnet
Filename: MNIST_data.ipynb

From this book

Abbreviation: MNIST = Modified (handwritten digits data set from the U.S.) National Institute of Standards and Technology

Purpose: Explore the MNIST digits data to get familiar with the content and quality of the data.
import mnist_loader
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
# training_data, validation_data, test_data = mnist_loader.load_data()
training, validation, test = mnist_loader.load_data()

struct = [{'name': 'training', 'data': training[0], 'label': training[1]},
          {'name': 'validation', 'data': validation[0], 'label': validation[1]},
          {'name': 'test', 'data': test[0], 'label': test[1]}]
_____no_output_____
MIT
MNIST_data.ipynb
hotpocket/DigitRecognizer
Training, validation, and test data structures are 2-element tuples having the following structure:

* `[[pixels of image 1], [...]]`
* `[num_represented_by_image1, ...]`
fig, axes = plt.subplots(1, 3)

train = pd.Series(training[1])
train.hist(ax=axes[0])
axes[0].set_title("Training Data")
# display(train.describe())

validate = pd.Series(validation[1])
validate.hist(ax=axes[1])
axes[1].set_title("Validation Data")
# display(validate.describe())

test_hist = pd.Series(test[1])
test_hist.hist(ax=axes[2])
axes[2].set_title("Test Data")
# display(test_hist.describe())

display("Distribution of validation data values")
values = pd.Series(validation[1])
values.hist()
values.describe()

display("Distribution of test data values")
values = pd.Series(test[1])
values.hist()
values.describe()
_____no_output_____
MIT
MNIST_data.ipynb
hotpocket/DigitRecognizer
Training Data
pixels = pd.DataFrame(training[0])
display("Images: {}  Pixels-per-image: {}".format(*pixels.shape))
pixels.head()
# pixels.T.describe()  # takes FOREVER ...

print('\033[1m' + "validation data:" + '\033[0m')
display(validation)
print('{1:32s}{0}'.format(type(validation), '\033[1m' + "validation type:" + '\033[0m'))
print('{1:32s}{0}'.format(len(validation), '\033[1m' + "num of components:" + '\033[0m'))
print('')
print('{1:32s}{0}'.format(type(validation[0]), '\033[1m' + "first component type:" + '\033[0m'))
print('{1:32s}{0}'.format(len(validation[0]), '\033[1m' + "num of sub-components:" + '\033[0m'))
print('')
print('{1:32s}{0}'.format(type(validation[1]), '\033[1m' + "second component type:" + '\033[0m'))
print('{1:32s}{0}'.format(len(validation[1]), '\033[1m' + "num of sub-components:" + '\033[0m'))

print('\033[1m' + "test data:" + '\033[0m')
display(test)
print('{1:32s}{0}'.format(type(test), '\033[1m' + "test type:" + '\033[0m'))
print('{1:32s}{0}'.format(len(test), '\033[1m' + "num of components:" + '\033[0m'))
print('')
print('{1:32s}{0}'.format(type(test[0]), '\033[1m' + "first component type:" + '\033[0m'))
print('{1:32s}{0}'.format(len(test[0]), '\033[1m' + "num of sub-components:" + '\033[0m'))
print('')
print('{1:32s}{0}'.format(type(test[1]), '\033[1m' + "second component type:" + '\033[0m'))
print('{1:32s}{0}'.format(len(test[1]), '\033[1m' + "num of sub-components:" + '\033[0m'))

print(type(training[0][0]))
print(len(training[0][0]))
print(28 * 28)

# break data into 28 x 28 square array (from 1 x 784 array)
plottable_image = np.reshape(training[0][0], (28, 28))
# display(plottable_image)

# plot
plt.imshow(plottable_image, cmap='gray_r')
plt.show()

%matplotlib inline

table_cols = 18
table_rows = 12
rand_grid = np.random.rand(28, 28)

k = 0
for i in range(0, table_cols):
    for j in range(0, table_rows):
        if i == 0 and j == 0:
            plottable_images = [rand_grid]
        else:
            plottable_images.append(np.reshape(training[0][k], (28, 28)))
            k += 1
print(len(plottable_images))

fig, axes = plt.subplots(table_rows, table_cols, figsize=(15, 12),
                         subplot_kw={'xticks': [], 'yticks': []})
fig.subplots_adjust(hspace=0.5, wspace=0)

for i, ax in enumerate(axes.flat):
    ax.imshow(plottable_images[i], cmap='gray_r')
    if i == 0:
        ax.set_title("rand")
    else:
        digit = str(training[1][i - 1])
        index = str(i - 1)
        ax.set_title("({}) {}".format(index, digit))
plt.show()

print(len(list(training)))
print(len(list(validation)))
print(len(list(test)))

# load_data_wrapper() returns generators, so materialize them as lists.
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
training_data = list(training_data)
validation_data = list(validation_data)
test_data = list(test_data)

print(len(training_data))
print(len(validation_data))
print(len(test_data))

print(training_data)
print(training_data[0][1])
# training_data.shape  # lists have no .shape attribute
_____no_output_____
MIT
MNIST_data.ipynb
hotpocket/DigitRecognizer
`asyncio` example

As of IPython≥7.0 you can use `asyncio` directly in Jupyter notebooks; see also [IPython 7.0, Async REPL](https://blog.jupyter.org/ipython-7-0-async-repl-a35ce050f7f7).

If you get the error message `RuntimeError: This event loop is already running`, [nest-asyncio](https://github.com/erdewit/nest_asyncio) may help.

You can install the package in your Jupyter or JupyterHub environment with

```bash
$ pipenv install nest-asyncio
```

You can then import and use it in your notebook with:
import nest_asyncio

nest_asyncio.apply()
_____no_output_____
BSD-3-Clause
docs/refactoring/performance/asyncio-example.ipynb
veit/jupyter-tutorial-de
A simple *Hello world* example
import asyncio


async def hello():
    print('Hello')
    await asyncio.sleep(1)
    print('world')

await hello()
Hello world
BSD-3-Clause
docs/refactoring/performance/asyncio-example.ipynb
veit/jupyter-tutorial-de
A little closer to a real-world example
import asyncio
import random


async def produce(queue, n):
    for x in range(1, n + 1):
        # produce an item
        print('producing {}/{}'.format(x, n))
        # simulate i/o operation using sleep
        await asyncio.sleep(random.random())
        item = str(x)
        # put the item in the queue
        await queue.put(item)
    # indicate the producer is done
    await queue.put(None)


async def consume(queue):
    while True:
        # wait for an item from the producer
        item = await queue.get()
        if item is None:
            # the producer emits None to indicate that it is done
            break
        # process the item
        print('consuming {}'.format(item))
        # simulate i/o operation using sleep
        await asyncio.sleep(random.random())


loop = asyncio.get_event_loop()
# Note: asyncio.Queue() no longer accepts a loop argument (removed in Python 3.10).
queue = asyncio.Queue()
asyncio.ensure_future(produce(queue, 10), loop=loop)
loop.run_until_complete(consume(queue))
producing 1/10 producing 2/10 consuming 1 producing 3/10 consuming 2 producing 4/10 consuming 3 producing 5/10 consuming 4 producing 6/10 consuming 5 producing 7/10 consuming 6 producing 8/10 consuming 7 producing 9/10 consuming 8 producing 10/10 consuming 9 consuming 10
BSD-3-Clause
docs/refactoring/performance/asyncio-example.ipynb
veit/jupyter-tutorial-de
Exception handling

> **See also:** [set_exception_handler](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.set_exception_handler)
import asyncio
import signal


def main():
    # shutdown() and handle_exception() are assumed to be defined elsewhere.
    loop = asyncio.get_event_loop()

    # May want to catch other signals too
    signals = (signal.SIGHUP, signal.SIGTERM, signal.SIGINT)
    for s in signals:
        loop.add_signal_handler(
            s, lambda s=s: asyncio.create_task(shutdown(loop, signal=s)))
    loop.set_exception_handler(handle_exception)
    queue = asyncio.Queue()
_____no_output_____
BSD-3-Clause
docs/refactoring/performance/asyncio-example.ipynb
veit/jupyter-tutorial-de
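A minimal, self-contained variant of that wiring (the handler name and message are illustrative, not from the original): the handler is registered on the running loop and then invoked directly via `call_exception_handler`, so the call path is deterministic.

```python
import asyncio

caught = []

def handle_exception(loop, context):
    # context["exception"] may be missing, so fall back to the message
    caught.append(context.get("exception", context["message"]))

async def main():
    loop = asyncio.get_running_loop()
    loop.set_exception_handler(handle_exception)
    # hand a context dict straight to the registered handler
    loop.call_exception_handler({"message": "something went wrong"})
    await asyncio.sleep(0)

asyncio.run(main())
print(caught)  # ['something went wrong']
```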
Testing with `pytest`

Example:
import pytest


@pytest.mark.asyncio
async def test_consume(mock_get, mock_queue, message, create_mock_coro):
    mock_get.side_effect = [message, Exception("break while loop")]

    with pytest.raises(Exception, match="break while loop"):
        await consume(mock_queue)
_____no_output_____
BSD-3-Clause
docs/refactoring/performance/asyncio-example.ipynb
veit/jupyter-tutorial-de
Third-party libraries

* [pytest-asyncio](https://github.com/pytest-dev/pytest-asyncio) has helpful things like test fixtures for `event_loop`, `unused_tcp_port`, and `unused_tcp_port_factory`, and the ability to create your own [asynchronous fixtures](https://github.com/pytest-dev/pytest-asyncio#async-fixtures).
* [asynctest](https://asynctest.readthedocs.io/en/latest/index.html) has helpful tools, including coroutine mocks and [exhaust_callbacks](https://asynctest.readthedocs.io/en/latest/asynctest.helpers.html#asynctest.helpers.exhaust_callbacks), so that we don't have to `await task` manually.
* [aiohttp](https://docs.aiohttp.org/en/stable/) has some really nice built-in test utilities.

Debugging

`asyncio` already has a [debug mode](https://docs.python.org/3.6/library/asyncio-dev.html#debug-mode-of-asyncio) in the standard library. You can enable it simply with the environment variable `PYTHONASYNCIODEBUG` or in code with `loop.set_debug(True)`.

Use debug mode to identify slow asynchronous calls

`asyncio`'s debug mode has a small built-in profiler. When debug mode is enabled, `asyncio` logs all asynchronous calls that take longer than 100 milliseconds.

Debugging in production with `aiodebug`

[aiodebug](https://github.com/qntln/aiodebug) is a small library for monitoring and testing asyncio programs.

Example
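A quick sketch of that 100 ms threshold in action (the coroutine and its blocking `time.sleep` are our own illustration): with `debug=True` (equivalent to `PYTHONASYNCIODEBUG=1`), asyncio logs a warning because the call exceeds the default `slow_callback_duration`.

```python
import asyncio
import logging
import time

# Surface the asyncio logger so the slow-callback warning is visible.
logging.basicConfig(level=logging.DEBUG)

async def blocking():
    # time.sleep blocks the event loop; in debug mode asyncio logs that
    # executing this task took longer than 100 ms.
    time.sleep(0.2)
    return 'done'

result = asyncio.run(blocking(), debug=True)
print(result)  # done
```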
from aiodebug import log_slow_callbacks


def main():
    loop = asyncio.get_event_loop()
    log_slow_callbacks.enable(0.05)
_____no_output_____
BSD-3-Clause
docs/refactoring/performance/asyncio-example.ipynb
veit/jupyter-tutorial-de
Logging

[aiologger](https://github.com/b2wdigital/aiologger) enables non-blocking logging.

Asynchronous widgets

> **See also:** [Asynchronous Widgets](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Asynchronous.html)
import asyncio

from ipywidgets import IntSlider


def wait_for_change(widget, value):
    future = asyncio.Future()

    def getvalue(change):
        # make the new value available
        future.set_result(change.new)
        widget.unobserve(getvalue, value)

    widget.observe(getvalue, value)
    return future


slider = IntSlider()


async def f():
    for i in range(10):
        print('did work %s' % i)
        x = await wait_for_change(slider, 'value')
        print('async function continued with value %s' % x)


asyncio.ensure_future(f())
slider
_____no_output_____
BSD-3-Clause
docs/refactoring/performance/asyncio-example.ipynb
veit/jupyter-tutorial-de
B - A Closer Look at Word Embeddings

We have very briefly covered how word embeddings (also known as word vectors) are used in the tutorials. In this appendix we'll have a closer look at these embeddings and find some (hopefully) interesting results.

Embeddings transform a one-hot encoded vector (a vector that is 0 in all elements except one, which is 1) into a much smaller dimension vector of real numbers. The one-hot encoded vector is also known as a *sparse vector*, whilst the real valued vector is known as a *dense vector*.

The key concept in these word embeddings is that words that appear in similar _contexts_ appear nearby in the vector space, i.e. the Euclidean distance between these two word vectors is small. By context here, we mean the surrounding words. For example, in the sentences "I purchased some items at the shop" and "I purchased some items at the store" the words 'shop' and 'store' appear in the same context and thus should be close together in vector space.

You may have also heard about *word2vec*. *word2vec* is an algorithm (actually a bunch of algorithms) that calculates word vectors from a corpus. In this appendix we use *GloVe* vectors, *GloVe* being another algorithm to calculate word vectors. If you want to know how *word2vec* works, check out a two part series [here](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) and [here](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/), and if you want to find out more about *GloVe*, check the website [here](https://nlp.stanford.edu/projects/glove/).

In PyTorch, we use word vectors with the `nn.Embedding` layer, which takes a _**[sentence length, batch size]**_ tensor and transforms it into a _**[sentence length, batch size, embedding dimensions]**_ tensor.

In tutorial 2 onwards, we also used pre-trained word embeddings (specifically the GloVe vectors) provided by TorchText. These embeddings have been trained on a gigantic corpus.
We can use these pre-trained vectors within any of our models, with the idea that as they have already learned the context of each word, they will give us a better starting point for our word vectors. This usually leads to faster training time and/or improved accuracy.

In this appendix we won't be training any models; instead we'll be looking at the word embeddings and finding a few interesting things about them.

A lot of the code from the first half of this appendix is taken from [here](https://github.com/spro/practical-pytorch/blob/master/glove-word-vectors/glove-word-vectors.ipynb). For more information about word embeddings, go [here](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/).

Loading the GloVe vectors

First, we'll load the GloVe vectors. The `name` field specifies what the vectors have been trained on, here the `6B` means a corpus of 6 billion words. The `dim` argument specifies the dimensionality of the word vectors. GloVe vectors are available in 50, 100, 200 and 300 dimensions. There are also `42B` and `840B` GloVe vectors, however they are only available in 300 dimensions.
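The shape transformation performed by `nn.Embedding` is just row indexing into an embedding matrix; a NumPy sketch of the same lookup (toy sizes and random values, not real GloVe numbers):

```python
import numpy as np

vocab_size, emb_dim = 10, 4
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(vocab_size, emb_dim))

# a batch of 2 "sentences" of length 3 -> shape [sentence length, batch size]
tokens = np.array([[1, 2],
                   [3, 4],
                   [5, 6]])

# lookup: [sentence length, batch size] -> [sentence length, batch size, embedding dimensions]
embedded = embedding_matrix[tokens]
print(embedded.shape)  # (3, 2, 4)
```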
import torchtext.vocab

glove = torchtext.vocab.GloVe(name='6B', dim=100)

print(f'There are {len(glove.itos)} words in the vocabulary')
There are 400000 words in the vocabulary
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
As shown above, there are 400,000 unique words in the GloVe vocabulary. These are the most common words found in the corpus the vectors were trained on. **In this set of GloVe vectors, every single word is lower-case only.**

`glove.vectors` is the actual tensor containing the values of the embeddings.
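Because the vocabulary is lower-case only, it is worth folding case before any lookup; a tiny sketch (the helper name is ours, not from the notebook):

```python
def to_glove_token(word):
    # the 6B GloVe vocabulary contains only lower-case tokens,
    # so fold case before looking a word up in stoi
    return word.lower()

print(to_glove_token('Korea'))  # korea
```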
glove.vectors.shape
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
We can see what word is associated with each row by checking the `itos` (int to string) list. The output below shows that row 0 is the vector associated with the word 'the', row 1 for ',' (comma), row 2 for '.' (period), etc.
glove.itos[:10]
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
We can also use the `stoi` (string to int) dictionary, in which we input a word and receive the associated integer/index. If you try to get the index of a word that is not in the vocabulary, you receive an error.
glove.stoi['the']
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
We can get the vector of a word by first getting the integer associated with it and then indexing into the word embedding tensor with that index.
glove.vectors[glove.stoi['the']].shape
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
We'll be doing this a lot, so we'll create a function that takes in word embeddings and a word then returns the associated vector. It'll also throw an error if the word doesn't exist in the vocabulary.
def get_vector(embeddings, word):
    assert word in embeddings.stoi, f'*{word}* is not in the vocab!'
    return embeddings.vectors[embeddings.stoi[word]]
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
As before, we use a word to get the associated vector.
get_vector(glove, 'the').shape
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
Similar ContextsNow to start looking at the context of different words. If we want to find the words similar to a certain input word, we first find the vector of this input word, then we scan through our vocabulary calculating the distance between the vector of each word and our input word vector. We then sort these from closest to furthest away.The function below returns the closest 10 words to an input word vector:
import torch


def closest_words(embeddings, vector, n=10):
    distances = [(word, torch.dist(vector, get_vector(embeddings, word)).item())
                 for word in embeddings.itos]
    return sorted(distances, key=lambda w: w[1])[:n]
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
Let's try it out with 'korea'. The closest word is the word 'korea' itself (not very interesting), however all of the words are related in some way. Pyongyang is the capital of North Korea, DPRK is the official name of North Korea, etc.

Interestingly, we also get 'japan' and 'china', which implies that Korea, Japan and China are frequently talked about together in similar contexts. This makes sense as they are geographically situated near each other.
word_vector = get_vector(glove, 'korea')

closest_words(glove, word_vector)
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
Looking at another country, India, we also get nearby countries: Thailand, Malaysia and Sri Lanka (as two separate words). Australia is relatively close to India (geographically), but Thailand and Malaysia are closer. So why is Australia closer to India in vector space? This is most probably due to India and Australia appearing in the context of [cricket](https://en.wikipedia.org/wiki/Cricket) matches together.
word_vector = get_vector(glove, 'india')

closest_words(glove, word_vector)
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
We'll also create another function that will nicely print out the tuples returned by our `closest_words` function.
def print_tuples(tuples):
    for w, d in tuples:
        print(f'({d:02.04f}) {w}')
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
A final word to look at, 'sports'. As we can see, the closest words are most of the sports themselves.
word_vector = get_vector(glove, 'sports')

print_tuples(closest_words(glove, word_vector))
(0.0000) sports (3.5875) sport (4.4590) soccer (4.6508) basketball (4.6561) baseball (4.8028) sporting (4.8763) football (4.9624) professional (4.9824) entertainment (5.0975) media
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
AnalogiesAnother property of word embeddings is that they can be operated on just as any standard vector and give interesting results.We'll show an example of this first, and then explain it:
def analogy(embeddings, word1, word2, word3, n=5):
    # get vectors for each word
    word1_vector = get_vector(embeddings, word1)
    word2_vector = get_vector(embeddings, word2)
    word3_vector = get_vector(embeddings, word3)

    # calculate analogy vector
    analogy_vector = word2_vector - word1_vector + word3_vector

    # find closest words to analogy vector
    candidate_words = closest_words(embeddings, analogy_vector, n + 3)

    # filter out words already in analogy
    candidate_words = [(word, dist) for (word, dist) in candidate_words
                       if word not in [word1, word2, word3]][:n]

    print(f'{word1} is to {word2} as {word3} is to...')

    return candidate_words


print_tuples(analogy(glove, 'man', 'king', 'woman'))
man is to king as woman is to... (4.0811) queen (4.6429) monarch (4.9055) throne (4.9216) elizabeth (4.9811) prince
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
This is the canonical example which shows off this property of word embeddings. So why does it work? Why does the vector of 'woman' added to the vector of 'king' minus the vector of 'man' give us 'queen'?

If we think about it, the vector calculated from 'king' minus 'man' gives us a "royalty vector". This is the vector associated with traveling from a man to his royal counterpart, a king. If we add this "royalty vector" to 'woman', this should travel to her royal equivalent, which is a queen!

We can do this with other analogies too. For example, this gets an "acting career vector":
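The arithmetic itself is ordinary vector addition and subtraction; with hand-picked toy 2-D vectors (not real GloVe values) the "royalty vector" idea looks like this:

```python
# Toy 2-D vectors: first coordinate = "gender", second coordinate = "royalty".
man = (1.0, 0.0)
king = (1.0, 1.0)
woman = (2.0, 0.0)
queen = (2.0, 1.0)

# king - man isolates the royalty direction; adding it to woman lands on queen.
result = tuple(k - m + w for k, m, w in zip(king, man, woman))
print(result == queen)  # True
```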
print_tuples(analogy(glove, 'man', 'actor', 'woman'))
man is to actor as woman is to... (2.8133) actress (5.0039) comedian (5.1399) actresses (5.2773) starred (5.3085) screenwriter
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
For a "baby animal vector":
print_tuples(analogy(glove, 'cat', 'kitten', 'dog'))
cat is to kitten as dog is to... (3.8146) puppy (4.2944) rottweiler (4.5888) puppies (4.6086) pooch (4.6520) pug
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
A "capital city vector":
print_tuples(analogy(glove, 'france', 'paris', 'england'))
france is to paris as england is to... (4.1426) london (4.4938) melbourne (4.7087) sydney (4.7630) perth (4.7952) birmingham
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
A "musician's genre vector":
print_tuples(analogy(glove, 'elvis', 'rock', 'eminem'))
elvis is to rock as eminem is to... (5.6597) rap (6.2057) rappers (6.2161) rapper (6.2444) punk (6.2690) hop
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
And an "ingredient vector":
print_tuples(analogy(glove, 'beer', 'barley', 'wine'))
beer is to barley as wine is to... (5.6021) grape (5.6760) beans (5.8174) grapes (5.9035) lentils (5.9454) figs
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
Correcting Spelling MistakesAnother interesting property of word embeddings is that they can actually be used to correct spelling mistakes! We'll put their findings into code and briefly explain them, but to read more about this, check out the [original thread](http://forums.fast.ai/t/nlp-any-libraries-dictionaries-out-there-for-fixing-common-spelling-errors/16411) and the associated [write-up](https://blog.usejournal.com/a-simple-spell-checker-built-from-word-vectors-9f28452b6f26).First, we need to load up the much larger vocabulary GloVe vectors, this is due to the spelling mistakes not appearing in the smaller vocabulary. **Note**: these vectors are very large (~2GB), so watch out if you have a limited internet connection.
glove = torchtext.vocab.GloVe(name = '840B', dim = 300)
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
Checking the vocabulary size of these embeddings, we can see we now have over 2 million unique words in our vocabulary!
glove.vectors.shape
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
As the vectors were trained with a much larger vocabulary on a larger corpus of text, the words that appear are a little different. Notice how the words 'north', 'south', 'pyongyang' and 'dprk' no longer appear among the closest words to 'korea'.
word_vector = get_vector(glove, 'korea')

print_tuples(closest_words(glove, word_vector))
(0.0000) korea (3.9857) taiwan (4.4022) korean (4.9016) asia (4.9593) japan (5.0721) seoul (5.4058) thailand (5.6025) singapore (5.7010) russia (5.7240) hong
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
Our first step to correcting spelling mistakes is looking at the vector for a misspelling of the word 'reliable'.
word_vector = get_vector(glove, 'relieable')

print_tuples(closest_words(glove, word_vector))
(0.0000) relieable (5.0366) relyable (5.2610) realible (5.4719) realiable (5.5402) relable (5.5917) relaible (5.6412) reliabe (5.8802) relaiable (5.9593) stabel (5.9981) consitant
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
Notice how the correct spelling, "reliable", does not appear in the top 10 closest words. Surely the misspellings of a word should appear next to the correct spelling of the word as they appear in the same context, right?

The hypothesis is that misspellings of words are all equally shifted away from their correct spelling. This is because articles of text that contain spelling mistakes are usually written in an informal manner where correct spelling doesn't matter as much (such as tweets/blog posts), thus spelling errors will appear together as they appear in the context of informal articles.

Similar to how we created analogies before, we can create a "correct spelling" vector. This time, instead of using a single example to create our vector, we'll use the average of multiple examples. This will hopefully give better accuracy!

We first create a vector for the correct spelling, 'reliable', then calculate the difference between the "reliable vector" and each of the 8 misspellings of 'reliable'. As we are going to concatenate these 8 misspelling tensors together we need to unsqueeze a "batch" dimension to them.
reliable_vector = get_vector(glove, 'reliable')

reliable_misspellings = ['relieable', 'relyable', 'realible', 'realiable',
                         'relable', 'relaible', 'reliabe', 'relaiable']

diff_reliable = [(reliable_vector - get_vector(glove, s)).unsqueeze(0)
                 for s in reliable_misspellings]
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
We take the average of these 8 'difference from reliable' vectors to get our "misspelling vector".
misspelling_vector = torch.cat(diff_reliable, dim = 0).mean(dim = 0)
_____no_output_____
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
We can now correct other spelling mistakes using this "misspelling vector" by finding the closest words to the sum of the vector of a misspelled word and the "misspelling vector".For a misspelling of "because":
word_vector = get_vector(glove, 'becuase')

print_tuples(closest_words(glove, word_vector + misspelling_vector))
(6.1090) because (6.4250) even (6.4358) fact (6.4914) sure (6.5094) though (6.5601) obviously (6.5682) reason (6.5856) if (6.6099) but (6.6415) why
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
For a misspelling of "definitely":
word_vector = get_vector(glove, 'defintiely')

print_tuples(closest_words(glove, word_vector + misspelling_vector))
(5.4070) definitely (5.5643) certainly (5.7192) sure (5.8152) well (5.8588) always (5.8812) also (5.9557) simply (5.9667) consider (5.9821) probably (5.9948) definately
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
For a misspelling of "consistent":
word_vector = get_vector(glove, 'consistant')

print_tuples(closest_words(glove, word_vector + misspelling_vector))
(5.9641) consistent (6.3674) reliable (7.0195) consistant (7.0299) consistently (7.1605) accurate (7.2737) fairly (7.3037) good (7.3520) reasonable (7.3801) dependable (7.4027) ensure
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
For a misspelling of "package":
word_vector = get_vector(glove, 'pakage')

print_tuples(closest_words(glove, word_vector + misspelling_vector))
(6.6117) package (6.9315) packages (7.0195) pakage (7.0911) comes (7.1241) provide (7.1469) offer (7.1861) reliable (7.2431) well (7.2434) choice (7.2453) offering
MIT
B - A Closer Look at Word Embeddings.ipynb
Andrews2017/pytorch-sentiment-analysis
This notebook explores the calendar of Munich listings to answer the question: What is the most expensive and the cheapest time to visit Munich?
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns

sns.set()

LOCATION = 'munich'

df_list = pd.read_csv(LOCATION + '/listings.csv.gz')
df_reviews = pd.read_csv(LOCATION + '/reviews.csv.gz')
df_cal = pd.read_csv(LOCATION + '/calendar.csv.gz')

pd.options.display.max_rows = 10
pd.options.display.max_columns = None
pd.options.display.max_colwidth = 30
MIT
2. Best Time To Visit Munich.ipynb
rmnng/dsblogpost
___ Calendar

First look into the data and the types of each column:
df_cal
df_cal.dtypes
___ Some data types are wrong. In order to work with the data, we need to change them. First, convert **date** to the *datetime* type:
df_cal['date'] = pd.to_datetime(df_cal['date'])
**Price** needs to be converted to *float* in order to work with it.
df_cal['price'] = df_cal['price'].replace(to_replace=r'[\$,]', value='', regex=True).astype(float)
df_cal['adjusted_price'] = df_cal['adjusted_price'].replace(to_replace=r'[\$,]', value='', regex=True).astype(float)
This is how it looks now:
df_cal.head()
And these are the corrected data types:
df_cal.dtypes
___ The first question to answer: what is the price distribution over the year? Let's calculate the mean price over all listings for each day of the year. First, check whether there are *NULL* values in the data frame.
df_cal.isnull().sum()
*NULL* values affect the average (even if only slightly, given the small number of missing values), so let's drop all rows with a *NULL* **price**.
df_cal.dropna(subset=['price'], inplace=True)
Now let's group all listings by **date** and calculate the average **price** of all listings for each day:
mean_price = df_cal[['date', 'price']].groupby(by='date').mean().reset_index()
mean_price
And plot the result:
## use the plot method and scale the axis based on the plotted values
scale_from = mean_price['price'][1:-2].min() * 0.95
scale_to = mean_price['price'][1:-2].max() * 1.02
mean_price.set_index('date')[1:-2].plot(kind='line', y='price', figsize=(20, 10), grid=True).set_ylim(scale_from, scale_to);
HERE WE ARE! There are two interesting observations: 1. There is a peak in the second half of September: **"Welcome to the Oktoberfest!"** 2. The price apparently depends on the day of the week. Let's have a closer look at it. ___ Second question: what is the price distribution within a week? To take a closer look at the prices, let's introduce the **day_of_week** column.
df_cal['day_of_week'] = df_cal['date'].dt.dayofweek
Let's group the prices for each day of week and get the average price:
mean_price_dow = df_cal[['day_of_week', 'price']].groupby(by='day_of_week').mean().reset_index()
mean_price_dow
It's difficult to interpret an index-based day of the week. Let's convert it to day names from Monday to Sunday:
def convert_day_of_week(day_idx):
    '''
    This function converts an index-based day of week to a string:
    0 - Monday, ..., 6 - Sunday.
    If day_idx is out of this range, the index itself is returned.
    '''
    if day_idx > 6 or day_idx < 0:
        return day_idx
    lst = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
    return lst[day_idx]

mean_price_dow['day_of_week'] = mean_price_dow['day_of_week'].apply(convert_day_of_week)
mean_price_dow
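As a side note, newer pandas versions (0.23+) offer a built-in alternative to a manual index-to-name mapping: the `dt.day_name()` accessor on a datetime column. A small self-contained sketch (the example dates are arbitrary, not from the Munich data):

```python
import pandas as pd

# Stand-in dates; in the notebook one would use df_cal['date'] directly.
dates = pd.Series(pd.to_datetime(['2020-01-06', '2020-01-07', '2020-01-12']))

# Map each datetime to its weekday name without a helper function.
day_names = dates.dt.day_name()
print(day_names.tolist())  # ['Monday', 'Tuesday', 'Sunday']
```

This avoids the out-of-range handling entirely, since the datetime itself carries the weekday.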
Now we can plot the result:
scale_from = mean_price_dow['price'].min() * 0.95
scale_to = mean_price_dow['price'].max() * 1.02

sns.set(rc={'figure.figsize': (15, 5)})
fig = sns.barplot(data=mean_price_dow, x='day_of_week', y='price', color='#0080bb')
fig.set_ylim(scale_from, scale_to)
fig.set_title('Prices for day of the week')
fig.set_xlabel('Day of week')
fig.set_ylabel('Price')
Matplotlib Quickstart

What is Matplotlib?
- Official documentation: https://matplotlib.org

> Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shell (à la MATLAB or Mathematica), web application servers, and various graphical user interface toolkits.

- Matplotlib is a library for drawing 2D figures in Python

Why use Matplotlib?
1. It supports many kinds of plots
2. Plotting is simple

Installation

`pip install matplotlib`

Core
- `matplotlib.pyplot`, the plotting core
- Layout
- Colors
- Legends
- Text
- Interaction
- Output

Terminology

![Anatomy of a figure](https://matplotlib.org/_images/anatomy.png)

- Title: the figure title
- Major tick: a major tick mark
- Legend: the legend
- Major tick label: the label on a major tick
- Grid: the background grid
- Line: a line plot (one kind of plot)
- Markers: markers (as drawn by a scatter plot)
- Y axis label: the y-axis label
- X axis label: the x-axis label
- Spines: the boundary of the plotting area

Key terms: Annotation, Artist, Axes, Axis, Bézier, Coordinate, Coordinate System, Figure, Handle, Handler, Image, Legend, Line, Patch, Path, Pick, Subplot, Text, Tick, Tick Label, Transformation.

First Plot
import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 2])
plt.ylabel('some numbers')
plt.show()
Apache-2.0
doc/matplotlib/quickstart.ipynb
huifer/python_doc
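To connect the terminology above (Title, axis labels, Legend, Grid) to code, here is a hedged sketch that decorates a similar plot; the plotted values are arbitrary, and the Agg backend is selected so the example runs headless:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3, 2], label='some numbers')  # Line (with a legend label)
ax.set_title('First plot')                   # Title
ax.set_xlabel('index')                       # X axis label
ax.set_ylabel('value')                       # Y axis label
ax.legend()                                  # Legend
ax.grid(True)                                # Grid
fig.savefig('first_plot.png')
```

Using the object-oriented `fig, ax` interface rather than the `pyplot` state machine makes it explicit which Axes each element belongs to.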
End-To-End Example: Bad Password Checker- Read in list of bad passwords from file `bad-passwords.txt`- Main program loop which: - inputs a password - checks whether the password is "good" or "bad" by checking against the list - repeats this until you enter no password.
# read passwords into list
#todo:
#input: none, output: list of bad passwords
# for each line in bad-passwords.txt
#   add to list
def read_passwords():
    bad_password_list = []
    filename = "ETEE-bad-passwords.txt"
    with open(filename) as f:
        for line in f:
            bad_password_list.append(line.strip())
    return bad_password_list

# password in list?
#input: password list and a password to check, output: True or False
#todo
# get index of password in list
# return true
# when ValueError return false
def password_in_list(password, bad_password_list):
    try:
        index = bad_password_list.index(password)
        return True
    except ValueError:
        return False

# main program
bad_password_list = read_passwords()
print("This program will check for quality passwords against a list of known bad passwords.")
while True:
    password = input("Enter a password or ENTER to quit: ")
    if password == "":
        break
    if password_in_list(password, bad_password_list):
        print("%s is a bad password. It is on the list." % (password))
    else:
        print("%s seems like an ok password. It is not on the list." % (password))
This program will check for quality passwords against a list of known bad passwords.
Enter a password or ENTER to quit: 123456
123456 is a bad password. It is on the list.
Enter a password or ENTER to quit: fjdskafjoda;shv
fjdskafjoda;shv seems like an ok password. It is not on the list.
Enter a password or ENTER to quit: test
test is a bad password. It is on the list.
Enter a password or ENTER to quit: pasword
pasword seems like an ok password. It is not on the list.
Enter a password or ENTER to quit: password
password is a bad password. It is on the list.
Enter a password or ENTER to quit:
MIT
content/lessons/09/End-To-End-Example/ETEE-Bad-Password-Checker.ipynb
MahopacHS/spring2019-mollea1213
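One design note on the lookup: `list.index` scans the whole list on every check, so for a large password file a `set` gives average O(1) membership tests. A sketch of the same check using a set (the function names here are this sketch's own, not from the lesson):

```python
def load_bad_passwords(lines):
    """Build a set of bad passwords from an iterable of lines."""
    return {line.strip() for line in lines}

def is_bad_password(password, bad_passwords):
    """Set membership test instead of list.index + try/except."""
    return password in bad_passwords

# Inline sample data stands in for reading the password file.
bad = load_bad_passwords(["123456\n", "password\n", "test\n"])
print(is_bad_password("password", bad))       # True
print(is_bad_password("correct horse", bad))  # False
```

For a short classroom list the difference is negligible, but the set version also reads more directly: "is this password in the bad set?"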
Copyright 2019 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Apache-2.0
site/en/tutorials/generative/dcgan.ipynb
PhilipMay/docs
Deep Convolutional Generative Adversarial Network

This tutorial demonstrates how to generate images of handwritten digits using a [Deep Convolutional Generative Adversarial Network](https://arxiv.org/pdf/1511.06434.pdf) (DCGAN). The code is written using the [Keras Sequential API](https://www.tensorflow.org/guide/keras) with a `tf.GradientTape` training loop.

What are GANs?

[Generative Adversarial Networks](https://arxiv.org/abs/1406.2661) (GANs) are one of the most interesting ideas in computer science today. Two models are trained simultaneously by an adversarial process. A *generator* ("the artist") learns to create images that look real, while a *discriminator* ("the art critic") learns to tell real images apart from fakes.

![A diagram of a generator and discriminator](./images/gan1.png)

During training, the *generator* progressively becomes better at creating images that look real, while the *discriminator* becomes better at telling them apart. The process reaches equilibrium when the *discriminator* can no longer distinguish real images from fakes.

![A second diagram of a generator and discriminator](./images/gan2.png)

This notebook demonstrates this process on the MNIST dataset. The following animation shows a series of images produced by the *generator* as it was trained for 50 epochs. The images begin as random noise, and increasingly resemble handwritten digits over time.

![sample output](https://tensorflow.org/images/gan/dcgan.gif)

To learn more about GANs, we recommend MIT's [Intro to Deep Learning](http://introtodeeplearning.com/) course.

Import TensorFlow and other libraries
from __future__ import absolute_import, division, print_function, unicode_literals

try:
  # %tensorflow_version only exists in Colab.
  %tensorflow_version 2.x
except Exception:
  pass
import tensorflow as tf

tf.__version__

# To generate GIFs
!pip install imageio

import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time

from IPython import display
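The adversarial objectives described in the introduction, which the tutorial later implements with Keras losses, can be sketched in plain numpy: the discriminator is penalized for calling real images fake or fake images real, while the generator is penalized when its fakes are called fake. The probabilities below are made-up discriminator outputs, not real model code:

```python
import numpy as np

def bce(labels, probs, eps=1e-7):
    """Binary cross-entropy, averaged over the batch."""
    probs = np.clip(probs, eps, 1 - eps)
    return float(np.mean(-(labels * np.log(probs)
                           + (1 - labels) * np.log(1 - probs))))

# Toy discriminator outputs: probability that each input is real.
real_output = np.array([0.9, 0.8])  # discriminator on real images
fake_output = np.array([0.2, 0.3])  # discriminator on generated images

# Discriminator wants real -> 1 and fake -> 0.
d_loss = (bce(np.ones_like(real_output), real_output)
          + bce(np.zeros_like(fake_output), fake_output))

# Generator wants the discriminator to call fakes real (fake -> 1).
g_loss = bce(np.ones_like(fake_output), fake_output)

print(d_loss, g_loss)
```

At the equilibrium described above, the discriminator outputs hover near 0.5 on both real and fake inputs, and neither loss can be driven down further at the other's expense.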