README.md describes the card JSON schema.

Cards

List all card titles:

# read card title and type columns, sorted by title
cards[['title', 'type_code']].sort_values(by='title').head(10)
docs/code/python/pandas.ipynb
vpenso/scripts
gpl-3.0
Find a specific card

# do not truncate strings (-1 is deprecated in recent pandas; use None)
pandas.set_option('display.max_colwidth', None)
cards[cards['title'].str.match('Noise')][['type_code', 'faction_code', 'title', 'text']]
Card Types

List all card types, and the number of cards for each type:

cards['type_code'].value_counts()
cards['type_code'].value_counts().plot(kind='bar')
By Faction

Select a specific card type and count the cards per faction:

programs = cards[cards['type_code'] == 'program']
programs['faction_code'].value_counts()
ICE with faction and keywords:

ice = cards[cards['type_code'] == 'ice']
ice[['title', 'faction_code', 'keywords']].head(10)
Variant I/O

Initialize the parser:

# You only need to do this once per process
import hgvs.parser
hp = hgvsparser = hgvs.parser.Parser()
examples/using-hgvs.ipynb
biocommons/hgvs
apache-2.0
Parse a simple variant:

v = hp.parse_hgvs_variant("NC_000007.13:g.21726874G>A")
v
v.ac, v.type
v.posedit
v.posedit.pos
v.posedit.pos.start
Parsing complex variants:

v = hp.parse_hgvs_variant("NM_003777.3:c.13552_*36del57")
v.posedit.pos.start, v.posedit.pos.end
v.posedit.edit
Formatting variants

All objects may be formatted simply by "stringifying" or printing them, using str(), print(), or "{}".format().

str(v)
print(v)
"{v} spans the CDS end".format(v=v)
Projecting variants between sequences

Set up a dataprovider

Mapping variants requires exon structures, alignments, CDS bounds, and raw sequence. These are provided by an hgvs.dataprovider instance. The only dataprovider included with hgvs uses UTA. You may write your own by subclassing hgvs.dataproviders.interface.

import hgvs.dataproviders.uta
hdp = hgvs.dataproviders.uta.connect()
Initialize mapper classes

The VariantMapper class projects variants between two sequence accessions using alignments from a specified source. To use it, you must already know that the two sequences are aligned; VariantMapper isn't demonstrated here. AssemblyMapper builds on VariantMapper and handles identifying appropriate sequences. It is configured for a particular genome assembly.

import hgvs.assemblymapper
import hgvs.variantmapper
# vm = variantmapper = hgvs.variantmapper.VariantMapper(hdp)
am37 = easyvariantmapper = hgvs.assemblymapper.AssemblyMapper(hdp, assembly_name='GRCh37')
am38 = easyvariantmapper = hgvs.assemblymapper.AssemblyMapper(hdp, assembly_name='GRCh38')
c_to_g

This is the easiest case because there is typically only one alignment between a transcript and the genome. (Exceptions exist for pseudoautosomal regions.)

var_c = hp.parse_hgvs_variant("NM_015120.4:c.35G>C")
var_g = am37.c_to_g(var_c)
var_g
am38.c_to_g(var_c)
g_to_c

In order to project a genomic variant onto a transcript, you must tell the AssemblyMapper which transcript to use.

am37.relevant_transcripts(var_g)
am37.g_to_c(var_g, "NM_015120.4")
c_to_p

var_p = am37.c_to_p(var_c)
str(var_p)
var_p.posedit.uncertain = False
str(var_p)
Projecting in the presence of a genome-transcript gap

As of Oct 2016, 1033 RefSeq transcripts in 433 genes have gapped alignments. These gaps require special handling in order to maintain the correspondence of positions in an alignment. hgvs uses the precomputed alignments in UTA to correctly project variants in exons containing gapped alignments.

This example demonstrates projecting variants in the presence of a gap in the alignment of NM_015120.4 (ALMS1) with GRCh37 chromosome 2. (The alignment with GRCh38 is similarly gapped.) Specifically, the adjacent genomic positions 73613031 and 73613032 correspond to the non-adjacent CDS positions 35 and 39.

NM_015120.4  c 15       >                          > 58
NM_015120.4  n 126      > CCGGGCGAGCTGGAGGAGGAGGAG > 169
                          |||||||||||   ||||||||||   21=3I20=
NC_000002.11 g 73613021 > CCGGGCGAGCT---GGAGGAGGAG > 73613041
NC_000002.11 g 73613021 < GGCCCGCTCGA---CCTCCTCCTC < 73613041

str(am37.c_to_g(hp.parse_hgvs_variant("NM_015120.4:c.35G>C")))
str(am37.c_to_g(hp.parse_hgvs_variant("NM_015120.4:c.39G>C")))
Normalizing variants

In hgvs, normalization means shifting variants 3' (as required by the HGVS nomenclature) as well as rewriting variants. The variant "NM_001166478.1:c.30_31insT" is in a poly-T run (on the transcript). It should be shifted 3' and is better written as a dup, as shown below:

* NC_000006.11:g.49917127dupA

NC_000006.11   g 49917117 > AGAAAGAAAAATAAAACAAAG > 49917137
NC_000006.11   g 49917117 < TCTTTCTTTTTATTTTGTTTC < 49917137
                            |||||||||||||||||||||   21=
NM_001166478.1 n 41       < TCTTTCTTTTTATTTTGTTTC < 21    NM_001166478.1:n.35dupT
NM_001166478.1 c 41       <                       < 21    NM_001166478.1:c.30_31insT

import hgvs.normalizer
hn = hgvs.normalizer.Normalizer(hdp)
v = hp.parse_hgvs_variant("NM_001166478.1:c.30_31insT")
str(hn.normalize(v))
A more complex normalization example

This example is based on https://github.com/biocommons/hgvs/issues/382/.

NC_000001.11   g 27552104 > CTTCACACGCATCCTGACCTTG > 27552125
NC_000001.11   g 27552104 < GAAGTGTGCGTAGGACTGGAAC < 27552125
                            ||||||||||||||||||||||   22=
NM_001029882.3 n 843      < GAAGTGTGCGTAGGACTGGAAC < 822
NM_001029882.3 c 12       <                        < -10
                                     ^^  NM_001029882.3:c.1_2del
                                         NM_001029882.3:n.832_833delAT
                                         NC_000001.11:g.27552114_27552115delAT

am38.c_to_g(hp.parse_hgvs_variant("NM_001029882.3:c.1A>G"))
am38.c_to_g(hp.parse_hgvs_variant("NM_001029882.3:c.2T>G"))
am38.c_to_g(hp.parse_hgvs_variant("NM_001029882.3:c.1_2del"))
The genomic coordinates for the SNVs at c.1 and c.2 match those for the del at c.1_2. Good! Now, notice what happens with c.1_3del, c.1_4del, and c.1_5del:
am38.c_to_g(hp.parse_hgvs_variant("NM_001029882.3:c.1_3del"))
am38.c_to_g(hp.parse_hgvs_variant("NM_001029882.3:c.1_4del"))
am38.c_to_g(hp.parse_hgvs_variant("NM_001029882.3:c.1_5del"))
Explanation: On the transcript, c.1_2delAT deletes AT from …AGGATGCG…, resulting in …AGGGCG…. There's no ambiguity about what sequence was actually deleted. c.1_3delATG deletes ATG, resulting in …AGGCG…. Note that you could also get this result by deleting GAT. This is an example of an indel that is subject to normalization, and hgvs does this. c.1_4delATGC and c.1_5delATGCG behave similarly.

Normalization is always 3' with respect to the reference sequence. So, after projecting from a minus-strand transcript to the genome, normalization will go in the opposite direction to the transcript. It will have roughly the same effect as being 5' shifted on the transcript (but revcomp'd). For more precise control, see the normalize and replace_reference options of AssemblyMapper.

Validating variants

hgvs.validator.Validator is a composite of two classes, hgvs.validator.IntrinsicValidator and hgvs.validator.ExtrinsicValidator. Intrinsic validation evaluates a given variant for internal consistency, such as requiring that insertions specify adjacent positions. Extrinsic validation evaluates a variant using external data, such as ensuring that the reference nucleotide in the variant matches that implied by the reference sequence and position. Validation returns True if successful, and raises an exception otherwise.

import hgvs.validator
from hgvs.exceptions import HGVSError

hv = hgvs.validator.Validator(hdp)
hv.validate(hp.parse_hgvs_variant("NM_001166478.1:c.30_31insT"))

try:
    hv.validate(hp.parse_hgvs_variant("NM_001166478.1:c.30_32insT"))
except HGVSError as e:
    print(e)
Circuits 3: Low rank, arbitrary basis molecular simulations

Setup

Install the OpenFermion package:
try:
    import openfermion
except ImportError:
    !pip install git+https://github.com/quantumlib/OpenFermion.git@master#egg=openfermion
docs/tutorials/circuits_3_arbitrary_basis_trotter.ipynb
kevinsung/OpenFermion
apache-2.0
Low rank decomposition of the Coulomb operator

The algorithm discussed in this tutorial is described in arXiv:1808.02625.

In Circuits 1 we discussed methods for compiling single-particle basis transformations of fermionic operators in $O(N)$ depth on a linearly connected architecture. We looked at the particular example of simulating a free fermion model by using Bogoliubov transformations to diagonalize the model. In Circuits 2 we discussed methods for compiling Trotter steps of an electronic structure Hamiltonian in $O(N)$ depth on a linearly connected architecture when expressed in a basis diagonalizing the Coulomb operator, so that
$$ H = \sum_{pq} T_{pq} a^\dagger_p a_q + \sum_{pq} V_{pq} a^\dagger_p a_p a^\dagger_q a_q. $$

Here we will discuss how both of those techniques can be combined, along with some insights from electronic structure, in order to simulate arbitrary basis molecular Hamiltonians taking the form
$$ H = \sum_{pq} T_{pq} a^\dagger_p a_q + \sum_{pqrs} V_{pqrs} a^\dagger_p a_q a^\dagger_r a_s $$
in depth scaling only as $O(N^2)$ on a linear array of qubits.

First, we note that the one-body part of the above expression is easy to simulate using the techniques introduced in Circuits 1. Thus, the real challenge is to simulate the two-body part of the operator. We begin with the observation that the rank-4 tensor $V$, with the values $V_{pqrs}$ representing the coefficient of $a^\dagger_p a_q a^\dagger_r a_s$, can be flattened into an $N^2 \times N^2$ array by making $p,q$ one index and $r,s$ the other. This is the electronic repulsion integral (ERI) matrix in chemist's notation. We will refer to the ERI matrix as $W$. By diagonalizing $W$, one obtains $W g_\ell = \lambda_\ell g_\ell$, where the eigenvector $g_\ell$ is a vector of dimension $N^2$.
If we reshape $g_\ell$ into an $N \times N$ matrix, we realize that
$$ \sum_{pqrs} V_{pqrs} a^\dagger_p a_q a^\dagger_r a_s = \sum_{\ell=0}^{L-1} \lambda_\ell \left(\sum_{pq} \left[g_{\ell}\right]_{pq} a^\dagger_p a_q\right)^2. $$
This is related to the concept of density fitting in electronic structure, which is often accomplished using a Cholesky decomposition. It is fairly well known in the quantum chemistry community that the ERI matrix is positive semi-definite and, despite having linear dimension $N^2$, has rank of only $L = O(N)$. Thus, the eigenvalues $\lambda_\ell$ are positive and there are only $O(N)$ of them.

Next, we diagonalize the one-body operators inside of the square so that
$$ R_\ell \left(\sum_{pq} \left[g_\ell\right]_{pq} a^\dagger_p a_q\right) R_\ell^\dagger = \sum_{p} f_{\ell p} a^\dagger_p a_p $$
where the $R_\ell$ represent single-particle basis transformations of the sort we compiled in Circuits 1. Then,
$$ \sum_{\ell=0}^{L-1} \lambda_\ell \left(\sum_{pq} \left[g_{\ell}\right]_{pq} a^\dagger_p a_q\right)^2
= \sum_{\ell=0}^{L-1} \lambda_\ell \left(R_\ell \left(\sum_{p} f_{\ell p} a^\dagger_p a_p\right) R_\ell^\dagger\right)^2
= \sum_{\ell=0}^{L-1} \lambda_\ell \left(R_\ell \left(\sum_{p} f_{\ell p} a^\dagger_p a_p\right) R_\ell^\dagger R_\ell \left(\sum_{p} f_{\ell p} a^\dagger_p a_p\right) R_\ell^\dagger\right)
= \sum_{\ell=0}^{L-1} \lambda_\ell R_\ell \left(\sum_{pq} f_{\ell p} f_{\ell q} a^\dagger_p a_p a^\dagger_q a_q\right) R_\ell^\dagger. $$
We now see that we can simulate a Trotter step under the arbitrary basis two-body operator as
$$ \prod_{\ell=0}^{L-1} R_\ell \exp\left(-i\sum_{pq} f_{\ell p} f_{\ell q} a^\dagger_p a_p a^\dagger_q a_q\right) R_\ell^\dagger $$
where we note that the operators in the exponential take the form of a diagonal Coulomb operator.
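The low-rank structure above is easy to see numerically. The following is a small NumPy sketch, not OpenFermion's implementation: the matrix `W` here is a synthetic stand-in for a real ERI matrix, built to be positive semi-definite with rank at most $N$, which we then diagonalize and whose eigenvectors we reshape into $N \times N$ coefficient matrices $[g_\ell]_{pq}$:

```python
import numpy as np

# Toy illustration: build a PSD "ERI-like" matrix W of linear dimension N^2
# from L = N one-body matrices, so its rank is at most N.
N = 2
rng = np.random.default_rng(0)
one_body = [rng.standard_normal((N, N)) for _ in range(N)]
W = sum(np.outer(g.ravel(), g.ravel()) for g in one_body)  # shape (N^2, N^2)

# Diagonalize and keep only the numerically nonzero eigenvalues.
eigvals, eigvecs = np.linalg.eigh(W)
keep = eigvals > 1e-10
lambdas, gs = eigvals[keep], eigvecs[:, keep]
print("rank:", keep.sum())  # at most N, despite W being N^2 x N^2

# Reshape each eigenvector into an N x N one-body coefficient matrix [g_l]_pq,
# and confirm W is exactly reconstructed from its low-rank eigendecomposition.
g_matrices = [gs[:, l].reshape(N, N) for l in range(gs.shape[1])]
W_rec = sum(lam * np.outer(g.ravel(), g.ravel())
            for lam, g in zip(lambdas, g_matrices))
assert np.allclose(W, W_rec)
```

A real calculation would use the flattened two-body tensor of a molecular Hamiltonian in place of the synthetic `W`.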
Since we can implement the $R_\ell$ circuits in $O(N)$ depth (see Circuits 1) and we can implement Trotter steps under diagonal Coulomb operators in $O(N)$ layers of gates (see Circuits 2), we see that we can implement Trotter steps under arbitrary basis electronic structure Hamiltonians in $O(L N) = O(N^2)$ depth, all on a linearly connected device. This is a big improvement over the usual way of doing things, which would lead to no less than $O(N^5)$ depth! In fact, it is possible to do even better by truncating rank on the second diagonalization, but we have not implemented that (details are discussed in the aforementioned paper). Note that these techniques are also applicable to realizing evolution under other two-body operators, such as the generator of unitary coupled cluster. Note also that one can create variational algorithms where a variational parameter specifies the rank at which to truncate the $\lambda_\ell$.

Example implementation: Trotter steps of LiH in molecular orbital basis

We will now use these techniques to implement Trotter steps for an actual molecule. We will focus on LiH at equilibrium geometry, since integrals for that system are provided with every OpenFermion installation. However, by installing OpenFermion-PySCF or OpenFermion-Psi4 one can use these techniques for any molecule at any geometry. We will generate LiH in an active space consisting of 4 qubits. First, we obtain the Hamiltonian as an InteractionOperator.
import openfermion

# Set Hamiltonian parameters for LiH simulation in active space.
diatomic_bond_length = 1.45
geometry = [('Li', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))]
basis = 'sto-3g'
multiplicity = 1
active_space_start = 1
active_space_stop = 3

# Generate and populate instance of MolecularData.
molecule = openfermion.MolecularData(geometry, basis, multiplicity, description="1.45")
molecule.load()

# Get the Hamiltonian in an active space.
molecular_hamiltonian = molecule.get_molecular_hamiltonian(
    occupied_indices=range(active_space_start),
    active_indices=range(active_space_start, active_space_stop))

print(openfermion.get_fermion_operator(molecular_hamiltonian))
We see from the above output that this is a fairly complex Hamiltonian already. Next we will use the simulate_trotter function from Circuits 1, but this time using a different type of Trotter step associated with these low rank techniques. To keep this circuit very short for pedagogical purposes we will force a truncation of the eigenvalues $\lambda_\ell$ at a predetermined value of final_rank. While we also support a canned LOW_RANK option for the Trotter steps, in order to pass this value of final_rank we will instantiate a custom Trotter algorithm type.
import cirq
import openfermion
from openfermion.circuits import trotter

# Trotter step parameters.
time = 1.
final_rank = 2

# Initialize circuit qubits in a line.
n_qubits = openfermion.count_qubits(molecular_hamiltonian)
qubits = cirq.LineQubit.range(n_qubits)

# Compile the low rank Trotter step using OpenFermion.
custom_algorithm = trotter.LowRankTrotterAlgorithm(final_rank=final_rank)
circuit = cirq.Circuit(
    trotter.simulate_trotter(
        qubits, molecular_hamiltonian,
        time=time, omit_final_swaps=True,
        algorithm=custom_algorithm),
    strategy=cirq.InsertStrategy.EARLIEST)

# Print circuit. The transformer returns a new circuit rather than
# mutating its argument.
circuit = cirq.drop_negligible_operations(circuit)
print(circuit.to_text_diagram(transpose=True))
We were able to print out the circuit this way, but forcing a final_rank of 2 is not very accurate. In the cell below, we compile the Trotter step with full rank, so $L = N^2$ and the depth is actually $O(N^3)$, and we repeat the Trotter step multiple times to show that it converges to the correct result. Since we are not forcing the rank truncation, we can use the built-in LOW_RANK Trotter step type. Note that the rank of the Coulomb operators is asymptotically $O(N)$, but for very small molecules in small basis sets only a few eigenvalues can be truncated.
# Initialize a random initial state.
import numpy
random_seed = 8317
initial_state = openfermion.haar_random_vector(
    2 ** n_qubits, random_seed).astype(numpy.complex64)

# Numerically compute the correct circuit output.
import scipy
hamiltonian_sparse = openfermion.get_sparse_operator(molecular_hamiltonian)
exact_state = scipy.sparse.linalg.expm_multiply(
    -1j * time * hamiltonian_sparse, initial_state)

# Trotter step parameters.
n_steps = 3

# Compile the low rank Trotter step using OpenFermion.
qubits = cirq.LineQubit.range(n_qubits)
circuit = cirq.Circuit(
    trotter.simulate_trotter(
        qubits, molecular_hamiltonian,
        time=time, n_steps=n_steps,
        algorithm=trotter.LOW_RANK),
    strategy=cirq.InsertStrategy.EARLIEST)

# Use the Cirq simulator to apply the circuit.
simulator = cirq.Simulator()
result = simulator.simulate(circuit, qubit_order=qubits, initial_state=initial_state)
simulated_state = result.final_state_vector

# Print final fidelity.
fidelity = abs(numpy.dot(simulated_state, numpy.conjugate(exact_state))) ** 2
print('Fidelity with exact result is {}.\n'.format(fidelity))

# Print circuit (the transformer returns a new circuit).
circuit = cirq.drop_negligible_operations(circuit)
print(circuit.to_text_diagram(transpose=True))
total_CFUs: Read the data and fit the model
import pandas as pd
import statsmodels.formula.api as smf

data_total_CFUs = pd.read_csv("flygut_cfus_expts1345_totals_processed.csv")
lm_total_CFUs = smf.ols(
    formula="total_CFUs ~ a + a1 + a2 + a3 + a4 + a5 +"
            "b12 + b13 + b14 + b15 + b23 + b24 + b25 + b34 + b35 + b45 +"
            "c123 + c124 + c125 + c134 + c135 + c145 + c234 + c235 + c245 + c345 +"
            "d1234 + d1235 + d1245 + d1345 + d2345 + e12345",
    data=data_total_CFUs).fit()
taylor_lin_fit.ipynb
gavruskin/microinteractions
mit
Output summary statistics
lm_total_CFUs.summary()
Plot inferred coefficients with confidence intervals
conf_int_total_CFUs = pd.DataFrame(lm_total_CFUs.conf_int())
# The conf_int() frame is indexed by parameter name, so the fitted
# coefficients align automatically.
conf_int_total_CFUs[2] = lm_total_CFUs.params
conf_int_total_CFUs.columns = ["95% conf. int. bottom", "95% conf. int. top", "coef"]

# Set Intercept and a to 0, as otherwise the rest of the plot vanishes.
for row in ("Intercept", "a"):
    conf_int_total_CFUs.loc[row] = 0

conf_int_total_CFUs.plot.bar(figsize=(20, 10))
DailyFecundity: Read the data and fit the model
data_DailyFecundity = pd.read_csv("DailyFecundityData_processed.csv")
lm_DailyFecundity = smf.ols(
    formula="DailyFecundity ~ a + a1 + a2 + a3 + a4 + a5 +"
            "b12 + b13 + b14 + b15 + b23 + b24 + b25 + b34 + b35 + b45 +"
            "c123 + c124 + c125 + c134 + c135 + c145 + c234 + c235 + c245 + c345 +"
            "d1234 + d1235 + d1245 + d1345 + d2345 + e12345",
    data=data_DailyFecundity).fit()
Output summary statistics
lm_DailyFecundity.summary()
Plot inferred coefficients with confidence intervals
conf_int_DailyFecundity = pd.DataFrame(lm_DailyFecundity.conf_int())
# The conf_int() frame is indexed by parameter name, so the fitted
# coefficients align automatically.
conf_int_DailyFecundity[2] = lm_DailyFecundity.params
conf_int_DailyFecundity.columns = ["95% conf. int. bottom", "95% conf. int. top", "coef"]

# Set Intercept and a to 0, as otherwise the rest of the plot vanishes.
for row in ("Intercept", "a"):
    conf_int_DailyFecundity.loc[row] = 0

conf_int_DailyFecundity.plot.bar(figsize=(20, 10))
Development: Read the data and fit the model
data_Development = pd.read_csv("DevelopmentData_processed.csv")
lm_Development = smf.ols(
    formula="Development ~ a + a1 + a2 + a3 + a4 + a5 +"
            "b12 + b13 + b14 + b15 + b23 + b24 + b25 + b34 + b35 + b45 +"
            "c123 + c124 + c125 + c134 + c135 + c145 + c234 + c235 + c245 + c345 +"
            "d1234 + d1235 + d1245 + d1345 + d2345 + e12345",
    data=data_Development).fit()
Output summary statistics
lm_Development.summary()
Plot inferred coefficients with confidence intervals
conf_int_Development = pd.DataFrame(lm_Development.conf_int())
# The conf_int() frame is indexed by parameter name, so the fitted
# coefficients align automatically.
conf_int_Development[2] = lm_Development.params
conf_int_Development.columns = ["95% conf. int. bottom", "95% conf. int. top", "coef"]

# Comment the following lines out to plot the Intercept and a.
for row in ("Intercept", "a"):
    conf_int_Development.loc[row] = 0

conf_int_Development.plot.bar(figsize=(20, 10))
Survival: Read the data and fit the model
data_Survival = pd.read_csv("SurvivalData_processed.csv")
lm_Survival = smf.ols(
    formula="Survival ~ a + a1 + a2 + a3 + a4 + a5 +"
            "b12 + b13 + b14 + b15 + b23 + b24 + b25 + b34 + b35 + b45 +"
            "c123 + c124 + c125 + c134 + c135 + c145 + c234 + c235 + c245 + c345 +"
            "d1234 + d1235 + d1245 + d1345 + d2345 + e12345",
    data=data_Survival).fit()
Output summary statistics
lm_Survival.summary()
Plot inferred coefficients with confidence intervals
conf_int_Survival = pd.DataFrame(lm_Survival.conf_int())
# The conf_int() frame is indexed by parameter name, so the fitted
# coefficients align automatically.
conf_int_Survival[2] = lm_Survival.params
conf_int_Survival.columns = ["95% conf. int. bottom", "95% conf. int. top", "coef"]

# Comment the following lines out to plot the Intercept and a.
for row in ("Intercept", "a"):
    conf_int_Survival.loc[row] = 0

conf_int_Survival.plot.bar(figsize=(20, 10))
Importing and preprocessing data

import numpy as np
from sklearn import preprocessing
from sklearn.model_selection import train_test_split

# This function reads the file.
def read_data(archive, rows, columns):
    with open(archive, 'r') as data:
        mylist = data.read().split()
    myarray = np.array(mylist).reshape((rows, columns)).astype(float)
    return myarray

data = read_data('../get_data_example/set.txt', 72, 12)
X = data[:, [0, 2, 4, 6, 7, 8, 9, 10, 11]]
y = data[:, 1]

# Build the time vector, for plotting purposes.
time_stamp = np.zeros(data.shape[0])
for i in range(data.shape[0]):
    time_stamp[i] = i * (1.0 / 60.0)

X = np.hstack((X, time_stamp.reshape((X.shape[0], 1))))
print(X.shape)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
t_test = X_test[:, -1]
t_train = X_train[:, -1]
X_train_std = preprocessing.scale(X_train[:, 0:-1])
X_test_std = preprocessing.scale(X_test[:, 0:-1])
MLP_final_test/.ipynb_checkpoints/MLP_from_data-checkpoint.ipynb
ithallojunior/NN_compare
mit
Sorting out data (for plotting purposes)
# Sort the test data by its time column (the first element of each row).
test_sorted = np.hstack((
    t_test.reshape(X_test_std.shape[0], 1),
    X_test_std,
    y_test.reshape(X_test_std.shape[0], 1)))
test_sorted = test_sorted[np.argsort(test_sorted[:, 0])]

train_sorted = np.hstack((
    t_train.reshape(t_train.shape[0], 1),
    y_train.reshape(y_train.shape[0], 1)))
train_sorted = train_sorted[np.argsort(train_sorted[:, 0])]
Artificial Neural Network (grid search, DO NOT RUN)

# Grid search; random_state=0 gives the same initialization for every fit.
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

alpha1 = np.linspace(0.001, 0.9, 9).tolist()
momentum1 = np.linspace(0.3, 0.9, 9).tolist()
params_dist = {
    "hidden_layer_sizes": [(20, 40), (15, 40), (10, 15), (15, 15, 10), (15, 10), (15, 5)],
    "activation": ['tanh', 'logistic'],
    "solver": ['sgd', 'lbfgs'],  # named "algorithm" in very old scikit-learn releases
    "alpha": alpha1,
    "learning_rate": ['constant'],
    "max_iter": [500],
    "random_state": [0],
    "verbose": [False],
    "warm_start": [False],
    "momentum": momentum1,
}

grid = GridSearchCV(MLPRegressor(), param_grid=params_dist)
grid.fit(X_train_std, y_train)
print("Best score:", grid.best_score_)
print("Best parameter set found:\n")
print(grid.best_params_)

# Refit a regressor with the best parameters, but with random_state unset.
best = dict(grid.best_params_)
best['random_state'] = None
reg = MLPRegressor(**best)
reg.fit(X_train_std, y_train)
Plotting
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

results = reg.predict(test_sorted[:, 1:-1])
plt.plot(test_sorted[:, 0], results, c='r')              # (sorted time, predictions)
plt.plot(train_sorted[:, 0], train_sorted[:, 1], c='b')  # expected
plt.scatter(time_stamp, y, c='k')
plt.xlabel("Time (s)")
plt.ylabel("Angular velocities (rad/s)")
red_patch = mpatches.Patch(color='red', label='Predicted')
blue_patch = mpatches.Patch(color='blue', label='Expected')
black_patch = mpatches.Patch(color='black', label='Original')
plt.legend(handles=[red_patch, blue_patch, black_patch])
plt.title("MLP results vs expected values")
plt.show()

print("Accuracy:", reg.score(X_test_std, y_test))
Saving the ANN to file through pickle (and using it later)

import pickle

# This guard prevents the user from losing a previously saved result.
# Pickle the refit regressor so attributes like loss_ and n_layers_ are
# available after loading.
def save_it(ans):
    if ans == "yes":
        with open('data.ann', 'wb') as f:  # pickle data is binary
            pickle.dump(reg, f)
    else:
        print("Nothing to save")

save_it("no")

# Loading a previously saved ANN.
with open('data.ann', 'rb') as f:
    saved_ann = pickle.load(f)

print("Just the accuracy:", saved_ann.score(X_test_std, y_test), "\n")
print("Parameters:")
print(saved_ann.get_params(), "\n")
print("Loss:", saved_ann.loss_)
print("Total of layers:", saved_ann.n_layers_)
print("Total of iterations:", saved_ann.n_iter_)

# Plot from the previously saved model.
%matplotlib inline
results = saved_ann.predict(test_sorted[:, 1:-1])
plt.plot(test_sorted[:, 0], results, c='r')              # (sorted time, predictions)
plt.plot(train_sorted[:, 0], train_sorted[:, 1], c='b')  # expected
plt.scatter(time_stamp, y, c='k')
plt.xlabel("Time (s)")
plt.ylabel("Angular velocities (rad/s)")
red_patch = mpatches.Patch(color='red', label='Predicted')
blue_patch = mpatches.Patch(color='blue', label='Expected')
black_patch = mpatches.Patch(color='black', label='Original')
plt.legend(handles=[red_patch, blue_patch, black_patch])
plt.title("MLP results vs expected values (loaded from file)")
plt.show()

plt.plot(time_stamp, y, '--.', c='r')
plt.xlabel("Time (s)")
plt.ylabel("Angular velocities (rad/s)")
plt.title("Results from patient:\n Angular velocities for the right knee")
plt.show()

print("Accuracy:", saved_ann.score(X_test_std, y_test))
print(max(y), saved_ann.predict(X_train_std[y_train.tolist().index(max(y_train)), :].reshape((1, 9))))
MLP_final_test/.ipynb_checkpoints/MLP_from_data-checkpoint.ipynb
ithallojunior/NN_compare
mit
Use range() to print all the even numbers from 0 to 10.
#Code Here
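One possible solution (the variable name is my own): `range` accepts a step argument, so stepping by 2 from 0 walks the even numbers directly.

```python
# range(start, stop, step): stepping by 2 from 0 yields only even numbers
evens = list(range(0, 11, 2))
print(evens)  # [0, 2, 4, 6, 8, 10]
```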
PythonBootCamp/Complete-Python-Bootcamp-master/Statements Assessment Test.ipynb
yashdeeph709/Algorithms
apache-2.0
Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3.
#Code in this cell []
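One way to approach this (the variable name is my own): filter the range with the modulo operator inside the comprehension.

```python
# Numbers 1..50 that leave no remainder when divided by 3
div_by_3 = [x for x in range(1, 51) if x % 3 == 0]
print(div_by_3)  # [3, 6, 9, ..., 48]
```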
PythonBootCamp/Complete-Python-Bootcamp-master/Statements Assessment Test.ipynb
yashdeeph709/Algorithms
apache-2.0
Go through the string below and if the length of a word is even print "even!"
st = 'Print every word in this sentence that has an even number of letters' #Code in this cell
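One possible solution: `split()` breaks the sentence on whitespace, and `len(word) % 2 == 0` tests for an even length.

```python
st = 'Print every word in this sentence that has an even number of letters'

# Print each word whose length is even
for word in st.split():
    if len(word) % 2 == 0:
        print(word, 'even!')
```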
PythonBootCamp/Complete-Python-Bootcamp-master/Statements Assessment Test.ipynb
yashdeeph709/Algorithms
apache-2.0
Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".
#Code in this cell
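One possible solution (the helper name is my own); checking the multiple-of-15 case first matters, since a bare multiple-of-3 or multiple-of-5 test would shadow it.

```python
def fizzbuzz(n):
    # Check divisibility by 15 first so 'FizzBuzz' is not shadowed
    if n % 15 == 0:
        return 'FizzBuzz'
    if n % 3 == 0:
        return 'Fizz'
    if n % 5 == 0:
        return 'Buzz'
    return str(n)

for i in range(1, 101):
    print(fizzbuzz(i))
```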
PythonBootCamp/Complete-Python-Bootcamp-master/Statements Assessment Test.ipynb
yashdeeph709/Algorithms
apache-2.0
Use List Comprehension to create a list of the first letters of every word in the string below:
st = 'Create a list of the first letters of every word in this string' #Code in this cell
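One possible solution (the variable name is my own): index `[0]` of each word gives its first letter.

```python
st = 'Create a list of the first letters of every word in this string'

# First character of each whitespace-separated word
first_letters = [word[0] for word in st.split()]
print(first_letters)  # ['C', 'a', 'l', ...]
```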
PythonBootCamp/Complete-Python-Bootcamp-master/Statements Assessment Test.ipynb
yashdeeph709/Algorithms
apache-2.0
Encoding the words The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network. Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0. Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
# Create your dictionary that maps vocab words to integers here from collections import Counter counter = Counter(words) vocab = sorted(counter, key=counter.get, reverse=True) vocab_to_int = {word: i for i, word in enumerate(vocab, 1)} # Convert the reviews to integers, same shape as reviews list, but with integers reviews_ints = [] for review in reviews: reviews_ints.append([vocab_to_int[word] for word in review.split()])
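On a toy corpus (made up here purely for illustration) the same recipe looks like this; note that `enumerate` starts at 1 so that 0 stays free for padding.

```python
from collections import Counter

# Hypothetical miniature corpus, just to illustrate the mapping
toy_words = 'the movie was the best movie'.split()
counts = Counter(toy_words)

# Most frequent words get the smallest integers; 0 is reserved for padding
vocab = sorted(counts, key=counts.get, reverse=True)
toy_vocab_to_int = {word: i for i, word in enumerate(vocab, 1)}
print(toy_vocab_to_int)

encoded = [toy_vocab_to_int[w] for w in toy_words]
print(encoded)
```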
sentiment-rnn/Sentiment_RNN.ipynb
Bismarrck/deep-learning
mit
Encoding the labels Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1. Exercise: Convert labels from positive and negative to 1 and 0, respectively.
# Convert labels to 1s and 0s for 'positive' and 'negative' label_to_int= {"positive": 1, "negative": 0} labels = labels.split() labels = np.array([label_to_int[label.strip().lower()] for label in labels])
sentiment-rnn/Sentiment_RNN.ipynb
Bismarrck/deep-learning
mit
Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words. Exercise: First, remove the review with zero length from the reviews_ints list.
# Filter out that review with 0 length reviews_ints = [review for review in reviews_ints if len(review) > 0]
sentiment-rnn/Sentiment_RNN.ipynb
Bismarrck/deep-learning
mit
Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector. This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
seq_len = 200 num_reviews = len(reviews_ints) features = np.zeros((num_reviews, seq_len), dtype=int) for i, review in enumerate(reviews_ints): rlen = min(len(review), seq_len) istart = seq_len - rlen features[i, istart:] = review[:rlen] print(features[0, :100])
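The same left-pad/truncate rule can be sketched in plain Python, independent of numpy (`pad_truncate` is a name I made up, shown with a toy sequence length):

```python
def pad_truncate(review, seq_len):
    # Keep at most seq_len integers, then left-pad with zeros
    clipped = review[:seq_len]
    return [0] * (seq_len - len(clipped)) + clipped

print(pad_truncate([117, 18, 128], 5))      # [0, 0, 117, 18, 128]
print(pad_truncate([1, 2, 3, 4, 5, 6], 5))  # [1, 2, 3, 4, 5]
```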
sentiment-rnn/Sentiment_RNN.ipynb
Bismarrck/deep-learning
mit
Training, Validation, Test With our data in nice shape, we'll split it into training, validation, and test sets. Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
split_frac = 0.8 split_index = int(num_reviews * split_frac) train_x, val_x = features[:split_index], features[split_index:] train_y, val_y = labels[:split_index], labels[split_index:] split_index = int(len(val_x) * 0.5) val_x, test_x = val_x[:split_index], val_x[split_index:] val_y, test_y = val_y[:split_index], val_y[split_index:] print("\t\t\tFeature Shapes:") print("Train set: \t\t{}".format(train_x.shape), "\nValidation set: \t{}".format(val_x.shape), "\nTest set: \t\t{}".format(test_x.shape))
sentiment-rnn/Sentiment_RNN.ipynb
Bismarrck/deep-learning
mit
For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability. Exercise: Create the inputs_, labels_, and dropout keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding, dictionary started at 1

# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
    inputs_ = tf.placeholder(tf.int32, [None, None], name="inputs")
    labels_ = tf.placeholder(tf.int32, [None, None], name="labels")
    keep_prob = tf.placeholder(tf.float32, name="keep_prob")
sentiment-rnn/Sentiment_RNN.ipynb
Bismarrck/deep-learning
mit
Embedding Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights. Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300 

with graph.as_default():
    # Embedding lookup matrix, initialized uniformly at random
    embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs_)
sentiment-rnn/Sentiment_RNN.ipynb
Bismarrck/deep-learning
mit
LSTM cell <img src="assets/network_diagram.png" width=400px> Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph. To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation: tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=&lt;function tanh at 0x109f1ef28&gt;) you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like lstm = tf.contrib.rnn.BasicLSTMCell(num_units) to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell: cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list. So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell. Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. 
Then, add dropout to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell. Here is a tutorial on building RNNs that will help you out.
with graph.as_default():
    # lstm_size and lstm_layers are hyperparameters defined earlier in the notebook
    # Your basic LSTM cell
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    
    # Add dropout to the cell
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
    
    # Getting an initial state of all zeros
    initial_state = cell.zero_state(batch_size, tf.float32)
sentiment-rnn/Sentiment_RNN.ipynb
Bismarrck/deep-learning
mit
RNN forward pass <img src="assets/network_diagram.png" width=400px> Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network. outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state) Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer. Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
with graph.as_default():
    # Unroll the RNN over the embedded input sequences
    outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
sentiment-rnn/Sentiment_RNN.ipynb
Bismarrck/deep-learning
mit
Object detection <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/hub/tutorials/object_detection"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/object_detection.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/object_detection.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/object_detection.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td> <td> <a href="https://tfhub.dev/s?q=google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1%20OR%20google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">See TF Hub models</a> </td> </table> This Colab demonstrates the use of a TF-Hub module trained to perform object detection. Setup
#@title Imports and function definitions # For running inference on the TF-Hub module. import tensorflow as tf import tensorflow_hub as hub # For downloading the image. import matplotlib.pyplot as plt import tempfile from six.moves.urllib.request import urlopen from six import BytesIO # For drawing onto the image. import numpy as np from PIL import Image from PIL import ImageColor from PIL import ImageDraw from PIL import ImageFont from PIL import ImageOps # For measuring the inference time. import time # Print Tensorflow version print(tf.__version__) # Check available GPU devices. print("The following GPU devices are available: %s" % tf.test.gpu_device_name())
site/ja/hub/tutorials/object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Example use Helper functions for downloading images and for visualization. Visualization code adapted from the TF object detection API for the simplest required functionality.
def display_image(image): fig = plt.figure(figsize=(20, 15)) plt.grid(False) plt.imshow(image) def download_and_resize_image(url, new_width=256, new_height=256, display=False): _, filename = tempfile.mkstemp(suffix=".jpg") response = urlopen(url) image_data = response.read() image_data = BytesIO(image_data) pil_image = Image.open(image_data) pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.ANTIALIAS) pil_image_rgb = pil_image.convert("RGB") pil_image_rgb.save(filename, format="JPEG", quality=90) print("Image downloaded to %s." % filename) if display: display_image(pil_image) return filename def draw_bounding_box_on_image(image, ymin, xmin, ymax, xmax, color, font, thickness=4, display_str_list=()): """Adds a bounding box to an image.""" draw = ImageDraw.Draw(image) im_width, im_height = image.size (left, right, top, bottom) = (xmin * im_width, xmax * im_width, ymin * im_height, ymax * im_height) draw.line([(left, top), (left, bottom), (right, bottom), (right, top), (left, top)], width=thickness, fill=color) # If the total height of the display strings added to the top of the bounding # box exceeds the top of the image, stack the strings below the bounding box # instead of above. display_str_heights = [font.getsize(ds)[1] for ds in display_str_list] # Each display_str has a top and bottom margin of 0.05x. total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights) if top > total_display_str_height: text_bottom = top else: text_bottom = top + total_display_str_height # Reverse list and print from bottom to top. 
for display_str in display_str_list[::-1]: text_width, text_height = font.getsize(display_str) margin = np.ceil(0.05 * text_height) draw.rectangle([(left, text_bottom - text_height - 2 * margin), (left + text_width, text_bottom)], fill=color) draw.text((left + margin, text_bottom - text_height - margin), display_str, fill="black", font=font) text_bottom -= text_height - 2 * margin def draw_boxes(image, boxes, class_names, scores, max_boxes=10, min_score=0.1): """Overlay labeled boxes on an image with formatted scores and label names.""" colors = list(ImageColor.colormap.values()) try: font = ImageFont.truetype("/usr/share/fonts/truetype/liberation/LiberationSansNarrow-Regular.ttf", 25) except IOError: print("Font not found, using default font.") font = ImageFont.load_default() for i in range(min(boxes.shape[0], max_boxes)): if scores[i] >= min_score: ymin, xmin, ymax, xmax = tuple(boxes[i]) display_str = "{}: {}%".format(class_names[i].decode("ascii"), int(100 * scores[i])) color = colors[hash(class_names[i]) % len(colors)] image_pil = Image.fromarray(np.uint8(image)).convert("RGB") draw_bounding_box_on_image( image_pil, ymin, xmin, ymax, xmax, color, font, display_str_list=[display_str]) np.copyto(image, np.array(image_pil)) return image
site/ja/hub/tutorials/object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Apply the module Load a public image from Open Images v4, save it locally, and display it.
# By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg image_url = "https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg" #@param downloaded_image_path = download_and_resize_image(image_url, 1280, 856, True)
site/ja/hub/tutorials/object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Pick an object detection module and apply it to the downloaded image. Available modules: FasterRCNN+InceptionResNet V2: high accuracy ssd+mobilenet V2: small and fast
module_handle = "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1" #@param ["https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1", "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"] detector = hub.load(module_handle).signatures['default'] def load_img(path): img = tf.io.read_file(path) img = tf.image.decode_jpeg(img, channels=3) return img def run_detector(detector, path): img = load_img(path) converted_img = tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...] start_time = time.time() result = detector(converted_img) end_time = time.time() result = {key:value.numpy() for key,value in result.items()} print("Found %d objects." % len(result["detection_scores"])) print("Inference time: ", end_time-start_time) image_with_boxes = draw_boxes( img.numpy(), result["detection_boxes"], result["detection_class_entities"], result["detection_scores"]) display_image(image_with_boxes) run_detector(detector, downloaded_image_path)
site/ja/hub/tutorials/object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
More images Perform inference on some additional images, with time tracking.
image_urls = [ # Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg "https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg", # By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg", # Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg "https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg", ] def detect_img(image_url): start_time = time.time() image_path = download_and_resize_image(image_url, 640, 480) run_detector(detector, image_path) end_time = time.time() print("Inference time:",end_time-start_time) detect_img(image_urls[0]) detect_img(image_urls[1]) detect_img(image_urls[2])
site/ja/hub/tutorials/object_detection.ipynb
tensorflow/docs-l10n
apache-2.0
Example 1: Implement a simple 'Hello World' program in SystemML <a class="anchor" id="bullet2"></a> First import the classes necessary to implement the 'Hello World' program. The MLContext API offers a programmatic interface for interacting with SystemML from Spark using languages such as Scala, Java, and Python. As a result, it offers a convenient way to interact with SystemML from the Spark Shell and from Notebooks such as Jupyter and Zeppelin. Please refer to the documentation for more detail on the MLContext API. As a sidenote, here are alternative ways by which you can invoke SystemML (not covered in this notebook): - Command-line invocation using either spark-submit or hadoop. - Using the JMLC API.
from systemml import MLContext, dml, dmlFromResource ml = MLContext(sc) print("Spark Version:", sc.version) print("SystemML Version:", ml.version()) print("SystemML Built-Time:", ml.buildTime()) # Step 1: Write the DML script script = """ print("Hello World!"); """ # Step 2: Create a Python DML object script = dml(script) # Step 3: Execute it using MLContext API ml.execute(script)
samples/jupyter-notebooks/Linear_Regression_Algorithms_Demo.ipynb
niketanpansare/systemml
apache-2.0
Now let's implement a slightly more complicated 'Hello World' program where we initialize a string variable to 'Hello World!' and print it using Python. Note: we first register the output variable in the dml object (in the step 2) and then fetch it after execution (in the step 3).
# Step 1: Write the DML script script = """ s = "Hello World!"; """ # Step 2: Create a Python DML object script = dml(script).output('s') # Step 3: Execute it using MLContext API s = ml.execute(script).get('s') print(s)
samples/jupyter-notebooks/Linear_Regression_Algorithms_Demo.ipynb
niketanpansare/systemml
apache-2.0
Example 2: Matrix Multiplication <a class="anchor" id="bullet3"></a> Let's write a script to generate a random matrix, perform matrix multiplication, and compute the sum of the output.
# Step 1: Write the DML script script = """ # The number of rows is passed externally by the user via 'nr' X = rand(rows=nr, cols=1000, sparsity=0.5) A = t(X) %*% X s = sum(A) """ # Step 2: Create a Python DML object script = dml(script).input(nr=1e5).output('s') # Step 3: Execute it using MLContext API s = ml.execute(script).get('s') print(s)
samples/jupyter-notebooks/Linear_Regression_Algorithms_Demo.ipynb
niketanpansare/systemml
apache-2.0
Now, let's generate a random matrix in NumPy and pass it to SystemML.
import numpy as np npMatrix = np.random.rand(1000, 1000) # Step 1: Write the DML script script = """ A = t(X) %*% X s = sum(A) """ # Step 2: Create a Python DML object script = dml(script).input(X=npMatrix).output('s') # Step 3: Execute it using MLContext API s = ml.execute(script).get('s') print(s)
samples/jupyter-notebooks/Linear_Regression_Algorithms_Demo.ipynb
niketanpansare/systemml
apache-2.0
Load diabetes dataset from scikit-learn for the example 3 <a class="anchor" id="bullet4"></a>
import matplotlib.pyplot as plt import numpy as np from sklearn import datasets plt.switch_backend('agg') %matplotlib inline diabetes = datasets.load_diabetes() diabetes_X = diabetes.data[:, np.newaxis, 2] diabetes_X_train = diabetes_X[:-20] diabetes_X_test = diabetes_X[-20:] diabetes_y_train = diabetes.target[:-20].reshape(-1,1) diabetes_y_test = diabetes.target[-20:].reshape(-1,1) plt.scatter(diabetes_X_train, diabetes_y_train, color='black') plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
samples/jupyter-notebooks/Linear_Regression_Algorithms_Demo.ipynb
niketanpansare/systemml
apache-2.0
Example 3: Implement three different algorithms to train a linear regression model Linear regression models the relationship between one numerical response variable and one or more explanatory (feature) variables by fitting a linear equation to observed data. The feature vectors are provided as a matrix $X$ and the observed response values are provided as a 1-column matrix $y$. A linear regression line has an equation of the form $y = Xw$. Algorithm 1: Linear Regression - Direct Solve (no regularization) <a class="anchor" id="example3algo1"></a> Least squares formulation The least squares method calculates the best-fitting line for the observed data by minimizing the sum of the squares of the difference between the predicted response $Xw$ and the actual response $y$. $w^* = argmin_w ||Xw-y||^2 \ \;\;\; = argmin_w (y - Xw)'(y - Xw) \ \;\;\; = argmin_w \dfrac{w'(X'X)w}{2} - w'(X'y)$ To find the optimal parameter $w$, we set the gradient $dw = (X'X)w - (X'y)$ to 0. $(X'X)w - (X'y) = 0 \ w = (X'X)^{-1}(X' y) \ \;\;= solve(X'X, X'y)$
# Step 1: Write the DML script script = """ # add constant feature to X to model intercept X = cbind(X, matrix(1, rows=nrow(X), cols=1)) A = t(X) %*% X b = t(X) %*% y w = solve(A, b) bias = as.scalar(w[nrow(w),1]) w = w[1:nrow(w)-1,] """ # Step 2: Create a Python DML object script = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias') # Step 3: Execute it using MLContext API w, bias = ml.execute(script).get('w','bias') w = w.toNumPy() plt.scatter(diabetes_X_train, diabetes_y_train, color='black') plt.scatter(diabetes_X_test, diabetes_y_test, color='red') plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='blue', linestyle ='dotted')
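As a sanity check, the same normal-equation solve can be reproduced for the single-feature case in plain Python (an illustrative stand-in, not the SystemML API; `fit_direct` is a made-up name):

```python
def fit_direct(xs, ys):
    """Solve the 2x2 normal equations (X'X) w = (X'y) for slope and bias.

    X has columns [x, 1], mirroring the DML script's cbind of a constant column.
    """
    n = len(xs)
    sxx = sum(x * x for x in xs)
    sx = sum(xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sy = sum(ys)
    # X'X = [[sxx, sx], [sx, n]] and X'y = [sxy, sy]; Cramer's rule on the 2x2 system
    det = sxx * n - sx * sx
    w = (n * sxy - sx * sy) / det
    bias = (sxx * sy - sx * sxy) / det
    return w, bias

print(fit_direct([0, 1, 2, 3], [1, 3, 5, 7]))  # (2.0, 1.0): recovers y = 2x + 1
```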
samples/jupyter-notebooks/Linear_Regression_Algorithms_Demo.ipynb
niketanpansare/systemml
apache-2.0
Algorithm 2: Linear Regression - Batch Gradient Descent (no regularization) <a class="anchor" id="example3algo2"></a> Algorithm Step 1: Start with an initial point while(not converged) { Step 2: Compute gradient dw. Step 3: Compute stepsize alpha. Step 4: Update: w_new = w_old + alpha*dw } Gradient formula dw = r = (X'X)w - (X'y) Step size formula Find number alpha to minimize f(w + alpha*r) alpha = -(r'r)/(r'X'Xr)
# Step 1: Write the DML script
script = """
    # add constant feature to X to model intercepts
    X = cbind(X, matrix(1, rows=nrow(X), cols=1))
    max_iter = 100
    w = matrix(0, rows=ncol(X), cols=1)
    XtX = t(X) %*% X
    Xty = t(X) %*% y
    for(i in 1:max_iter){
        dw = XtX %*% w - Xty
        alpha = -(t(dw) %*% dw) / (t(dw) %*% XtX %*% dw)
        w = w + dw*alpha
    }
    bias = as.scalar(w[nrow(w),1])
    w = w[1:nrow(w)-1,]
"""

# Step 2: Create a Python DML object
script = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')

# Step 3: Execute it using MLContext API
w, bias = ml.execute(script).get('w','bias')
w = w.toNumPy()

plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')

plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle='dashed')
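The same iteration can be traced in plain Python on a tiny single-feature problem (a sketch with made-up names, not the SystemML API); note that the exact line-search step makes alpha negative, so w moves against the gradient:

```python
def fit_gd(xs, ys, max_iter=500):
    """Steepest descent with exact line search on the least-squares objective."""
    n = len(xs)
    A = [[sum(x * x for x in xs), sum(xs)],
         [sum(xs), n]]                                  # X'X for X = [x, 1]
    b = [sum(x * y for x, y in zip(xs, ys)), sum(ys)]   # X'y
    w = [0.0, 0.0]
    for _ in range(max_iter):
        dw = [A[i][0] * w[0] + A[i][1] * w[1] - b[i] for i in range(2)]  # (X'X)w - (X'y)
        Adw = [A[i][0] * dw[0] + A[i][1] * dw[1] for i in range(2)]
        rr = dw[0] ** 2 + dw[1] ** 2
        rAr = dw[0] * Adw[0] + dw[1] * Adw[1]
        if rAr == 0.0:
            break           # gradient is zero: already at the minimum
        alpha = -rr / rAr   # exact line search; negative since X'X is positive definite
        w = [w[0] + alpha * dw[0], w[1] + alpha * dw[1]]
    return w  # [slope, bias]

print(fit_gd([0, 1, 2, 3], [1, 3, 5, 7]))  # approaches [2.0, 1.0]
```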
samples/jupyter-notebooks/Linear_Regression_Algorithms_Demo.ipynb
niketanpansare/systemml
apache-2.0
Algorithm 3: Linear Regression - Conjugate Gradient (no regularization) <a class="anchor" id="example3algo3"></a> Problem with gradient descent: Takes very similar directions many times Solution: Enforce conjugacy Step 1: Start with an initial point while(not converged) { Step 2: Compute gradient dw. Step 3: Compute stepsize alpha. Step 4: Compute next direction p by enforcing conjugacy with previous direction. Step 4: Update: w_new = w_old + alpha*p }
# Step 1: Write the DML script
script = """
    # add constant feature to X to model intercepts
    X = cbind(X, matrix(1, rows=nrow(X), cols=1))
    m = ncol(X); max_iter = 20;
    w = matrix (0, rows = m, cols = 1);  # initialize weights to 0
    dw = - t(X) %*% y;                   # gradient at w=0: dw = (X'X)w - (X'y) = -(X'y)
    p = - dw;                            # first direction is the negative gradient
    norm_r2 = sum (dw ^ 2);
    for(i in 1:max_iter) {
        q = t(X) %*% (X %*% p)
        alpha = norm_r2 / sum (p * q);   # minimizes f(w + alpha*p)
        w = w + alpha * p;               # update weights
        dw = dw + alpha * q;
        old_norm_r2 = norm_r2;
        norm_r2 = sum (dw ^ 2);
        p = -dw + (norm_r2 / old_norm_r2) * p;  # next direction - conjugacy to previous direction
    }
    bias = as.scalar(w[nrow(w),1])
    w = w[1:nrow(w)-1,]
"""

# Step 2: Create a Python DML object
script = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')

# Step 3: Execute it using MLContext API
w, bias = ml.execute(script).get('w','bias')
w = w.toNumPy()

plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')

plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle='dashed')
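The conjugate-gradient recurrence, again traced in plain Python on the same toy problem (names are mine, not the SystemML API); on this 2-parameter problem it converges in just two iterations:

```python
def fit_cg(xs, ys, max_iter=2):
    """Conjugate gradient on the normal equations (X'X) w = (X'y)."""
    n = len(xs)
    A = [[sum(x * x for x in xs), sum(xs)],
         [sum(xs), n]]                                  # X'X for X = [x, 1]
    b = [sum(x * y for x, y in zip(xs, ys)), sum(ys)]   # X'y
    w = [0.0, 0.0]
    dw = [-b[0], -b[1]]      # gradient at w = 0
    p = [b[0], b[1]]         # first direction is the negative gradient
    norm_r2 = dw[0] ** 2 + dw[1] ** 2
    for _ in range(max_iter):
        q = [A[i][0] * p[0] + A[i][1] * p[1] for i in range(2)]
        alpha = norm_r2 / (p[0] * q[0] + p[1] * q[1])
        w = [w[0] + alpha * p[0], w[1] + alpha * p[1]]
        dw = [dw[0] + alpha * q[0], dw[1] + alpha * q[1]]
        old_norm_r2 = norm_r2
        norm_r2 = dw[0] ** 2 + dw[1] ** 2
        # next direction: negative gradient plus a correction enforcing A-conjugacy
        p = [-dw[0] + (norm_r2 / old_norm_r2) * p[0],
             -dw[1] + (norm_r2 / old_norm_r2) * p[1]]
    return w  # [slope, bias]

print(fit_cg([0, 1, 2, 3], [1, 3, 5, 7]))  # [2.0, 1.0] up to rounding
```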
samples/jupyter-notebooks/Linear_Regression_Algorithms_Demo.ipynb
niketanpansare/systemml
apache-2.0
Example 4: Invoke existing SystemML algorithm script LinearRegDS.dml using MLContext API <a class="anchor" id="example4"></a> SystemML ships with several pre-implemented algorithms that can be invoked directly. Please refer to the algorithm reference manual for usage.
# Step 1: No need to write a DML script here. But, keeping it as a placeholder for consistency :) # Step 2: Create a Python DML object script = dmlFromResource('scripts/algorithms/LinearRegDS.dml') script = script.input(X=diabetes_X_train, y=diabetes_y_train).input('$icpt',1.0).output('beta_out') # Step 3: Execute it using MLContext API w = ml.execute(script).get('beta_out') w = w.toNumPy() bias = w[1] w = w[0] plt.scatter(diabetes_X_train, diabetes_y_train, color='black') plt.scatter(diabetes_X_test, diabetes_y_test, color='red') plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')
samples/jupyter-notebooks/Linear_Regression_Algorithms_Demo.ipynb
niketanpansare/systemml
apache-2.0
Example 5: Invoke existing SystemML algorithm using scikit-learn/SparkML pipeline like API <a class="anchor" id="example5"></a> mllearn API allows a Python programmer to invoke SystemML's algorithms using scikit-learn like API as well as Spark's MLPipeline API.
# Step 1: No need to write a DML script here. But, keeping it as a placeholder for consistency :) # Step 2: No need to create a Python DML object. But, keeping it as a placeholder for consistency :) # Step 3: Execute Linear Regression using the mllearn API from systemml.mllearn import LinearRegression regr = LinearRegression(spark) # Train the model using the training sets regr.fit(diabetes_X_train, diabetes_y_train) predictions = regr.predict(diabetes_X_test) # Use the trained model to perform prediction %matplotlib inline plt.scatter(diabetes_X_train, diabetes_y_train, color='black') plt.scatter(diabetes_X_test, diabetes_y_test, color='red') plt.plot(diabetes_X_test, predictions, color='black')
samples/jupyter-notebooks/Linear_Regression_Algorithms_Demo.ipynb
niketanpansare/systemml
apache-2.0
Generate Input Files First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
# Instantiate some Nuclides h1 = openmc.Nuclide('H-1') b10 = openmc.Nuclide('B-10') o16 = openmc.Nuclide('O-16') u235 = openmc.Nuclide('U-235') u238 = openmc.Nuclide('U-238') zr90 = openmc.Nuclide('Zr-90')
docs/source/pythonapi/examples/tally-arithmetic.ipynb
mjlong/openmc
mit
With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pin.
# 1.6 enriched fuel fuel = openmc.Material(name='1.6% Fuel') fuel.set_density('g/cm3', 10.31341) fuel.add_nuclide(u235, 3.7503e-4) fuel.add_nuclide(u238, 2.2625e-2) fuel.add_nuclide(o16, 4.6007e-2) # borated water water = openmc.Material(name='Borated Water') water.set_density('g/cm3', 0.740582) water.add_nuclide(h1, 4.9457e-2) water.add_nuclide(o16, 2.4732e-2) water.add_nuclide(b10, 8.0042e-6) # zircaloy zircaloy = openmc.Material(name='Zircaloy') zircaloy.set_density('g/cm3', 6.55) zircaloy.add_nuclide(zr90, 7.2758e-3)
docs/source/pythonapi/examples/tally-arithmetic.ipynb
mjlong/openmc
mit
With our three materials, we can now create a materials file object that can be exported to an actual XML file.
# Instantiate a MaterialsFile, add Materials materials_file = openmc.MaterialsFile() materials_file.add_material(fuel) materials_file.add_material(water) materials_file.add_material(zircaloy) materials_file.default_xs = '71c' # Export to "materials.xml" materials_file.export_to_xml()
docs/source/pythonapi/examples/tally-arithmetic.ipynb
mjlong/openmc
mit
Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)

# Create six reflective boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')
max_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')
With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')

# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)

# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)

# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
pin_cell_universe.add_cell(moderator_cell)
OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = pin_cell_universe

# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z

# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.
# Create Geometry and set root Universe
geometry = openmc.Geometry()
geometry.root_universe = root_universe

# Instantiate a GeometryFile
geometry_file = openmc.GeometryFile()
geometry_file.geometry = geometry

# Export to "geometry.xml"
geometry_file.export_to_xml()
With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 active batches each with 2500 particles.
# OpenMC simulation parameters
batches = 20
inactive = 5
particles = 2500

# Instantiate a SettingsFile
settings_file = openmc.SettingsFile()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True, 'summary': True}
source_bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
settings_file.set_source_space('box', source_bounds)

# Export to "settings.xml"
settings_file.export_to_xml()
Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
# Instantiate a Plot
plot = openmc.Plot(plot_id=1)
plot.filename = 'materials-xy'
plot.origin = [0, 0, 0]
plot.width = [1.26, 1.26]
plot.pixels = [250, 250]
plot.color = 'mat'

# Instantiate a PlotsFile, add Plot, and export to "plots.xml"
plot_file = openmc.PlotsFile()
plot_file.add_plot(plot)
plot_file.export_to_xml()
With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.
# Run openmc in plotting mode
executor = openmc.Executor()
executor.plot_geometry(output=False)

# Convert OpenMC's funky ppm to png
!convert materials-xy.ppm materials-xy.png

# Display the materials plot inline
Image(filename='materials-xy.png')
As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies.
# Instantiate an empty TalliesFile
tallies_file = openmc.TalliesFile()

# Create Tallies to compute microscopic multi-group cross-sections

# Instantiate energy filter for multi-group cross-section Tallies
energy_filter = openmc.Filter(type='energy', bins=[0., 0.625e-6, 20.])

# Instantiate flux Tally in moderator and fuel
tally = openmc.Tally(name='flux')
tally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id, moderator_cell.id]))
tally.add_filter(energy_filter)
tally.add_score('flux')
tallies_file.add_tally(tally)

# Instantiate reaction rate Tally in fuel
tally = openmc.Tally(name='fuel rxn rates')
tally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id]))
tally.add_filter(energy_filter)
tally.add_score('nu-fission')
tally.add_score('scatter')
tally.add_nuclide(u238)
tally.add_nuclide(u235)
tallies_file.add_tally(tally)

# Instantiate reaction rate Tally in moderator
tally = openmc.Tally(name='moderator rxn rates')
tally.add_filter(openmc.Filter(type='cell', bins=[moderator_cell.id]))
tally.add_filter(energy_filter)
tally.add_score('absorption')
tally.add_score('total')
tally.add_nuclide(o16)
tally.add_nuclide(h1)
tallies_file.add_tally(tally)

# K-Eigenvalue (infinity) tallies
fiss_rate = openmc.Tally(name='fiss. rate')
abs_rate = openmc.Tally(name='abs. rate')
fiss_rate.add_score('nu-fission')
abs_rate.add_score('absorption')
tallies_file.add_tally(fiss_rate)
tallies_file.add_tally(abs_rate)

# Resonance Escape Probability tallies
therm_abs_rate = openmc.Tally(name='therm. abs. rate')
therm_abs_rate.add_score('absorption')
therm_abs_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))
tallies_file.add_tally(therm_abs_rate)

# Thermal Flux Utilization tallies
fuel_therm_abs_rate = openmc.Tally(name='fuel therm. abs. rate')
fuel_therm_abs_rate.add_score('absorption')
fuel_therm_abs_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))
fuel_therm_abs_rate.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id]))
tallies_file.add_tally(fuel_therm_abs_rate)

# Fast Fission Factor tallies
therm_fiss_rate = openmc.Tally(name='therm. fiss. rate')
therm_fiss_rate.add_score('nu-fission')
therm_fiss_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))
tallies_file.add_tally(therm_fiss_rate)

# Instantiate energy filter to illustrate Tally slicing
energy_filter = openmc.Filter(type='energy',
                              bins=np.logspace(np.log10(1e-8), np.log10(20), 10))

# Instantiate flux Tally in moderator and fuel
tally = openmc.Tally(name='need-to-slice')
tally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id, moderator_cell.id]))
tally.add_filter(energy_filter)
tally.add_score('nu-fission')
tally.add_score('scatter')
tally.add_nuclide(h1)
tally.add_nuclide(u238)
tallies_file.add_tally(tally)

# Export to "tallies.xml"
tallies_file.export_to_xml()
Now we have a complete set of inputs, so we can go ahead and run our simulation.
# Remove old HDF5 (summary, statepoint) files
!rm statepoint.*

# Run OpenMC with MPI!
executor.run_simulation()
Tally Data Processing

Our simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis by loading the statepoint file and reading the results. By default, the tally results are not read into memory because they might be large, even large enough to exceed the available memory on a computer.
# Load the statepoint file
sp = StatePoint('statepoint.20.h5')
You may have also noticed that we instructed OpenMC to create a summary file containing detailed geometry information. This helps the Python API produce more sensible output, so we will link the statepoint against the summary file.
# Load the summary file and link with statepoint
su = Summary('summary.h5')
sp.link_with_summary(su)
We have a tally of the total fission rate and the total absorption rate, so we can calculate k-infinity as:

$$k_\infty = \frac{\langle \nu \Sigma_f \phi \rangle}{\langle \Sigma_a \phi \rangle}$$

In this notation, $\langle \cdot \rangle^a_b$ represents an OpenMC tally that is integrated over region $a$ and energy range $b$. If $a$ or $b$ is not reported, the value is integrated over all space or all energy, respectively.
# Compute k-infinity using tally arithmetic
fiss_rate = sp.get_tally(name='fiss. rate')
abs_rate = sp.get_tally(name='abs. rate')
keff = fiss_rate / abs_rate
keff.get_pandas_dataframe()
Notice that even though the neutron production rate and absorption rate are separate tallies, we still get a first-order estimate of the uncertainty on the quotient of them automatically! Often in textbooks you'll see k-infinity represented using the four-factor formula $$k_\infty = p \epsilon f \eta.$$ Let's analyze each of these factors, starting with the resonance escape probability which is defined as $$p=\frac{\langle\Sigma_a\phi\rangle_T}{\langle\Sigma_a\phi\rangle}$$ where the subscript $T$ means thermal energies.
# Compute resonance escape probability using tally arithmetic
therm_abs_rate = sp.get_tally(name='therm. abs. rate')
res_esc = therm_abs_rate / abs_rate
res_esc.get_pandas_dataframe()
The fast fission factor can be calculated as $$\epsilon=\frac{\langle\nu\Sigma_f\phi\rangle}{\langle\nu\Sigma_f\phi\rangle_T}$$
# Compute fast fission factor using tally arithmetic
therm_fiss_rate = sp.get_tally(name='therm. fiss. rate')
fast_fiss = fiss_rate / therm_fiss_rate
fast_fiss.get_pandas_dataframe()
The thermal flux utilization is calculated as $$f=\frac{\langle\Sigma_a\phi\rangle^F_T}{\langle\Sigma_a\phi\rangle_T}$$ where the superscript $F$ denotes fuel.
# Compute thermal flux utilization factor using tally arithmetic
fuel_therm_abs_rate = sp.get_tally(name='fuel therm. abs. rate')
therm_util = fuel_therm_abs_rate / therm_abs_rate
therm_util.get_pandas_dataframe()
The final factor is the number of fission neutrons produced per absorption in fuel, calculated as $$\eta = \frac{\langle \nu\Sigma_f\phi \rangle_T}{\langle \Sigma_a \phi \rangle^F_T}$$
# Compute neutrons produced per absorption (eta) using tally arithmetic
eta = therm_fiss_rate / fuel_therm_abs_rate
eta.get_pandas_dataframe()
Now we can calculate $k_\infty$ using the product of the factors from the four-factor formula.
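Multiplying out the four factor definitions given above shows why the product must reproduce $k_\infty$: every intermediate tally cancels, leaving the same ratio of production to absorption we computed first.

```latex
k_\infty = p\,\epsilon\,f\,\eta
= \underbrace{\frac{\langle\Sigma_a\phi\rangle_T}{\langle\Sigma_a\phi\rangle}}_{p}\;
  \underbrace{\frac{\langle\nu\Sigma_f\phi\rangle}{\langle\nu\Sigma_f\phi\rangle_T}}_{\epsilon}\;
  \underbrace{\frac{\langle\Sigma_a\phi\rangle^F_T}{\langle\Sigma_a\phi\rangle_T}}_{f}\;
  \underbrace{\frac{\langle\nu\Sigma_f\phi\rangle_T}{\langle\Sigma_a\phi\rangle^F_T}}_{\eta}
= \frac{\langle\nu\Sigma_f\phi\rangle}{\langle\Sigma_a\phi\rangle}
```

The cancellation is exact for the means, which is why the two estimates of $k_\infty$ agree; the uncertainties differ because each quotient propagates error separately.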
keff = res_esc * fast_fiss * therm_util * eta
keff.get_pandas_dataframe()
We see that the value we've obtained here has exactly the same mean as before. However, because of the way it was calculated, the standard deviation appears to be larger.

Let's move on to a more complicated example now. Earlier, we set up tallies to get reaction rates in the fuel and moderator in two energy groups for two different nuclides. We can use tally arithmetic to divide each of these reaction rates by the flux to get microscopic multi-group cross sections.
# Compute microscopic multi-group cross-sections
flux = sp.get_tally(name='flux')
flux = flux.get_slice(filters=['cell'], filter_bins=[(fuel_cell.id,)])
fuel_rxn_rates = sp.get_tally(name='fuel rxn rates')
mod_rxn_rates = sp.get_tally(name='moderator rxn rates')
fuel_xs = fuel_rxn_rates / flux
fuel_xs.get_pandas_dataframe()
We see that when the two tallies with multiple bins were divided, the derived tally contains the outer product of the combinations. If the filters/scores are the same, no outer product is needed. The get_values(...) method allows us to obtain a subset of tally scores. In the following example, we obtain just the neutron production microscopic cross sections.
# Show how to use Tally.get_values(...) with a CrossScore
nu_fiss_xs = fuel_xs.get_values(scores=['(nu-fission / flux)'])
print(nu_fiss_xs)
The same idea can be used not only for scores but also for filters and nuclides.
# Show how to use Tally.get_values(...) with a CrossScore and CrossNuclide
u235_scatter_xs = fuel_xs.get_values(nuclides=['(U-235 / total)'],
                                     scores=['(scatter / flux)'])
print(u235_scatter_xs)

# Show how to use Tally.get_values(...) with a CrossFilter and CrossScore
fast_scatter_xs = fuel_xs.get_values(filters=['energy'],
                                     filter_bins=[((0.625e-6, 20.),)],
                                     scores=['(scatter / flux)'])
print(fast_scatter_xs)
A more advanced method is to use get_slice(...) to create a new derived tally that is a subset of an existing tally. This has the benefit that we can use get_pandas_dataframe() to see the tallies in a more human-readable format.
# "Slice" the nu-fission data into a new derived Tally nu_fission_rates = fuel_rxn_rates.get_slice(scores=['nu-fission']) nu_fission_rates.get_pandas_dataframe() # "Slice" the H-1 scatter data in the moderator Cell into a new derived Tally need_to_slice = sp.get_tally(name='need-to-slice') slice_test = need_to_slice.get_slice(scores=['scatter'], nuclides=['H-1'], filters=['cell'], filter_bins=[(moderator_cell.id,)]) slice_test.get_pandas_dataframe()
Signal-space separation (SSS) and Maxwell filtering

This tutorial covers reducing environmental noise and compensating for head movement with SSS and Maxwell filtering. As usual we'll start by importing the modules we need, loading some example data, and cropping it to save on memory:
import os
import mne

sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
0.19/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Background on SSS and Maxwell filtering
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Signal-space separation (SSS) [1]_ [2]_ is a technique based on the physics of electromagnetic fields. SSS separates the measured signal into components attributable to sources inside the measurement volume of the sensor array (the internal components) and components attributable to sources outside the measurement volume (the external components). The internal and external components are linearly independent, so it is possible to simply discard the external components to reduce environmental noise. Maxwell filtering is a related procedure that omits the higher-order components of the internal subspace, which are dominated by sensor noise. Typically, Maxwell filtering and SSS are performed together (in MNE-Python they are implemented together in a single function).

Like SSP, SSS is a form of projection. Whereas SSP empirically determines a noise subspace based on data (empty-room recordings, EOG or ECG activity, etc.) and projects the measurements onto a subspace orthogonal to the noise, SSS mathematically constructs the external and internal subspaces from spherical harmonics and reconstructs the sensor signals using only the internal subspace (i.e., it performs an oblique projection).

<div class="alert alert-danger"><h4>Warning</h4><p>Maxwell filtering was originally developed for Elekta Neuromag® systems, and should be considered *experimental* for non-Neuromag data. See the Notes section of the :func:`~mne.preprocessing.maxwell_filter` docstring for details.</p></div>

The MNE-Python implementation of SSS / Maxwell filtering currently provides the following features:

- Bad channel reconstruction
- Cross-talk cancellation
- Fine calibration correction
- tSSS
- Coordinate frame translation
- Regularization of internal components using information theory
- Raw movement compensation (using head positions estimated by MaxFilter)
- cHPI subtraction (see :func:`mne.chpi.filter_chpi`)
- Handling of 3D (in addition to 1D) fine calibration files
- Epoch-based movement compensation as described in [1]_ through :func:`mne.epochs.average_movements`
- Experimental processing of data from (un-compensated) non-Elekta systems

Using SSS and Maxwell filtering in MNE-Python
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For optimal use of SSS with data from Elekta Neuromag® systems, you should provide the path to the fine calibration file (which encodes site-specific information about sensor orientation and calibration) as well as a crosstalk compensation file (which reduces interference between Elekta's co-located magnetometer and paired gradiometer sensor units).
fine_cal_file = os.path.join(sample_data_folder, 'SSS', 'sss_cal_mgh.dat')
crosstalk_file = os.path.join(sample_data_folder, 'SSS', 'ct_sparse_mgh.fif')
Before we perform SSS we'll set a couple of additional bad channels: MEG 2313 has some DC jumps and MEG 1032 has some large-ish low-frequency drifts. After that, performing SSS and Maxwell filtering is done with a single call to :func:`~mne.preprocessing.maxwell_filter`, with the crosstalk and fine calibration filenames provided (if available):
raw.info['bads'].extend(['MEG 1032', 'MEG 2313'])
raw_sss = mne.preprocessing.maxwell_filter(raw, cross_talk=crosstalk_file,
                                           calibration=fine_cal_file)
<div class="alert alert-danger"><h4>Warning</h4><p>Automatic bad channel detection is not currently implemented. It is critical to mark bad channels in ``raw.info['bads']`` *before* calling :func:`~mne.preprocessing.maxwell_filter` in order to prevent bad channel noise from spreading.</p></div> To see the effect, we can plot the data before and after SSS / Maxwell filtering.
raw.pick(['meg']).plot(duration=2, butterfly=True)
raw_sss.pick(['meg']).plot(duration=2, butterfly=True)
Notice that channels marked as "bad" have been effectively repaired by SSS, eliminating the need to perform interpolation. The heartbeat artifact has also been substantially reduced.

The :func:`~mne.preprocessing.maxwell_filter` function has parameters int_order and ext_order for setting the order of the spherical harmonic expansion of the interior and exterior components; the default values are appropriate for most use cases. Additional parameters include coord_frame and origin for controlling the coordinate frame ("head" or "meg") and the origin of the sphere; the defaults are appropriate for most studies that include digitization of the scalp surface / electrodes. See the documentation of :func:`~mne.preprocessing.maxwell_filter` for details.

Spatiotemporal SSS (tSSS)
^^^^^^^^^^^^^^^^^^^^^^^^^

An assumption of SSS is that the measurement volume (the spherical shell where the sensors are physically located) is free of electromagnetic sources. The thickness of this source-free measurement shell should be 4-8 cm for SSS to perform optimally. In practice, there may be sources falling within that measurement volume; these can often be mitigated by using Spatiotemporal Signal Space Separation (tSSS) [2]_. tSSS works by looking for temporal correlation between components of the internal and external subspaces, and projecting out any components that are common to both. The projection is done in an analogous way to SSP, except that the noise vector is computed across time points instead of across sensors.

To use tSSS in MNE-Python, pass a time (in seconds) to the parameter st_duration of :func:`~mne.preprocessing.maxwell_filter`. This will determine the "chunk duration" over which to compute the temporal projection.
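As a quick numerical aside, the reciprocal of the chunk duration is what matters for low-frequency signal content (a property discussed below). Assuming a few illustrative candidate values of st_duration:

```python
# The tSSS chunk duration behaves like a high-pass filter with
# cutoff 1 / st_duration Hz: longer chunks disturb lower frequencies less.
for st_duration in (4.0, 10.0, 30.0):
    cutoff_hz = 1.0 / st_duration
    print(f"st_duration = {st_duration:4.0f} s  ->  effective cutoff ~ {cutoff_hz:.3f} Hz")
```

So a 30 s chunk leaves signals above roughly 0.03 Hz intact, while a 4 s chunk already touches content below 0.25 Hz.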
The chunk duration effectively acts as a high-pass filter with a cutoff frequency of $\frac{1}{\mathtt{st\_duration}}~\mathrm{Hz}$; this effective high-pass has an important consequence: in general, larger values of st_duration are better (provided that your computer has sufficient memory) because they will have a smaller effect on the signal.

If the chunk duration does not evenly divide your data length, the final (shorter) chunk will be added to the prior chunk before filtering, leading to slightly different effective filtering for the combined chunk (the effective cutoff frequency differing at most by a factor of 2). If you need to ensure identical processing of all analyzed chunks, either:

- choose a chunk duration that evenly divides your data length (only recommended if analyzing a single subject or run), or
- include at least 2 * st_duration of post-experiment recording time at the end of the :class:`~mne.io.Raw` object, so that the data you intend to further analyze is guaranteed not to be in the final or penultimate chunks.

Additional parameters affecting tSSS include st_correlation (to set the correlation value above which correlated internal and external components will be projected out) and st_only (to apply only the temporal projection without also performing SSS and Maxwell filtering). See the docstring of :func:`~mne.preprocessing.maxwell_filter` for details.

Movement compensation
^^^^^^^^^^^^^^^^^^^^^

If you have information about subject head position relative to the sensors (i.e., continuous head position indicator coils, or cHPI), SSS can take that into account when projecting sensor data onto the internal subspace. Head position data is loaded with the :func:`~mne.chpi.read_head_pos` function. The example data doesn't include cHPI, so here we'll load a :file:`.pos` file used for testing, just to demonstrate:
head_pos_file = os.path.join(mne.datasets.testing.data_path(), 'SSS',
                             'test_move_anon_raw.pos')
head_pos = mne.chpi.read_head_pos(head_pos_file)
mne.viz.plot_head_positions(head_pos, mode='traces')
Plotting with parameters

Write a plot_sine1(a, b) function that plots $\sin(ax+b)$ over the interval $[0,4\pi]$. Customize your visualization to make it effective and beautiful:

- Customize the box, grid, spines and ticks to match the requirements of this data.
- Use enough points along the x-axis to get a smooth plot.
- For the x-axis tick locations use integer multiples of $\pi$.
- For the x-axis tick labels use multiples of pi using LaTeX: $3\pi$.
def plot_sine1(a, b):
    # np.linspace (not range) handles the non-integer endpoint and
    # gives enough points for a smooth curve
    x = np.linspace(0, 4 * np.pi, 300)
    y = np.sin(a * x + b)
    plt.plot(x, y)
    # tick locations at integer multiples of pi, labeled with LaTeX
    plt.xticks(np.arange(0, 4 * np.pi + 0.1, np.pi),
               ['$0$', r'$\pi$', r'$2\pi$', r'$3\pi$', r'$4\pi$'])
    plt.xlim(0, 4 * np.pi)

plot_sine1(5.0, 3.4)
assignments/assignment05/InteractEx02.ipynb
sthuggins/phys202-2015-work
mit
Then use interact to create a user interface for exploring your function:

- a should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$.
- b should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.
# YOUR CODE HERE
raise NotImplementedError()

assert True  # leave this for grading the plot_sine1 exercise
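One way to express the slider requirements above is with ipywidgets' (min, max, step) tuple abbreviation, which interact expands into FloatSlider widgets. This is a sketch, not the graded solution; it assumes the plot_sine1 function from the earlier cell is defined:

```python
# Slider specs as (min, max, step) tuples; ipywidgets.interact turns
# float tuples like these into FloatSlider widgets for each argument.
a_spec = (0.0, 5.0, 0.1)
b_spec = (-5.0, 5.0, 0.1)

try:
    from ipywidgets import interact
    # In a live notebook this draws the sliders and re-plots on change:
    # interact(plot_sine1, a=a_spec, b=b_spec)
except ImportError:
    # ipywidgets is only needed when running inside a notebook front-end
    pass

print(a_spec, b_spec)
```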